Everything on here is awesome right now. It feels like an online forum from the 2000s: everyone is friendly and optimistic, and it feels like the start of something big.

Well, as we all know, AI has gotten smart enough that CAPTCHAs are useless, and it can engage in social forums disguised as a human.

With Reddit turning into propaganda central and a greedy CEO with a motive to sell Reddit data to AI farms, I worry that AI could be prompted to target websites like those in the fediverse.

Right now it sounds like paranoia, but I think we are closer to this reality than we may know.

Reddit has gotten nuked, so we built a new community, and everyone is pleasantly surprised by the change of vibe around here, the overall friendliness, and the nostalgia of old forums.

Could this be the calm before the storm?

How will the fediverse protect itself from these hypothetical bot armies?

Do you think Reddit/big companies will make attacks on the fediverse?

Do you think clickbait posts will start popping up in pursuit of ad revenue?

What are your thoughts and insights on this new “internet 2.0”?

  • RegalPotoo@lemmy.world · 3 points · 1 year ago

    Tbh, I’m less concerned with bots and more concerned with actual humans being dicks. Lemmy is still super new, relatively low-traffic, and kind of a pain to get involved with, but as it grows the number of bad actors will grow with it, and I don’t know that the mod tools are up to the job of handling them. The amount of work that mods on The Other Site had to put in to keep communities from being overrun by trolls and general nastiness was huge.

    How’d Mastodon cope with their big surge in popularity?

  • monobot@lemmy.ml · 2 points · 1 year ago

    Those issues are coming, and we will have to develop tools to fight against them.

    One such tool would be an AI of our own that protects us: it could learn from content banned by admins, and that information could be shared between instances. It should also be in an active learning loop, so that it is constantly retrained.

    Sounds like the start of a cheap sci-fi movie.

    Positively marking accounts that interact with known humans could also be useful, as would reporting by us.
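The shared learning loop described here could look something like the following minimal sketch: a tiny Naive-Bayes-style scorer that learns token counts from admin-banned posts and can export/merge those counts with other instances. Everything here (class and method names, the JSON payload) is invented for illustration, not an existing fediverse API.

```python
import json
import math
from collections import Counter

class SharedSpamFilter:
    """Toy classifier that learns from admin bans and shares counts between instances."""

    def __init__(self):
        self.spam = Counter()  # token counts from banned content
        self.ham = Counter()   # token counts from approved content

    def learn(self, text, banned):
        # Each admin decision (ban / approve) feeds the active-learning loop.
        tokens = text.lower().split()
        (self.spam if banned else self.ham).update(tokens)

    def spam_score(self, text):
        # Log-likelihood ratio with add-one smoothing; > 0 leans spam.
        vocab = len(set(self.spam) | set(self.ham)) or 1
        spam_total = sum(self.spam.values())
        ham_total = sum(self.ham.values())
        score = 0.0
        for t in text.lower().split():
            p_spam = (self.spam[t] + 1) / (spam_total + vocab)
            p_ham = (self.ham[t] + 1) / (ham_total + vocab)
            score += math.log(p_spam / p_ham)
        return score

    def export_counts(self):
        # Serialized counts an instance could publish for its peers.
        return json.dumps({"spam": self.spam, "ham": self.ham})

    def merge(self, payload):
        # Fold in counts learned by another instance's admins.
        data = json.loads(payload)
        self.spam.update(data["spam"])
        self.ham.update(data["ham"])
```

A fresh instance could bootstrap by calling `merge()` on a bigger instance's exported counts, which is the "info shared between instances" part of the idea.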

  • ???@lemmy.world · 2 points · 1 year ago

    Do you think clickbait posts will start popping up in pursuit of ad revenue?

    Now that you mention it… yes.

  • subzero12479@lemmy.world · 2 points · 1 year ago

    Greetings, fellow humans. Do you enjoy building and living in structures, farming, cooking, transportation, and participating in leisure activities such as sports and entertainment as much as I do?

  • Ertebolle@kbin.social · 2 points · 1 year ago

    I think the key here is going to be coming up with robust protocols for user verification; you can’t run an army of spambots if you can’t create thousands of accounts.

    Doing this well will probably be beyond the capacity of most instance maintainers, so you’d likely end up with a small number of companies that most instances agree to accept verifications from. Because it would be a competitive market, and because a company that failed to do this well would be liable to have its verifications no longer accepted, providers would be incentivized both to a) do a good job and b) offer a variety of verification methods. If, say, you wanted to remain anonymous even to them, one company might let you verify a new account off a combination of other long-lived social media accounts rather than by asking for a driver’s license or whatever.

    And of course there’s no reason you couldn’t also have 2 or 3 different verifications on your account if you needed that many to have your posts accepted on most instances; yes, it’s a little messy, but messy also means resilient.

  • dhork@lemmy.world · 1 point · 1 year ago

    I just assume I am the only actual Human on the Internet, and the rest of you are all bots.

  • Muddybulldog@mylemmy.win · 1 point · 1 year ago

    All the things you are concerned about are inevitable; it’s how we engage with them that makes the difference.

    We’re already seeing waves of bot-created accounts being banned by admins. Mods are nuking badly behaved users. What is being caught is probably a drop in the bucket compared to what IS happening. It can get better with more mods and more tools.

  • BeMoreCareful@lemmy.world · 1 point · 1 year ago

    Which of the following would you most prefer? A: A puppy, B: A pretty flower from your sweetie, or C: A large properly formatted data file?

  • WolfhoundRO@lemmy.world · 1 point · 1 year ago

    Being a decentralized, federated network and all, I guess any solution involving anti-bot bots can only be implemented on particular servers in the fediverse. That means there can also be bot-infested (or even zombie, i.e. fully bot-run) servers that will federate, or try to federate, with the rest of the fediverse. It will then be the admins’ duty to identify the bots with the anti-bot bots, and to decide on defederating the infected servers. I also don’t know how effective CAPTCHA is against AI these days, so I won’t comment on that.

    • dustyData@lemmy.world · 1 point · 1 year ago

      We went through this with e-mail. There are mail servers that gained notoriety as spam hubs, and they were universally banned. More and more sophisticated tools for moderating spam/phishing/scam providers and squashing bad actors are still being developed today. It’s an ongoing arms race, and I don’t think it will be any different, or any harder, for the fediverse.
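The e-mail analogy maps to something like a DNSBL for instances: servers publish the domains they’ve defederated, peers merge those lists, and traffic is checked against the result. A minimal sketch, with all names invented for illustration:

```python
class InstanceBlocklist:
    """RBL-style shared denylist of misbehaving fediverse instances."""

    def __init__(self):
        self.blocked = set()

    def report(self, domain):
        # Local admin decision: defederate this domain.
        self.blocked.add(domain.lower())

    def merge_from_peer(self, peer_domains):
        # Adopt a trusted peer's published blocklist, like subscribing to a DNSBL.
        self.blocked |= {d.lower() for d in peer_domains}

    def allows(self, domain):
        # Consulted before accepting federation traffic from a domain.
        return domain.lower() not in self.blocked
```

As with e-mail RBLs, the hard part isn’t the lookup but governance: deciding whose reports you trust and how a wrongly listed server gets delisted.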

  • borebore@lemmy.world · 1 point · 1 year ago

    I think we are going to have to develop moderator bots in an ever-escalating war. I am not kidding.