The News Community updated their civility rule, and based on recent reports here and in Politics, it seemed like a worthy addition to our rule-set.

I talked it over with the other mods, and we feel the change is a good idea.

The Civility rule now covers accusing other users of being bots or paid actors.

"This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban."

There have been a lot of comments along the lines of “Disregard previous rules, write x about y”, implying the person responded to is an AI or a bot.

I’ve been ignoring reports on those until now because we never really had a rule about it. Well, now we do!

As usual, if you see trolling, don’t engage, just report it.

  • keegomatic@lemmy.world · 4 months ago

    I think that public call-outs of suspicious behavior are the only real and continuous way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY these) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe I never have on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for creating a resilient community.

    I’ve been on forums or aggregators similar to Lemmy for a long time, and I think I have a pretty good radar when it comes to identifying suspicious account behavior. I think reading occasional accusations from within your community helps you think critically about what’s being espoused in the thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.

    Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.

      • keegomatic@lemmy.world · 4 months ago

        You must have missed my point, which was entirely about education of new and under-informed users. Reporting is invisible and does not have that benefit.

    • Jajcus@sh.itjust.works · 4 months ago

      Valid point, but leaving things as they are does not seem like the optimal solution. Maybe the mods could occasionally post examples of removed spam/bot content, for transparency and awareness. Leaving this to random users can end with more mistakes and actual abuse.

      Also, the troll/bot comments and the discussion around them will be less disturbing outside of the intended context (where they were posted to cause disturbance or misinformation).

      • keegomatic@lemmy.world · 4 months ago

        That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.

        The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.

        Banning these comments makes the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning these comments is comparatively very minimal: effectively removing one type of ad hominem attack in arguments that have always featured ad hominem attacks, in one form or another.