Didn’t want to further derail the exploding heads vote thread, so:

What criteria should be applied when deciding whether to defederate from an instance? Should there be a specific process to follow, and what level of communication, if any, should there be with the other instance's admins?

For context, it may be useful to look at the history of the Fediblock tag on Mastodon, to see what sorts of things folks have historically dealt with, from obvious and unremarkable bad actors (e.g., spam) to conflicts over what kinds of speech and moderation standards are acceptable.

(Not saying that folks need to embrace similar standards or practices, but it’s useful to know what’s been going on all this time, especially for folks who are new to the fediverse.)

For example:

  • Presence of posts that violate this instance’s “no bigotry” rule (does it matter how prevalent this type of content is on the target instance?)
  • Instance rules that directly conflict with this instance’s rules, e.g., this instance bans hate speech while the other instance explicitly allows it.
  • Admin non-response or unsatisfactory response to reports of posts that violate community rules
    • Not sure if there’s a way in Lemmy to track incoming/outgoing reports, but it would be useful for the community to have some idea here. NOT saying to expose the content of all reports, just an idea of volume (see the sketch after this list).
  • High volume of bad faith reports from the target instance on users here (e.g., if someone talks about racism here and a hostile instance reports it for “white genocide” or some other bs). This may seem obscure, but it’s a real issue on Mastodon.
  • Edited to add: Hosting communities whose stated purpose is to share bigoted content
  • Coordinating trolling, harassment, etc.
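
A rough sketch of the report-volume idea above. As far as I know, Lemmy doesn’t expose per-instance report metrics today, so this assumes admins could export report rows with the reporter’s home instance attached; every field and instance name here is hypothetical, not Lemmy’s actual schema.

```python
# Hypothetical sketch, not Lemmy's real schema: assumes an export of report
# rows where each row carries the reporter's home instance and whether
# moderators ultimately upheld the report.
from collections import Counter

reports = [
    {"reporter_instance": "example.social", "upheld": False},
    {"reporter_instance": "example.social", "upheld": False},
    {"reporter_instance": "other.example", "upheld": True},
]

total = Counter(r["reporter_instance"] for r in reports)
upheld = Counter(r["reporter_instance"] for r in reports if r["upheld"])

# Publish only aggregate volume and the share of reports mods upheld;
# never the contents of individual reports.
for instance, count in total.most_common():
    print(f"{instance}: {count} reports, {upheld[instance] / count:.0%} upheld")
```

Publishing just the volume and the upheld share would also speak to the bad-faith-report criterion: an instance generating lots of reports that mods almost never uphold stands out without exposing anyone’s report text.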

For reference, local rules:

  • Be respectful. Everyone should feel welcome here.
  • No bigotry - including racism, sexism, ableism, homophobia, transphobia, or xenophobia.
  • No Ads / Spamming.
  • No pornography.

  • tcely@sh.itjust.works · 1 year ago

    High volume of bad faith reports from the target instance on users here (e.g., if someone talks about racism here and a hostile instance reports it for “white genocide” or some other bs). This may seem obscure, but it’s a real issue on Mastodon.

    There is no way we should defederate from an instance because of this, particularly since we know report volume will grow as the user count grows anyway.

    Breaking the users’ experience because your tooling is insufficient is a bad look.
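
    To illustrate with made-up numbers and hypothetical instance names: raw report totals naturally point at bigger instances, while per-capita rates can tell the opposite story.

    ```python
    # Toy numbers, purely illustrative: raw report counts vs. reports
    # per 1,000 active users.
    instances = {
        "bigger.example": {"reports": 120, "active_users": 40_000},
        "smaller.example": {"reports": 90, "active_users": 3_000},
    }

    for name, s in instances.items():
        per_1k = s["reports"] / s["active_users"] * 1_000
        print(f"{name}: {s['reports']} reports total, {per_1k:.1f} per 1,000 users")
    ```

    Here the bigger instance produces more reports in total but a tenth of the per-user rate, so raw volume alone is a poor defederation trigger.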

    • Oni_eyes@sh.itjust.works · 1 year ago

      Throwing up your hands and saying “oh well, you’re going to have to personally deal with all the trolls, because we don’t want to hurt their feelings by enforcing our rules” isn’t better.

      When the tools are created, the admins can refederate and use them as needed.

      • tcely@sh.itjust.works · 1 year ago

        Bad faith reports don’t imply there are actual trolls for users to personally deal with.

        Performing moderation actions on good faith reports from users is desirable.

        Disconnecting your own users from content they find useful, over a volume of reports they can neither see nor prevent, just because you can’t be bothered to do the moderation work, is undesirable.

        • Oni_eyes@sh.itjust.works · edited · 1 year ago

          Who decides that the majority of reports are bad faith?

          Users who want access to that content can, as gets mentioned a hundred times every time defederation comes up, migrate or make a second account.

          The fact is that there are not a lot of tools for mods right now, so it’s either: A) stay federated and let each individual user block trolls, or B) defederate until such mod tools are available, which is apparently being worked on.

          Many posters shit on Beehaw for defederating, even though their community is predominantly made up of a group that receives intense trolling and has a notably higher suicide rate than baseline, with online harassment being a contributing factor. Given that, I don’t understand the pushback against defederating to protect the larger community until those tools are available.

          But I guess some people who don’t have to bear that weight don’t appreciate it, and instead become full-throated defenders of free speech and “just asking questions,” despite how that has historically worked out: enabling trolls at all levels.

          In addition, these instances are growing fast, and it will be difficult for mods to keep up with their duties even with a full suite of tools. Defederating is just a way to cool things off while assessing the damage versus the potential benefit, and it puts the most vulnerable first, ahead of users who don’t personally mind seeing said content.