When a machine moderates content, it evaluates text and images as data using an algorithm that has been trained on existing data sets. The process for selecting training data has come under fire as it’s been shown to have racial, gender and other biases.
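
Loosely, that "algorithm trained on existing data sets" is just a classifier fit to human-labeled examples. The sketch below is a minimal illustration using scikit-learn as a stand-in for whatever model a platform actually runs; the comments and labels are invented for the example, and a real system would use far larger data and models:

```python
# Minimal sketch of an automated moderation classifier.
# scikit-learn is used purely for illustration; the labeled
# examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Existing data set": human-labeled comments. Whatever biases
# are baked into these labels are what the model reproduces at scale.
comments = [
    "I disagree with this policy",
    "Great write-up, thanks for sharing",
    "You people are all idiots",
    "Get out of this community, nobody wants you here",
]
labels = ["ok", "ok", "remove", "remove"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# New content is evaluated purely as data; no human sees it
# unless the platform adds a review step.
print(model.predict(["thanks, this helped me a lot"]))
print(model.predict_proba(["you people don't belong here"]))
```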

  • المنطقة عكف عفريت@lemmy.world · 10 months ago

    The problem isn’t AI itself, it’s that companies are willing to do this and then fire any customer support or human you could ever talk to. They let their automod ruin people’s lives and accounts, then barricade themselves behind it, impossible to reach.

    • Dkarma@lemmy.world · 10 months ago

      Why let a company be in a position to ruin your life in the first place? It’s like putting your balls in a croc’s mouth.

      • SkyeStarfall@lemmy.blahaj.zone · 10 months ago

        Well, it’s kinda what society forces us to do. There’s not exactly much of an alternative to interacting with companies for most things.

        Whether it’s your livelihood, your home, your car, your insurance, whatever.

    • Terrasque · 10 months ago

      Exactly. The real problem is lack of human oversight, and lack of a way to contact someone.

      And these days, even if you manage to reach someone, it’ll be some call center in India or the Philippines that’s only there to help with FAQ-level things and otherwise politely tell you to fuck off. They can’t actually do anything.