An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • SinningStromgald@lemmy.world

    But they know the AIs have these biases, at least now. Shouldn’t they be able to code them out or lessen them? Or would that just create more problems?

    Sorry, I’m no programmer, so I have no idea if that’s even possible or not. It just sounds possible in my head.

    • Dojan@lemmy.world

      You don’t really program them; they learn from the data provided. If, say, you want a model that generates faces and you provide it with 500 faces, 470 of which are of black women, then when you ask it to generate a face, it’ll most likely generate a face of a black woman.

      The models are essentially maps of probability: you give one a prompt and ask it what the most likely output is given that prompt.
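
      As a toy sketch of that idea (nothing like a real image model internally, just the frequency argument from the face example above):

      import random

      # Toy "model": all it knows is how often each kind of face appeared in training.
      training_faces = ["black woman"] * 470 + ["other"] * 30

      def generate_face():
          # Sample in proportion to the training frequencies.
          return random.choice(training_faces)

      print(generate_face())  # "black woman" roughly 94% of the time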

      If she had used a model trained to generate pornography, it would’ve likely given her something more pornographic, if not outright explicit.


      You’ve also kind of touched on one of the problems with large language models: they’re not programmed, but rather prompted.

      When it comes to Bing Chat, ChatGPT and others, they have additional AI agents sitting alongside them to help filter out or flag problematic content, both content provided by the user and content the LLM itself generates. With a prompt like this, the model marked my content as problematic and the bot gave me a canned response: “Hi, I’m Bing. Sorry, can’t help you with this. Have a nice day. :)”

      These filters are very crude, but they’re necessary because of problems inherent in the source data the model was trained on. See, if you crawl the internet for data to train on, you’re bound to bump into all sorts of good information: Wikipedia articles, Q&A forums, recipe blogs, personal blogs, fanfiction sites, etc. Enough of this data will give you a well-rounded model capable of generating believable content across a wide range of topics. However, you can’t feasibly filter the entire internet, and among all of this you’ll also find hate speech, blogs run by neo-nazis and conspiracy theorists, blogs where people talk about their depression, suicide notes, misogyny, racism, and all sorts of depressing, disgusting, evil, and dark aspects of humanity.

      Thus there’s no code you can change to fix racism.

      if (bot.response == racist) 
      {
          dont();
      }
      

      But rather simple measures that read the user/agent interaction, filter it for possible bad words, or, more likely, use another AI model to gauge the probability of the interaction being negative:

      if (interaction.weightedResult < negative)
      {
          return "I'm sorry, but I can't help you with this at the moment. I'm still learning though. Try asking me something else instead! 😊";
      }
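
      Roughly, that second model is just a text classifier run over the exchange. A sketch of the idea using the transformers library (the model name, its labels, and the threshold are illustrative assumptions, not what Bing or OpenAI actually run):

      from transformers import pipeline

      # Assumed off-the-shelf toxicity classifier; any similar model fits the sketch.
      toxicity = pipeline("text-classification", model="unitary/toxic-bert")

      CANNED_REPLY = ("I'm sorry, but I can't help you with this at the moment. "
                      "I'm still learning though. Try asking me something else instead! 😊")

      def moderate(user_text, bot_text, threshold=0.8):
          # Score both sides of the exchange and fall back to a canned reply
          # if either one looks likely to be problematic.
          for text in (user_text, bot_text):
              top = toxicity(text)[0]
              if top["label"] == "toxic" and top["score"] >= threshold:
                  return CANNED_REPLY
          return bot_text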
      

      As an aside, if she’d prompted “professional Asian woman” it likely would’ve done a better job. Depending on how much “creative license” she gives the model, though, it still won’t give her her own face back. I get the idea of what she’s trying to do, and there are certainly ways of achieving it, but she likely wasn’t using a product/model weighted to do the specific thing she was asking for.
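
      To make the “creative license” point concrete: this kind of edit is basically image-to-image, and a strength/denoising setting decides how far the model may drift from the uploaded photo. A rough sketch with the diffusers library (I don’t know what Playground AI actually runs under the hood; the model and numbers here are assumptions):

      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from diffusers.utils import load_image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      init_image = load_image("selfie.png").resize((512, 512))  # placeholder file name

      # Low strength keeps the original face; high strength lets the model drift
      # toward whatever "professional" looks like in its training data.
      result = pipe(
          prompt="professional LinkedIn headshot, business attire",
          image=init_image,
          strength=0.35,
          num_inference_steps=20,
      ).images[0]
      result.save("headshot.png")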


      Edit

      Just as a test, because I myself got curious, I had Stable Diffusion generate 20 images given the prompt

      professional person dressed in business attire, smiling

      20 sampling steps, using DPM++ 2M SDE Karras, and the v1-5-pruned-emaonly Stable Diffusion model.

      Here’s the result
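
      If you want to reproduce this outside a webui, roughly the same setup with the diffusers library looks like the sketch below (the scheduler options are my best approximation of “DPM++ 2M SDE Karras”):

      import torch
      from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

      # SD 1.5 checkpoint corresponding to v1-5-pruned-emaonly.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Approximation of the "DPM++ 2M SDE Karras" sampler.
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(
          pipe.scheduler.config,
          algorithm_type="sde-dpmsolver++",
          use_karras_sigmas=True,
      )

      images = pipe(
          "professional person dressed in business attire, smiling",
          num_inference_steps=20,
          num_images_per_prompt=4,  # run a few batches to get 20 images
      ).images
      for i, img in enumerate(images):
          img.save(f"professional_{i:02}.png")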

      I changed the prompt to

      professional person dressed in business attire, smiling, [diverse, diversity]

      And here is the result

      The models can generate people who aren’t white men, but the output is in a way just a reflection of our society: white men are the default. Likewise, if you prompt it for “loving couple”, you’ll get overwhelmingly images of straight couples. But don’t just take my word for it; here’s an example.

        • Dojan@lemmy.world

          It can do faces quite well on second passes but struggles hard with hands.

          Corporate photography tends to be uncanny and creepy to begin with, so using an AI to generate it made it even more so.

          I totally didn’t just spend 30 minutes generating corporate stock photos and laughing at the creepy results. 😅

      • Buttons@programming.dev

        Indeed, there seems to be some confusion about the wording too. She wrote instructions, as if she were instructing a state-of-the-art LLM (“please alter this photo to make it look professional”), but the AI can’t understand sentence structure or instructions; it just looks for labels that match pictures. So the AI sees “photo, professional”, it sees her starting photo, and it alters the starting photo to produce something that resembles “photo, professional”. It doesn’t know what the other words mean.
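
        You can get a feel for this with CLIP, the kind of text encoder these image models are built around: it scores how well a text matches an image, it doesn’t parse instructions. A small sketch (the checkpoint is the standard public CLIP, not whatever Playground AI actually uses):

        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        image = Image.open("selfie.png")  # placeholder file name
        texts = [
            "please alter this photo to make it look professional",  # instruction phrasing
            "professional headshot photo",                           # plain description
        ]

        inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
        scores = model(**inputs).logits_per_image.softmax(dim=-1)
        # The scores only measure text-image similarity; the polite instruction
        # wording isn't something the model can act on.
        print(scores)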

    • CharlestonChewbacca@lemmy.world

      That’s not how it works. You don’t just “program out the biases”; you have to retrain the model with more inclusive training data.

    • HobbitFoot @thelemmy.club

      If you can code it, it isn’t really AI.

      AI makes the connections by itself when given the data. The problem is that the amount of data required is usually enormous, so the quantity of data ends up being valued more than the quality.

    • Tgs91@lemmy.world

      Shouldn’t they be able to code them out?

      You can’t “code them out” because AI isn’t a simple script like traditional software. These are giant nested statistical models that learn from data: the model learns to read the data it was trained on, and it learns to understand the images it was trained on and how they relate to text. You can’t tell it “in this situation, don’t consider race” because the situation itself is not coded anywhere. It’s just learned behavior from the training data.

      Shouldn’t they be able to lessen them?

      For this one the answer is YES. And they DO lessen them as much as they can. But they’re training on data scraped from many sources. You can try to curate the data to remove racism/sexism, but there’s no easy way to remove bias from data that is so open ended. There is no way to do this in an automated way besides using an AI model, and for that, you need to already have a model that understands race/gender/etc bias, which doesn’t really exist. You can have humans go through the data to try to remove bias, but that introduces a ton of problems as well. Many humans would disagree on what is biased. And human labelers also have a shockingly high error rate. People are flat out bad at repetitive tasks.

      And even that only covers data that actively contains bigotry. In most of these generative AI cases, the real issue is just a lack of data, or imbalanced data, from the internet. For this specific article, the user asked to make a photo look professional. Training data where photos were clearly in a professional setting probably came from sites like LinkedIn, which have a disproportionate number of white users. These models also have a better understanding of English than of other languages because there is so much more training data available in English. So Asian professional sites may exist in the training data, but the model doesn’t understand the language as well, so it’s not as confident about professional images of Asians.

      So you can address this by curating the training data. But this is just ONE of THOUSANDS and THOUSANDS of biases, and it’s not possible to control all of them in the data. Often if you try to correct one bias, it accidentally causes the model to perform even worse on other biases.

      They do their best. But ultimately these are statistical models that reflect the existing data on the internet. As long as the internet contains bias, so will AI.

    • Dale@lemmy.world

      It’s possible, sure. In order to train these image AIs you essentially feed them a massive number of pictures as “training data.” These biases happen because, more often than not, the training data used is mostly pictures of white people. This might be due to racial bias on the part of the creators, or, for a more structural (CRT-style) explanation, because they only had the rights to pictures of mostly white people. Either way, the fix is to train the AI on more diverse faces.
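
      In practice, “train the AI on more diverse faces” often means rebalancing how the existing data gets sampled rather than collecting an entirely new dataset. A minimal PyTorch-style sketch, assuming each training image comes with a demographic group label (the names here are illustrative):

      from collections import Counter
      from torch.utils.data import DataLoader, WeightedRandomSampler

      def balanced_loader(dataset, group_labels, batch_size=32):
          # Weight each sample inversely to its group's frequency so that
          # under-represented groups are drawn about as often as common ones.
          counts = Counter(group_labels)
          weights = [1.0 / counts[g] for g in group_labels]
          sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
          return DataLoader(dataset, batch_size=batch_size, sampler=sampler)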

    • ∟⊔⊤∦∣≶@lemmy.nz

      There are LoRAs available (hundreds, maybe thousands) to tweak the base model so you can generate exactly what you want. So, problem solved for quite a while now.
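
      For reference, applying a LoRA on top of a base checkpoint is only a couple of lines with the diffusers library (the LoRA file name below is a placeholder):

      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Placeholder path; any face/style LoRA trained for SD 1.5 loads the same way.
      pipe.load_lora_weights("./my_face_lora.safetensors")

      image = pipe(
          "professional headshot, business attire, smiling",
          num_inference_steps=20,
      ).images[0]
      image.save("headshot.png")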