Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

LINKS:

  • Whats_your_reasoning@lemmy.world

They hit upon a strong point comparing chatbots to talking with a psychopath (about 56-58 minutes in). Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

    It bears repeating that the chatbots’ sycophantic nature isn’t there to help you; it serves their owners’ goal of keeping you coming back. It’s quite like grooming, if you think about it, with the current end goal of getting users addicted.

    The future end goals? Still to be determined. If enshittification has taught us anything, it should be that any technology (in the current framework, at least) that gains significant adoption can and will eventually be used to exploit its users.

    • DriftingLynx@lemmy.ca

      Ah, but this tech is ahead of the exploiting-their-users curve.

      By using them now you’re opening yourself to psychosis, yes, but your conversations are also being used to further train the models. I do agree we can assume we’re at the high point and that these tools are on the same downward slide as all big tech projects. It’s going to happen quickly, considering the mind-boggling levels of debt they are carrying.

      • CubitOomOP

        They aren’t about to squander having insights into the deepest recesses of their most loyal users.

        It’s an advertising wet dream.

    • amino@lemmy.blahaj.zone

      Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

      you’ve just described what the average parent, teacher, priest, doctor, or other authority figure tells children to do if they wanna “stay safe”. where AI comes in is automating these preexisting systems of domination to hide the underlying social harms and naturalize child abuse: “the AI isn’t a person, therefore it can’t groom my kids”.

      I’d argue that when the majority of adults engage in abuse, that behavior can’t be called psychopathic because that shifts the blame from abolishing childism to people with personality disorders.

  • rafoix@lemmy.zip

    Seems like AI regulation is becoming more necessary by the minute.

    Ban it in schools. Ban it for children. We’re letting billionaires destroy a whole generation of children.

    • CubitOomOP
      1. Gen AI should be proven safe before deployment
      2. Gen AI needs to be opt in by default
      3. Model/agent must transparently show what they are doing
      4. Gen AI should not be an anthropomorphized sycophant designed to keep users in the chat, isolated from other humans
      5. Gen AI profiteers should be held accountable
    • KelvarCherry [They/Them]@piefed.blahaj.zone

      Removing AI LLM slop from schools is absolutely the first step, and an impactful one at that. These kids are being forced onto AI through their schools. If we win this battle, one day we’ll look back at these “historical figure” chatbots the same way we think of candy cigarettes.

      I want to reiterate that school curricula are incredibly controllable at the local level. Check your school board. Rally for a no-AI policy in lessons, and perhaps in teaching materials. This at least is one step we can take.

  • Mothra@mander.xyz

    Do you guys think some forms or applications of AI will eventually be outright banned, much as mercury, cocaine, and heroin were initially used as medicines for all sorts of ailments and later withdrawn?

    • JustTesting@lemmy.hogru.ch

      Not really, or it will take a long time. We already struggle to do this with social media, knowing full well that there have been issues since 2010-2016. And politicians and governments still use Twitter, a nonconsensual-porn-generator platform, for communicating with the public.