Full doc title: “The AI Doc: Or How I Became an Apocaloptimist”

Per wiki:

The AI Doc: Or How I Became an Apocaloptimist is a 2026 American documentary film directed by Daniel Roher and Charlie Tyrell. It is produced by the Academy Award-winning teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).

What to say here? This is a doc being produced by the producer and one of the directors of Everything Everywhere All At Once, who notably have been making efforts to, uh, negotiate? I guess? with AI companies vis-à-vis making movies. Anyway, the title is a piece of shit, and this trailer makes it look like this is just critihype: the movie. I guess we’ll hear more about it in the coming month.

Really interesting that this is framed as being brought about by thinking about the director’s child, given Yud’s recent comments about how one should raise a daughter if one held certain beliefs about AI.

  • CinnasVerses@awful.systems · 11 hours ago

    Emily M. Bender says it’s framed around doomers, boosters, “a third way,” and CEOs: “The fact that many of his interviewees are singing from the same hymnal is underscored by edits wherein he splices together clips from multiple (3? more?) speakers into one sentence.”

    I have mixed feelings about her pop book.

  • lurker@awful.systems · 2 days ago

    I poked around the IMDB page, and there are reviews! Currently it’s sitting at an 8.5/10 with 31 ratings (though no written reviews, it seems). The Metacritic score is 51/100 with 4 reviews, and there are 4 external reviews.

    • swlabr@awful.systemsOP · 1 day ago

      I read a review (the one hosted on the Ebert site) and it seems like this just falls into one of the patterns we’ve already seen when people not steeped in the X-risk miasma engage with it. As in, what should be a documentary about how the AI industry is a bubble and all the AI CEOs are grifters or deluded or both is instead a “somehow I managed to fall for Yud’s whole thing and am now spreading the word” type deal. Big sigh!

  • dovel@awful.systems · 2 days ago

    Strange that Big Yud is missing from the cast on IMDB, but this could be a simple oversight since the movie is not fully out yet. Unfortunately, the rest of the cast contains a lot of familiar faces, to put it mildly.

    • lurker@awful.systems · 2 days ago

      Sam Altman and the other CEOs being there is such a joke: “this technology is so dangerous, guys! of course I’m gonna keep blocking regulation for it, I need to make money after all!” Also, I’m shocked Emily Bender and Timnit Gebru are there, aren’t they AI skeptics?

      • 🆘Bill Cole 🇺🇦@toad.social · 2 days ago

        @lurker I don’t know that I’d call them skeptics universally; they are experts in the AI field who are EXTREMELY skeptical of the TESCREAL complex and of the *hype* around the current fad of LLM and image generation tools.

        Whatever you call them, it’s *positive* that a documentary includes conflicting viewpoints, from the people who hold them. The plausible range of near-term AI developments is smaller than the range of widely-held expectations. A documentary has to address the crazies & the skeptics.

        • lurker@awful.systems · 2 days ago

          I took a deeper look into the documentary, and it does go into both the pessimist and optimist perspectives, so their inclusion makes more sense. And yeah, I was trying to get at how they’re skeptical of the TESCREAL stuff and of current LLM capabilities.

  • lurker@awful.systems · 3 days ago

    My god, I just cringed so hard. I thought the book would be the end…

    Also, yeah, someone pointed this out on old SneerClub, but Yud loves using kids to illustrate his AI fears, and, to beat a very dead horse here, that’s a weird thing to do in his case.

    If anyone here wants to jump on the grenade and watch it/acquire a transcript for the rest of us to sneer at, you’ll be my hero.

    • lurker@awful.systems · 2 days ago

      Also, what the fuck does “apocaloptimist” mean??? Does it mean he’s optimistic about our chances of apocalypse??? (Which makes no sense, just say pessimist.) Has he finally gone crazy and is now saying that apocalypse is the optimistic outcome?

      • Architeuthis@awful.systems · 2 days ago

        I mean, they mostly don’t have a problem with AI instances inheriting the earth as long as they’re sufficiently rationalist.

      • swlabr@awful.systemsOP · 2 days ago

        Pure speculation: my guess is that an “apocaloptimist” is just someone fully bought into all of the rationalist AI delulu. Specifically:

        • AGI is possible
        • AGI will solve all our current problems
        • A future where AGI ends humanity is possible/probable

        and they take the extra belief, steeped in the grand tradition of liberal optimism, that we will solve the alignment problem and everything will be ok. Again, just guessing here.

        • Soyweiser@awful.systems · 2 days ago

          According to a site: https://apocaloptimist.net/the-apocaloptimist/

          “An Apocaloptimist sees the trouble, but is optimistic we can do anything–including fixing all the world’s problems”

          So if Jesus wins the war during the second coming, all problems are fixed.

          (The thing is also nuts: “we are the people actually working on fixing things [by hoping AGI will fix it all for us]”. My brother in Eschatology, you are running a podcast. Sorry, the guy is unrelated to the AGI people; they are just using his term.)

          E: It does seem the site itself isn’t about AI, so they just stole this guy’s term. Nope, they just took this clean energy guy’s term; sorry about sneering at him. He seems to actually want to introduce clean energy and works hard for it (though that seems to be a lot of conventions and blogging, so buying ourselves out of the capitalist problems), as far as I can tell.

        • lurker@awful.systems · 2 days ago

          My personal guess is that “apocaloptimist” is just them trying to make a “better” term for “pessimist”.