A week and a half ago, Goldman Sachs put out a 31-page report (titled "Gen AI: Too Much Spend, Too Little Benefit?") that includes some of the most damning literature on generative AI I've ever seen.

The report includes an interview with economist Daron Acemoglu of MIT (page 4), an Institute Professor who published a paper back in May called "The Simple Macroeconomics of AI" that argued that "the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters expect." A month later, Acemoglu has only grown more pessimistic, declaring that "truly transformative changes won't happen quickly and few – if any – will likely occur within the next 10 years," and that generative AI's ability to affect global productivity is low because "many of the tasks that humans currently perform…are multi-faceted and require real-world interaction, which AI won't be able to materially improve anytime soon."

What makes this interview – and really, this paper – so remarkable is how thoroughly and aggressively it attacks every bit of marketing collateral the AI movement has. Acemoglu questions the belief that AI models will simply get more powerful as we throw more data and GPU capacity at them, and asks a pointed question: what does it mean to "double AI's capabilities"? How does that actually make something like, say, a customer service rep better?

While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using generative AI and "too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides." In essence, replacing humans with AI might break everything if you're one of those bosses who don't actually know what the fuck they're talking about.

Every commentator (both pro-AI and AI-sceptic) seems unaware of the science happening in protein design and docking, where ML is actually doing fantastic, never-before-done work, and could conceivably make drug design much faster (the catch is that once a design is done, it doesn't need to be reinvented – the protein chain can make the drug/compound for pennies). For drug revenues, however, the cost of design pales in comparison to the cost of clinical trials ($10–100 million versus $1–3 billion).

Jim Covello, Goldman Sachs' Head of Global Equity Research, believes that the combined expenditure of all parts of the generative AI boom — data centers, utilities and applications — will cost a trillion dollars in the next several years alone, and asks one very simple question: "what trillion dollar problem will AI solve?" He notes that "replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions [he's] witnessed in the last thirty years."

In plain English: generative AI isn’t making any money for anybody because it doesn’t actually make companies that use it any extra money. Efficiency is useful, but it is not company-defining. He also adds that hyperscalers like Google and Microsoft will “also garner incremental revenue” from AI — not the huge returns they’re perhaps counting on, given their vast AI-related expenditure over the past two years.

  • Kereru [he/him]@hexbear.net · 4 months ago
    I quite like Ed’s writing for a cathartic rant against the stupidity of AI.

    Has anyone got any reading recommendations on the LLM insanity from a Marxist perspective though? Assuming AI can replace labour in some industries, it immediately comes up against the labour theory of value (LTV), with the value of the output immediately going to almost zero. Companies therefore have to maintain monopolistic false scarcity, which of course tech companies are already trying to do, but it seems to have wider implications for the economy – technofeudalism, I guess.