• Conradfart@lemmy.ca
    8 months ago

    They are useful when you need to generate quasi-meaningful bullshit in large volumes easily.

    LLMs are being used in medicine now, not to help with diagnosis or to correlate seemingly unrelated health data, but to write responses to complaint letters or to generate reflective portfolio entries for appraisal.

    Don’t get me wrong, outsourcing the bullshit and waffle in medicine is still a win: it frees up time and energy for the highly trained organic general intelligences to do what they do best. I just don’t think it’s the exact outcome the industry expected.

    • huginn@feddit.it
      8 months ago

      That’s kinda the point of my comment above: they’re useful for bullshit, and that’s exactly why they’ll never be trustworthy.