In a similar case, the US National Eating Disorder Association laid off its entire helpline staff. Soon after, its chatbot was disabled for giving out harmful information.

  • Semi-Hemi-Demigod@kbin.social · 1 year ago

    The best analogy I’ve heard for LLMs is that they’re word calculators. You can give them a prompt or ask a question and they’ll spit out an answer. But, like a calculator, if you don’t know what you’re doing, you won’t know when the answer is wrong.

    I’ve found them really useful for learning new technologies like Terraform and Ansible, because it takes a lot less time than reading documentation or StackOverflow threads. But you need a way to independently verify the responses before you can actually rely on them.
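
    As a sketch of what that verification loop might look like with Terraform (the resource and bucket names here are just placeholders, not anything from the thread):

    ```hcl
    # Hypothetical snippet an LLM might suggest when asked for a simple S3 bucket.
    resource "aws_s3_bucket" "logs" {
      bucket = "example-log-bucket" # placeholder name
    }

    # Independent checks that don't rely on trusting the model:
    #   terraform validate  -> catches syntax and internal-consistency errors
    #   terraform plan      -> shows exactly what would change, before anything is applied
    ```

    The point is that the toolchain itself, not the model, is the source of truth: `terraform plan` forces the suggestion to prove itself against the real provider before anything runs.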