And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.
But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.
you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.
why do you think hallucinating autocomplete can make rules-based decisions reliably
why do you think this is simple
good, use your Excel spreadsheet and not a tool that fucking sucks at it
You should not need an AI to do that if it’s not a freeform text input?
the post history is very infosec dot pub
a terrible place for both information and security
and a terrible pub
Jury’s still out on the Damage Over Time effect.
citation/link/reference, please
“AI” in the context of the article is “LLMs”. So, the definition of not trustworthy.