A story about an AI-generated article itself contained fabricated, AI-generated quotes.
Archived version: https://archive.is/20260215215759/https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/
What a shame. I’ve subscribed to ars for years. Their response was disappointing: it doesn’t explain what happened or what they’re doing to make sure it doesn’t happen again.
Nothing about how they handled this makes me trust that they won’t do it again.
I think their response is perfectly reasonable. They took the article down and replaced it with an explanation of why, and posted an extremely visible retraction with open comments on their front page. They even reached out and apologized to the person who had the made-up quote attributed to them.
There are so many other outlets that would have just quietly taken the original article down without notice, or perhaps even just left it up.
But like, what am I supposed to do when Senior AI Reporter Benj writes his next piece? Ars works because the writers are generally experienced in their topics and provide analysis and insight. Do we just accept that ChatGPT is the new head AI writer with a meat puppet? They need to address the trust issue before this is resolved.
Their retraction article makes it crystal clear that their reporters are not allowed to use AI output in articles at all, unless it’s explicitly for demonstration purposes. That rule was broken. They took appropriate action, apologized, and made a commitment to do better.
I, frankly, believe them - ars is the news outlet I’ve frequented longer than any other for a reason. I understand if it’s going to take more for you to believe them, but it’s just one mistake. It’s also not clear to me what they could have done in this situation that would have felt like enough to you? Were you hoping for a play-by-play of who entered what into ChatGPT, or a firing or something?
I’m also not sure I’d consider the saga over. It wouldn’t overly surprise me if at some point this week we get a longer article going into more detail about what happened.
I wouldn’t go that far. The article was posted Friday afternoon and blew up over the weekend. Once the problem was known, the article was taken down quickly. We’ll see what happens when the editorial staff is back in the office on Monday.
They already posted their response: https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
EDIT: it’s the lack of acknowledgement that they didn’t discover it themselves but that the contributor had to go in and correct it, how they locked and deleted the article, etc. I was expecting a bit more tbh
Benj Edwards, one of the authors of the offending article, has posted an explanation, taking the blame and clearing his co-author.
Thanks for linking this. I hope ars makes it more visible. I’ll have to take Benj’s word.
That’s the thing with trust, hard to build, easy to burn.
Ah, that’s new from this morning. Seems I was a few hours out of date.
I hope for more. If they don’t have something substantial very soon they’ve got some serious problems maintaining the standards they profess to have, and we all should question the validity of their content.
Assuming they’re not lying about their internal policies (nobody has disputed that so far), this was already against the rules, and it was a writer fuck-up. Benj Edwards, “Senior AI Reporter” and co-author of that article, took the blame for it.
The article was also removed after 1 hour and 42 minutes, on a Friday. In my experience, that’s faster than most other publications manage to even add an update note (when they bother at all).
Apart from punishing this writer for breaking the internal policy, I’m not sure what else they can do here to satisfy your concerns.
As soon as I heard about this I knew 404 Media would be on top of it. Very happy with my subscription there.
It sucks that of all articles this happened to, it was to the “AI Agent hit piece” one.
That was such a ridiculous event, and having a good article from a big outlet covering it was important. Now not only was the article inaccurate, but all the discussion it generated went kaput and was overshadowed by this.
I don’t understand how hard it is to just, like, not cheat.
Have some self-respect.
Because of money. Why pay someone to do actual work when you can get an AI to plagiarise and hallucinate for free?
How was this not caught by the editor?