Artificial intelligence-powered writing assistants that autocomplete sentences or offer “smart replies” not only put words into people’s mouths, they also put ideas into their heads, according to new research.
I bet it’s more pernicious because it’s so easy to incorporate AI suggestions. If you do your own research, you have to think a bit about whether the references or search results might be bad, and you still have to put the information in your own words so you don’t offend the copyright gods. With the AI’s help, the spelling is good, the sentences are perfectly formed, the information is plausible, and it’s probably not a straightforward copy, so why not just accept it?
I’ve just read the abstract of the study, but it doesn’t seem to be about people mindlessly copying the AI and producing biased text as a result. Rather, it’s about people seeing the points the AI makes, thinking “Good point!” and adjusting their own opinion accordingly.
So it looks to me like it’s just the effect of certain viewpoints getting more exposure.
Those seem like questions for more research.