‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree

A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.
Well, would you look at that, it’s playing out exactly the same as every other technological advancement ever. Instead of using it to reduce employee workloads and maintain an equilibrium of output, it exploits them by brute-forcing increased productivity with no changes to compensation and the capitalists hoard even more of the profits for themselves.
I mean, did you read the article?
The context of that quote was about people using AI to write shitty stories and then submit them for review by humans. They weren’t complaining about AI that was supposed to help them at work, being used to hurt them at work…
In fact, the entire rest of the article is just one long anecdotal story from a single union leader for a very specific (though broadly represented) trade group.
There’s almost nothing of substance here and I’m shocked your comment is so highly upvoted.
Exactly what I keep saying when people start blaming the tools being used for automation. Productivity is up and up and up, but none of that has been given back to the workers in the past fifty years. If I try to find dialogue on that issue, I run into a mountain of blatant propaganda defending the continued robbery of the middle and lower classes.
Temporarily embarrassed millionaires will lick the boots of capitalism in the naive hope of pulling themselves up by their bootstraps.
Also consider the amount of work it puts on IT: implementing new tech without providing or approving the training (which only goes so far anyway).
In medicine, when a big breakthrough happens, we hear that we could see practical applications of the technology in 5-10 years.
In computer technology, we reach the same level of proof of concept and ship it as a working product, and ignore the old adage “The first 90% of implementation takes 90% of the time, and the last 10% takes the other 90%”.
Which adds up to 180%. And that is all you need to know about deadlines.
Yup, a complete 180
The same deadlines from MBA chuds who think nine women can birth a child in a month.
Because medicine employs a little technique called “ethics,” and there’s a strong ethical argument for restricting AI to research purposes only and completely outlawing any practical deployments, at least until the implications are fully understood.
AI may very well be the nuclear WMDs of our time, and we’re letting everyone play with it like a high school chemistry set.
It’s almost like medicine that goes into your body is very different from apps on the App Store. But other than that, yes, very interesting observation Cerevant@lemmy.world!
Yeah, let’s talk about self driving cars…
Yes, let’s.
Do you want me to Google if they are statistically safer than human drivers per mile, or should you?
Just sounds like business as usual: higher-ups who will never use the software, and have no idea what it’s like to actually do the job, decide to purchase software packages that are very pretty and have lots of graphs but function like absolute shit. I’m experiencing that right now. I can only have one tab open or it starts malfunctioning. How am I supposed to view two tickets at the same time, huh? Guess all my work is going to take 4x as long now.
I’m just barely escaping a situation like that at my work.
The vendor sold the package as a drop-in, turnkey solution that had premade integrations with almost everything else we have. Get our input data formatted right and everything would “just work”.
They had literally never encountered a hybrid AD or Exchange environment (part on-prem, part in the cloud, which is extremely common), and I just finished bridging the gaps in their “premade integration” with those. It took nearly six months, requiring extra stuff in Azure and scripted automation on-prem. That was one of the three main “drop-in” things we bought it for.
And we still aren’t really done. Apparently they had never encountered any places with legal requirements for data retention either. So we’re having to custom build everything for that and turn off their “clean up old data” functionality.
Bastards keep trying to set timelines on our side too, and we have to keep reminding them who’s paying whom.
I can’t stand those companies who effectively pitch these braindead implementations to management. I’m fighting a $50k “turnkey zero trust” implementation this month.
All of these tools need their hands held, so anything they generate or do still needs to be checked by humans who have their own separate workloads to worry about.
Agreed. Most of the devs I’ve seen at work that use it aren’t checking anything, and as a result their code is even more garbage than normal.
I use GitHub Copilot and will bounce code questions off of ChatGPT, but I never copy/paste. I’ll iterate through small bits and usually have an idea of approach before asking it anything.
Yeah, it only enhances productivity when you either have the time to check it or you have it generate something you can visually check as it is generated. I sometimes have it generate code for me when I’m working with known/studied libraries.
I really wish MBA programs and journalism schools would start teaching that technology doesn’t progress linearly (much less exponentially) forever. “Look what it can do today! Imagine in 5 years!” Is there still low-hanging fruit to pick? Because if not, it might be as good as it’s gonna be for a while.
There’s obviously exceptions where things have gotten steadily better for a very long time. But often, it’s a punctuated equilibrium situation around major scientific advancements. And way more often, business realities pause advancement. (Like maybe OpenAI’s next giant leap forward will have to wait on chip suppliers to expand capacity.)
I remember when ChatGPT first came out, and the torrent of dweebs posting AI responses to questions as if they were interesting. One even tried to argue with me using a ChatGPT response.
Hopefully, the novelty will wear off, and people will get the message that AI just isn’t that interesting.