Hopefully this sets a precedent for other companies thinking of replacing their employees with language models
It’s setting the precedent that I’m trying out every chat bot owned by a company to get free shit now.
Companies are about to find out just how expensive it is to remove front line labour.
They don’t care. The executives that made this decision already got their bonus for it. If they have to retrench, they’ll simply try this again in a few years.
Corporate and stock-market incentive structures are…perverse. They incentivize very short horizons, usually a quarter or at most a year. We’d have a much less sick society if those in charge weren’t allowed to realize gains for at least five to ten years.
And yes, I’m aware that communism worked on the idea of a five- or ten-year plan and it had problems dealing with short-term supply-chain issues. Market-based solutions work great for things like warehousing, logistics or distribution because the feedback is immediate and the costs aren’t externalized. Where the costs are external and long-term, but the profit is realized in the short term, market solutions fail.
I think that with people constantly figuring out how to game the GPT chat bots, if we see a few more rulings like this where companies are liable for the chat bot’s responses, we’ll see a shift back towards “dumb bots” where there’s explicit control over the responses. If people realize you can get free stuff just by manipulating a chat bot, and a company is liable for what the chat bot says…I just don’t think it’s tenable for them.
They wouldn’t have done it without crunching some numbers, but if they didn’t consider the system to be fallible, then it’s on them for not thinking it through. They’ll develop it more to get a better product, but that costs them money, and ideally that cost will end up being more than simply paying people to do the work.
Air Canada argued that it can’t be held liable for information provided by one of its “agents, servants or representatives — including a chatbot.”
What a load of bullshit. If you speak to someone working for a company, they are an agent of the company and the company is liable for what they say. That’s true of a person or a chat bot - arguably it is more true of a chat bot or web page, since the company has ample opportunity to ensure its responses are correct beforehand.
The takeaway from that story: document every interaction you have with a GPT-powered chatbot of a company you do business with.
And don’t call it AI, it’s “spicy autocomplete” at best, there’s nothing intelligent about it.
Well I think it’s more intelligent than a beetle
Let’s find the loopholes in all these customer service chat bots, boys!
Hey, if you end up with free stuff, it’s their employee’s fault. 🤗
Good