Air Canada is responsible for chatbot's mistake: B.C. tribunal (ctvnews.ca)
371 points by brrrrrd on Feb 15, 2024 | 420 comments



I just want to give a shoutout to BC's Civil Resolution Tribunal. They take their job seriously, and make it as easy as possible for plaintiffs to submit a complaint.

I once had the misfortune of receiving a batch of defective enterprise-grade SSDs from a S company. That S company requires all RMAs to go through the sales channel you bought the SSDs from, but the sales company we used was out of business.

S refused all attempts to RMA, stonewalling us by saying that we needed to return the drives through the bankrupt company. When we explained that the company was bankrupt, S just ignored us. When we created a new RMA request, S's rep said we already had an open case, and that we needed to return the drives blah blah blah.

After 5 months, in a fit of rage I typed up a 2,000-word complaint, gathered all the emails/phone calls/photo evidence, and submitted a complaint to the CRT ($75 fee). I wasn't expecting much, but within 3 weeks I got a call from a corporate lawyer in S company's Toronto office, who asked me about the situation, apologized profusely, and asked if I would drop the case if they RMA'd all affected SSDs.

That day was great, to say the least.

Aside:

The CRT posts all their cases (that reached arbitration) here: https://decisions.civilresolutionbc.ca/crt/en/nav.do

Reading the cases is quite an entertaining way to pass the time.


BC's CRT is great when the happy path happens. They send legal letters to all parties so they are able to arbitrate, and that letter alone might be enough to get things resolved.

I had the misfortune of trying to use them for a company which had just stopped responding and in the end even though I did get the default judgment in my favour, actually enforcing the judgment still required me to go through the normal courts (which in my case was not worth the cost). But the process of dealing with CRT was nothing short of delightful.


Hmm, browsing through some of those cases, I'm starting to notice a pattern of Air Canada not taking these tribunal proceedings entirely seriously.

From this case:

> I find that if Air Canada wanted to raise a contractual defence, it needed to provide the relevant portions of the contract. It did not, so it has not proven a contractual defence. [...]

> In its boilerplate Dispute Response, Air Canada denies “each and every” one of Mr. Moffatt’s allegations generally. However, it did not provide any evidence to the contrary.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5254...

> Despite having the opportunity to provide documentary evidence, Air Canada did not do so.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5249... and https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5188...

> Having reviewed the evidence, I am satisfied, on the balance of probabilities, that [Air Canada] received the Dispute Notice and did not respond to it by the deadline set out in the CRT's rules.

From https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5230...

> Based on the proof of notice form submitted by the applicant, I am satisfied that [Air Canada] received the Dispute Notice and did not respond to it by the deadline set out in the CRT's rules.

(I also found a fun one that hinges on an Air Canada employee's apparent inability to do basic arithmetic: https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5225...)


That last one was great

> In his statement, Mr. Mackoff described distinct conversations he had with each employee, provided the supervisor’s name, and submitted the diagram he drew while trying to explain to the employees how to count the 10 calendar days. As Mr. Mackoff’s witness statement includes so much detail, and as Air Canada has produced no contrary statement, I accept that Air Canada refused to transport both Mr. and Mrs. Mackoff on February 15, 2022 and so breached its contract with them.


In the case at hand, it sounds like they decided they wouldn't spend over $800 to avoid losing another $800. Which is reasonable, I suppose.

(But another constraint of the CRT is that you can't bring representation -- so while it was an Air Canada employee involved, it wasn't their legal team.)


I see a lot of stuff like that in civil litigation in David v. Goliath situations.

Usually the large entity puts in little effort and relies on the fact that its (much more expensive) lawyers generally have more sway with the court (judge), are more persuasive even when their arguments are nonsense, and can just drag cases on for years until the smaller party is burned out.


One nice touch about BC's CRT is that neither side is allowed to be represented by lawyers without specific approval by the tribunal.


That's interesting. What does that mean in practice, do they send people that have a lot of legal training and experience, but aren't bar members?


Honestly not sure, but with only $800 at risk I can't imagine them even sending a paralegal.


Actually doing all of that would be quite expensive, so they don't and rely on people giving up or not knowing how to exert their rights.


S is such a disaster that I have established a personal policy of staying away from S and never buying their things (TVs, fridges, phones, SSDs, printers… anything they make). This policy has drastically improved my sanity. They produce expensive junk and have no regard for quality or security/privacy. I'd still buy things that have S components in them, but not whole S devices.


Yes - my house was full of Samsung appliances: stove, fridge, microwave, washer, dryer.

All garbage. They are all falling apart now or became uneconomically repairable within 5 years. Every single appliance repair business I called flat out wouldn't touch the fridge, for example. Apparently they don't provide service information or parts to 3rd parties (at least for the fridge).

I have moved on to a different brand, but I'm waiting to see if it's any more reliable.


When we were purchasing a clothes washer and dryer, Samsung had a special promotion. The sales rep at the store told us that the Samsung machines got the most complaints and she would recommend the LG machines. But we wanted that promotion, it was oh so nice. We bought a 5-year warranty just in case.

Sure enough, it's year 3 and the washer has stopped working. Repair guy came and decided he needs to order new parts to fix it. It's been a week or so without doing any laundry. Glad we purchased the extra warranty, but maybe we should have gone with the LG like the sales lady recommended.


The long-term brand you want is Miele. They're not cheap but my parents' dishwasher is approaching 30 years old.


Without knowing anything particular about Miele, all this anecdote suggests is that they were great thirty years ago. They could well have enshittified between then and now.

I'm at the point where I don't trust any brands at all anymore. The next time I need to make a major appliance purchase I'll buy a subscription to Consumer Reports and blindly follow their recommendation - I still trust them.


But isn't that the crux of the matter? You buy what Consumer Reports says, but the reviewers have no way of knowing if it will break down in 3 years. No one rates their gadget after three years, so we have a massive blind spot where the best thing is still word of mouth.

My parents bought a Miele washing machine; it's rock solid even after pushing ten years.


Yeah, totally. That's where branding used to be a valuable signal, under the assumption that a company wouldn't deliberately choose to destroy its long-term value. I don't believe that anymore, so I'll place what remains of my trust in reviewers I know are independent (God help us all if it turns out CR is taking kickbacks or something) and who I figure know more about, say, washing machines than I do.


Miele now has cheaper models so you may be right to be cautious.

Personally I have had issues with Bosch and don't trust them anymore.

The result is that now I either care about a specific look, some specific features, etc., and pay a bit more for them, or I just go for the cheapest.


Apparently Miele has started to have quality issues. But they still might be a good bet, if only for the fact that they are (probably?) the last family run business in the market.


I did not know that! Thanks. Indeed, "family run", depending on where they are in the internal-to-the-family management-transition cycle, is more encouraging to me than "publicly held". ("Private equity" is always and everywhere a huge red flag.)

It's depressing to me that we have to think about those things. I mean, "buyer beware" has always been the case, but it seems like we have to be more wary (or more wary of more factors) than we did a decade or two ago. Or maybe I'm just getting older. I dunno.


It might be just the normal process of capitalism in which you see take-over after take-over leading to ever bigger companies?


"Capitalism" is very much in the eye of the beholder, and different regulatory models create market economies with different incentives. What you're talking about is "normal" in certain places.

I didn't mean that, though, and I don't think it's what the other people in this thread did, either. I was thinking of the practice whereby private equity funds purchase companies and exploit the "brand equity" they've built up over the long term, whilst deliberately enshittifying them, in order to make a short-term profit for the new owners. That's been normalized, in some places, but I wish it were not, and would prefer that financial markets be regulated in ways that make it un-profitable.


Get Speed Queen next time. There's still quality out there; you need to stop listening to sales and do your research.


I also have had the same experience with my Samsung appliances. I paid top dollar for a nice looking set of laundry machines only to find out that they have garbage components inside of them that are bad by design.

I had a squeaking dryer fixed under warranty, only to have the same issue recur multiple times, because the rollers are just junk. It needs a new set like clockwork every year and a half or so; I have the replacement procedure memorized now.

The washer seems to be allergic to water and soap. I keep the unit in a dry location, leveled and raised off of the floor, yet, the body of the unit is rusting out, the chrome finish on the door is peeling, and when I clean it, the cycle labels wipe right off the front panel. The pump has also failed due to rust on the motor.

Absolute trash. I probably would have been better off with a $400 top loader.


> yet, the body of the unit is rusting out

Happened to my ~2016 Samsung. Turns out a hose clamp wasn't properly installed and water was dripping onto the steel floorpan and rusting everything nearby.

Fixed it and all rusting halted.


This isn't Harry Potter and they aren't Voldemort! Are all the replies correct in assuming this is Samsung? You can say their name.


What is a S company?


Samsung

I've also had to deal with their lack of a Canadian RMA process for 2 SSDs. Had to go back and forth with them, and try to convince Amazon and the Amazon seller to replace the defective drives.

Not buying any more Samsung memory products due to their essentially non-existent warranty in Canada.


Buy through Memory Express if you've got a store nearby, just need to keep the receipt for the warranty period. They have a price matching policy for items that have the same SKU as well.

MemEx is an authorized dealer so they'll take it back and send it back to Samsung for warranty work if you buy from them.


Took about 5 months of endless back and forth to get them to replace a defective SSD in Australia too, despite very clear consumer law guarantees here obligating them to help. Absolutely hopeless company to deal with.


Seems like an abbreviation for a USD $400B+ company that manufactures SSDs, with the commenter not naming it specifically (presumably, smartly, to avoid a SLAPP lawsuit/libel claim).


Samsung


Could be Seagate or SanDisk too ...


I expected this to be SanDisk. I bought one USB drive from them and after a few writes it locked itself from writing further.

Apparently I'm not alone and the only fix is to throw the thing away.


Even as a smaller-government proponent, I've been a big supporter of things like the CRT.

One of the core functions of the government is enforcement of contracts. While there are the courts, they are out of reach for most people either due to skill level or financial constraints.

Having a simple, low cost, easily accessible way to resolve contract issues puts every member of society on a more even footing when it comes to economic interactions. If we're going to build our society based on capitalism and the ability for parties to enter into contracts for things like employment, buying/selling, housing, etc, having an efficient means to resolve disputes seems like a no-brainer.


Can we please name the company rather than make people guess?


If I am trying to interact with a company and they direct me to their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I have to double-triple check elsewhere to make sure that the chatbot is correct, or if anything the chatbot tells me is non-binding, then what's the point of using the chatbot in the first place? If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?

If a company is going to provide a tool, they should take responsibility for that tool.


Yes, I think people underestimate the number of imagined LLM use cases that require accurate responses, to the point that hallucinations will cost money in fines and lawsuits.

This is a new frontier in short-sighted customer service staffing (non-staffing in this case). The people who are on the front line communicating with customers can convert unhappy customers into repeat customers, or into ex-customers. There are a few brands I won't buy from again after having to jump through too many hoops to get (bad) warranty service.


It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuits.

The bar LLMs have to clear to beat the average front-line support operation isn't that high, as your own experience shows. And compared to a large force of badly paid humans with high turnover, LLMs are pretty consistent and easy to train to an adequate level.

They won't beat great customer support agents, but most companies don't have many of those.


>It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuit

A human will be more likely to say "I don't know" or pass you along, rather than outright lie.


I find it common for human customer support people to give inaccurate information. I don't know about "outright lying", but I've had people tell me things that are factually incorrect.


Depends. Saying "thing X should not fail" is factually incorrect, when you called because thing X failed.

However, I would not expect an airline customer support rep to make up a completely fictional flight that has never existed. Maybe they could confuse flights or read a number wrong, but making one up?


Humans won't fabricate too much, but when confronted with yes/no questions where they have a 50-50 shot of being right and any blowback will likely land on someone else, they'll answer whatever gets you out of their hair.

Case in point, I asked my bank if they had any FX conversion fees or markup. Guy said no. I asked if there was any markup on the spread. Said no. Guess what? They absolutely mark up that spread. Their exchange rates are terrible. Just because there isn't a line-item with a fee listed doesn't mean there isn't a hidden fee in there. He's either incompetent or a liar.
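To make the spread mechanism concrete, here's a sketch with illustrative numbers (the rates below are assumptions, not my bank's actual rates):

    # How a "no fee" FX conversion still costs you through the spread.
    mid_market = 1.35   # CAD per USD, assumed mid-market rate
    bank_rate = 1.39    # CAD per USD after the bank's unstated markup
    cad = 1000.0

    usd_mid = cad / mid_market   # ~740.74 USD at the true rate
    usd_bank = cad / bank_rate   # ~719.42 USD at the bank's rate
    hidden_pct = (usd_mid - usd_bank) / usd_mid * 100
    print(f"Hidden fee: {usd_mid - usd_bank:.2f} USD ({hidden_pct:.1f}%)")
    # -> Hidden fee: 21.32 USD (2.9%)

No line item anywhere, but you'd be out about $21 on a $1,000 conversion.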


Maybe... Over two decades ago (holy crap, I'm old), I used to work in the call center for a major airline. I was the guy you got when you complained to the 1st level rep that you wanted their manager, then you got sent to the 2nd level rep and wanted their manager. And that was me.

90% of my job was undoing and compensating passengers for the incorrect information either the phone agent or gate agent gave them. The other 10% was dealing with workarounds to technical issues in our booking software.


> It's not like human call center staff has never given anyone wrong information, or cost companies money in fines and lawsuits.

If a company representative told me in writing (perhaps via email support) that I could claim a refund retroactively, and that turned out to not be their policy, I would still expect the company to honor what I was told in the email.

Phone calls are more difficult because there is no record of what was said. But if I had a recording of the phone call... I'm not actually sure what I would expect to happen. It's just so socially unusual to record phone calls.


> It's just so socially unusual to record phone calls.

Is it? I cannot remember the last time I called some business where I did not get a "this call may be monitored or recorded for quality and training purposes…" message. Whatever perceived social hangups the companies had, they got over them, and you don't even need to ask in a two-party-consent jurisdiction; it's already taken care of, so just record the call.


Which implies most humans calling any form of customer service should probably have a local audio recording, because otherwise you're in a lopsided witness situation. Don't want there to be a recording? That call just happened to not be recorded. Notably, this is getting difficult anyway with how easy it is to manufacture vocal data.


Sure, but there is such a thing as “the human element”. Humans aren’t perfect, and that is the expectation. That is not the same case with computers.

And especially for something where it's just pulling data from an internal system, there is absolutely no reason to invent made-up information, and saying "well, humans do it all the time" is just an excuse.


Yes, and further, expectations-wise:

On the phone with a customer service rep, I might understand a little wishy-washy answer, a slip of the tongue, or a slightly inaccurate statement. I've never really had a rep lie to me; usually it's just "I don't know"s and escalation as needed.

There is something about the written word from a company that makes it feel more like a "binding statement".


It's still way too easy to send LLMs off on a complete tangent of incoherent rambling, opening yourself up to the LLM making written statements to customers that you really don't want.

I recently asked some LLMs "How many gallons in a mile?" and got some very verbose answers, which turned into feats of short-story writing when I refined the question to "How many gallons of milk in a mile?"


Only because the models have seemingly been trained just on generating text that matches a prompt, i.e. prompt completion, rather than on knowledge retrieval/parsing/organisation.

If part of the training were to use only knowledge sourced from a vector DB, with trained knowledge allowed only for grammar rules, phrasing, or rewriting information, then I think it would do a lot better (see the sketch after this comment).

It doesn't seem like many models are trained on prompts like "Question Q" -> "[no data] I'm sorry but I don't know that" being accepted during training.

This would help immensely, not just for chatbots but for personal use too. I don't want my LLM assistant to invent a trip to Mars when I ask it "what do I have to do today" and my calendar happens to be empty.
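A minimal sketch of that constraint, assuming a hypothetical setup: KNOWLEDGE_BASE, retrieve(), and llm_complete() below are made-up stand-ins for a real vector store and model client.

    # Sketch: the model may only answer from retrieved text; empty
    # retrieval short-circuits to an explicit "I don't know".
    KNOWLEDGE_BASE = {
        "bereavement fares": "Bereavement fares must be requested before "
                             "travel; no retroactive refunds are offered.",
        "baggage allowance": "One checked bag up to 23 kg is included.",
    }

    def retrieve(query: str) -> list[str]:
        # Stand-in for a real embedding search: naive keyword overlap.
        q = query.lower()
        return [text for topic, text in KNOWLEDGE_BASE.items()
                if any(word in q for word in topic.split())]

    def llm_complete(prompt: str) -> str:
        # Stub for a real LLM call; imagine constrained generation here.
        return "[answer phrased from the supplied context only]"

    def answer(query: str) -> str:
        context = retrieve(query)
        if not context:
            # No grounding found: refuse instead of letting the model guess.
            return "I'm sorry, but I don't know that."
        prompt = ("Answer ONLY from the context below; use trained knowledge "
                  "for phrasing, not facts.\n\nContext:\n"
                  + "\n".join(context) + "\n\nQuestion: " + query)
        return llm_complete(prompt)

With an empty calendar, retrieve() returns nothing and the assistant says so, instead of inventing a trip to Mars.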


I just tried the latter with gab’s AI and it was excellent.


The bar isn't even that high.

They only need the lawsuit/settlement payouts to total less than the amount the company saves through automation.


To me that's totally fine. I don't even particularly care whether the LLM is better or not. The only thing that really matters is that if you are gonna use that LLM, then when it inevitably messes up you don't get to claim that it wasn't your fault because computers are hard and AI is an emerging field. IDGAF. Pay up. The fact that you dabble in emerging technologies doesn't give you any excuse to provide lesser services.


Right: whether you employ a person or some software that lies, the liability should be the same.


I think you're underestimating the quality of customer support. People are going to be out there testing every flaw in the support system, staffed or unstaffed. LLMs have no hope.


>that hallucinations will cost money in fines & lawsuits.

Sure. They are now out about $600. They probably already laid off 500+ customer service jobs costing conservatively $30k a year each, not including management, training, health benefits, etc. I don't think it will make a difference to the ivory-tower C-levels. We will just all get used to a once-again-lower-quality help/product. Another great "enshittification" wave of the future with "AI".

It also assumes that the customer service people don't make mistakes at a similar level anyway.

Another "new normal." How come anything that is a "new normal" is never good?


> Another "new normal." How come anything that is a "new normal" is never good?

If it allows them to reduce costs (and there's enough competition to force them to pass that on as reduced prices), I'm fairly happy with a new normal.

See also how air travel in general used to be a lot more glamorous, but also a lot more expensive.


> and there's enough competition to force them to pass that on as reduced prices

I found the bug.


Cynicism aside, air travel is one of the industries with pretty healthy levels of competition. (At least in Europe and South East Asia. I haven't spent much time in North America, so can't judge the market there.)

People love to hate e.g. Ryanair, but their effect on prices is felt throughout the industry, even if you never take a single Ryanair flight.


Yeah, they pass those cost savings right on to record corporate profits for the last 20 years...


Huh? Airlines are notorious for being bad for investors.

(And even without looking up any data, I find your 'record profits for the last 20 years' hard to square with my memories of covid.)

EDIT: I tried to find some indices for airlines. The closest I found was https://finance.yahoo.com/quote/JETS/performance/ which didn't exactly have a stellar performance.

So I'm not sure where you get your claim from?


I wasn't referring to airlines specifically; I see how I was unclear now. We are in a decade-plus era of record corporate profits, yet incomes are stagnant and costs are rising.

Airlines are weird. I think Warren Buffett once said something about airlines being the most complicated way to guarantee losing money as a business, or something like that.


Depends how long it takes to get the $600 and whether you need to be a customer to get it. I know many people who would happily ask for that money once a week.


With RAG it's entirely possible to eliminate essentially 100% of hallucinations, given you are OK with responding "I don't know" once in a while. These situations likely come from a poorly implemented chatbot, or from a decision that "I don't know" was not acceptable; really, that should be a cue to send you to a real human, as in the sketch below.
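A sketch of that hand-off logic under assumed names and numbers (the Hit type, the stubbed search(), and the 0.80 threshold are all illustrative):

    # Sketch: gate on retrieval confidence; below the threshold the bot
    # does not answer at all -- it escalates to a human agent.
    from dataclasses import dataclass

    @dataclass
    class Hit:
        text: str
        score: float  # similarity from a hypothetical vector search

    def search(query: str) -> list[Hit]:
        # Stand-in for a real vector-store query.
        return [Hit("Bereavement policy: request the fare before travel.", 0.41)]

    THRESHOLD = 0.80  # assumed cut-off; would need tuning on real traffic

    def route(query: str) -> str:
        hits = [h for h in search(query) if h.score >= THRESHOLD]
        if not hits:
            # "I don't know" is the acceptable answer here, not a guess.
            return "I don't know -- transferring you to a human agent."
        # Only now would an LLM be allowed to phrase an answer from hits.
        return "Grounded answer based on: " + hits[0].text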


This claim seems wildly inaccurate, as even with GPT-4 in a single conversation thread with previous human-written answers included, a repeat of a similar question just resulted - in my testing today - in a completely hallucinated answer.

I think your claim might be based on anecdotal testing. (I used to have that same feeling after my first implementation of RAG)... Once you get a few thousand users running RAG-based conversations, you quickly see that it's "good enough to be useful", but far from being as dreamy as promised.


There are no guarantees with RAG either, and RAG only works when the answer to the question is already printed out somewhere explicitly in the text, otherwise it’s definitely prone to hallucinate


Yeah, RAG can't provide such guarantees. Moreover, even if the correct answer is printed somewhere, LLM+RAG may still produce a wrong answer. Example from MS Copilot with GPT-4: https://sl.bing.net/ct6wwRjzkPc It claims that the OnePlus 6 has a 6.4-inch display, but all the linked pages actually say it's 6.28 inches. The display resolution and aspect ratio are also wrong in the response.


It's funny, it seems to have a lot of trouble extracting tabular data, which arguably is one of the things I hear people trying to do with it.


do the people managing the chatbot know that though?

this shit gets sold as a way to replace employees with, essentially, just the middle manager that was over them, who is now responsible for managing the chatbot instead of managing people

while managers are often actually not great at people management, it's at least a somewhat intuitive skill for many. interacting with and directing other humans is something that many people are able to gain experience with outside of work, since it's a necessary life skill unless you're a hermit. furthermore, as a hedge against managerial ineptitude, humans are adaptable creatures that can recognize their manager's shortcomings and determine when and how to work around them to actually get the job done

understanding the intricacies of training a machine learning system is a highly specialized and technical skill that nobody is going to pick up base knowledge for in the regular course of life. the skill floor for the average person tasked with it will be much lower than that of people management, and they will probably fuck up, a lot

the onus is ostensibly on AI system vendors to make their systems idiot-proof, but how many vendors actually do so past the point of "looks good enough to close the sale in a demo"? designing such a system is _incredibly_ hard, and the unfortunate reality is that if you try, you'll lose sales to snake oil salesmen who are content to push hokum trash with a fancy coat of paint.

these systems can work as a force multiplier in the hands of the capable, but work as an incompetence magnifier in the hands of the incapable, and there are plenty of dunning-krugerites lusting to magnify their incompetence


Well if incompetence is cheaper to implement, out come the lawyers to erase whatever perceived savings there were.


The fines and lawsuits may be way cheaper than human staff.


Especially once we have ai lawyers ;)


This initially sounded pretty good until I thought it through. Democratizing access to counsel and forcing troll lawyers to deal with trolling bots seems good, but it will shape up like other spam arms races while legal systems gear up to deal with the DDoS attacks. Good for spammers and the most entrenched players, bad for the public at large.

Already we can't manage to prosecute ex-presidents in a timely manner before the next election cycle. If delays seem absurd now, what will it be like when anything and everything remotely legal takes 10+ years and already sky-high costs triple?


Don't worry, AlphaJudge will provide swift justice at scale.


Judge, I'll refer you to my legal chatbot. I rest my case.


Your selection of generative counsel has been confirmed.

Please do not navigate away from this page while the trial runs. You will receive a notification when the verdict has been reached. This may take up to a minute.


Thank you for the best 2 paragraph cyberpunk story.


> then what's the point of using the chat bot in the first place?

The point is quite literally to make you give up trying to contact customer service and just pay them money, while getting their legal obligations as close to a heads-I-win, tails-you-lose situation as possible. That's not the mysterious part. The mysterious part is, why did they even let this drag into court for such a small sum?!


> "The mysterious part is, why did they even let this drag into court for such a small sum?!"

Because most people wouldn't bother taking it to court.

If they rolled over and paid up every time their chatbot made a mistake, that gets expensive, and teaches customers that they can easily be compensated if the chatbot screws up.

If they fight it tooth and nail and drag it all the way to court, it teaches customers that pursuing minor mistakes is personally painful and probably not worth it.

Scorched-earth defense tactics can be effective at deterring anyone from seeking redress.

It's the same fundamental reason why customer support is so hard to reach for many companies - if you make it painful enough maybe the customer will just not bother. A valuable tactic if your company imagines customers as annoying fleshy cash dispensers that talk too much. Having flown many times with Air Canada I can confirm that they do seem to perceive their passengers as annoying cash dispensers.


> Because most people wouldn't bother taking it to court.

Wait, couldn't they have tried to settle as soon as they realized it was actually going to court? I thought that was the modus operandi in the US... is it not a thing in Canada?


Well... they lost, and now it made the news. Are they going to keep the chatbot? Is the judge going to be so lenient next time, now that there's precedent of wrongdoing?


To disincentivize anyone from calling them on their BS in the future. If I know I will have to drag them through court just to get even a low payout, I will be less likely to fight, as it's not worth the hassle.


Unfortunately, my impression is that human customer support often works just as well as a current-generation chatbot: They'll tell you what you want to hear, because they get rated by customer satisfaction. You get the survey, indicate that your request was resolved to your satisfaction, the agent gets their bonus... and a week later you realize everything you have been told was a lie.

This got so bad that when a customer support agent at Amazon genuinely resolved my issue well once, I was surprised that it actually worked out as promised.


Really depends on the company. Generally, for high-quality onshore call centres, you do not use customer satisfaction as a metric for individual agents.

You’d use first contact resolution, average handle time, and their ability to stick to the flow they’re meant to (like transferring the customer to a survey after the call).

Like you say, satisfaction encourages lies. Much like sales commissions.


> If I am trying to interact with a company and they direct me to their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I'm trying to interact with a company and they direct me to a chatbot, I expect to get useful help 0% of the time, because if help was available via a mechanism on their site I would already have found it. I expect a chatbot to stall me as long as possible before either conceding that I need a human's assistance or telling me some further hoop to jump through to reach a real human.


Honestly, that's pretty similar to dealing with front-line level 1 human support.


I have a slightly higher expectation that first-line tech support can solve my problem if the problem is "you really should have had a self-service way to do this on your website but I'm sure you have a tool for this".

And if that isn't the case, I've mostly found that contrary to stereotype, many first-line tech support people are not such rote script-followers that they can't deal with skipping most of the script when the problem is obviously on their end and going to need real human intervention.


This is true of all LLMs: you cannot trust a single thing they say. Everything needs to be checked - from airline fee information to code.

I expect we'll see this sort of thing a lot more in the future, and probably a bit of a subsequent reversal of all of the sackings of humans once the issues (... and legal liability!) become clearer to people.


My internet went down and I could only get a chatbot on the website or a hang-up on the support line.

After the "estimated fix by ETA" came and went, I reported my ISP to the FCC. That resulted in a quick follow up from a real human.


Absolutely. That’s the big problem with the race to shoehorn generative AI into everything. If it has to be right the tools aren’t good enough yet.


> If you can't trust it 99% of the time

A chatbot should be either 100% or 0%. Companies should not replace humans with faulty technology.


Agree there. I put 99% as even human reps sometimes get it wrong, but in my experience whenever a human agent has made a mistake and relayed wrong info, the company would take appropriate steps to meet me at least half way.


Would this situation have been handled differently if a human support rep gave them incorrect information? I suspect they would have honored it and then put the rep (or all reps) through more training.

Another thought experiment: If a portion of the company's website was at least partially generated with an LLM, does that somehow absolve the company of responsibility for the content they have on their own site?

I think a company is free to present information to their customers that is less than 100% accurate -- whether by having chatbots or by doing something else silly like having untrained, poorly-paid support reps -- but they have to live with the risks (being liable for mistakes; alienating customers) to get the benefits (low operating cost).


I would say meet or beat human customer support agent accuracy; 100% is in many cases not achievable for machine or human.


then you can't have a chatbot

but if that is your standard, you can't have an airline either


but humans aren't 100% either... seems ridiculous to demand 100% from any implementation


If a human customer support person told me something and I made purchases based on that, and it turned out they lied, yeah I'd want recompense for that as well. You're allowed to be wrong (AI or human), you just have to face consequences for it.


I had that once with an airline; the customer rep made promises and afterwards they refused to honor them.

Coincidentally, the audio recording of the conversation was apparently deleted…


A company is partially bound by its representatives' actions, so humans can hit 100% despite making mistakes.

This is simply applying the exact same standards to a chat bot.


Maybe don't demand 100%, but instead responsibility for incorrect information.


If a human employee makes mistakes, the company will claim responsibility and in turn reprimand the human employee instead of claiming the human employee is its own "separate legal entity".


"In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.""

From https://www.cbc.ca/news/canada/british-columbia/air-canada-c...


IANAL, but it's astounding they took that as their defense, rather than pointing to a line (I hope?) in their ToS that says "This agreement is the complete terms of service, and cannot be amended or changed by any agent or representative of the company except by ... (some very specific process the bot can't follow)". I've seen this mentioned in several ToSs, I expect it to be standard boilerplate at this point ...


That does make sense, but on the flipside, let's say that they start advertising discounts on TV, but when people try to pay the reduced rate they say "according to our ToS that TV ad was not authorized to lower the price".

Obviously that wouldn't fly. So why would it fly with the AI chatbot's advertising discounts?


You'd normally expect a TV ad to be authorized to make offers.

You wouldn't normally expect an AI chatbot to be authorized to make offers. Its purpose is to try to answer common questions and it has been widely covered in popular media that they hallucinate etc.


I disagree. I expect any credible offer a company makes in an advertisement, on its website, using a chatbot, or through a customer service agent to be authorized by the company. Surely a corporation with lots of resources knows better than to program a chatbot to make fake offers; they'd get sued.

And they did get sued. Next time maybe they'll make sure software they connect to their website is more reliable.


> Surely a corporation with lots of resources knows better than to program a chatbot to make fake offers; they'd get sued.

They didn't program it to do that, it's a characteristic of the technology that it makes mistakes. Which is fine as the public learns not to blindly trust its answers. It seems silly to assume that people won't be able to figure that out. People are capable of learning how new things work.

This is like the people who set the cruise control in their car when it first came out and then climbed into the back of the car to take a nap. That's not how it works and the technology isn't in a state where anybody knows how to do better.


I agree with your cruise control analogy in a sense, but I think it's Air Canada that's misusing the technology, not the customer. If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results. I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.
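For instance, a minimal sketch using scikit-learn; the training examples, labels, and threshold are made up, and a real bot would need far more intents and tuning:

    # Sketch: a classifier picks from human-written answers, or abstains.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    TRAINING = [
        ("how do I get a bereavement fare", "bereavement"),
        ("my relative died, can I get a refund", "bereavement"),
        ("how many bags can I check", "baggage"),
        ("what is the checked bag limit", "baggage"),
    ]
    ANSWERS = {  # human-curated; the bot can never say anything else
        "bereavement": "Bereavement fares must be requested before travel.",
        "baggage": "One checked bag up to 23 kg is included.",
    }

    texts, labels = zip(*TRAINING)
    vec = TfidfVectorizer().fit(texts)
    clf = LogisticRegression().fit(vec.transform(texts), labels)

    def respond(question: str, threshold: float = 0.6) -> str:
        probs = clf.predict_proba(vec.transform([question]))[0]
        best = probs.argmax()
        if probs[best] < threshold:
            return "I can't help with that; connecting you to an agent."
        return ANSWERS[clf.classes_[best]]

The worst this bot can do is pick the wrong pre-approved answer; it cannot compose a new policy out of thin air.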


> If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results.

The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

> I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

If you use unreliable technology in an airplane, it falls out of the sky and everybody dies. If you use it in a chatbot, the customer can e.g. go to the company's website to apply for the discount it said exists and discover that it isn't there, and then be mildly frustrated in the way that customers commonly are when a company's technology is imperfect. It's not the same thing.

> There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.

But then it can only answer questions in the knowledge base, and customers might prefer an answer which is right 75% of the time and can be verified either way in five minutes than to have to wait on hold to talk to a human being because the less capable chatbot couldn't answer their question and the more capable one was effectively banned by the government's liability rules.


> The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

If, instead of a chatbot, this was about incompetent support reps that lied constantly, would you make the same argument? "We can't hire dirt-cheap low-quality labor because as company representatives we have to do what they say we'll do. It's so unfair"


It isn't supposed to be a company representative, it's supposed to be a chatbot.

If Microsoft puts ChatGPT on Twitter so people could try it, and everybody knows that it's ChatGPT, and then it started offering companies free Windows licenses, why should they have to honor that? It's obvious why it might do that but the purpose of letting people use it wasn't so it could authorize anything.

If the company holds a conference where it allows third party conference speakers to give talks, which everybody knows are third parties and not company employees, should the guest speakers be able to speak for the company? Why would that accomplish anything other than the elimination of guest speakers?


> The result would be a de facto ban on AI chatbots

No, the result would be a de facto ban on using them as a replacement for customer service agents. I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

They could put a disclaimer on it of course. To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chat bot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."


> No, the result would be a de facto ban on using them as a replacement for customer service agents.

But what does that even mean? If Ford trains a chatbot to answer questions about cars purely for entertainment purposes, or to get people excited about cars, a customer could still use it for "customer service" just by asking it questions about their car, which it might very well be able to answer. But it would also be capable of making up warranty terms etc., so you've just banned that thing and anything like it.

> I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

It's pretty unlikely we could ever keep them from lying. We can't even get humans to do that. The best you could do is keep them on a script, which is the exact thing that makes people hate existing human customer service reps who can't help them because it isn't in the script.

> To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chat bot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."

Which is exactly what's about to start happening, if that actually works. But that's as pointless as cookie banners and "this product is known to the State of California to cause cancer".


It's all in how it's presented; it should not be up to the customer or end-user to understand how technology running on the company's server, which might be changed at any time, might behave unreliably.

I expect something that's presented as customer service not to lie to me about the rebate policy. As long as what it says is plausible, I expect the company to be prepared to cover the cost of any mistakes, especially if the airline only discovers the mistake after I've paid them and taken a flight. Compensating customers for certain types of errors is a normal cost of doing business for airlines, and the $800 CAD this incident cost the airline is not an exorbitant amount. The safety valve here is that judges and juries do test against whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

If Ford presents a chatbot as entertainment and makes it really clear at the start of a session that it doesn't guarantee the factual accuracy of responses, there's no problem. If they present it as informational and don't make a statement like that, or hide it in fine print, then it says something like "the 2024 Mustang Ecoboost has more horsepower than the Chevrolet Corvette and burns less gas than the Toyota Prius", they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

Similarly, if Bing or Google presents a chatbot as an alternative to their search engine for finding information on the internet, and it says "Zak's photography website is full of CSAM", I'm going to sue them for libel.


> The safety valve here is that judges and juries do test against whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

Sure, but a billion people could each trick it into offering them $100, which would bankrupt the airline.

> they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

But all you're really doing is requiring everyone to put a banner on everything that says "for entertainment purposes only". Because if something like that gets them out of the liability then that's what everybody is going to do. And if it doesn't then you're effectively banning the technology, because "have it not make stuff up" isn't a thing they know how to do.


Courts probably aren't going to enforce any promise of money for nothing or responses prompted by obvious trickery, but they might enforce promises of discounts, and are very likely to enforce promises of rebates as the court in this case did.

If that means companies can't use chatbots to replace customer service agents yet, so be it.


> Courts probably aren't going to enforce any promise of money for nothing or responses prompted by obvious trickery, but they might enforce promises of discounts, and are very likely to enforce promises of rebates as the court in this case did.

But what does that matter? So someone posts on Reddit how to trick the chatbot into offering a rebate and then 75% of their customers have done it by the time they realize what's going on and now they're out of business.

> If that means companies can't use chatbots to replace customer service agents yet, so be it.

You're still not articulating any way to distinguish "customer service" from any other functioning chatbot. A general purpose chatbot will answer customer service questions, so how does this not just ban all of them?


And if I saw that disclaimer, I wouldn't use the tool. What's the point if you can't trust what it says? Just let me talk to a human who can solve my issue.


> What's the point if you can't trust what it says. Just let me talk to a human that can solve my issue.

That's the point of it -- you don't have to wait on hold for a human to get your answer, and you could plausibly both receive it and validate it yourself sooner than you could get through to a human.


> The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up

I think that banning lying to customers is fine.


ChatGPT is presumably capable of making something up about ChatGPT pricing. It should be banned?


> They didn't program it to do that, it's a characteristic of the technology that it makes mistakes

It sounds like you meant to say that they didn’t _intentionally_ program it to do that. They didn’t find the system under a rock and unleash it on the world; they made it.


Most of these companies didn't make it, they took an existing one and fed it some additional information about their company.


So what?


Are you asking me or the OP?


You, obviously.


I would expect an official tool that the company provides for customers to interact with to be authorized to give me information (including discounts and offers), and I would expect that information to be accurate and true.


What would it take to disabuse you of that notion now that your expectations have been observed to be in conflict with reality?

What you're describing isn't what you expect, it's what you wish were the case even though you know it isn't.


> Its purpose is to try to answer common questions

Yes. And therefore people should be able to assume that the answers are correct.

Some people have heard of ChatGPT, and some of those have heard that they hallucinate, sure. But that's still not that many people. And they don't know that a question answering chat bot like this is the same technology!


> And therefore people should be able to assume that the answers are correct.

Why is that a necessary requirement? Something can be useful without it being perfect.


If I am trying to interact with a company and they tell me to use their chatbot, I expect that chatbot to provide me with accurate answers 100% of the time (or to say it can't help me in the event that I ask a question that it's not meant to solve, and connect me to a representative who can).

If I have to double-triple check elsewhere to make sure that the chatbot is correct, then what's the point of using the chatbot in the first place? If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would I waste my time?


This is why I’m a bit vexed by all the hype around LLMs. It reminds me of talking to a friend’s mother who was suffering from dementia - she could have a perfectly lucid conversation with you and then segue into stories that were obviously fictions that existed only within her head. She was a nice lady, but not someone who you would hire to represent your company; she was considered disabled.

A while back another commenter called them a "demented Clippy", which about sums them up for me.


Yeah totally. LLMs have a lot of awesome use cases. But as chatbots, they need a lot of guardrails, and even then, I'm highly skeptical if they improve the experience over a simple searchable FAQs or docs.


> If I have to double-triple check elsewhere to make sure that the chatbot is correct, then what's the point of using the chatbot in the first place?

Because you can ask it a question in natural language and it will give you an answer you can type into a search engine to see if it's real. Before you didn't know the name of the thing you were looking for, now you do.

> If you can't trust it 99% of the time, or if the company says "use this, but nothing it says should be taken as fact", then why would i waste my time?

The rate at which it makes stuff up isn't 99%, is the point. For common questions, better than half of the answers have some basis in reality.


> You wouldn't normally expect an AI chatbot to be authorized to make offers.

I think only software engineers would think this. I don't think it is obvious to a layperson who probably has maybe never even used one before.


Of course language models can't be trusted, but it's not the customer's problem to think about chatbot's purpose, how it's implemented and whether it hallucinates or not.


If it was approved by the company, yes. But you wouldn't want Braniff Airlines to put out an ad for SouthWest Airlines advertising rock bottom lifetime tickets and have those be valid...


Courts often rule that you can’t use ToS to overcome common sense. ToS are not a get out of jail free card if your company just does stupid things.


How do those clauses actually work? If a rep does something nice for you (like give you something for free), could the airline say it never agreed to that in writing or whatever and demand it back? How are you supposed to know if a rep has authority to enter into an agreement with you over random matters?

But, to your question, my guess is that such a clause would basically be telling people not to trust their chatbot, which they don't want to do.


It's more to shield them from cases like a rep gifting you free flights for life.


I realize the intention but I'm wondering how it works legally given what the terms actually say.


What you are or aren't entitled to is written down in the terms of service. Support agents can only help interpret the terms for you. They may be authorized to go beyond that to some degree, but the company will also have the right to reverse the decision made by the agent.


I guess the original issue pointed out by the judge would still stand: how am I supposed to know which terms are to be assumed true and valid? Why would I assume a ToS hidden somewhere (Is it still valid? Does it apply to my case? Is it relevant and binding in my jurisdiction?) is more trustworthy than an Air Canada agent?


How is that enforceable? In many cases this is carte blanche for company representatives to lie to you. No one is going to read the ToS and cross reference it with what they're being told in real time. Moreover, if a customer was familiar with the ToS they would not be asking questions like this of a chatbot. The entire idea of having a clause like this while also running a "help" chatbot that can contradict it seems like bad faith dealing.


Those ToS statements overreach their capabilities a lot of the time. They're ammunition against the customer, but don't always hold up in the legal system.


Beyond the chatbot's error and the legal approach they took, this bad PR could have been avoided by any manager in the chain doing the right thing: overriding the decision and just giving him the bereavement fare (and then fixing the bot/updating the policy).


The claim is so outrageous that I wish there were a way (I assume there probably isn't) for the company or the lawyers to have been sanctioned outside what the plaintiff was asking for.


Straight out of I, Robot.


How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

Will Air Canada be liable for my friend going against company policy?


That's fraud because you're in cahoots with your friend.

If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

Anyway, the chat bot has no agency except that given to it by AC; unlike a human employee, therefore, its actions are 100% AC actions.

I don't see how this is controversial? Why do people think that laws no longer apply when fancy high-tech pixie dust is sprinkled?


And if a random AC employee said[0] they'd give you a billion dollars, you wouldn't be entitled to it because any judge or jury hearing the case would say a reasonable person wouldn't believe that. Unlike computers, which do exactly what they're told[1], the legal system applies sanity checks and social context.

[0] perhaps because they're disgruntled and trying to hurt their employer

[1] generative models are not an exception; they're a way of telling computers to generate text that sometimes contains falsehoods


> And if a random AC employee said[0] they'd give you a billion dollars, you wouldn't be entitled to it because any judge or jury hearing the case would say a reasonable person wouldn't believe that.

I'm sure that if the bot had said that the airline would raise your dead relative from the grave and make you king of the sky or something equally unbelievable the courts wouldn't have insisted Air Canada cast a crown and learn necromancy.


>If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

The company would be entirely within their rights to say 'this employee was wrong, that is not our policy, goodbye!'. This happens all the time with more minor incidents.


No idea about the US, but this very same issue was tested in France and parts of Germany in the late '90s, when some pay-TV companies (Sky or Canal+, I can't remember which) tried to cancel multiple subscriptions offered at extremely aggressive pricing by some of their agents. Courts concluded that the signed agreements superseded the official pricing and they had to offer the service for the entire length of the original subscription.


The difference is that was a signed agreement.

This chatbot merely said something was possible; no legally binding agreement occurred.


Where I live, "meeting of the minds" is necessary for a contract. Written or not. In this case, that meeting didn't happen. Due to the bullshit generator employed by Air Canada.

So there was no contract but a consumed flight. The court has to retroactively figure out a reasonable contract in such cases. That Air Canada couldn't just apply the reduced rate once they learned of their wrong communication marks them as incredibly petty.


That's far less likely to be true if the customer buys something based on the employee's erroneous statement. I suspect in an otherwise-identical case with a human customer service agent, the same judge would have ruled Air Canada must honor the offer.


Because their source of income depends on sprinkling fancy high-tech pixie dust!


> If a random AC employee gave you a free flight, on the other hand, you'd be entitled to it.

And if it was a billion dollars?


A random AC employee drinks too much and says "You are entitled to free flights for the rest of your life." Is Air Canada liable?


Since when are contracts enforceable when one party is drunk?


A random AC employee who is having a bad day and hates his employer says "You are entitled to free flights for the rest of your life." Is Air Canada liable?


No, because that's not "reasonable". My dad jokes that he's made a career off of determining what is "reasonable" and what isn't, and he's a contract attorney.

If you were standing at the customer service desk, and instead they said: "sorry about the delay, your next two flights are free", then all of a sudden this is "reasonable".


No, because no reasonable person would think that they had the authority to authorize that. Remember, the legal system is not computer code - judges look at things like intent and plausibility.


Valid contracts usually require consideration.


The legal concept is called "Apparent authority". The test is whether "a reasonable third party would understand that an agent had authority to act".

("Chatbot says you can submit a form within 90 days to get a retroactive bereavement discount" sounds perfectly reasonable, so the doctrine applies.)

https://en.wikipedia.org/wiki/Apparent_authority


>How is this different from me getting one of my friends to work at Air Canada and promise me a billion dollars to cancel my flight?

There is a common misconception about law that software engineers have. Code is not law. Law is not code. Just because something that looks like a function exists, you can't just plug in any inputs and expect it to have a consistent outcome.

The difference between these two cases is that even if a chat bot promised that, the judge would throw it out, because it's not reasonable. Also, the firm would have a great case against at least the CS rep for this collusion.

If your friend, a CS agent, promised you a bereavement refund (as the chatbot did), even though it went against company policy, you'd have good odds of winning that case, because the judge would find it reasonable for you to believe, after speaking to a CS rep, that such a policy would actually be honored. (And the worst that would happen to the CS rep would be termination.)


Likely because the claim was considered to be within the reasonable expectations of real policy.


Law and justice are not like a computer program that you can exploit and control without limits by being a hacker.

If the chatbot told them that they'd get a billion dollars, the courts would not hold Air Canada responsible for it, just as if a programmer had misplaced a decimal point and prices became obviously wrong. In this case, the chatbot gave a policy within reason and the court awarded the passenger what the bot had promised, which is a completely correct judgement.


This argument seems overly dramatic and distorted. Yes, in an outrageous situation like a billion-dollar mishap, most people would know something isn't right. But for a policy that appears legitimate, especially when it's replacing a human customer service rep, it's not that obvious. In these cases, Air Canada should definitely be held accountable.


Yes, that's exactly what I'm saying as well. Especially since they had already taken the customer's money.


The computer only does what they told it to.

What they told it to do was to behave very unpredictably. They shouldn’t have done that.


Not these ones...

These ones do what they "learned" from a lot of input data, using a process that is us mimicking how we think brains could maybe function (kinda/sort of, with a few unbiological "improvements").


Yes, these ones. Somebody told the computer to do all those things you just wrote.


Maybe this makes the point better:

Say your webserver isn't scaling to more than 500 concurrent users. When you add more load, connections start dropping.

Is it because someone programmed a max_number_of_concurrent_users variable and a throttleExtraAboveThresholdRequests() function?

No.

Yes, humans built the entire stack of the system. Yes, every part of it was "programmed". But no, this behaviour wasn't programmed intentionally; it is an emergent property arising from system constraints.

Maybe the database connection pool is maxed out and the connections are saturating. Maybe some database configuration setting is too small or the server has too few file handles - whatever.

Whatever the root cause (even though that cause was ultimately implemented by a human, if you trace the causal chain back far enough), this behaviour is an unintended, almost incidental side effect of it.
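To make that concrete, here's a toy back-of-the-envelope model (all numbers hypothetical, crude queueing math):

    # Nobody wrote "max_users = 500" anywhere; the cap falls out of
    # two unrelated settings interacting.
    db_pool_size   = 20     # connections in the pool
    request_time_s = 0.5    # seconds each request holds a connection
    wait_timeout_s = 12.0   # how long a queued request waits before dropping

    drain_rate = db_pool_size / request_time_s        # 40 requests/s leave the queue
    emergent_cap = db_pool_size + wait_timeout_s * drain_rate
    print(emergent_cap)     # 500.0 -- in-flight requests before drops begin

The 500 limit exists nowhere in the code; it emerges from the pool size and the timeout interacting.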

A machine learning system is like that, but more so.

An LLM, say, is "parsing" language in some sense, but ascribing what it is doing to human design is pretty indirect.

In a way, you typing words at me has been "programmed" into you by every language interaction mankind has had with you.

I guess you could see it that way, but I don't think it's a particularly useful point of view.

In the same way, an LLM has been indirectly "programmed" via its model architecture, training algorithm and training data, but we are nowhere near understanding the process well enough to consider this "programming" it yet.


This is different from a bug or hitting an unknown limitation—the selling point of this was “it makes shit up” and they went “yeah, cool, let’s have it speak for us”.

Its behavior incorporates randomness and is unpredictable and hard to keep within bounds on purpose and they decided to tell a computer to follow that unpredictable instruction set and place it in a position of speaking for the company, without a human in between. They shouldn’t have done that if they didn’t want to end up in this sort of position.


We agree that this is an engineering failure - you can't deploy an LLM like this without guardrails.

This is also a management failure in badly evaluating and managing the risks of a new technology.

We disagree in that I don't think its behaviour being hard to predict is on purpose: we have a new technology that shows great promise as a tool for working with language inputs and outputs. People are trying to use LLMs as general-purpose language processing machines - in this case as chat agents.

I'm reacting to your comment specifically because I think you are evaluating LLMs using a mental model derived from normal software failures, and LLMs or ML models in general are different enough to make that model ineffective.

I almost fully agree with your last comment, but the

> they decided to tell a computer to follow that unpredictable instruction set

reflects what I think is now an unfruitful model.

Before deploying a model like this you need safeguards in place to contain the unpredictability. Steps like the following would have been options (a rough sketch of the output-evaluation idea follows the list):

* Fine-tuning the model to be more robust over their expected input domain,

* Using some RAG scheme to ground the outputs over some set of ground truths,

* Using more models to evaluate the output for deviations,

* Business processes to deal with evaluations and exceptions, etc.
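For illustration, the "more models to evaluate the output" bullet could look roughly like this. A minimal sketch, not Air Canada's actual setup; draft_answer, passages_for and supports are hypothetical placeholders for the LLM call, the retrieval layer, and a second-model check:

    def guarded_reply(user_msg: str) -> str:
        draft = draft_answer(user_msg)       # untrusted LLM draft
        passages = passages_for(user_msg)    # retrieved policy text (the RAG part)
        if not supports(passages, draft):    # e.g. an NLI or judge-model check
            return "Let me connect you with a human agent."  # fail closed
        return draft

The key design choice is failing closed: when the draft isn't grounded in real policy text, escalate to a human instead of shipping the hallucination.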


No, if you conspire with your friend to get them to tell you an incorrect policy, then you have no reasonable expectation that what they tell you is the real policy. If you are promised a billion dollars even without a pre-existing relationship with the agent, you have no reasonable expectation that what they are promising is the real policy because it's an unbelievably large amount.

If you are promised something reasonable by an agent of the company who you are not conspiring with, then the company is bound to follow through on the promise because you do have a reasonable expectation that what they are telling you is the real policy.


Your example is significantly different.

The chatbot instructed the passenger to pay full price for a ticket but stated they could get a refund later. That refund policy was a hallucination. The victim here just walked away with the discounted fare as promised, not a billion dollars.


Because that would not be reasonable, and nobody would be surprised if Air Canada reneged on that. See, for instance, Leonard vs. PepsiCo.

If your friend promised you something reasonable in the course of carrying out their duties, and you honestly believed them, I think that would be legal and enforceable just as this case suggests.


> How is this different from me getting one of my friends to work at Air Canada

One major difference is the AI wasn't your friend, another is that you didn't get it hired at Air Canada, another is that the promise wasn't $1B, etc...


Your friend is not trained by Air Canada. The bot is Air Canada property.

If they decide it is reliable enough to be put in front of the customer, they must accept all the consequences: the benefits, like having to hire fewer people, and the cons, like having to make it work correctly.

Otherwise, woopsy, we made our AI handle our accounting and it cheated, sorry IRS. That won't fly.


No, it is more similar to Air Canada hiring a monkey to push buttons to handle customer complaints. In that case, the company knows (or should know) that the given information may be wrong, but accepts the risk.


The AI has authorization from higher up to be treated 1:1 as truth. Your friend does not have the authority to promise you a billion dollars in their employer's name.


Chatbots aren't people and people are actually separate legal entities responsible for their own actions.


People working for companies are sometimes separate legal entities responsible for their own actions, and sometimes they act on behalf of the company they work for and the company is responsible for their actions. It depends.


A chatbot (a computer) cannot be responsible for its own actions, so the only half of the coin left is "the company is responsible for its actions."


Computers are inanimate objects and are not recognized as legal entities.

The legal system recognizes that people, or groups of people, are subject to legal authority. This is a story about a piece of software Air Canada implemented which resulted in them posting erroneous information on their website.


What you are describing is 1) fraud, 2) conspiracy, and 3) not a policy that a reasonable person would take at face value.

It is very different than if an employee were to, in writing, make a statement that a reasonable person would find reasonable.


Weird straw man...

So replacing all their customer support staff with AI that misleads customers is OK? That's pants on head insane, so why spend time trying to justify it.


You didn't get your friend to do it, an employee just decided to. There is no conspiracy.


Good. Company outsources customer “service” to a completely unreliable piece of software, which goes to show how much they care about their customers. And then they argue in court that customers shouldn’t even trust their own software! So essentially they have increased profits by cutting customer service jobs, replaced humans with a stochastic parrot, and now don’t want to be responsible for the drawbacks of this decision. They want all of the upside and none of the downside.


Most likely they will just append to every chatbot answer: "Please make sure to double-check our policy at $URL - only our policy applies".


Fortunately, the tribunal rejected that tactic in this instance.

The chatbot included a link to the detailed rules, which contradicted what the chatbot told the customer.

> "While a chatbot has an interactive component, it is still just a part of Air Canada’s website ... It should be obvious to Air Canada that it is responsible for all the information on its website ... There is no reason why Mr. Moffatt should know that one section of Air Canada’s webpage is accurate, and another is not."


I really hope that wouldn’t get them out of it. In that case, Air Canada would still be misrepresenting their policies. Misrepresenting Air Canada policies to induce a purchase of a ticket may not legally be fraud, but it certainly feels like it should be. It’s also hard for me to see how that argument would square with this reasoning from the article:

> "While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website," he wrote.


In their defense, I've never seen an airline that cares about its customers... Otherwise they wouldn't be late 25% of the time.


Not necessarily so.

A couple weeks ago Oman Air cancelled the return leg of a long distance flight due to a change in schedule.

They offered to reroute me with the Qataris via Doha.

I preferred to cancel the (in principle non-refundable) flight and make different arrangements.

The money, ~$2,500, was credited to my card 4 days later. Including fees paid for preferred seats.

It's a shame that they stopped service to my city. Because it's a great airline, which always provided stellar service.


Those are your basic rights: You entered into a contract with the airline, and the airline failed to deliver. Of course you get your money back if the alternative solution is not satisfactory - whether or not the ticket was refundable doesn't even enter into it, as it was the airline that failed to deliver in the first place. That's not stellar service, that's just fulfilling their legal obligations.


I am not sure which airline represents your average experience, but in my experience almost all airlines will fight to "your death" not to give you anything back. In the cases where they do (especially European ones), it's because there is a law that requires them to.

What happened to the OP is, therefore, unusual.


I think the point being made is: it was refunded very quickly and without hassle. Other, less scrupulous/ethical airlines would try more tactics to redirect you to a worse flight or delay the refund (if it comes at all -- "oh, it was lost in our system").


Oman Air is trying to offer good service in a tight, extremely cut-throat market (the Middle East) where its competitors are giants (Qatar and Emirates) or cash-drunk (Saudi Airlines).

I flew through them last year. Check-in was a bit of a hassle but the plane/service and the layover were great. The price was very competitive (cheapest) and yet the plane was empty. I expected them to either fold or downsize.


My experience with Emirates was similar. Extremely generous refund policies and fast action.


I just can't believe it took a lawsuit to get an airline to fork over a few hundred bucks on something that was obviously their fault.


Yeah; this should've been well within some second-tier customer service manager's discretionary refund budget. Instead they've got a precedent-setting ruling that makes the chatbot a huge liability.


> Instead they've got a precedent-setting ruling that makes the chatbot a huge liability.

Good.


Recall when the Chevy dealership's chatbot agreed to sell a Chevy Tahoe for $1...

https://www.businessinsider.com/car-dealership-chevrolet-cha...


If Chevy dealerships had $1 bereavement Chevy Tahoes as a normal business practice, I think that one would've gone the other way.

In the Air Canada case, it was a clear good-faith effort to understand the rules of a fare he was legitimately entitled to.


I think eventually courts will decide that whatever a chatbot or employee says while working for the company is binding on the company, as long as there was no malicious intent on either the part of the employee or the customer.


Probably not "whatever", but anything that passes a "reasonableness" standard. It's totally reasonable to expect that if an airline offers bereavement fares that you could file the paperwork a month later because, like, there's a lot going on if you need a bereavement fare and getting the paperwork together can take time.

There are lots of things that an employee might say that would not be reasonable, even if they had no malicious intent.


Yeah, reasonableness is the usual requirement in law unless otherwise specified. The ruling we're talking about here uses reasonableness several times:

"Generally, the applicable standard of care requires a company to take reasonable care to ensure their representations are accurate and not misleading."

"I find Air Canada did not take reasonable care to ensure its chatbot was accurate."

"Mr. Moffatt says, and I accept, that they relied upon the chatbot to provide accurate information. I find that was reasonable in the circumstances."


Australian consumer law basically already does this (for employees) esp. in the context of e.g. assuring fitness for purpose etc.


If it hasn't happened yet for employees, why do you think it will happen for chatbots?


A clear "paper" trail.

What is described here does happen when employees send emails with explicit promises, but it gets harder when only the company has proof of the exchange (a recording of the call). Chatbots bridge that gap.


Some people do record calls to companies. I choose to understand "Calls may be recorded" as permission and say "Thank you", even if perhaps that's not always how it was intended. When calling small very sketchy companies (such as my last landlord before I bought somewhere to live) I ask for permission and hang up if they refuse. Oh you don't want your words recorded? Fine, you're getting a letter, hope you weren't in a hurry.

But Chat Bots often provide a "Save transcript" feature, or even default to emailing you a copy if you're in a Customer Service type environment where it knows your email. So those are both a lot easier than setting up call recording.


Yes. iOS and stock Android are really doing a disservice by blocking call recording when, in this day and age, most real phone calls are made to corporations or in other non-private settings.


Yep. Intent matters.


> chatbot a huge liability

This is an appropriate outcome, in my view. I'm as pro-AI as they come. But I also recognize that without a clear standard of service delivery, and in an industry inundated with M&A instead of competition, a chatbot isn't a helpful thing but a labor-cost-reduction initiative.


I hope they do become huge liabilities as it's irresponsible. I'm as excited as the next guy about the future, but companies shoehorning terrible "AI" to replace support channels is infuriating. The -only- good interaction I've had with a chatbot was the one Amazon uses or used to use, where you could click from a few options, and it'd just issue you a refund.


I wouldn't rule out that it was allowed to escalate on purpose by people internally who are politically against the chatbot. It would have been a great move.


And the issue here was purely procedural - we aren't talking about some damage they refuse to pay for, but merely some customer using the 'incorrect' process to claim something they are eligible for.


The problem with Canadian airlines (and Canadian companies in general) is that if you don't like the service there aren't really any meaningful alternatives. The two largest carriers have a near-total stranglehold on the market (over 75% of flights). They have practically zero incentive to improve because they are propped up by protectionist Canadian policies that prevent foreign companies from coming in with any meaningful competition. Their main competitor WestJet is similarly bad. Story of the Canadian economy, really.


I think they weren't worried about those few hundred bucks so much as precedent resulting in some future liability of millions of dollars when their chatbot misinforms tens of thousands of customers about something.


On the other hand, if they had quietly offered a one-time policy exception before reaching the point of a court verdict, they would have avoided any kind of precedent.


Wow that makes so much sense when you put it that way.


My guess is they felt confident they could set a favourable precedent.

Of course the argument is absurd, but this is exactly where companies like this would love to go. Virtually free support staff that incur the company absolutely no liability whatsoever.


Maybe companies are so used to their $multimillion dollar legal teams winning against broke Average Joes that their legal departments are starting to just assume that it will always happen and that all court precedents are going to be favorable. They've been sniffing their own farts for so long they think it's fresh air.


If you sue a company in small claims having an expensive legal team doesn't help them that much.


Could you elaborate?


Mostly that the expensive procedures aren't allowed, there isn't a jury so it moves faster, and in some places lawyers aren't allowed in court even to represent a business.


It's a signal that the company has internally configured itself in such a way as to insulate the administrative section from the operational section. They've likely done this in an effort to avoid the types of information that would force them to spend money.

It's literally just corporate ignorance as a business strategy. What's sad is, on the large, it works in their favor.

In any case, it's an equally good signal that you don't want to fly with them.


We just bailed them out too. And covid relief. And they have the worst customer complaint resolution record of all the airlines. You can't get them to do shit, they just take take take. It's a national embarrassment, right next to mobile data and internet. Yet our hockey teams play in arenas named after them, with their logo on their sweaters.


At least the outcome is favourable to the customer who was misled, and since Canada is a Common Law country, this ruling will establish a precedent for future corporate negligence with their chatbots in Canada.


Is there a country where no precedent is set by court rulings?

It seems a little weird that courts would be able to (and in practice do) rule differently in the same situation.


Civil/Continental Law jurisdictions AKA the actual world-wide (excl. Anglosphere) norm for legal systems, do not automatically follow precedent rulings: https://en.wikipedia.org/wiki/Civil_law_(legal_system)#Origi...

They may consider precedent rulings as factors in the decision, but those earlier rulings themselves do not automatically become law for all future cases on the same subject.


> world-wide (excl. Anglosphere)

Not quite that simple. The word we use is jurisprudentie but it means the same. Opening up the Dutch Wikipedia article and clicking through to the English version of the article with that name, you end up with "Case law, also used interchangeably with common law, is a law that is based on precedents"

I dove into this when I first heard of the difference between continental law and common law, and found it to be mostly a matter of wording. The principles are opposite but the effects very similar. It's not as though common law countries have no politicians making legislation, or as though there is no precedent in continental law countries.

Dutch foundational law (I think the very first article) says "everyone is treated equally given equal circumstances": such an equality principle would be incompatible with different rulings in identical situations. I imagine most countries have equality as a foundational principle, hence I'd be interested to learn: in which country would rulings not set precedent?


Precedent rulings and legislation are separate things in civil law countries. Not separating them is how the U.S. got the Roe v. Wade fiasco.

No two court cases are completely identical. Precedents are an important reference, but they themselves do not automatically decide the outcome in civil law jurisdictions.


Yeah, seems really dumb to let it get that far. I wonder if they tried to settle, but Moffatt/his lawyer (I would guess it's either small claims or a pro bono thing?) refused, preferring the publicity?


They wanted to set a precedent they could use when they replace customer service with shit spewing bots.


I hate the whole attitude the industry has towards AI models, it just seems so sloppy compared to software of the past

It’s like “yeah we don’t really know what it’s going to do and it might screw up but whatever just launch it”

Really surprised me that so many companies are willing to jump into LLMs without any guarantee that the output won’t be something completely insane


The purpose of AI, and the entire drive of investment in it, is to eliminate labor costs. There's no other purpose. It is not to advance science or make a better world, it is to make money. That money comes by the expense of the many and the gain of a few. And there will not be new jobs to replace the ones lost. There's no more "learn to code!". They're there to replace all jobs possible, make a more unequal society, and nothing more.

You'd best understand this today.


I agree with everything except "there will not be new jobs". There will most definitely be new jobs.


Like what?

AI is cool, but the spoils are going to go to staggeringly few people and in most of the West there are no real safety nets for people to fall on.

If I lost my job tomorrow to AI then any desk job I aim for might be gone in another 2-3 years, before I even finish retraining.

Any content creation is going to be flooded out and most creators don't even make any money even today. It's a marketing role with strong Pareto distributions.

That mainly leaves physical labor and person to person jobs.

I would say great, freedom from labour, except that's not how it works.


> AI is cool, but the spoils are going to go to staggeringly few people

They appear to mainly be going to Nvidia investors, which is basically anyone with a retirement account.


Can you define what you mean by "AI" here? I strongly disagree with your sentiment, but perhaps you and me have different ideas of what counts as AI.


Automation improvements increase employment/wages by increasing productivity. Conversely not doing it reduces them, because you become less competitive vs other countries that are doing it.


It's because the stakes have fallen just that low. It's a chatbot we are talking about. Those have been terrible since forever. Retooling your junk tool with the new popular junk tool at least signals to your shareholders you are willing to keep up with the Joneses on the latest stupid thing, without much risk to your actual product -- at least not yet.


Our company is on the way to ‘ease’ the work for the customer by having them write a whole story to replace 4 buttons. We’ll have the AI figure out which of the four buttons needs to be pressed based on the text.

I keep wondering how this could possibly be superior to just showing the 4 buttons…


Too much money on the line to bother with due diligence


This is the part I never got.

How does using a computer suddenly wash away any responsibility? Like if Air Canada's desk agents were all a separate company and they told the guy the wrong information isn't Air Canada still on the hook for training their sub-contractors?


At the end of the day humans make these screwups too (even in my own anecdotal experiences with dealing with airlines), and it doesn't matter because there is no hook to be on in the first place. If they tell me x and come to find out the policy is actually y, I'm the one that's shit out of luck and should have done the due diligence of thoroughly reading whatever legalese it is scrawled in. These AIs are at least highlighting that there are many segments of customer-facing interactions where it just doesn't affect the bottom line how poorly you treat these people. Flights especially are a captive audience, considering how airlines lord over their schedules and gate positions, where especially outside major airports there might not be any redundant options for you beyond one flight.


> If they tell me x and come to find out the policy is actually y, I'm the one that's shit out of luck and should have done the due diligence of thoroughly reading whatever legalese it is scrawled in.

Why are you saying this as a comment to an article where literally the opposite thing happened?


>it just seems so sloppy compared to software of the past

Corporate software has always been sloppy, especially if said corporation isn't centered around said software (true for Air Canada) and the technology is in an early-adopter stage (true for LLM chatbots).

The decision makers aren't well-versed in these technologies themselves, because getting to where they are did not require knowing how to properly use those technologies.


It's just proof companies don't care. The quicker they can turn their customers into little autonomous compute nodes, the better from their perspective.

I have also noticed an increase in automated call systems just flat out hanging up on me. As in: "We're experiencing higher than normal call volumes and cannot accept your call. Please call at a different time. Goodbye. <click>" How am I supposed to get help in such cases?

We've allowed companies to scale beyond their means, and they're getting away with more and more.

UPS destroyed a suitcase of ours and basically told us to go f ourselves. We could have sued in small claims court, but that's what they're betting on, that most people just give up.

And the chatbots are just terrible. And these days, the human representatives available have even less information than what the chatbots are provided with.


They are salivating at the idea of being able to get humans off their payroll.

The future is mass unemployment with all resources being split between a handful of trillionaires who control the AIs.


That wouldn't make sense though - once a sizable portion of the customer base is too poor to buy the services, there's no source of income anymore and the whole system collapses.


The key word is “once.” The doom of american society will not be in the timespan of a single business quarter. And on top of that the short position improves the whole way down. People at the top of the economy aren’t concerned about the rag being wrung dry because they will be the ones doing it, which will give them such a lopsided resource advantage it hardly will matter to them what happens next for the rest of us.


Economies aren't about "resources" or "wringing things dry", they are about trading. If you're not trading you don't have an economy and you aren't rich.


"The system" for you and me, not the system for those at the top. They'll maintain their wealth.

You're presuming the rich care about "the system". That they have morals or ethics. They do not.


>You're presuming the rich care about "the system". That they have morals or ethics. They do not.

Of course they do. You're parroting some low-quality extreme-left talking points. Rich people are people just like you and me, with their own motivations, goals and internal values. Dehumanizing them won't solve society's problems.


They don't have any wealth in the system you're projecting.

Wealth is the ability to trade with people. If there were somehow only 10 "employed" people in the world, then the economy is 10 people big, which means none of them are wealthy.

One way to see that this scenario is absurd is that it's literally the plot of Atlas Shrugged.


Well yeah that’s common sense but it hasn’t prevented that exact reason being behind all economic collapse (bleeding the host too fast, too greedily).


That only potentially makes sense at a societal level, not at an individual company level.

And the money saved with staff will end up in the stockholder's pockets, who will consume more.


It’s because despite what people are afraid of, these systems are very good at not screwing up with just a few minor controls layered on top of them.


My take away from this is that Air Canada would rather fight a customer in court than suffer a few hundred dollar refund as the cost of figuring out they have a flaw in the consistency of their system. (And in the face of an extreme unlikelihood of winning against prevailing common sense, too.)


Because if they win this, they get to keep the good and ignore the bad.

They get to automate a large chunk of their customer service and when the chatbot does something really stupid they can tap on the sign that says "We're not legally bound by our chatbot"


But anything output by your chatbot is no different from something you put onto a billboard, or into a flyer or any other communication. The idea that a discount offered by a chatbot is not valid in the same way as one written on a coupon is nonsensical.

Maybe they were thinking, if we spend a bit more money on lawyers, we can try this crapshoot.


Yes, that's what the ruling essentially said.

However, that's not what Air Canada wanted. And I'm not saying they should have won either. Just that, that's what they wanted.

Because if they can ignore the output of the chatbot when they want to, they can gut their customer service department.


I imagine that paying out would set a precedent or provide future discoverable evidence that they want to avoid. Fighting (and winning) might have allowed them to continue their chatbot system without needing to change anything.


I think this has backfired enormously then. Wouldn't this case set a precedent?


I wonder if they really thought this would set a precedent that benefited them.


Quite a sensible judgement, it seems.

Pretty outrageous for the airline to try to claim the chatbot was its own legal entity.


The guy who bought the Chevy Tahoe for $1 [0] isn't going to get it, though. And that's also perfectly sensible.

But at some point we're going to see more cases in the grey area in between. What's the important difference?

In both cases I think the result would be the same if the chatbot had been a human. GM doesn't have to honour every promise a sales rep makes, even if that rep is nominally entitled to enter into contracts for the company - otherwise someone might agree to sell their whole stock to an accomplice for $1. The same applies to Air Canada, my buddy there can't "advise" me I get free flights for life and have that honoured.

So where is the line? Is it about good faith by the customer, or about what a reasonable person might think the company would offer them?

[0] https://twitter.com/ChrisJBakke/status/1736533308849443121


I think there is already established case law that a deal can be too good to be true. If you see a 90" TV on Amazon for $1, even if the order went through, Amazon is within their rights to cancel it as a mistake in pricing.


I've seen "too good to be true" offers before that were actually scams. They offer you an (allegedly worth $99) hair dryer for only $3.95. In the fine print you don't read, it says you are signing up to a monthly subscription (to what????), with an introductory low price of $3.95 the first month and $39.95 a month thereafter. They are hoping you won't notice the recurring charge. And you never get the hair dryer (or at least, one person I know who got scammed this way never did - the bank even refused to reverse the initial $3.95 charge, since it had been authorised through their mobile app, although they did reverse the subsequent ones.)


Quick search found this article discussing it : https://scholar.smu.edu/smulr/vol75/iss2/9/


Somehow, I bet if the sales rep or a chatbot convinced the guy to pay $200K for a Chevy Tahoe, the full force of GM's legal team would ensure the customer was held to that. But, when it goes the other way, and the sales rep or chatbot is convinced to sell it for $1, suddenly it's not "sensible."


> GM doesn't have to honour every promise a sales rep makes, even if that rep is nominally entitled to enter into contracts for the company - otherwise someone might agree to sell their whole stock to an accomplice for $1

The law doesn't protect a company in the world you laid out, internal compliance and controls do. A sales rep in a company with bad controls may well do exactly what you laid out.


Are you sure consideration[1] and unconscionability[2] don't form the basis for that type of legal protection?

1. https://www.law.cornell.edu/wex/consideration

2. https://www.law.cornell.edu/wex/unconscionability


I found this article which indicates this legal protection relates to the doctrine of unilateral mistake: https://scholar.smu.edu/smulr/vol75/iss2/9/


Thank you. It seemed like there would be some kind of legal principle at play.


This is nonsense. Can you imagine the chaos of a world where what you said is even close to true?

I'm a cashier at Walmart. One day I sell to you for $100 - not just everything in the store, but the building, the local distribution centre and even the corporate headquarters.

And in your view - Walmart has no legal recourse against me or against you? They should just peacefully vacate the buildings and hand over the keys? Their remedy to this is to discipline me according to their internal controls - maybe put me on a PIP and remind me that the company handbook forbids these deals?

No, this just isn't a deal that can happen. It's not about reasonableness, consideration or unconscionability - the deal is void even if the buyer agreed to pay $100 million. It's not about good faith on the buyer's side - it's void even if you thought I was the VP of Real Estate. I can't sell Walmart's property at any price, even though I'm otherwise empowered to do business on behalf of the company.


There's multiple ways to deal with those issues of course, but in those cases there is an actual legal entity who can be held accountable. If the chatbot on their website is its own legal entity, how exactly does one sue it, and who actually pays damages?

This judgement remains in the sensible part of law and in doing so, sidesteps a massive, unexplored, and highly problematic can of worms.


The line was defined a long time ago: AI chatbots are considered regular customer service agents because no law has said otherwise and they represent themselves as such, and we already know that reasonable things said by customer service agents can be taken as fact.


They want to have their cake and eat it too. To use AI to get rid of paying for labor, but also not assume any of the risks that go along with it.


At least they could claim it as a training issue.


The article doesn't say (imo an omission), but I assume the argument wasn't that level of stupid

More likely, they blamed a third party vendor that developed, configured, or hosts the bot

Which sounds like a similar situation to when your taxi breaks down due to a mechanic's shoddy work: it's not the passenger's fault that your mechanic sucked; you were contracted to get them from A to B and may be on the hook if you stated you'd get them there on time. Here, it's not the user's fault that the chatbot was shoddy and stated something that they now don't want to fulfil. If AirCan wants to blame their vendor, they can go right ahead, but this person has a right to this reduced flight price independently of whether AirCan gets the money back from their vendor.

But explaining all that, instead of saying "haha they claimed the chat bot is an independent entity!", probably gets shared less (it's yesterday's top comment after all) and thus produces fewer conversions from website readers into subscribers.


If they dragged the customer into court, they already screwed up.

They should have paid the customer immediately and then took it up with their vendor. If they want to take their vendor to court, they can do that separately.


Yeah, we all agree on that. I'm just saying the article appears to exaggerate it further for comedic effect


Surprised they didn't try and claim it was an independent contractor.


It really seems absurd because even if it was, isn't Air Canada responsible? Imagine it was a human employee who made the same mistake, surely it's still Air Canada's problem?

Anyway, let's take a moment to thank Air Canada for this progressive stand for the individual legal autonomy of artificial persons.


Companies should be responsible for the information they give customers, regardless of how they do it.

"Give me a real human" is usually what I say when it seems like I'm talking to a bot. Unfortunately, there have been times when I later discovered that the "bot" was actually a real human that was just acting like a bot!

While AI may seem to be improving, I always keep in mind the possibility that the opposite is also happening; and if you don't want your job replaced by a bot, perhaps you should not be acting like one.


>if you don't want your job replaced by a bot, perhaps you should not be acting like one.

Call center folks, especially the first couple of layers of them, are on scripts. They have decision trees and what to say written down. Bots can be much more dynamic than them. It's a pretty terrible job unless you're fairly uniquely predisposed to liking that kind of work.


This case is an interesting example where the LLM is _worse_ than that; it's not on rails in the same way a front-line call-centre worker would be, so it can make stuff up with impunity. This would never have happened with a conventional scripted hotline; at worst it could only have happened if the caller was escalated to someone with the authority to make stuff up, but for a simple question like this they wouldn't have been.


It could be, though. The LLM could just choose from options one through ten of a list of prescribed responses, and an interposer could validate that the output of the LLM truly is an in-range integer and then send the chosen message to the customer.
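Something like this, say (a rough sketch; llm_pick is a hypothetical stand-in for whatever model call you use):

    CANNED = [
        "Bereavement fares: see the policy page for eligibility.",
        "To change a booking, use Manage My Booking.",
        # ... up to ten vetted, legal-reviewed responses
    ]
    HANDOFF = "Let me connect you with a human agent."

    def reply(user_msg: str) -> str:
        raw = llm_pick(user_msg)    # hypothetical; prompted to answer "1".."10" only
        try:
            i = int(raw.strip())
        except ValueError:
            return HANDOFF          # model rambled instead of picking: fail closed
        if not 1 <= i <= len(CANNED):
            return HANDOFF          # out of range: fail closed
        return CANNED[i - 1]

The model never composes customer-facing text; it only selects, and anything that isn't a clean in-range integer fails closed to a human.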


That's why I start with "are you a human or a robot", then "what's your favorite color"


Air Canada has always been like this.

They were notorious amongst the stranded-abroad community during COVID for selling tickets on flights they weren't operating and had no intention of operating, then refusing to refund, except with credits that expired before the flights they did intend to operate.

Scammers from top to bottom.


Canada is a good democracy. Ask your parliament to split Air Canada into multiple carriers or reduce any state-provided advantages to allow better competitors to emerge.


> Canada is a good democracy

Didn't they freeze hundreds of people's bank accounts, with no due process, for peaceful political demonstrations? https://www.bbc.com/news/world-us-canada-60383385

No better than Nigeria: https://www.hrw.org/news/2021/02/11/nigeria-finally-unfreeze...


They're good, not perfect.


Sounds like Qantas, who sold thousands of tickets on flights they knew were never going to operate.


Were they ever practically compensated in the end?


Yes, the government gave airlines a bailout conditional on them issuing refunds.


I don't know anything about the details of that bailout, but I feel the only reasonable and fair approach would be to (1) require that customers be refunded unconditionally and (2) require statutory damages to be paid. Then _if_ Air Canada couldn't do that without a bailout, the government should have bailed them out while taking equity for the value of the bailout.


> Air Canada, for its part, argued that it could not be held liable for information provided by the bot.

> "In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website," Rivers wrote.

This could be an article on The Onion. Unfortunately, I suspect this won't be the last time companies try to weasel their way out of the consequences of how they use AI in relation to their customers (i.e., us)


Indeed interesting. Even in the alternative reality where their argument is valid, they still "hired" the chatbot to provide service on their behalf, so they are liable for it. Not a lawyer, just a spark of common sense?


I don’t think hiring is a good analogy here.

A human at least can bear responsibility. A chatbot cannot. That responsibility has to be absorbed by something. A company should be much more responsible for the actions of programs they run, than people they hire.


If a human makes that mistake, the company is still responsible for it. The company can, however, then turn against the human they hired and sue them. If that court determines that said human does bear responsibility, the company will get something back. That has no bearing on what the customer gets from the company, however. So for the customer it is irrelevant how the mistake by the company was made.


I love a good spicy legal opinion like this.


I find it hard to understand the calculus on Air Canada's side of fighting this. Not a lot of money and really bad press.


You overestimate AC.

They are buried in compensation claims right now due to them claiming that crew shortages in 2021 and 2022 were out of their control, and the government regulators disagreeing with them.

My guess is that no one with any power bothered to look at this until it was too late to settle, and they thought it was worth the cost of fees to see if they could get out of paying.


Because it probably threatens a whole new customer support system they spent 100x more developing and migrating to than what they spent on lawyers for this case.


Out-of-court settlements have no effect on precedent. We wouldn't even be here if the manager this got escalated to at some point had just done their job and made an exception.


So it wouldn’t make other lawsuits easier in civil law? I don’t know the law well here


If they can be held liable for their shitty software replacing a human then it might not actually be cheaper to replace the human with shitty software.


Because they're mostly a monopoly and have been doing whatever they want for decades by now. They've gone bankrupt time and time again bailed out by tax money.


Realistically, all complaints from the flyer went to first level customer service agents which are only told to enforce the policy as is. This probably did not get escalated.


Not a lot of money times many people is a lot of money.


Yo, what was Air Canada thinking here... 1 week after the flight, and he even provided the death certificate?

How'd anyone let this go to 'court' (I'm not Canadian, it's a tribunal idk what that is) for $600? And I'm guessing it's Canadian, so it's more like $400 US. What kind of point were they trying to prove here?

I legitimately think you could talk amazon support into giving you that over a broken product.


> And I’m guessing it’s Canadian so it’s more like $400 US.

FYI, to a Canadian, $600 CAD feels like what $600 USD feels like to an American. Canadian wages aren’t 30% higher in numerical value than US wages.


Granted, I’m an American and I’ve had a couple glasses of wine tonight, but I’ve read this comment like 8 times and it still makes no sense to me.


A Canadian dollar is worth about 30% less than a U.S. dollar, but the cost of living is also lower, so to a person living in either country $600 feels like about the same amount of money.


The cost of living is often not lower in Canada. Average housing cost is now significantly higher in Canada and for those Canadians with an easy path to move to the US (like SW developer, engineer or doctor), the numbers I am recently seeing are salary of 2-3x in the US, housing cost 0.5 to 0.7x, and other cost of living a bit less.


Where are comparable locations half as expensive in the US? I agree that Canadian salaries trend much lower.


"Comparable locations" is very subjective, but Vancouver vs Denver is one I've heard of that seemed convincing to me. Then again, I've never been to Denver.


Those cities are not remotely comparable. Vancouver and Seattle are a much better comparison. From a cursory search, an average home in Vancouver is CAD 1.1M and an average home in Seattle is USD 800k. So once you take the exchange rate into account (CAD 1.1M × ~0.73 ≈ USD 800k), they are actually fairly similar.


You're right about the comparison between Seattle and Vancouver, but "housing is half the price" is a common comment from Canadians that have made the move (both online and people I know personally). It may be an exaggeration, but the US has 10x the population and more medium-large cities to choose from, so I think it's true that at least some people can find a significantly cheaper place that is personally comparable and has a job for them. If you're a software engineer who lives in Vancouver (1x salary, 1x housing prices) and loves the mountains and wants to move to another mountain-adjacent place, Denver (2x salary, 0.5x housing prices) is indeed pretty enticing compared to Seattle (2.5x salary, 1x housing prices) or Calgary (1x salary, 0.4x housing prices).[1]

[1] Guess but not complete wild-guess multipliers


I agree. There are lots of places in Canada with 0.3x house prices but once you filter for 1x salary and places which are nice to live in Canada you are left with an empty set.

The ideal situation is you get a remote job in Canada for 1x salary and live in a nice place with 0.3x houses. That is my current setup.


> but the cost of living is also significantly lower,

You can look at random things like groceries, homes or car insurance to see that this isn't really true. 600 USD (or even 600 CAD) goes a heck of a lot further in most of the USA.


> it's a tribunal idk what that is

The Civil Resolution Tribunal is better known as "online small claims court". It's something BC introduced a few years ago to streamline the process.


Yeah the real story seems to be garbage customer service: by the AI and humans at Air Canada. Perhaps the bot is implemented perfectly to match the humans it replaced.


The civil resolution tribunal is kind of a hybrid small claims court and arbitrator. It was originally started about a decade ago with limited scope over condo/HOA disputes and small claims, but has been expanded.

They have legal authority, but you can always appeal to the provincial courts (which almost never works out, I think they agree with CRT decisions in 95% of cases)

Air Canada baffles me. Their front line employees are powerless and frequently hostile. But I have never submitted a complaint to corporate without being given at least $200 CAD worth of flight credit. Most recently I was yelled at and hung up on by a customer service agent, I got a coupon for 20% off any itinerary with up to 4 passengers. I’m not even a member of their rewards program!


Same here. Came across something that was broken on a plane that wasn’t serious in any way and I wasn’t even mad about. But I don’t bother going through “feedback” or “comment” because I figure they never action on those.

So I submit my “complaint”.

A few months later, I get a response clearly showing that they didn’t read what I wrote, and almost certainly didn’t put in any plan to fix it; but they gave me a $400 credit.

I wasn’t even angry in my “complaint”. Maybe I need to be nicer in my actual complaints in general in the future.

Thought it was a scam when I got the response but I’ll take it.



For posterity, it might be nice to configure the past week as a specific date range btw


We are just getting started with the age of LLM mishaps, and it's just going to get more ridiculous.

I was talking with an ML engineer who told me they had a lot of success fine-tuning an LLM on their internal docs. A chatbot could solve about 70-80% of questions without the need for human intervention.

However, their next big idea was to fine-tune the LLM on the company's financial data so that the finance department could get the information they need without custom queries or tech skills. We are just a few steps away from LLMs feeding hallucinations to decision makers, and those decision makers acting on bogus data.


That airline policy is the kind of micro-optimization that no individual human would make, but that the distributed decision-making of large organizations effects all the time. There's profit to be found in targeting people in distress—like someone who just had a person close to them die—and abusing their distraction to confuse them into paying more money than they need to. Allowing a grace period to claim this "bereavement discount" is a human(e) policy. Making the client claim the discount immediately, or else lose it, is monstrous.

Today, the robot showed more humanity than Air Canada's human leadership. That was an accident; the future could be the opposite of that. You could program machines to be "better" employees than humans, more aligned with organizational goals like "maximize profit at absolutely any cost", or "win wars at absolutely any cost", or "win elections at absolutely any cost". We humans aren't completely aligned with our teams; we have moral scruples that limit us—we can't achieve the 100% "absolutely any cost" part. I think that might suddenly change, and we might find ourselves drowning in an unexpectedly in-human world.

(This was inspired by a writer's observation, I forget whose, about the evils of war being easier when morality is distributed: the commander who orders the atrocity doesn't do it; the soldier who commits the atrocity has no agency in his actions. Both feel reduced culpability, and can go farther in effecting their goals than an individual acting alone.)


Air Canada - We're not happy until you're not happy.


I wonder whether the bot hallucinated the wrong information or whether the policy changed and the bot simply wasn't updated / retrained. The latter seems more likely but less interesting, akin to information on a boring HTML page getting overlooked during a site update.


The incident was in 2021, so I don’t think it was an LLM.


No, but it would be nice if the same rules applied to LLMs. Too often they're now deployed as a quick fix for a chatbot.

Before, chatbots either quoted me a solution or escalated to support.

Now they make up a non-working solution.


Honestly would it make any difference if the information was just on an FAQ page and it contradicted what the actual ticket contract said?

I'm with you. They should be held to the information they give out. Short of an employee purposely and maliciously giving out bad information, it seems like not making stuff up should be a basic requirement for them to operate.


If a vendor communicates multiple different prices for the same thing in different places, such as different areas of their website, or their website versus an e-mail flyer, or any pieces of paper from the vendor, they must give you the lowest price among all of them and not make excuses like that you should check the other communications and understand that the price is one of the higher ones. This is just common sense.


Where's the chatbot accessible from? Can't find it. I assume it was some old-school KB-query chatbot, not an LLM? The date says Nov 2022, and LLMs hadn't become quite as popular at that time yet. They obviously have to be responsible. What a nonsensical claim.


What difference does it make? If it happened today with an LLM the outcome should be the same.


> What difference does it make?

In liability, none, but it'd at least be more understandable if it was an LLM, rather than something that should have been hard-coded with the right answers.


I'm not sure I follow. I wrote an "AI" chatbot in highschool and it certainly didn't reproduce hard-coded "right answers".

LLMs don't somehow invalidate the work of their predecessors. Chat bots aren't new.

I'm not really sure why you brought up LLMs at all. Are chat bots synonymous with LLMs now? I sure hope not because then this sort of scenario only gets worse.


> Are chat bots synonymous with LLMs now? I sure hope not because then this sort of scenario only gets worse.

You're right. LLMs are now the de facto standard implementation for support chatbots. Almost every chatbot platform offers an AI chatbot product in some form.

They're also frequently shown to be prone to hallucinations, can be tricked, and can be gamed into breaking out of their prompt cage:

https://twitter.com/ChrisJBakke/status/1736533308849443121

This case therefore sets a precedent for these scenarios, with or without a disclaimer that you should confirm this information with the dealership. If you assume the liability for the accuracy of "ye old bot" responses, then it raises the possibility that you assume the liability for the accuracy of "ye new bot" responses.

My opinion is that once the AI wild west phase has ended, and the legal reckoning is upon it, everyone will learn that using AI does not absolve one of liability. This would essentially kill the dream of full self-driving automation, among other things.


I don't think that's true. I do think it is a bit of a slippery slope.

Replicable, intelligent but fallible and disposable minds have incredible potential to positively impact our society. But somewhere there is an ethical and moral boundary to be crossed.

It's the journey, not the destination.


> I'm not really sure why you brought up LLMs at all

I didn't

> I wrote an "AI" chatbot in high school and it certainly didn't reproduce hard-coded "right answers"

Sounds like it would have been a poor choice for a customer-service bot then?


My apologies, I misread.

And it would certainly have been a poor choice for customer service, but I have definitely used chat bots that are far worse than that one was.


The header "Company claimed its chatbot ‘was responsible for its own actions’ when giving wrong information about bereavement fare" is such a great example of an article giving misleading information to up sensationalism. Later on it's explained "Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions."

That's a completely different argument and much less alarming.


> That's a completely different argument and much less alarming.

Actually, it's just as alarming. The entity behind the chatbot might be a separate legal entity, but that doesn't absolve the airline, who outsourced a function bound to their terms and conditions.

If they literally tried to absolve blame by assigning personhood and liability to the bot, that's insanely bad.


Those 2 sentences are identical? Unless they meant the company that made the chatbot was a separate legal entity. If the chatbot itself is a "separate legal entity" then that's basically saying the same thing.


I am Canadian. Air Canada has by far the worst customer service and treatment. This is hardly surprising; they try to weasel out of any responsibility all the time.

This is the same company that just got slapped for making a disabled dude crawl out of the airplane when Air Canada's special chair didn't show up. I shit you not.


What are the chances that companies take the easy way out and slap a EULA on the chatbots like "by engaging in conversation with our automated agent you agree to <link to 100 page doc absolving them from anything the bot ever says or does they don't like>"?


Nothing preventing them from doing it. But then customers will very quickly learn to not use the chat bots and instead call in or use other means.

Would you use a company tool that even the company doesn't have faith in?

And if companies remove human reps from the equation and only offer chatbots, then I'm sure eventually the regulatory agencies will step in.


A rational actor, given a selection of alternatives, would not. Being stuck on hold when calling them is a frequent complaint though. And if there was real competition, stuff like this would've killed them long ago:

https://www.thestar.com/news/canada/air-canada-apologizes-af...

https://www.cbc.ca/news/canada/british-columbia/cta-fine-air...

https://www.theglobeandmail.com/news/national/dog-that-escap...


One thing I love about German law is that lopsided contracts are generally not valid.


I don’t think a “don’t trust anything our chatbot says” contract is very lopsided. You can just not use it. It’s not as if it’s a highly important facet of your life.


Wrong thread, this one explicitly postulates a 100+ page EULA


I think the length of such agreements works against them


100%


Then they'll make it hard to get the information in any other way as well. Three humans on the phone lines and most of the website content gone as they lay off the staff.


The worst case for these airlines is always just a refund for whatever money they took in the first place.

Someone could make a killing making a fake airline, taking payments for flights, and then cancelling every single flight and only refunding the people who fight back hard enough.


> Someone could make a killing making a fake airline, taking payments for flights, and then cancelling every single flight and only refunding the people who fight back hard enough.

They did, it's called Air Canada. I was scammed out of Japan tickets by them during COVID, they have still not refunded me and they expired the credit for the tickets. I have tried every customer support avenue to no avail. It's a scam airline.


Canada introduced new regulations in response to cases like yours [0].

It required some persistence but Air Canada did abide by the regulations in my case last year. My flight from Vancouver to Toronto was delayed by a couple hours and I missed the connection to Edinburgh. They, as required, booked replacement flights via Air France (not a partner airline) as the next Air Canada or partner airline route would have departed more than 9 hours late.

[0] - https://rppa-appr.ca/eng/right/flight-delays-and-cancellatio...


Apologies for the off-topic reply here, but they've just fixed the IPv6 issues with gtlib by removing the AAAA records.


Thanks, I thought I would die of old age before that got resolved.


> Someone could make a killing making a fake airline, taking payments for flights...

Given how (relatively) heavily regulated airlines are in the US, I doubt that this would work for long, and I expect that the FAA would make sure that the company lost a substantial amount of money for trying that shit.


I wonder if companies will just show a disclaimer first to avoid responsibility.

"This chatbot is provided for entertainment purposes only. If you trust anything it says, you only have yourself to blame. Reading this message waives your right to sue us."


The penalty for arguing they are not responsible for the “magic chatbot” telling lies to customers should be much more severe.


It would've been if it were a lawsuit, but this was a tribunal hearing and recompense is limited to damages, that is, the amount the claimant was out of pocket.

I'd love to stick it to Air Canada too, but Canada is (hopefully) less litigious than the US.


You should have a little deterrent for “remarkable” cases like this. Say, double damages capped at $300.


It's a start, but there should be some kind of general Algorithmic Integrity law.

Hallucinating chatbots, automated YT copyright strikes, Insta accusations of bothood because you clicked too fast, Amazon nuking author accounts because it gets confused, self-driving cars that don't self-drive - and so on. They're all the same problem.

At best these are large corporations automating processes with unacceptably poor reliability. At worst, they're hiding deceptive and manipulative practices behind algorithms so they can claim they're not responsible for damage caused.


I don't see why AI is particularly germane to this case: if a human support rep from a third-party contractor had made the same statement, I assume AC would have tried the same nonsense?


They're not liable for the things that come out of their employees' mouths?

Maybe my flight tickets aren't valid either because I foolishly purchased them on aircanada.com and their website is known to have bugs? Or the ticket lady punched the wrong thing into her computer? Or I should have known the plane was going to be overbooked?


My read from the article was that because the chatbot came from a third-party company, Air Canada argued that the third party was liable for the error. Which I assume is the same argument they would have made if chat support were outsourced to some outside firm. Of course, either way it is an idiotic argument.


If I ran their large multi-page website, I could search over it for every occurrence of "bereavement fare" to find out exactly what we were telling people and check it for accuracy. I can't do that with an LLM, AFAIK. I can't ask it "tell me everything you can say about bereavement fares"; LLMs are not inspectable like that. LLMs are tuned and their output is filtered, but can this catch all errors?
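For comparison, auditing a conventional site really is that simple. A minimal crawl-and-grep sketch (the domain, the phrase, and everything else here are placeholders, not Air Canada's real site):

  // Same-origin crawl that prints every page mentioning a phrase.
  // Assumes Node 18+ run as an ES module (global fetch, top-level await).
  const START = "https://example.com/";
  const PHRASE = "bereavement fare";

  const seen = new Set<string>();
  const queue = [START];

  while (queue.length > 0) {
    const url = queue.pop()!;
    if (seen.has(url) || new URL(url).origin !== new URL(START).origin) continue;
    seen.add(url);
    let html = "";
    try {
      html = await (await fetch(url)).text();
    } catch {
      continue; // skip pages that fail to load
    }
    if (html.toLowerCase().includes(PHRASE)) console.log("mentions it:", url);
    // Naive href extraction is fine for a sketch
    for (const m of html.matchAll(/href="([^"#]+)"/g)) {
      try { queue.push(new URL(m[1], url).toString()); } catch { /* bad href */ }
    }
  }

There's no equivalent exhaustive query for an LLM; the best you can do is sample its outputs.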


Wow, going through all of that for $650. I guess it's more about sending a message.

The cost of Air Canada's lawyers well exceeds the judgement here. Why would they bother fighting this?


I'd assume they're just following a general policy of having somebody in their legal department respond with a rebuttal to every single claim that comes in. On balance, it probably saves them money over having somebody with decision-making power decide whether or not to fight things on a claim-by-claim basis.


What actually happened? The bot said the refund window was 90 days, but why didn't that work?

"But when he applied for a refund, Air Canada said bereavement rates did not apply to completed travel and pointed to the bereavement section of the company’s website."

Does this mean they actually took the flight with no issues, but then requested a refund afterwards because it was for a funeral?


I also found it very confusingly written, and as a non-native speaker, I initially took bereavement to refer to a flight delay (imagine my confusion about the claim "you can't apply for this refund on completed travel").

Then I looked up the word, and my initial association with something negative was correct: it's about death. The only explanation that I think fits all the article's statements is this:

0. AirCan offers a discounted rate if you fly because of someone's death, probably because of the country's size (it's a weird concept to me as a Dutch person!)

1. Person asked the bot how to get this discounted rate, what papers AirCan needed to see

2. Bot said (not shown in the article; the only relevant-looking link goes to some stupid news category page): you get your discount afterwards, not before. The article seems to phrase this as "refund", but I take it to mean "partial refund in the amount of the discount"

3. Person flies, then applies

4. AirCan now says: you can't apply for this anymore after the travel was completed


With execs pounding their fist on the table saying “Go get me some of that GenAI stuff!” expect to see a lot more really poorly implemented “AI assistants” and other half-baked AI projects causing blunders for businesses. Eventually the dust will settle and folks will find more attenuated ways to get value out of this tech without creating a big mess. In the meantime get some popcorn ready for the AI-fueled comedy of errors that’s about to play out across many companies.


It's those same execs whom GenAI could replace and do a better job than.



Oh, wow! There was a twitter thread that made the rounds a while back about a guy who jokingly got a car dealership's chatbot to agree to sell him a car for basically free...Living in BC and reading this article has given me a great idea...!


I notice almost all the comments here assume this is an AI chatbot, but the original incident happened in November 2022, which is before ChatGPT was released, so… were there any LM-based chatbots then?


Good


I don't use chatbots. I don't like their non-determinism, and in the little I've tried them, they never had the answer to what I was looking for. I much prefer browsing the website.


So a lawsuit vector exists against AI-enabled lack of care for customers.

It would be interesting to see if a cottage industry can open up around prompting inaccurate information out of company AI systems and reaping damages via lawsuits.


That seems like a stretch based on this specific case, given that the only award was explicit damages (i.e., a refund of funds already spent). No one ended up “ahead” here.


This would be good, actually, right? Better to have the chatbot misfire on somebody not actually depending on it for correct info.


You’d have to do so without appearing to intend to do so, and in either case it would be fraudulent.


I don't see how it's fraud or would need to have a disguised intent.


If your intent is to get a wrong answer and you succeed then nice job. Do you really deserve a reward?


Interesting that this is just straightforward error. It's not an LLM or similar so whatever KB it was drawing from is just broken.


What happened to the GMC dealer whose GPT chatbot was promising customers free trucks while claiming the contract was legally binding?


I'm happy we have this precedent now in Canadian law. I'm wondering how such a thing would play out in the US.


companies with AI chatbots probably:

1. Replace customer service agents with shitty LLM

2. Distance yourself from the shitty service by the shitty LLM

3. Profit.


Who was the vendor that supplied the chatbot?


RAG Agents are gonna cost the companies that run them a lot of money, in more ways than one


The article doesn't state what kind of chat bot this is. Is it an LLM, or a pre-LLM bot programmed to act on certain keywords?

If it's the latter, I'd assume that Air Canada would be able to go in and check why the bot gave a wrong answer: most likely outdated policy information, or a misreading by whoever entered the answer to that prompt.

However, if the bot is based on an LLM, then what's the point? It's apparently worse than an old-school bot, in that it cannot be trusted to give correct answers; it's just better at understanding queries.

There was a quote in the article "I'm an Old Fart and AI Makes Me Sad":

  “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” says AI scientist Sam Bowman. “And we just have no idea what any of it means.”
If that's true, then you cannot use these systems for anything where you may need to hold someone responsible for the output.


That this is how it should be is obvious to everybody except the management. What I'm wondering is: how do you prove such misconduct as a customer?


I bet it would have gone the other way here in the USA.


Ouch, I can't believe they really tried those arguments.

The chatbot as a separate legal entity? Do they mean like a contractor's service, or like a distinct AI creature?


Either way, if their argument were upheld, it would be insanely interesting to see the second-order effects of that.

Like, could you spawn up a local LLM, have it take out some loans and transfer you the funds, and then "kill it" (^C), so the loan liability dies with the LLM?


Well, it's hard to see who else could be.


Air Canada is going to get a lot of publicity over an $812 judgement in a provincial small claims court.


It just did


What the heck is a bereavement fare, and how do you get one? Do most airlines offer that?

Edit: Whoa, apparently several do. In my 30 years of flying, I never knew this! https://travel.usnews.com/features/bereavement-flights#alask...

If an immediate family member passes away, some airlines will give you a discount.


I don’t believe that the traveler mentioned in TFA did this, but using a screenshot as evidence will not be viable in the longer run.

Editing the HTML code of a webpage that is open in the browser is a key step in one of the popular IT support scams that are covered by YouTubers.
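For example, it takes one line in the devtools console; a hypothetical snippet (the selector and wording are made up):

  // Run in the browser console; rewrites the first paragraph on the page.
  // A screenshot taken afterwards faithfully shows the fabricated text.
  const p = document.querySelector("p");
  if (p) p.textContent = "Bereavement refunds are accepted up to 90 days after travel.";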

But how else to present evidence of chatbot misinformation, I’m not sure.


November 2022... it might have been a pretty weak model or just out-of-date info.

To me, this is a cautionary tale against deploying cheap small LLMs instead of larger models. I think 7B models are very tempting to many business people who may value profits over just about anything else.


I just don't understand why companies don't use something like Algolia on their KB articles with some careful manual keyword curation.

Imagine all the person had to do was type "bereavement" in a search bar and it instantly matched with the bereavement policy. What more does a person need?
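For illustration, a sketch of what that could look like with the Algolia JS client (v4-style API; the app ID, key, index name, synonym list, and record fields are all placeholder assumptions, and saving synonyms needs a write-capable key):

  // Hypothetical KB search setup; nothing here is Air Canada's real config.
  // Assumes Node run as an ES module so top-level await works.
  import algoliasearch from "algoliasearch";

  const client = algoliasearch("APP_ID", "ADMIN_API_KEY");
  const index = client.initIndex("kb_articles");

  // Manual keyword curation: near-miss queries still surface the policy page
  await index.saveSynonym({
    objectID: "bereavement-syn",
    type: "synonym",
    synonyms: ["bereavement", "funeral travel", "death in the family"],
  });

  // What typing "bereavement" into the search bar would return
  const { hits } = await index.search("bereavement");
  console.log(hits); // KB article records, e.g. { title, url, excerpt }

And unlike an LLM's answer, whatever comes back is one of the articles you actually wrote, so it can't contradict the policy.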


Exactly this. I feel like LLMs are great for so many things and have a crap ton of real-world use cases. But for simple FAQs, there are so many better and cheaper solutions out there.


... Larger models aren't immune to hallucinations either, nor is their hallucination rate negligible enough to ignore.

This is a fundamental issue with the underlying technology, one that many companies in this space refuse to reckon with.

A lot of "this can be automated with AI!" startups are relying on the basic assumption that hallucinations can be tolerated - cases like this really narrow the field of use cases where that is true.


Hallucination or inaccuracy is dramatically worse with 7B models versus the largest ones.


Just wait till an aggressive lawyer finds out about this. There are gonna be a lot of companies taken to the cleaners for hallucinations.


There is no evidence they used a small LLM, and no evidence that LLM size is the issue. It's also environmentally bad to use a large model when a smaller one would work just as well.


There’s no evidence it was even an LLM at all.



