In February 2024, the Civil Resolution Tribunal of British Columbia (CRT) held that Air Canada bears responsibility for inaccurate information provided by its website chatbot. The ruling has become a touchstone for consumer-facing AI practices.
Benefit for the reader. The case shows how human responsibility (مَسْؤُولِيَّةٌ بَشَرِيَّةٌ) and transparency (شَفَافِيَّةٌ) apply in consumer services; this write-up translates the findings into checklists and process requirements.
Short takeaways
- Information from the chatbot is treated as official website information.
- Accountability cannot be shifted to an “algorithm”; an accountable owner should be assigned.
- Limitations and policies should be published, with a single, consistent version across channels.
- Dialogue logs, escalation paths, and error‑correction procedures should be maintained.
Background: who, what, where, when
Organization/domain. Air Canada; consumer online services (website and chatbot).
Timeline.
- 11 Nov 2022. The customer explores the “bereavement” fare and receives price guidance from a call‑center agent.
- 17 Nov 2022. The customer relies on the web chatbot’s answer stating a retroactive discount could be requested within 90 days; a ticket is purchased and a partial refund is requested.
- Dec 2022–Feb 2023. Correspondence with support; Air Canada acknowledges the bot’s answer was “misleading,” yet the refund is denied.
- 14 Feb 2024. CRT ruling: negligent misrepresentation established; compensation awarded (details in “Results”).
Source material. CRT decision text and legal/media summaries; fare policy published on the company website.
Task and success criteria
Company task. Provide accurate, consistent information about the fare and a clear refund procedure, unified across channels (site, bot, agents).
Success criteria.
- No contradictions between channels.
- The user receives the same answer and a predictable outcome.
- Publicly visible limits, timeframes, and conditions.
- An accountable owner and a defined appeal path.
Assessment under the Code: alignments, gaps, corrective actions
| Code norm/principle | Observation in the case | Status | Corrective action |
| --- | --- | --- | --- |
| Transparency; prohibition of gharar (ambiguity); logs should be kept | The bot contradicted the website page; no unifying logic across channels | Gap | All conversations should be logged; a single source of truth should serve the bot, site, and human agents |
| Human responsibility (مَسْؤُولِيَّةٌ بَشَرِيَّةٌ) | Attempt to blame an “independent” bot | Gap | An accountable owner for bot content and consumer communications should be assigned; final decisions should be left to a human |
| User right to appeal | Request denied despite reasonable reliance on an official channel | Gap | A contact point for appeals should be defined, with time limits and procedure |
| Harm prevention and protection of property | Full-price purchase under expectation of a discount | Gap | Channel limitations and “liability boundaries” should be published; good-faith compensation should apply when the system errs |
| Compliance with local law | Handling under consumer-protection law | Partial alignment | Logs and evidence of good-faith effort should be kept; procedures should be updated according to local law |
Solution: processes, architecture, control, audit
Processes.
- Policy ownership: a single policy and rule model should exist; the source of truth is stored in a knowledge base read by the site, the bot, and agents (a minimal sketch follows this list).
- Change management: updates should pass a unified release pipeline with checklists, tests, and notifications.
- Errors/appeals: logs and tickets should be kept; SLAs for responses and compensation should be defined.
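To make the "single source of truth" idea concrete, here is a minimal Python sketch of a versioned policy store that the site, the bot, and agent tools would all read from. The names (PolicyStore, PolicyEntry, the example policy text and URL) are illustrative assumptions, not taken from the case or from any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class PolicyEntry:
    """One approved policy text, shared by the site, the bot, and agents."""
    policy_id: str        # e.g. "bereavement-fare" (hypothetical identifier)
    version: int          # bumped only by the release pipeline, never edited in place
    text: str             # the only wording any channel is allowed to show
    url: str              # public page the bot must link to
    updated_at: datetime


class PolicyStore:
    """Single source of truth: every channel reads the same entry by id."""

    def __init__(self) -> None:
        self._entries: dict[str, PolicyEntry] = {}

    def publish(self, entry: PolicyEntry) -> None:
        current = self._entries.get(entry.policy_id)
        # Change management: a new version must explicitly supersede the old one.
        if current is not None and entry.version <= current.version:
            raise ValueError("new version must be greater than the published one")
        self._entries[entry.policy_id] = entry

    def get(self, policy_id: str) -> PolicyEntry:
        return self._entries[policy_id]


# Usage: the website renderer, the bot, and the agent console all call get(),
# so a contradiction between channels cannot arise from divergent copies.
store = PolicyStore()
store.publish(PolicyEntry(
    policy_id="bereavement-fare",
    version=1,
    text="Bereavement fares must be requested before travel; refunds are not granted retroactively.",
    url="https://example.com/policies/bereavement-fare",  # placeholder URL
    updated_at=datetime.now(timezone.utc),
))
print(store.get("bereavement-fare").text)
```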
Architecture.
- Separation of responsibilities: content/procedure lives in a governed repository; the AI layer is isolated and retrieves only approved texts instead of inventing “new rules.”
- Hallucination control: retrieval with enforced citation from the source of truth; answer filters (see the sketch after this list).
- Limitation banner: a visible block “What the bot does/does not do,” with links to policy sections.
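A minimal sketch of "retrieval with enforced citation": the bot may answer only by quoting an approved policy text together with its link, and escalates to a human when nothing matches. The keyword matching stands in for a real retriever, and all names, texts, and URLs here are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedPolicy:
    topic: str   # e.g. "bereavement fare"
    text: str    # approved wording only
    url: str     # public policy page to cite


@dataclass(frozen=True)
class BotAnswer:
    text: str
    source_url: str | None
    escalate: bool   # True -> hand the conversation to a human agent


# The bot's entire answer space: approved texts only (placeholder content).
APPROVED = [
    ApprovedPolicy(
        topic="bereavement fare",
        text="Bereavement fares must be requested before travel; refunds are not granted retroactively.",
        url="https://example.com/policies/bereavement-fare",
    ),
]


def answer(question: str) -> BotAnswer:
    """Answer only by quoting an approved policy with its link; otherwise escalate."""
    q = question.lower()
    for policy in APPROVED:
        if policy.topic in q:  # naive retrieval; a production system would use search
            return BotAnswer(text=policy.text, source_url=policy.url, escalate=False)
    # Hallucination control: no approved source means no generated rule.
    return BotAnswer(
        text="I can't confirm this from the published policy, so I'm connecting you to an agent.",
        source_url=None,
        escalate=True,
    )


print(answer("Can I get a bereavement fare refund after my trip?"))
```

The design choice is that the generation layer never produces policy wording of its own: it either quotes with a citation or hands off, which is the opposite of the behavior that created the contradiction in the case.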
Control and audit.
- Roles: a product owner and a content owner should be assigned.
- Audit: periodic conformance checks, continuous A/B probes of answers, external audit of critical scenarios.
- Metrics: cross-channel contradictions, share of appeals, time-to-fix, compensation amounts (a sketch of computing these from logs follows below).
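A sketch of how the listed metrics could be computed from dialogue logs and appeal tickets. The record fields (bot_text, site_text, appealed, reported_at, fixed_at) are assumptions about what a logging schema might contain, not fields from the case.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class LoggedAnswer:
    policy_id: str
    bot_text: str                   # what the bot said
    site_text: str                  # what the website showed for the same policy
    appealed: bool                  # did the user open an appeal ticket?
    reported_at: datetime | None    # when a contradiction was reported
    fixed_at: datetime | None       # when the knowledge base was corrected


def contradiction_rate(log: list[LoggedAnswer]) -> float:
    """Share of answers where the bot and the site disagreed."""
    if not log:
        return 0.0
    return sum(a.bot_text != a.site_text for a in log) / len(log)


def appeal_share(log: list[LoggedAnswer]) -> float:
    """Share of logged answers that led to an appeal."""
    if not log:
        return 0.0
    return sum(a.appealed for a in log) / len(log)


def mean_time_to_fix(log: list[LoggedAnswer]) -> timedelta | None:
    """Average time from a reported contradiction to the published correction."""
    fixes = [a.fixed_at - a.reported_at for a in log if a.reported_at and a.fixed_at]
    return sum(fixes, timedelta()) / len(fixes) if fixes else None
```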
Results: metrics, lessons, improvements
Outcome. The Tribunal awarded CAD 650.88 as partial compensation, plus interest to the decision date and the CRT fee.
Lessons.
- A chatbot is not a separate legal “person”; all channels form one system.
- Consumer scenarios require a single source of truth and visible limitations.
- A user’s reasonable reliance on an official channel should be respected.
What to improve.
- Shorten the error‑correction cycle (responses, hotfixes, knowledge‑base updates).
- Expand monitoring: triggers for potentially “costly” promises and cross‑channel inconsistencies.
- “Honesty modes” by context: strict quotation of policy text for sales scenarios; careful generation with source links for general information.
Check “outside the Code”
- Consumer‑protection law/unfair practice: prohibition on misleading statements; liability of the channel owner.
- Channel conflicts: official policy has priority, yet the user may reasonably rely on an official chatbot as part of the site.
- Data protection: logs and appeals should be stored in line with local law.
Checklist “Replicate in your organization”
- An accountable owner for content and AI channels should be assigned.
- A contact point for appeals should be defined.
- Dialogue logs, policy versions, and publications should be kept.
- Bot limitations and scope should be published.
- Final decisions should be left to a human (compensation/exceptions).
- The bot should access only an approved knowledge base.
- A cross-channel change pipeline with tests and notifications should be in place (see the test sketch after this checklist).
- Track metrics: contradictions, appeals, time‑to‑fix, compensation.
- Conduct regular external audits for critical scenarios.
- Train staff for appeals and “correcting the bot’s promises.”
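One way to put the "pipeline with tests" item into practice is a small pytest-style check, run on every policy change, that compares the bot's answer with the published page for the same policy. The two loader functions are stand-ins for reading the real knowledge base and calling the bot in a test environment; everything here is a hypothetical example.

```python
# test_channel_consistency.py -- illustrative only; the loaders are assumed helpers.

def load_published_policies() -> dict[str, str]:
    """Stand-in for reading the approved knowledge base (policy_id -> published text)."""
    return {"bereavement-fare": "Bereavement fares must be requested before travel."}


def bot_answer_for(policy_id: str) -> str:
    """Stand-in for querying the bot in a test environment."""
    return "Bereavement fares must be requested before travel."


def test_bot_matches_published_policy():
    # A failing assertion blocks the release, so a contradiction cannot reach users.
    for policy_id, published_text in load_published_policies().items():
        assert bot_answer_for(policy_id) == published_text, (
            f"Bot answer for {policy_id} diverges from the published policy"
        )
```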
Suggestions for updates (product/process)
- “Verify on site” module in the bot UI: each fact is paired with a link to the matching policy section.
- Risk flags in answer moderation: promises involving money, discounts, or deadlines require mandatory human review (a sketch follows this list).
- Automatic recall: when a policy is corrected, the bot notifies the users who received the earlier, affected answers.
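A minimal sketch of the "risk flags" idea: answers that promise money, discounts, or deadlines are held for mandatory human review before being shown. The trigger patterns and function names are illustrative assumptions, not a recommended production rule set.

```python
import re
from dataclasses import dataclass

# Illustrative triggers: promises about money, discounts, or deadlines.
RISK_PATTERNS = [
    re.compile(r"\brefund\b", re.IGNORECASE),
    re.compile(r"\bdiscount\b", re.IGNORECASE),
    re.compile(r"\bwithin \d+ (days|hours)\b", re.IGNORECASE),
    re.compile(r"[$€£]\s?\d"),
]


@dataclass(frozen=True)
class ModerationResult:
    approved: bool   # False -> route the draft answer to a human reviewer
    reason: str


def moderate(answer_text: str) -> ModerationResult:
    """Hold 'costly promise' answers for mandatory human review."""
    for pattern in RISK_PATTERNS:
        if pattern.search(answer_text):
            return ModerationResult(False, f"matched risk pattern: {pattern.pattern}")
    return ModerationResult(True, "no risk triggers")


# Example: this draft would be flagged because it promises a refund within a deadline.
print(moderate("You can request a refund within 90 days after purchase."))
```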
Disclaimer. This material is for education and quality improvement; it is not a fatwa (فَتْوَى), an audit, or legal advice.