This RCA summarizes a reproducible failure pattern in OpenAI's support pipeline, combining technical system behavior with EU digital-services law. All findings are based exclusively on observable artifacts, email output, and public EU legislation.

## 1. Technical Failure Cluster

### 1.1 Policy-Container Drift (Mode Switching)

Across a single email thread, the support agent's output exhibited:

- contradictory account classification ("Enterprise" → "not Enterprise")
- abrupt changes in tone, grammar, and template structure
- loss of previously acknowledged context
- inconsistent policy phrasing

This pattern matches policy-container drift, where different safety/policy configurations are loaded mid-interaction. It is inconsistent with human behavior and consistent with automated triage plus fallback templates.

### 1.2 Quoted-Printable Encoding Leakage

Multiple emails contained raw encoding artifacts:

- `=E2=80=99` (curly apostrophe, U+2019)
- `=E2=80=93` (en dash, U+2013)

These are standard MIME quoted-printable escapes for UTF-8 characters. They appear when generated text is injected into an email system and the quoted-printable layer is never decoded before rendering. This is a machine-rendering failure, not a human typo.

### 1.3 Billing-Retrieval Degradation

The support system repeatedly claimed: "Only one successful charge was applied." This was factually incorrect: the referenced transaction was a partial refund, not the only payment. This indicates one of the following:

- incomplete database retrieval, or
- a fallback to a simplified billing template, or
- misclassification of the ticket as a "low-context billing inquiry"

A human reviewer would not misinterpret a refund as the only payment.

### 1.4 Automated Decision-Making Without Escalation

Despite explicit requests for:

- senior support
- privacy/compliance review
- human escalation

the system never escalated. This suggests automated decision-making without human override, which has legal implications under GDPR Art. 22.
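The artifacts described in §1.2 can be reproduced and decoded with Python's standard-library `quopri` module. A minimal sketch; the sample string below is illustrative, not a quote from the actual emails:

```python
import quopri

# Sample text containing the same artifacts seen in the emails:
#   =E2=80=99 is the UTF-8 quoted-printable escape for U+2019 (curly apostrophe)
#   =E2=80=93 is the UTF-8 quoted-printable escape for U+2013 (en dash)
raw = "Here=E2=80=99s your billing summary =E2=80=93 one charge applied."

# quopri operates on bytes: decode the =XX escapes to raw bytes,
# then interpret those bytes as UTF-8 text.
decoded = quopri.decodestring(raw.encode("ascii")).decode("utf-8")
print(decoded)  # Here’s your billing summary – one charge applied.
```

Seeing the raw `=E2=80=` sequences in a delivered message body means a quoted-printable transfer-encoding step was applied (or preserved) without the corresponding decode before rendering, consistent with the machine-rendering failure described above.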
## 2. Legal Failure Cluster (EU Law)

### 2.1 Non-Conformity Under EU Directive 2019/770

EU law requires digital services to be:

- stable
- functional
- reliable as advertised

Persistent instability, mode shifts, and loss of functionality constitute non-conformity under Articles 6–8 of Directive (EU) 2019/770. When a digital service is non-conformant, the consumer has the right to:

- repair
- price reduction, or
- refund

Internal "non-refundable" policies cannot override EU law.

### 2.2 Transparency Requirements (GDPR Articles 12–14)

If a company uses automated systems to:

- classify accounts
- deny refunds
- make contractual decisions

it must disclose the automation and provide meaningful human review. The support system:

- used a human signature
- did not disclose automation
- did not escalate to a human

This is a transparency deficit under the GDPR.

### 2.3 Automated Decision-Making (GDPR Article 22)

If a refund denial is made by an automated system, the user has the right to:

- human intervention
- an explanation
- contesting the decision

No such pathway was provided.

## 3. Root Cause Summary

**Primary root cause:** Automated support misclassified the request as a standard billing inquiry, triggering template-based responses and incorrect billing summaries.

**Secondary root cause:** Policy-container drift caused contradictory statements, context loss, and inconsistent reasoning.

**Tertiary root cause:** Lack of human oversight allowed automated decisions to stand unchallenged, conflicting with EU consumer and data-protection requirements.

## 4. Impact

- incorrect billing information
- contradictory account classification
- inability to access human review
- non-compliance with EU digital-services obligations
- potential GDPR transparency issues
- erosion of user trust

## Conclusion

The incident reflects a systemic support failure, not a single error.
The combination of:

- automated decision-making
- policy-container drift
- incorrect billing retrieval
- lack of human escalation, and
- non-conformity under EU law

creates a reproducible failure mode with both technical and legal implications.
Originally posted by u/Krieger999 on r/ArtificialInteligence
