When AI Becomes Your Worst Legal Advisor
The CEO of Krafton, a multibillion-dollar gaming company, just learned an expensive lesson: ChatGPT isn't a lawyer, and treating it like one can cost you everything.
According to recent court documents, the CEO used ChatGPT to figure out how to void a $250 million contract with the head of the studio developing Subnautica 2. His actual lawyers told him it was a terrible idea. He ignored them. The court sided with the ousted executive, and Krafton is now facing massive consequences.
This isn't just another "CEO does something stupid" story. It's a masterclass in how not to use AI — and a warning sign for every business racing to deploy AI without understanding its limitations.
The Difference Between AI That Helps and AI That Hurts
ChatGPT is an incredible tool. It can draft emails, brainstorm ideas, summarize documents, and even write code. What it can't do is replace expertise in high-stakes situations where context, judgment, and accountability matter.
The Krafton CEO's mistake wasn't using AI. It was using AI for the wrong job and ignoring the right experts. He treated a general-purpose language model as if it had the contextual understanding of a legal team that knew his company, his contracts, and the potential consequences of his actions.
This is the AI equivalent of asking a smart intern to perform surgery. Sure, they can look up how it's done, but you wouldn't want them holding the scalpel.
Why Customer Service AI Is Different
Here's where the plot thickens: not all AI applications are created equal. The reason customer service is one of the strongest use cases for AI isn't because the technology is smarter here — it's because the problem is better suited to what AI actually does well.
Customer service conversations follow patterns. Customers ask similar questions, report similar issues, and need similar resolutions. An AI trained on thousands of real customer interactions can recognize these patterns and respond appropriately. When it encounters something new or complex, it can escalate to a human.
The key difference? Clear boundaries and defined escalation paths. AI handling customer conversations isn't making irreversible legal decisions. It's answering questions about product features, helping with password resets, and routing complex issues to the right team.
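That escalation logic can be made concrete. Here's a minimal sketch of a confidence-gated router for customer conversations; the intent labels, threshold, and `classify()` stub are illustrative assumptions, not Darwin AI's actual implementation or any real API.

```python
# Hypothetical sketch: confidence-gated escalation for a support assistant.
# The intents, threshold, and classifier below are invented for illustration.

ESCALATION_THRESHOLD = 0.75  # below this confidence, hand off to a human

# Intents the AI is allowed to resolve on its own (assumed categories)
SELF_SERVE_INTENTS = {"password_reset", "product_features", "order_status"}

def classify(message: str) -> tuple[str, float]:
    """Stand-in for a real intent classifier: returns (intent, confidence)."""
    # Toy keyword heuristic so the sketch runs end to end
    text = message.lower()
    if "password" in text:
        return "password_reset", 0.93
    if "refund" in text:
        return "billing_dispute", 0.60
    return "unknown", 0.20

def route(message: str) -> str:
    """Return 'ai' only when the model is confident AND the intent is in-bounds."""
    intent, confidence = classify(message)
    # Escalate on low confidence OR any intent outside the allowed set
    if confidence < ESCALATION_THRESHOLD or intent not in SELF_SERVE_INTENTS:
        return "human"
    return "ai"
```

The design choice here is that escalation is the default: the AI only keeps a conversation when it is both confident and operating inside an explicitly approved boundary, e.g. `route("I forgot my password")` stays with the AI while `route("I want a refund")` goes to a human.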
When we built Darwin AI's AI Workforce, we didn't just ask "how can AI solve this?" We asked "what should AI solve, and when should it hand off to humans?" That's the AI-first thinking that actually works — not AI-only thinking that ignores reality.
The Real Cost of AI Without Accountability
The Krafton case reveals something deeper than one CEO's poor judgment. It shows what happens when businesses treat AI as a shortcut around expertise rather than a tool that amplifies it.
Consider the stakes:
- Legal exposure: Krafton now faces potential damages and reputational harm
- Team trust: How do you think Krafton's legal team feels about being ignored?
- Precedent setting: This case will be cited in future disputes about AI-generated advice
- Operational chaos: Losing a key studio head mid-development on a major title
Every one of these consequences was avoidable. The AI didn't fail — the human using it failed to understand when AI was the wrong tool for the job.
What This Means for Your Business
If you're deploying AI in your organization, here's what the Krafton disaster should teach you:
Start with the right problems. AI excels at repetitive, pattern-based tasks with clear success metrics. Customer inquiries? Perfect. Complex contract law with millions at stake? Absolutely not.
Build in human oversight. The best AI systems know their limits. They should flag uncertainty and escalate when needed. A customer service AI that says "let me connect you with a specialist" is infinitely more valuable than one that confidently gives wrong answers.
Train your team on AI limitations. Your employees need to understand what AI can and can't do. The Krafton CEO clearly didn't. Make sure your leadership team does.
Measure what matters. Track not just how often your AI is used, but how often it's right. Monitor escalation rates, customer satisfaction, and resolution quality. If you're not measuring outcomes, you're flying blind.
The Path Forward
The companies that will win with AI aren't the ones that use it for everything. They're the ones that use it for the right things.
At Darwin AI, we've focused obsessively on customer conversations because they're a problem where AI genuinely creates value. We can handle thousands of simultaneous conversations, respond instantly 24/7, and maintain consistency across every interaction. When something requires human judgment, empathy, or complex problem-solving, we route it to your team.
That's not a limitation — it's a feature. Extreme ownership means knowing what we're responsible for and what we're not. We're not trying to replace human expertise. We're trying to free it up for the moments that actually matter.
The Krafton CEO's mistake wasn't believing in AI. It was believing AI could replace the judgment, context, and accountability that only humans can provide. Don't make the same mistake.
The Bottom Line
AI is transforming how businesses operate, but it's not magic. It's a tool that's extraordinarily good at some things and terrible at others. The key to successful AI deployment isn't just asking "what can AI do?" — it's asking "what should AI do, and when should humans take over?"
If your customer service team is drowning in routine inquiries while complex issues go unresolved, that's an AI problem worth solving. If you're thinking about replacing your legal team with ChatGPT, maybe talk to Krafton first.
The future belongs to businesses that understand the difference.
Want to see how AI can actually help your customer service team without replacing human judgment? Let's talk about what an AI Workforce can do for your business.