That Ducking Keyboard Is My Fault — and What It Taught Me About UX

If you've ever tried to type a certain word and your phone changed it to "ducking," I'm sorry. That's on me.
As part of the original iPhone team at Apple, I worked on the keyboard, autocorrect, and the fundamental interfaces that defined how we interact with our phones. For over fifteen years, people have been yelling at me through their screens every time autocorrect betrays them at precisely the wrong moment. That profanity-to-waterfowl transformation? Yeah, I had a hand in that.
But here's the thing: that "ducking" keyboard taught me something crucial about designing intelligent systems that I'm applying today at Infactory. The lessons from those early days of mobile UX are more relevant than ever as we navigate the age of AI.
Designing for Human Error, Not Just Machine Logic
When we designed autocorrect, we weren't trying to build the smartest system possible. We were trying to build the most helpful one. The challenge wasn't technical elegance—it was understanding how humans actually behave when they're typing with their thumbs on a 3.5-inch screen.
We knew people would make mistakes. Fat fingers, rushed typing, thumb-gymnastics while walking—we designed for all of it. The goal was "friendly fallibility": creating an interface that supports you when you're wrong, not just rewards you when you're right.
We didn't want the phone to be "smart" in some abstract sense. We wanted it to be helpful in very specific, human ways.
But here's where it gets interesting: the technical implementation had to be invisible to feel natural. Behind that seemingly simple keyboard was sophisticated predictive modeling, contextual awareness, and real-time adaptation. The more complex the system became, the more important it was that users never felt like they were fighting with a computer.
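To make that concrete, here's a deliberately tiny sketch of the two signals at play: how plausibly the keys you hit could have been a slip toward a real word, and which words you actually use. This is illustrative Python I'm inventing for this post, not the keyboard's actual code; the vocabulary, adjacency table, and weights are all toy values.

```python
# A toy sketch, not the real iPhone keyboard code: score candidate words by how
# plausibly the typed keys could be a slip toward them, then bias toward words
# this particular user types often, and keep adapting as words are accepted.

from collections import Counter

# Hypothetical starter vocabulary with usage counts (grows as the user types).
word_counts = Counter({"ducking": 3, "duck": 20, "typing": 40, "thumbs": 15})

# Tiny excerpt of QWERTY adjacency, used to model fat-finger slips.
neighbors = {"d": "sfxc", "u": "yij", "c": "xvdf", "k": "jlm", "m": "njk", "n": "bmjh"}

def slip_cost(typed_ch: str, wanted_ch: str) -> int:
    """0 if the key matches, 1 for an adjacent key, 3 for anything else."""
    if typed_ch == wanted_ch:
        return 0
    return 1 if wanted_ch in neighbors.get(typed_ch, "") else 3

def mismatch(typed: str, candidate: str) -> int:
    """Crude per-character cost; different lengths get a flat penalty."""
    if len(typed) != len(candidate):
        return 2 * abs(len(typed) - len(candidate)) + 4
    return sum(slip_cost(t, c) for t, c in zip(typed, candidate))

def autocorrect(typed: str) -> str:
    """Leave known words alone; otherwise suggest the closest frequent word."""
    if typed in word_counts:
        return typed
    best = min(word_counts, key=lambda w: mismatch(typed, w) - 0.05 * word_counts[w])
    # Only correct when the best candidate is genuinely close to what was typed.
    return best if mismatch(typed, best) <= 2 else typed

def accept(word: str) -> None:
    """Real-time adaptation: every accepted word becomes a likelier suggestion."""
    word_counts[word] += 1
```

Even this toy version shows why the complexity has to stay hidden: the user just sees "duckimg" become "ducking" and keeps typing.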
The Tension Between Intelligence and Intuition
Fast-forward to today's AI landscape, and I see the same fundamental tension playing out on a much larger scale. Too many AI systems prioritize raw capability over clarity. LLMs, copilots, and AI agents often suffer from the same core UX problems we grappled with on the iPhone:
- Overconfident output with no accountability for mistakes
- Black-box decision-making that users can't interrogate
- No graceful way to say "I don't know" while remaining useful
- Hallucinations presented with the same confidence as facts
The more intelligent a system becomes, the more critical it is that users understand what it's doing and why. Intelligence without transparency isn't helpful—it's just unpredictable.
What We Got Right, and What We Didn't
Looking back at the iPhone keyboard, we nailed some fundamentals:
Fast feedback loops: Touch something, get an immediate response. No waiting, no ambiguity about whether the system registered your input.
Context awareness: The keyboard learned from your patterns and adapted. It got better at predicting what you wanted to say.
Forgiveness: The system assumed you meant something reasonable, even when your input was messy.
But we also missed some crucial elements:
No transparency: Users had no insight into why a word was changed. The correction just... happened.
Limited control: No easy way to say "never change this word again" or "I really did mean what I typed."
Poor failure modes: When prediction failed, it often failed spectacularly and at the worst possible moments.
These gaps taught me that even well-intentioned intelligence can feel adversarial if users don't understand or control it.
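For contrast, the missing pieces wouldn't have been hard to express. Here's a purely illustrative extension of the toy corrector above (the function names are mine, not anything we shipped): a reason attached to every correction, and a one-tap way to say "never change this word again."

```python
# Purely illustrative, building on the toy corrector sketched earlier: attach a
# reason to every correction, and give the user a permanent veto over specific words.

never_correct: set[str] = set()   # "I really did mean what I typed."

def autocorrect_with_control(typed: str) -> tuple[str, str]:
    """Return the suggestion plus a human-readable reason for it."""
    if typed in never_correct:
        return typed, "left alone: you told the keyboard never to change this word"
    suggestion = autocorrect(typed)
    if suggestion == typed:
        return typed, "left alone: no close, common word found"
    return suggestion, f"changed to '{suggestion}': close to the keys you hit, and a word you use often"

def reject_correction(typed: str) -> None:
    """One tap to say 'never change this word again.'"""
    never_correct.add(typed)
    accept(typed)   # and learn it, so it becomes part of the vocabulary
```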
From the Keyboard to Infactory — Designing for Trust in AI
At Infactory, I'm working on a different kind of interface—not a keyboard, but an AI platform that helps enterprises turn their data into reliable, queryable assets. Yet the same core principles apply:
Trust the user's intent: When someone asks a question of their data, assume they have a good reason for asking it that way.
Make the system explainable: Every answer should be accompanied by provenance. Users need to trace results back to sources.
Don't hide the logic: Black boxes breed mistrust, especially when business decisions depend on the output.
Let the system acknowledge uncertainty: An AI that can say "I'm not confident about this" is infinitely more trustworthy than one that presents every response with false certainty.
This philosophy directly shapes Infactory's Unique Query Methodology™. Instead of probabilistic outputs that vary each time you ask the same question, UQM provides deterministic, repeatable results. Ask the same question twice, get the same answer twice. Every response includes data lineage, so users can verify and understand the reasoning.
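To show the shape of that contract, here's an illustrative sketch. It is not the actual UQM implementation; the names (Answer, run_query) and the toy query plan are invented for this post. What matters is the pattern: no sampling, an answer object that carries its own lineage, and identical questions producing identical results.

```python
# Illustrative only: the contract described above, not Infactory's actual UQM code.

from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class Answer:
    value: float
    lineage: tuple[str, ...]    # exactly which sources and steps produced this value
    query_fingerprint: str      # same question -> same fingerprint -> same answer

def run_query(question: str, table: dict[str, float]) -> Answer:
    """A deterministic query: no sampling, no temperature, fully traceable."""
    fingerprint = hashlib.sha256(question.encode()).hexdigest()[:12]
    # Toy query plan: sum the revenue rows and record exactly which rows were used.
    rows = sorted(k for k in table if k.startswith("revenue"))
    return Answer(
        value=sum(table[k] for k in rows),
        lineage=tuple(f"source:{k}" for k in rows) + ("step:sum",),
        query_fingerprint=fingerprint,
    )

data = {"revenue_q1": 1.2, "revenue_q2": 1.5, "costs_q1": 0.8}
first = run_query("total revenue this year?", data)
second = run_query("total revenue this year?", data)
assert first == second   # ask the same question twice, get the same answer twice
print(first.value, first.lineage)
```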
When you're dealing with enterprise data—such as financial models, compliance reports, and strategic intelligence—"getting it wrong" isn't just frustrating. It can be legally risky or financially catastrophic. The UX of trust matters more than the UX of convenience.
Designing Systems That Earn Trust
Today's AI systems are vastly more sophisticated than that first iPhone keyboard, but they're often just as "ducky" in their unpredictability. The goal still isn't perfection—it's clarity, confidence, and control.
The next wave of AI interfaces must respect the human on the other side. Whether you're querying enterprise data, generating content, or building intelligent agents, users need to feel like partners in the interaction, not victims of algorithmic whim.
At Infactory, we're building AI that shows its work. When our system processes your data and generates insights, you can trace every step from raw input to final answer. You control the queries, understand the sources, and trust the results because the system earns that trust through transparency.
That autocorrect keyboard taught me something fundamental: you can't build trustworthy AI if users feel like the punchline. The most sophisticated technology in the world is worthless if people can't rely on it when it matters most.
The future belongs to AI systems that are not just intelligent, but also intelligible. And after years of people cursing at their phones because of decisions I helped make, I'm determined to get that balance right this time.
Ken Kocienda is co-founder of Infactory, where he's building trustworthy AI platforms for enterprise data. He previously worked at Apple for over 15 years, contributing to the original iPhone, iPad, and Safari. He's the author of "Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs."