Designing AI as a Trusted Collaborator in the User Experience
In my last post, I wrote about How AI is Reshaping The Way We Design—from building static flows to creating responsive systems that adapt to real-time user context. That shift has been exciting to explore, but it’s also pushed me to rethink a lot of the fundamentals I took for granted.
Because once you start designing for AI-powered moments instead of static screens, you run into a bigger question:
How do we figure out what role AI should play in those moments?
This is something I’ve been actively working through in practice—testing, learning, and sometimes stumbling—as we try to bring AI into our products in a way that feels purposeful and integrated, not just novel or bolted on after the fact. A big part of that challenge is about building trust—ensuring users feel confident that the AI not only understands the context, but also knows when to take care of something on their behalf, and when to step back and support or guide them instead.
And one idea that’s been especially helpful is something we’ve started calling agent mapping—a way to hypothesize the role of AI in a user’s journey, not just as a tool, but as a collaborator. It helps us consider not just what the AI does, but why, when, and how it should act to genuinely support the user’s goals.
Designing AI as a trusted collaborator
What I’m learning is this: it’s not enough to decide that AI will "automate this" or "suggest that." If we want AI to actually help people, we have to think of it as a partner in the process. A collaborator. Something that understands what the user is trying to do, and offers value in a way that feels helpful and respectful.
But that kind of collaboration doesn’t just happen. It has to be designed.
So the question becomes: What kind of teammate does the user need right now?
Do they need someone to handle the busywork so they can focus? Do they need a second opinion to help make a decision? Do they need quiet support in the background—or clear, confident direction in the foreground?
The answer depends on what the user is trying to accomplish, how complex the task is, and what’s going on around them in that moment.
Mapping the agent’s role starts with the user’s job
So we begin with the user’s core job: What are they trying to get done? What’s slowing them down? What’s at stake if it goes wrong—or what could go better if it goes right?
From there, we ask: Where could an AI agent (or multiple agents) step in as a useful collaborator?
This could look like automating repetitive tasks they shouldn’t have to do manually anymore, surfacing insights they might otherwise miss, or helping them make sense of a complex situation—sifting through data and guiding them toward action.
But the key is that the AI’s role is directly tied to real user needs, not just feature ideas.
It’s rooted in purpose, not novelty.
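To make that a little more concrete, here is one way a team could capture an agent-mapping hypothesis as a lightweight artifact. This is purely an illustrative sketch in TypeScript; the field names and role categories are my own assumptions, not an established template or anything my team has standardized on, and the same thinking works just as well on a whiteboard or in a doc.

// An illustrative shape for one agent-mapping hypothesis.
// Field names and role categories are hypothetical, not a standard.
type AgentRole = "automate" | "suggest" | "guide" | "stay-quiet";

interface AgentMappingHypothesis {
  userJob: string;       // what the user is trying to get done
  friction: string;      // what's slowing them down, or what's at stake
  moment: string;        // the point in the journey where the agent steps in
  role: AgentRole;       // how the agent shows up: in front, beside, or behind the user
  rationale: string;     // why this role serves the user's goal, not just a feature idea
  successSignal: string; // what we'd expect to observe if the hypothesis holds
}

// Example: one hypothesis from a journey map, phrased as something to test.
const draftHypothesis: AgentMappingHypothesis = {
  userJob: "Reconcile this month's expense reports",
  friction: "Manually matching receipts to transactions eats the first hour",
  moment: "When a new batch of receipts is uploaded",
  role: "automate",
  rationale: "Repetitive matching the user shouldn't have to do by hand anymore",
  successSignal: "Users accept the matches without re-checking them line by line",
};

Writing it down this way keeps the hypothesis honest: every field is something you expect to learn about and revise, not something you have already decided.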
Why hypothesize? Because we’re still figuring it out
Calling this a "hypothesis" is important—because it keeps us in a learning mindset. We’re not declaring what the AI is; we’re testing what it could be. It gives us permission to experiment, to adjust, and to listen closely to how users respond.
And honestly, that’s been liberating. I’ve found that when we frame the AI’s role as a hypothesis, it opens up better conversations with the team. We’re clearer about why we’re building something, and what outcome we expect from an agent's actions. We’re more focused on solving problems, not just shipping features.
It’s about the relationship, not the output
This is probably the biggest shift in my mindset: I’ve stopped thinking about AI as a set of outputs—and started thinking about it as a relationship.
What does the user need from their AI collaborator in this moment?
Trust? Speed? A sense of control? Reassurance? That context shapes not just what the AI does, but how it shows up—its tone, its timing, even whether it should speak up at all.
When you design from that angle, the AI starts to feel less like a machine and more like a real part of the experience—one that adapts alongside the user, moment to moment.
It’s still a work in progress
This is ongoing work. I don’t have all the answers... yet.
But this approach—starting with the user’s job and what they need in any given moment, forming a hypothesis about AI’s role, and treating the agent as a collaborator—has already helped my team design more intentional, more grounded AI experiences.
It’s not about doing everything with AI. It’s about doing the right things—at the right time—in a way that feels like a natural extension of what the user is already trying to do.
And when it works, it’s powerful: AI stops being just another feature.
It becomes part of how people get things done—faster, smarter, and with a little less friction.
And those are the kinds of collaborative experiences I’m striving to create—experiences that don’t replace the user, but elevate them into something superhuman: inspiring creativity, making them better informed, helping them focus on what matters most in their role, and amplifying their potential beyond what they ever thought possible.