Designing for Presence: The Promise and Risk of AI
With OpenAI’s acquisition of Jony Ive’s startup, io, the stage is set for something more than a new device. What’s being imagined is not just another gadget but a companion: a pocket-sized, largely screen-free presence that goes everywhere with you.
It listens. It learns. It anticipates. It becomes, in a very real sense, part of you.
There is awe in that idea.
A technology that is invisible when you don’t need it and helpful the moment you do. A device that frees us from our screens and gives us back our attention.
But alongside the awe, there is a responsibility to pause and ask: Is this the direction we truly want to take? Is this what we mean when we say we want technology to move humanity forward?
Lessons from Glass
History warns us to tread carefully. A decade ago, Google Glass arrived with the promise of seamless, wearable computing. Instead, it met with rejection.
Not because it didn’t work, but because it didn’t belong.
It unsettled, it intruded, and its wearers earned the infamous nickname “Glassholes.” The failure was not just in the hardware; it was in the lack of empathy for the human experience it disrupted.
That lesson echoes today.
Technology untethered from real human need risks becoming spectacle instead of progress. The difference between something that empowers and something that alienates lies not in its features, but in whether it truly serves us.
A double-edged sword of presence
In the right context, the promise is undeniable.
In a healthcare setting, imagine an AI that listens quietly during the exchange between doctor and patient. It could surface relevant research, highlight risks, suggest next steps—all in real time, supporting decisions that shape lives.
In this context, such a companion could be transformative. It could expand human capacity, sharpen judgment, and make the impossible manageable.
Here, AI doesn’t replace us; it stands beside us.
It helps us see more clearly, decide more wisely, and act more compassionately.
But outside the exam room, the same presence begins to feel intrusive.
If that same device follows me into dinner with friends, or lingers while I sit alone with my own thoughts, it becomes something else entirely.
It begins to crowd out silence, erode reflection, and chip away at the fragile spaces where human thought is born.
If AI is always suggesting, nudging, or finishing our sentences, then what happens to the essential work of being human?
What happens to our ability to struggle with uncertainty, to wrestle with meaning, to find our own voice in the quiet?
The power of always-on AI depends entirely on context: whether it knows when to stand with us, and when to step aside.
If AI is thinking for us, who do we become?
Design guardrails
The answer lies not in whether we can build such a device—we clearly can—but in how we choose to build it, and why.
This is the moment to set the guardrails.
To decide, together, the principles that will guide these new companions into our lives.
Technology must solve real problems. It must be anchored in empathy, designed for trust, and aligned with human flourishing, not built for spectacle or unleashed simply because it can be. The makers of our future carry a profound responsibility: to create technologies that amplify our humanity rather than diminish it.
The choice before us
The arrival of an AI that comes with us everywhere forces a deeper reckoning. It is not just about design or engineering—it is about who we want to be as a species.
What if the true test of innovation is not how much AI can do, but how gracefully it knows when to disappear?
This is the choice before us.
And the time to make it is now.