Google Nest 2024
My Role: Creative Director, Lead Designer
Scope: Vision Concepts
Deliverables: Socialization Deck, Prototypes, Hero Reel
Status: Unknown; possibly in production at Google

What if your OS already knew what you needed?

Design concepts exploring how Gemini could be woven natively into a Google OS — not as an assistant you summon, but as an ambient intelligence that surfaces at exactly the right moment.

There was no template for this. We had to invent the mental model before we could design the features.

The brief was a provocation: what does it look like when Gemini stops being a feature you open and starts being the intelligence layer of the entire OS? Not a chatbot in a window — an ambient presence that surfaces at the right moment without being asked. There were no established patterns. Every interaction decision required building the design logic from scratch — thinking through each idea against how people actually behave, not how we hoped they would.

The central constraint was restraint. An AI that's everywhere risks feeling like surveillance. The work had to find the line between genuinely useful anticipation and unwanted intrusion — and hold it across every concept. Getting that line right felt less like a design problem and more like an ethics problem.

Concept 01

The launcher as a contextual AI gateway.

The interesting question wasn't whether Gemini could live in the launcher; it was whether it should feel like a search bar, a conversation, or something that's neither. We explored how a conversational entry point could appear inline without interrupting your current context, evolving the launcher from a navigation utility into an intelligent workflow accelerator that still feels like a natural extension of the OS rather than a bolted-on feature.

Concept 02

The lock screen as a proactive briefing.

One of the most forward-looking concepts in the sprint: what if the first thing you saw when you picked up your device wasn't a clock, but a curated summary of what matters right now? A rescheduled meeting, a price drop on something you've been watching, a message that needs a reply.

We designed a lock screen that evolves from a passive timestamp into a proactive daily briefing — surfacing contextual information from across your apps and calendar without requiring you to open anything.

Concept 03

Proactive Work Spaces.

Today's desktop leaves the work of organization largely to the user: folders, windows, and apps remain deliberately separate, and people must manually bridge the gaps between their own information. Proactive Work Spaces reimagines the OS as a contextual layer that indexes across applications, files, messages, and platforms to surface everything relevant to a moment in your life, without requiring you to organize it first.

Planning a trip, managing a project, navigating a life event: the OS learns what belongs together and builds that space for you. This becomes especially meaningful for neurodiverse users, for whom the cognitive load of organization, not the work itself, is often the barrier.

Concept 04

System intelligence made ambient.

The final concept explores Gemini as a persistent system-layer presence: a floating widget that surfaces contextual device intelligence without requiring the user to open a settings panel or run a diagnostic. Battery optimization suggestions, connectivity alerts, storage recommendations, each delivered as a gentle, well-timed nudge rather than a disruptive notification. The hardware itself becomes a surface for ambient intelligence, not just a container for apps.

Designing for a capability that's still becoming itself.

This work was developed in early 2024, when agentic AI was still largely experimental. We shipped concepts we weren't fully certain about, learned from what felt right and what felt wrong, and revised. The patterns we arrived at weren't the ones we started with. That's the only honest way to design at the frontier: ship, observe, and correct.

The constraint I kept running into was the pull toward familiar UI patterns — launchers, lock screens, notification surfaces. The concepts needed to land quickly with stakeholders, which meant anchoring to mental models people already had. But the more interesting question, the one I didn't get to fully explore, was what happens when you stop mapping AI onto existing surfaces entirely. What does it look like when the thinking itself becomes the interface? When reasoning is visible, interactive, immersive — not a result delivered, but a process you're in together with the system? That's still an open problem. And I think it's one of the most important ones we haven't solved yet.