How I Stay on Top of Everything (The Honest Answer)
A Norwegian business student asked me last week. I finally have an answer I actually believe.
I’ve been giving the same answer to the same question for about two years now. The question is some version of “how do you stay on top of everything?” - how do you know what to read, what to follow, what actually matters in a space moving this fast. It’s one of the questions I get most. Last week, talking to a group of business students visiting Google from Norway, I said it again - and partway through, I realized I didn’t quite believe it anymore.
Here’s what I actually think: doing leads you to the right reading and learning. Reading alone just leads to more reading.
When you’re in the middle of a real project - something that can fail, something where you have to make an actual decision, something with stakes (even if the only stakeholder is you) - you hit walls. And when you hit a wall, you go looking for the specific thing that helps you get past it. That search is completely different from scrolling a feed. You’re not consuming content. You’re solving a problem. The reading that finds you in that state sticks, because you apply it within days and know immediately whether it’s useful.
If all you’re doing is reading, you don’t have that filter. Everything feels equally important and equally distant. The volume is disorienting. And the worst part: you can’t tell the good stuff from the bad stuff until you try to use it - which you never do, because you’re too busy reading the next thing.
The answer to “how do I stay on top of everything” is: stop trying to stay on top of everything. Start something. Hit a wall. Then go find what helps you get over it.
I know this is true because I’m living it right now across several active side projects. Here’s where some of them stand.
The Yosemite Coloring Book
This one started as a style consistency test disguised as a fun activity book for my kids. I wanted to know whether I could maintain a coherent visual aesthetic across a generative image workflow - same subject matter, same feel, enough pages to hold together as a real product. Honestly, I first tried this years ago when we launched an early version of our image generation model at Google. The results were promising but never good enough - the style never looked quite right, and it wouldn’t hold across a full set of images. So I shelved it. I kept coming back to it every few months, and every time I got a little further before hitting a wall. Then recently I tried again, and the difference was immediate. That’s no longer the problem. So I made a coloring book. Yosemite. It’s on Amazon. (Fun fact: I got married there. The subject matter was not random.)
What I didn’t anticipate was how much the project would teach me about the limits of a conversational interface. I started working with Lulubot, my OpenClaw agent, to generate and iterate on the images, and pretty quickly hit the ceiling of what back-and-forth chat can do when you need fine-grained creative control across 50 unique images with accompanying facts. So I built a dedicated web app - not instead of the agent, but alongside it. The agent could control the web app. Everything I did in the web app pushed context back to the agent. Full loop.
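For the curious, the loop is simple enough to sketch. Here’s a toy version in TypeScript - the endpoints, the webhook URL, and the app state are placeholders I’m using for illustration, not the real project’s API:

```ts
// A toy sketch of the two-way loop, assuming Express and Node 18+
// (for global fetch). AGENT_WEBHOOK and both routes are invented
// names for this post, not the actual project's API.
import express from "express";

const app = express();
app.use(express.json());

// Illustrative: where the agent receives context pushed from the web app.
const AGENT_WEBHOOK =
  process.env.AGENT_WEBHOOK ?? "http://localhost:4000/context";

// In-memory stand-in for the app's working state (e.g. the current page set).
let state: { selectedPage?: number; styleNotes: string[] } = { styleNotes: [] };

// Direction 1: every edit made in the web app is mirrored to the agent,
// so the conversation never falls behind the UI.
app.post("/ui/update", async (req, res) => {
  state = { ...state, ...req.body };
  await fetch(AGENT_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ source: "web-app", state }),
  });
  res.json({ ok: true });
});

// Direction 2: the agent can drive the web app directly, e.g. to pull
// a specific page into focus before regenerating it.
app.post("/agent/command", (req, res) => {
  const { action, page } = req.body;
  if (action === "select-page") state.selectedPage = page;
  res.json({ ok: true, state });
});

app.listen(3000);
```

The point isn’t the plumbing. It’s that neither interface ever goes stale: the agent sees every edit I make in the UI, and the UI reflects every command the agent issues.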
That two-interface setup - conversational agent plus purpose-built UI working in tandem - turned into its own thesis I’m still developing. When does a dedicated app make more sense than a chat interface? Where does each one break down? What does the next generation of personalized agentic tools actually look like? I didn’t go looking for those questions. The coloring book found them for me.
Lulubot
Lulubot is my autonomous agent, built on OpenClaw and running on a dedicated Mac Mini. She has her own Gmail, her own GitHub, her own Vercel account - credential-isolated from my own so there’s a clean boundary between her identity and mine. I talk to her through Telegram. Every app she builds gets pushed to Vercel and saved to my phone as a progressive web app. The stack is growing.
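In practice, the credential boundary is mostly boring configuration. Something like this, with invented paths and variable names standing in for the real ones:

```ts
// A simplified sketch of the credential boundary. The file path and
// variable names are illustrative, not my actual setup.
import { config } from "dotenv";

// Lulubot's identity lives in its own env file on the Mac Mini -
// her Gmail, GitHub, Vercel, and Telegram credentials, nothing of mine.
config({ path: "/Users/lulubot/.env.agent" });

const lulubot = {
  gmail: process.env.LULUBOT_GMAIL_ADDRESS,
  githubToken: process.env.LULUBOT_GITHUB_TOKEN,
  vercelToken: process.env.LULUBOT_VERCEL_TOKEN,
  telegramToken: process.env.LULUBOT_TELEGRAM_BOT_TOKEN,
};

// Fail fast if any piece of her identity is missing, rather than
// silently falling back to anything in my environment.
for (const [key, value] of Object.entries(lulubot)) {
  if (!value) throw new Error(`Missing agent credential: ${key}`);
}
```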
The most interesting evolution has been the move to sub-agents. I started with one giant chat thread. A few weeks in, I reset everything and restructured: a main thread with Lulubot, and separate Telegram group chats for sub-agents scoped to specific tasks. The payoff was immediate. I know which thread to jump into when I need to tackle something, and the cognitive overhead dropped significantly.
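Underneath, the structure is just a map from chat threads to scoped agents. A toy sketch - the chat ids, names, and scopes are all invented, and this isn’t how OpenClaw actually routes anything:

```ts
// Toy model of the setup: one Telegram group chat per sub-agent,
// each scoped to a single task. All ids and names are invented.
type SubAgent = { name: string; scope: string };

const threads = new Map<number, SubAgent>([
  [-1001, { name: "lulubot", scope: "Main thread: anything goes." }],
  [-1002, { name: "coloring-book", scope: "Yosemite coloring book only." }],
  [-1003, { name: "party-planner", scope: "Birthday party logistics only." }],
]);

// Route an incoming Telegram message to the agent that owns that thread.
function route(chatId: number, text: string): string {
  const agent = threads.get(chatId);
  if (!agent) return "No sub-agent registered for this chat.";
  // In a real system this would hand the message to the scoped agent's
  // context; here we just show which agent would pick it up.
  return `[${agent.name}] (${agent.scope}) handling: ${text}`;
}

console.log(route(-1002, "Regenerate page 12 with softer line weight"));
```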
But that clarity also opened up harder questions. Should the Yosemite coloring book have its own sub-agent? If I make a Yellowstone coloring book next, is that the same agent moving to a new project, or a new agent entirely? How do you transfer skills between agents? How much context should each one carry about me?
These are product problems - not in the abstract, but in the concrete sense that they have answers that depend on real user behavior and real mental models. The only reason I have sharp opinions on them now is because I’ve been living inside the system, not just reading about it.
The Twins’ Third Birthday
I wrote about this last week - I handed the logistics of planning a birthday party for two three-year-olds to an agent, and watched carefully to see where it worked and where I got pulled back in. If you missed it, it’s worth going back to. The short version: the places where a human still needs to show up are not the places you’d expect.
Ron the Three-Legged Cat
Ron is a real cat. He lives in my neighborhood. He has three legs, and he is - I say this without exaggeration - beloved. The library down the street ran a scavenger hunt recently and one of the items on the list was Ron the three-legged cat. That’s the level of local celebrity we’re talking about.
I’m making a children’s storybook about him. It felt like the right thing to do (and my neighbors all agree).
What I did not anticipate was how hard it would be to get a cat with exactly three legs out of any image model. Four legs. A ghost fourth leg. Three in the prompt, four in the image - every time - because every model I’ve tried has a very strong prior about how many legs a cat should have, and my prompt is fighting that prior. No amount of rephrasing has resolved it.
What this tells me connects directly to something the Pomelli team worked through in their quality framework (you can read about it here): there’s a category of problem where you’re not fighting a prompting gap, you’re fighting the model’s world model. Better prompting helps at the margin. It doesn’t fix the underlying issue. Knowing the difference - model limitation versus prompting problem - is half the job of building AI products right now.
Ron deserves three legs. He’s getting them. I’m not done.
Putting It All Together
Each of these projects is teaching me something the day job alone couldn’t. Not because the day job isn’t rigorous - it is - but because side projects give you a different kind of contact with the technology. Lower stakes. Higher freedom. No team to coordinate with, no roadmap to defend. Just you and the wall. And then the interesting thing happens: the lessons don’t stay contained. The coloring book taught me something about agentic UX that I brought straight back to how I think about product interfaces at work. Ron is teaching me something about model limitations that sharpens how I talk about quality with my team. The loop runs both ways - and that’s exactly why I keep the side projects going even when things are busy. Especially when things are busy.
That’s the real answer to the Norwegian student’s question. You don’t stay on top of everything. You stay inside something real, and let the questions that come up point you toward what’s worth reading next.
Speaking of which - I have one more project to announce. But that one’s getting its own post on Monday. Stay tuned…