One can’t help but feel that the AI revolution is upon us, and it is evolving at breakneck speed. Recently, I was listening to a podcast with Rahul Vohra, CEO of Superhuman, the innovative email application, and I enjoyed how succinctly he outlined his framework for thinking about implementing AI into their product. Specifically, he described three distinct stages of AI integration, each with increasing levels of autonomy and impact. I wanted to share it here because I found it a relatable way of framing feature development, but also because it sparked my imagination about where the last stage could take us…
Stage 1: Opt-In Features
This initial stage focuses on providing users with AI-powered tools that they can choose to utilize. Think of features like “rewrite with AI” when you're composing an email, or an option to “summarize a message” when you get a really long email. These tools enhance the user experience without fundamentally changing the core functionality of the product.
The benefits of this approach are threefold. First, it allows companies to introduce AI features at a lower cost since they are not constantly running in the background. Making a call to a model costs money every time you do it; if you aren’t sure how well a feature works, you probably want to minimize how expensive it is to run while you're testing it out. Second, it gives users a sense of control and allows them to gradually adapt to the presence of AI in their workflows. Third, it doesn’t change core user behavior, so it’s a “safer” way to try out new ideas.
While the cautious approach of opt-in AI offers several benefits, it can also backfire. If you believe an AI-powered feature has the potential to significantly improve a workflow, burying it within an opt-in menu might not be the best strategy. Take, for example, the "Continued Conversation" feature, which I launched years ago in the Google Assistant. This was positioned as an opt-in to address privacy concerns around microphone usage. While this was well-intentioned, adoption of the feature suffered due to discoverability issues. This, in turn, limited our ability to gather user feedback and truly evaluate the effectiveness of this new type of interaction (being able to ask follow-up questions). The lesson learned? Opt-in features often struggle to gain traction, hindering their potential and leaving them in limbo. Whenever feasible, consider launching impactful AI features as the new default behavior, also known as “Always-On Features”...
Stage 2: Always-On Features
This next phase moves towards AI features that are simply part of the product experience, and as such are “always on”. In the case of Superhuman, the examples given included opening your inbox and seeing a one-line AI-generated summary of each message, allowing you to quickly prioritize your responses, or an auto-response feature that drafts replies based on the content of the email and your usual communication style.
Another example of a feature I would put in this category is something I wish my Voice Memo app did. Currently, when I record a voice memo, the default title of the clip is the location where I recorded it, which honestly isn’t very helpful. Most of them are roads in my neighborhood that I randomly frequent and have nothing to do with the subject of my voice memo. A much better title would be a one-line summary auto-generated from the content of the clip itself, something that can easily be done with a simple call to a generative model.
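To make that concrete, here is a minimal sketch of how such an auto-titling call might look. The `summarize` callable is a stand-in for whatever generative-model API the app uses (a hypothetical interface, not any specific provider's); here it's a trivial stub so the example runs on its own.

```python
# Sketch: title a voice memo from its transcript instead of its location.
# summarize() stands in for a real generative-model call.

def title_for_memo(transcript: str, summarize) -> str:
    """Return a short, content-based title, with a safe fallback."""
    try:
        title = summarize(
            "Summarize this voice memo as a title of at most "
            "eight words:\n" + transcript
        ).strip()
    except Exception:
        title = ""
    # Fall back to a generic title rather than a location string.
    return title or "Voice memo"

# Stub summarizer: a real app would call its model provider here.
def stub_summarize(prompt: str) -> str:
    return "Grocery list for the weekend"

print(title_for_memo("remember to buy eggs, milk, and flour", stub_summarize))
# → Grocery list for the weekend
```

The fallback matters for an always-on feature: if the model call fails or returns nothing, the user should still get a sensible default rather than an error.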
The core concept of this stage is that these features don’t need to be “enabled” or “activated”, but rather, they just appear one day, and change (ideally improve!) the way you do things. While this stage offers significant efficiency gains, it also raises questions about user agency and the potential for AI to become overly intrusive. It's crucial to strike a balance between automation and user control, ensuring that individuals still feel empowered in their interactions with technology. This stage also leans into the trend of AI becoming more sophisticated and cost effective.
Stage 3: The Rise of AI Agents
The final stage goes beyond AI features enabled by a simple prompt, to a future where AI agents act on our behalf, handling tasks and making decisions with increasing autonomy. In this stage, we become the "orchestrators," setting goals and providing high-level instructions, while AI agents take care of the execution.
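The orchestration pattern described above can be sketched in a few lines. This is a toy illustration, not a real agent framework: `plan()` and `execute()` are stubs standing in for model calls and tool integrations (email, calendar, and so on), and the loop shows how the human supplies only the goal.

```python
# Toy sketch of the "orchestrator" pattern: the human sets a high-level
# goal; the agent decomposes it into steps and executes each one.

def plan(goal: str) -> list[str]:
    # A real agent would ask a model to break the goal into steps.
    return [f"step {i + 1} of '{goal}'" for i in range(3)]

def execute(step: str) -> str:
    # A real agent would call tools (email, calendar, browser) here.
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    """The human supplies only the goal; the agent handles execution."""
    return [execute(step) for step in plan(goal)]

for line in run_agent("schedule next week's meetings"):
    print(line)
```

Real agent systems add the hard parts this sketch omits: replanning when a step fails, asking the user for clarification, and reporting back what was done.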
This scenario has the potential to dramatically change the nature of work, freeing us from mundane tasks and allowing us to focus on higher-level thinking and creativity. As agents become more and more capable, it also presents some interesting questions about the role of AI in our lives and the potential for these agents to surpass human capabilities (always a fun dinner party conversation).
And Beyond: A Glimpse into the Future
As we move towards a world of AI agents, the lines between human and machine intelligence will continue to blur. Imagine a scenario where your AI agent interacts directly with other AI agents, resolving issues and completing tasks without your direct involvement.
Perhaps we'll receive daily or hourly "debriefs" from our AI assistants, outlining the actions taken and decisions made on our behalf. This could lead to a future where work becomes more asynchronous and focused on exception handling, with AI managing the bulk of our routine tasks.
The implications of these advancements are vast. While the future remains uncertain, one thing is clear: AI is poised to revolutionize the way we live, work, and interact with the world around us. The journey from opt-in features to autonomous agents is an exciting one, filled with both opportunities and challenges. While I don’t yet know exactly how the future will look, I’m excited to be building products that will help get us there!