One of my most intriguing takeaways from TED AI was a provocative idea: generative models can make a society of 30,000 as effective as a tribe of 150. This sparked a fascinating discussion about the tension between the wisdom of a large group and the dysfunction that often arises when organizations grow too big.
Think about it: five people are undoubtedly smarter than one. But are 5,000 people necessarily better than 50? Not always. In large groups, the collective knowledge is vast, but the overhead of process, coordination, and communication can outweigh the benefits. Decision-making slows, process multiplies, and innovation stalls. What you'll find is that at a certain point, the value each additional person adds not only diminishes, it can actually become a net negative.
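One way to make that coordination overhead concrete is the classic pairwise-channel count: among n people there are n(n-1)/2 distinct communication links, so links grow roughly with the square of team size. This is a standard back-of-the-envelope model (the same intuition behind Brooks's law), not something from the talk itself:

```python
def communication_channels(n: int) -> int:
    """Number of distinct pairwise communication links among n people."""
    return n * (n - 1) // 2

# Channels grow quadratically while headcount grows linearly.
for team_size in (5, 50, 150, 5000):
    print(f"{team_size:>5} people -> {communication_channels(team_size):>10,} channels")
```

A team of 50 already has over a thousand potential channels; a 5,000-person organization has millions, which is why so much of its energy goes into process rather than the work itself.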
This isn't a new phenomenon. We see it all the time in traditional organizations, especially when building product teams. There is a common misconception that throwing more resources at a problem automatically leads to more knowledge and better results. But often the opposite is true: more resources can hinder efficiency and the ability to execute.
So, where does AI fit into this? Large language models (LLMs) present a fascinating twist. Imagine having the wisdom of five million people accessible at the speed and efficiency of a team of 50. That's one of the promises of LLMs. They can tap into a vast collective of knowledge and distill it into actionable insights, bypassing the traditional bottlenecks of human collaboration.
But the potential of LLMs extends even further. We're on the cusp of unlocking something even more transformative: AI agents. These aren't just knowledge repositories; they're entities capable of acting on that knowledge, of autonomously performing tasks and achieving goals. Imagine an AI agent that can design and execute a marketing campaign, develop and debug software, or even discover new drugs – all while operating with the speed, efficiency, and collective intelligence of a super-powered team. This is where things get truly exciting. The convergence of massive knowledge bases, efficient processing, and autonomous action has the potential to revolutionize industries and transform how we work.
This shift is profound. It forces us to rethink many of the patterns we've established about organizational structure, team dynamics, and even the very nature of knowledge work. Assumptions that held true for decades are suddenly no longer a given. How do we adapt and react to this new paradigm? How do we structure our teams and workflows to leverage the power of generative models while retaining the crucial elements of human creativity, intuition, and judgment?
These are just some of the questions that TED AI touched on. The landscape is shifting rapidly, and we're all figuring this out together. But one thing's for sure: the future of work and human collaboration is inextricably linked to the evolving dance between our minds and the artificial ones we create.
One particularly thought-provoking question lingered after the conference: What are the problems that humans need to solve, and are those the very problems that are too hard for us to solve alone? If so, perhaps AI isn't just an option, but a necessity. Perhaps the true breakthrough lies not in building ever-larger models, but in fostering a deeper, more nuanced understanding of the interplay between our minds, the artificial ones we create, and the actual physical world.
This interplay between minds and models is just beginning. And while I left TED AI with more questions than answers, I also left with a palpable sense of excitement – and a growing commitment to exploring the uncharted territory that lies ahead. Because the future of intelligence, it seems, is a collaborative one.