Incomplete ideas at the intersection of Technology, Economics, Cybernetics, and Decision Science.
The gap between AI's current capabilities and public understanding is striking. Consider this: while most corporate training programs require weeks of reading manuals and shadowing colleagues, AI can now instantly parse decades of company documents and guide a new hire through complex processes.1 Yet this barely scratches the surface.
While public attention focuses on chatbots and image generators, AI is quietly revolutionizing scientific discovery. Researchers are using AI to predict protein structures that unlock new drug treatments2, discover novel materials3, and accelerate physics experiments4—achievements that once took years now happen in days.
But the deeper question lies in how this reshapes our world. Imagine not just a 50-person department augmented by AI, but a single agent orchestrating an entire company's operations or supply chain. Better yet, an entire company consisting of two agents: a human and an AI. We're approaching an era of 'intelligence too cheap to meter,'5 yet we rarely discuss with any depth how this will transform our institutions, businesses, and global economies.
The implications are profound:

- Individual productive capacity will surge
- Access to advanced AI will create new forms of inequality
- The boundary between thought and large-scale execution will blur
- Our fundamental organizational structures will need reinvention
This blog is my attempt to reason about these transformations systematically. Drawing inspiration from diverse thinkers—Rich Sutton's hypotheses on cognition6, Stafford Beer's cybernetic principles, and the methodical approaches of von Neumann and Wiener—I aim to explore these questions through a lens of semi-rigorous trial and error.
As Terence Tao suggested7, writing helps crystallize understanding, and I want to use this blog to synthesize my view of our rapidly changing world.
If you're interested in the intersection of AI, organizational design, and societal transformation, join me in building a community focused on navigating this unprecedented shift.
There are myriad examples of this, but the easiest demonstration of current capabilities is in products like Glean.
I think it’s interesting that the first Nobel Prize in Chemistry enabled by AI breakthroughs also exemplified the growing prominence of commercial AI labs, and had the shortest span from conception to award.
This study also highlights the vast disparities that arise from the magnified aptitude AI enables: judgment and prior skill level drive differences in productivity gains among materials scientists.
While the phrase’s origin is unclear, its most prominent use was by Sam Altman around OpenAI’s release of GPT-4o mini. The idea is that as models grow in capability, their costs keep declining, because algorithmic improvements yield smaller yet highly capable models. Small models today are sometimes more powerful than the frontier models of a year before. Leopold Aschenbrenner has sketched a rough estimate of this progress.
Sutton writes a blog called Incomplete Ideas, where he encapsulates his thinking on which algorithms and computations constitute cognition.
Terence Tao on the benefit of writing expository material.
