Leopold Aschenbrenner is a young entrepreneur and researcher who recently founded an investment firm focused on Artificial General Intelligence (AGI) after working on the Superalignment team at OpenAI. Having graduated as valedictorian from Columbia University at age 19, Aschenbrenner has a background in economic growth research and an interest in securing the future of liberty, particularly through the advancement and responsible development of AI technology.
Key Takeaways on the Future of AI from Leopold’s ~160-page Situational Awareness paper (available as a webpage and PDF).
Keep in mind that these are the predictions of one person, but they are still very interesting.
- AGI Timeline: AGI by 2027 is considered strikingly plausible, based on extrapolating recent AI progress trends. Another qualitative jump in AI capabilities, similar to the GPT-2 to GPT-4 leap, is expected within the next 4 years.
- Compute Scaling: Massive increases in AI compute are anticipated, with individual training clusters potentially costing $100+ billion by 2028 and $1+ trillion by 2030, though AGI itself might require only a ~$100B cluster or less. These clusters would demand enormous power, potentially over 20% of US electricity production by 2030.
- Algorithmic Progress: Continued algorithmic improvements are expected to contribute as much to AI progress as raw compute scaling, at an estimated ~0.5 orders of magnitude (OOM) of efficiency gains per year (a quick compounding sketch follows this list).
- “Unhobbling” Gains: Significant capability jumps are expected from removing current limitations on AI systems, such as enabling long-term memory, improving reasoning, and letting AI systems operate more autonomously.
- Intelligence Explosion: Once human-level AI is achieved, a rapid intelligence explosion is expected as AI systems automate AI research, potentially compressing a decade of progress into a year. This could lead to superintelligence – AI vastly surpassing human capabilities – within a short timeframe.
- National Security Implications: Superintelligent AI is framed as a critical national security issue, potentially providing decisive military and economic advantages to whoever develops it first. In Aschenbrenner’s view, the free world must win this race to maintain global stability and protect democratic values.
- China’s Competitiveness: Despite current US leads, China is seen as potentially competitive in the AGI race through indigenous chip development and alleged theft of US algorithmic secrets. This is presented as a potential scenario rather than a certainty.
- Security Concerns: Current AI lab security is deemed grossly inadequate, posing a serious risk of theft of crucial algorithmic secrets and model weights by state actors. If security isn’t improved, this could potentially erase the US lead in AI and lead to a dangerous, uncontrolled intelligence explosion.
- Alignment Challenge: Reliably controlling AI systems smarter than humans is an unsolved technical problem. While potentially solvable, this poses significant risks during a rapid intelligence explosion.
- Government Involvement: A national effort, similar to the Manhattan Project, is seen as inevitable and necessary to address the security, safety, and geopolitical challenges posed by AGI and superintelligence.
- Critical Decade: The confluence of factors driving AI progress is unique to this decade. If AGI isn’t achieved in the next few years, progress may slow significantly, potentially delaying its arrival by many years.
- Economic Growth: Superintelligence could lead to an economic explosion, with growth rates potentially reaching or exceeding 30% per year; at that rate, economic output would double roughly every 2.6 years.
- Open Source: Open-source models lagging a couple of years behind the frontier will play a role in diffusing AI technology benefits broadly.
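To make the compute and algorithmic-progress bullets concrete, here is a minimal back-of-the-envelope sketch of the compounding arithmetic. The ~0.5 OOM/year algorithmic figure comes from the paper; the matching ~0.5 OOM/year rate for physical compute is my assumption, chosen to be consistent with the cluster-cost trajectory above, so treat the output as illustrative rather than a claim from the source.

```python
# Illustrative "counting the OOMs" arithmetic; not code from the paper.
# Assumptions: ~0.5 OOM/year from physical compute scaling (my assumption,
# consistent with the cluster-cost figures above) and ~0.5 OOM/year from
# algorithmic efficiency (the paper's estimate). Unhobbling gains excluded.

COMPUTE_OOM_PER_YEAR = 0.5     # assumed hardware/spend scaling rate
ALGORITHMS_OOM_PER_YEAR = 0.5  # paper's estimated algorithmic efficiency rate

def effective_compute_multiplier(years: float) -> float:
    """Total effective-compute gain after `years`, as a plain multiplier."""
    total_ooms = years * (COMPUTE_OOM_PER_YEAR + ALGORITHMS_OOM_PER_YEAR)
    return 10 ** total_ooms

for years in (1, 2, 4):
    print(f"{years} year(s): ~{effective_compute_multiplier(years):,.0f}x effective compute")
# 4 years at ~1 OOM/year compounds to ~4 OOMs (~10,000x), the same order of
# magnitude as the GPT-2 -> GPT-4 jump that the 2027 extrapolation rests on.
```

Under these assumed rates, four years of progress yields roughly a 10,000x gain in effective compute, which is the rough shape of the argument behind the 2027 timeline.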
If you enjoyed this summary, I also found the first third of this 4-hour podcast with Aschenbrenner interesting.
Hat tip to Andrew Dworschak from Yakoa for sharing the Leopold Aschenbrenner paper with me.
