Situational Awareness

Leopold Aschenbrenner (previously at OpenAI) offers some unique perspectives on the future of AI. His essay series “Situational Awareness” paints a picture of an inevitable Manhattan Project for AI.

His arguments are available in several formats if you want to look into them.

It’s hard to summarize that much material, but here are my notes on the main points he argues:

  • Geopolitics will undoubtedly come into play once we get close to AGI, and certainly once ASI is within reach.
  • Most people talk about AI as a project of corporate research labs (which it currently is), but as capabilities improve, it will be impossible for the national security apparatus to ignore.
  • Simple scaling arguments suggest we will reach AGI in ~2-3 years, unless we hit a barrier (he lists many). Of course, we may well hit one; but caution requires planning as if AGI could be very near. (A rough sketch of the scaling arithmetic follows this list.)
  • Once you have AGI, you will reach ASI very quickly. One of the easiest jobs to automate with AGIs will be AI research itself, so you will suddenly have an army of tireless AI researchers driving compounding improvements. That is probably enough to go from AGI to ASI within a year. (A second sketch after the list shows the arithmetic.)
  • Obviously, whoever controls ASI will have a massive geopolitical advantage (superhuman cyber-warfare, autonomous drone swarms, rapid development of new WMDs, optimal allocation of resources, etc.).
  • The US nuclear arsenal, the bedrock of recent global peace and security, will become essentially obsolete.
  • The corporate labs are operating like startups, with almost no regard for security. They need to transition to a strong security mindset sooner rather than later. Some of the key insights for building AGI and ASI are likely being developed right now, and those insights are not being safeguarded.
  • Obviously (within this mindset) open-sourcing anything would be irresponsible. Everything must be kept secret.
  • Western democracies are on the cusp of making a serious error, wherein they cede control of AI (and thus AGI and thus ASI and thus the future of the species) to an authoritarian regime.
  • We will very soon see major geopolitical action (espionage, assassinations, combat, strikes on datacenters, etc.) focused on AI, as soon as more leaders “wake up” to what’s going on.
  • So, the US will aggressively pursue, but lock down, AI research as a strategic asset, investing in an enormous (multi-trillion-dollar) Manhattan-style project to develop AGI first.
  • This will involve building massive compute clusters on US soil, investing in the research enterprise, locking it all down with nuclear-weapons-grade security, and building tons of power plants (bypassing clean-energy laws if that’s what it takes to deliver the required power).
  • So, the near future will be a contentious period, with greater hostility between countries and a greater threat to democracy.
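
On the scaling claim, here is a minimal back-of-envelope sketch of the OOM-counting style of argument. The growth rates and baseline below are illustrative assumptions in the spirit of his essays, not his exact figures:

```python
# Back-of-envelope extrapolation of "effective compute" in OOMs (orders of
# magnitude). All numbers here are illustrative assumptions.

compute_ooms_per_year = 0.5   # assumed growth in physical training compute
algo_ooms_per_year = 0.5      # assumed gains from algorithmic efficiency
baseline_year = 2024

for year in range(baseline_year, baseline_year + 4):
    gained = (year - baseline_year) * (compute_ooms_per_year + algo_ooms_per_year)
    print(f"{year}: ~{gained:.1f} OOMs of effective compute over the baseline")

# If GPT-2 -> GPT-4 was very roughly 4-6 OOMs of effective compute, then
# another ~3 OOMs by ~2027 is a jump of comparable character -- that is the
# crux of the "AGI in ~2-3 years" claim.
```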
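On the AGI-to-ASI claim, a similarly hedged sketch of the “automated researchers” arithmetic; the copy count and speedup are assumptions for illustration only:

```python
# Rough arithmetic behind the "army of tireless AI researchers" claim.
# Every parameter is an illustrative assumption.

agi_copies = 1_000_000    # assumed number of AGI instances a lab could run
speedup_vs_human = 10     # assumed working speed relative to a human researcher

researcher_years_per_year = agi_copies * speedup_vs_human
print(f"~{researcher_years_per_year:.0e} researcher-years of work per calendar year")

# ~1e+07 researcher-years annually: even heavily discounted for coordination
# overhead and diminishing returns, this compresses decades of algorithmic
# progress into months -- the core of the AGI-to-ASI-within-a-year claim.
```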

His opinions are mostly predictions, but he is also prescriptive in the sense that he believes the West (and the US in particular) needs to win this. I don’t agree with all his claims, but many of his points are hard to argue against. He is indeed correct that most of the general discussion of AI (across many ‘sides’) misses some key points.
