AI News 2024-10-17

General

  • Anthropic CEO Dario Amodei has published an opinion piece about the future of AI: Machines of Loving Grace: How AI Could Transform the World for the Better. While acknowledging the real risks, the piece focuses on how AI could bring about significant benefits for humankind.
    • Max Tegmark uses this as an opportunity to offer a rebuttal to the underlying thesis that rapidly developing strong AI is a net good: The AGI Entente Delusion. He views a competitive race to AGI as a suicide race, since efforts to align AI are lagging behind our ability to improve capabilities. He proposes focusing on Tool AI (instead of generalized AI), so that we can reap some of the benefits of advanced AI with fewer of the alignment/control problems. This view relies on government regulation proportionate to capability/risk: in principle, if a company could demonstrate sufficiently controllable AGI, it could meet the safety standards and be deployed/sold.
  • (Nuclear) Energy for AI:
    • The US Department of Energy is committing $900M to build and deploy next-generation nuclear technology (including small reactors).
    • Google announced it will work with Kairos Power to use small nuclear reactors to power future data centers.
    • Amazon is investing $500M in small modular reactors to power its expanding genAI workloads.
    • A group (Crusoe, Blue Owl Capital, and Primary Digital Infrastructure) announced a $3.4B joint venture to build a 200 MW datacenter (~100k B200 GPUs) in Texas. Initial customers will be Oracle and OpenAI.
    • The growing commitments to build out power for datacenters make it increasingly plausible that AI training will reach 10²⁹ FLOP by 2030 (10,000× today’s training runs); a rough back-of-the-envelope check is sketched at the end of this section.
  • Here is an interesting comment by gwern on LessWrong (via this) that explains why it is so hard to find applications for AI, and why the gains so far have been so small relative to the potential:

If you’re struggling to find tasks for “artificial intelligence too cheap to meter,” perhaps the real issue is identifying tasks for intelligence in general. …significant reorganization of your life and workflows may be necessary before any form of intelligence becomes beneficial.

…organizations are often structured to resist improvements. …

… We have few “AI-shaped holes” of significant value because we’ve designed systems to mitigate the absence of AI. If there were organizations with natural LLM-shaped gaps that AI could fill to massively boost output, they would have been replaced long ago by ones adapted to human capabilities, since humans were the only option available.

If this concept is still unclear, try an experiment: act as your own remote worker. Send yourself emails with tasks, and respond as if you have amnesia, avoiding actions a remote worker couldn’t perform, like directly editing files on your computer. … If you discover that you can’t effectively utilize a hired human intelligence, this sheds light on your difficulties with AI. Conversely, if you do find valuable tasks, you now have a clear set of projects to explore with AI services.
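As a back-of-the-envelope check on the 10²⁹ FLOP projection in the energy bullet above, here is a minimal Python sketch. The ~10²⁵ FLOP baseline for today’s largest training runs, the ~2×10¹⁵ FLOP/s effective throughput per B200-class GPU, the 120-day training window, and the ~1.2 kW per-GPU power budget are all rough illustrative assumptions, not figures from the sources above.

```python
# Back-of-the-envelope check of the "10^29 FLOP by 2030" projection.
# All constants below are illustrative assumptions, not sourced figures.

TODAY_FRONTIER_FLOP = 1e25       # assumed scale of today's largest training runs
TARGET_FLOP = 1e29               # projected 2030 training run

scale_up = TARGET_FLOP / TODAY_FRONTIER_FLOP
print(f"Scale-up factor: {scale_up:,.0f}x")      # ~10,000x, matching the bullet above

# How much hardware would such a run need?
GPU_FLOPS = 2e15                 # assumed effective FLOP/s per B200-class GPU (optimistic)
TRAIN_SECONDS = 120 * 24 * 3600  # assumed 120-day training window

gpus_needed = TARGET_FLOP / (GPU_FLOPS * TRAIN_SECONDS)
print(f"GPUs needed: {gpus_needed:,.0f}")        # several million GPUs

# At ~1.2 kW per GPU (including cooling/overhead), this implies a
# multi-gigawatt facility, which is why the power build-outs above matter.
power_mw = gpus_needed * 1.2 / 1000              # total draw in MW
print(f"Rough power draw: {power_mw:,.0f} MW")
```

Even under these optimistic assumptions, a 10²⁹ FLOP run implies millions of GPUs and several gigawatts of sustained power, hence the nuclear and datacenter investments listed above.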

Research Insights

Safety

LLM

AI Agents

Audio

Image Synthesis

  • Adobe presented Project Perfect Blend, which adds tools to Photoshop for “harmonizing” assets into a single composite; e.g., it can relight subjects and environments to match.

Vision

Video

World Synthesis

Science

Cars

  • At Tesla’s “We, Robot” event, the company showed designs for its future autonomous vehicles, the Cybercab and Robovan. The designs are futuristic; notably, the Cybercab has no steering wheel or pedals.

Robots
