AI News 2024-11-07

General

Research Insights

  • Agent S: An Open Agentic Framework that Uses Computers Like a Human (code).
  • TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters. Treats model parameters as tokens, so that input queries become attentional lookups that retrieve model parameters rather than passing through fixed linear projections. Because capacity lives in the set of parameter tokens, the model can be scaled up by appending new parameter tokens instead of retraining from scratch, improving scaling efficiency (a rough sketch follows this list).
  • How Far is Video Generation from World Model: A Physical Law Perspective (preprint, code, video abstract). They train a video model on simple physics interactions. The model generalizes perfectly within-distribution, but largely fails when extrapolating out-of-distribution, which suggests it is not learning the underlying physical laws.
    • A valid question is whether they provided enough coverage in training, and enough scale (data, parameters, training compute) to actually infer generalized physics. It’s possible that at a sufficient scale, robust physics modeling appears as an emergent capability.
    • Conversely, the implication might be that generalization tends to be interpolative, and the only reason LLMs (and humans?) appear to generalize is that they have enough training data that they only ever need to generalize in-distribution.
  • Mixtures of In-Context Learners. Allows one to extract more value from existing LLMs, including those accessed via cloud APIs (weights not available). The method creates a set of different “experts” by calling an LLM repeatedly with different subsets of in-context examples. Instead of just merging or voting on their final responses, one consolidates them at the token level by mixing the experts’ next-token prediction distributions (sketched after this list). This allows one, for instance, to provide more examples than the context window can hold.
    • It would be interesting to combine this approach with entropy sampling methods (e.g. entropix) to further refine performance.
  • AI swarms require communication between agents, but right now there are many competing methods for multi-agent coordination (Camel, Swarm, LangChain, AutoGen, MetaGPT). Researchers at Oxford have proposed a scheme (Agora) by which AI agents can auto-negotiate a structured protocol: A Scalable Communication Protocol for Networks of Large Language Models (preprint). A loose sketch of the idea follows this list.
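
A rough sketch of the TokenFormer idea in PyTorch, for the item above: input tokens attend over a learned set of key/value parameter tokens, and scaling up means appending parameter tokens. The dimensions, plain-softmax attention, and the grow helper are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterTokenLayer(nn.Module):
    """Illustrative 'parameters as tokens' layer: input tokens attend over a
    learned set of key/value parameter tokens instead of a fixed weight matrix."""
    def __init__(self, d_model: int, n_param_tokens: int):
        super().__init__()
        self.param_keys = nn.Parameter(torch.randn(n_param_tokens, d_model) * 0.02)
        self.param_values = nn.Parameter(torch.randn(n_param_tokens, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). The input tokens themselves act as queries.
        attn = F.softmax(x @ self.param_keys.T / x.shape[-1] ** 0.5, dim=-1)
        return attn @ self.param_values

    def grow(self, extra_tokens: int):
        """Scale the layer by appending new parameter tokens, keeping the old ones."""
        d = self.param_keys.shape[1]
        self.param_keys = nn.Parameter(
            torch.cat([self.param_keys.data, torch.randn(extra_tokens, d) * 0.02]))
        self.param_values = nn.Parameter(
            torch.cat([self.param_values.data, torch.randn(extra_tokens, d) * 0.02]))

layer = ParameterTokenLayer(d_model=64, n_param_tokens=256)
y = layer(torch.randn(2, 10, 64))   # -> (2, 10, 64)
layer.grow(128)                     # incremental scaling without reinitializing
```

The point is that model capacity lives in the number of parameter tokens, so scaling up is a matter of appending tokens rather than reshaping and retraining weight matrices.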
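
For the Mixtures of In-Context Learners item, a rough sketch of token-level consolidation. The uniform weighting and the next_token_distribution helper are placeholders for illustration; the paper learns the mixture weights, and the helper's implementation depends on an LLM API that exposes token probabilities.

```python
import numpy as np

def next_token_distribution(prompt: str, demos: list[str]) -> np.ndarray:
    """Hypothetical helper: call the LLM with `demos` as in-context examples and
    return its probability distribution over the next token (vocab-sized array)."""
    raise NotImplementedError  # depends on your LLM API (needs token probabilities)

def mixture_next_token(prompt: str, demo_subsets: list[list[str]], weights=None) -> int:
    """Combine several in-context 'experts' at the token level by mixing their
    next-token distributions, instead of voting on their final answers."""
    dists = [next_token_distribution(prompt, demos) for demos in demo_subsets]
    weights = weights or [1.0 / len(dists)] * len(dists)
    mixed = sum(w * d for w, d in zip(weights, dists))
    return int(np.argmax(mixed))  # greedy pick; repeat token-by-token to decode

# Each expert sees a different slice of the demonstrations, so together they
# cover more examples than fit in a single context window.
```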
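
And a loose sketch of the Agora-style flow: agents fall back to natural language by default, but once a routine exchange has been negotiated they switch to compact structured messages that reference the agreed protocol by hash. The envelope fields below are hypothetical, not the paper's actual wire format.

```python
import hashlib
import json
from typing import Optional

def protocol_hash(protocol_doc: str) -> str:
    """Protocols are plain-text documents that agents can refer to by hash."""
    return hashlib.sha256(protocol_doc.encode()).hexdigest()

# Hypothetical protocol document agreed between two agents during negotiation.
WEATHER_PROTOCOL = 'Request JSON: {"city": str}. Response JSON: {"temperature_c": float}.'

def make_message(body: dict, protocol_doc: Optional[str] = None) -> str:
    """Wrap a message in an envelope. Without a negotiated protocol the body is
    free-form natural language; with one, it is structured data tagged by hash."""
    envelope = {
        "protocol_hash": protocol_hash(protocol_doc) if protocol_doc else None,
        "body": body,
    }
    return json.dumps(envelope)

print(make_message({"text": "What's the weather in Oslo tomorrow?"}))  # natural-language fallback
print(make_message({"city": "Oslo"}, protocol_doc=WEATHER_PROTOCOL))   # negotiated, structured
```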

LLM

  • Anthropic added visual PDF support to Claude. Now, when Claude ingests a PDF, it no longer considers only a textual conversion of the document; it can also see the visual content of the PDF, allowing it to read figures, layout, diagrams, etc. (a minimal API sketch follows this list).
  • Anthropic releases Claude 3.5 Haiku, a small/efficient model that actually surpasses their older large model (Claude 3 Opus) on many benchmarks.
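
Regarding the PDF support above, a minimal sketch of passing a PDF to the Messages API as a document content block. The beta header, field names, and model string reflect Anthropic's documentation at the time of writing and may change, so treat this as illustrative rather than definitive.

```python
import base64
import requests

with open("paper.pdf", "rb") as f:
    pdf_b64 = base64.standard_b64encode(f.read()).decode()

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": "YOUR_API_KEY",
        "anthropic-version": "2023-06-01",
        "anthropic-beta": "pdfs-2024-09-25",   # PDF support was in beta at the time
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                # Document block: the model gets both the extracted text and the
                # rendered pages, so it can read figures, layout, and diagrams.
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_b64}},
                {"type": "text", "text": "Summarize the figures in this PDF."},
            ],
        }],
    },
)
print(resp.json()["content"][0]["text"])
```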

Tools

  • Google is now making Learn About available, a sort of AI tutor that can help you learn about a topic. (Seems great for education.)

Image Synthesis

Audio

Video

World Synthesis

  • Neural reproductions of video games are impressive. We’ve seen Doom, Super Mario Bros., and Counter-Strike.
    • Now, Decart AI (working with Etched) is showing a playable neural-rendered video game (basically Minecraft). Playable here (500M parameters, code). Right now, this is just a proof-of-principle: there is no way for a game designer to craft an experience, and the play itself is not ideal (e.g. changes made to the terrain do not persist). It feels more like a dream than a video game. But the direction in which this is evolving is clear: we could have a future class of video games (or, more broadly, simulation environments) that are designed using AI methods (prompting, iterating, etc.) and neural-rendered in real time. This would completely bypass the traditional game-development pipelines.
      • To underscore why you should think about this result in terms of rate of progress (rather than what it currently is), compare AI video in 2022 to AI video today. Then consider where neural world rendering will be in ~2 years.
    • And we now also have GameGen-X: a diffusion transformer for generating and controlling video game assets and environments.

Science

  • Anthropic’s “Golden Gate Claude” interpretability/control method works by identifying legible features in activation space. Researchers have now applied this mechanistic-interpretability approach to understanding protein language models. They find expected features, such as one associated with the repeating pattern of an alpha helix or a beta hairpin (visualizer, code, SAE). More fully understanding the learned representation may well yield new insights into proteins (a minimal SAE sketch follows this list).
    • More generally, it is likely a very fruitful endeavor to train large models on scientific data, search the learned feature space for expected features (confirming the model has learned known physics), and thereafter search that space for novel physics.
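
A minimal sparse-autoencoder sketch of the kind used to find legible features in activation space, for the Science item above. The architecture and L1 penalty are the generic recipe; the layer choice, dimensions, and training details of the protein-LM work are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Decompose model activations into a larger set of sparse, hopefully
    interpretable features (e.g. 'alpha helix periodicity' in a protein LM)."""
    def __init__(self, d_act: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_features)
        self.decoder = nn.Linear(d_features, d_act)

    def forward(self, acts: torch.Tensor):
        features = F.relu(self.encoder(acts))  # sparse, non-negative feature activations
        recon = self.decoder(features)
        return recon, features

def sae_loss(recon, acts, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features toward zero.
    return F.mse_loss(recon, acts) + l1_coeff * features.abs().mean()

# Usage: collect activations from the protein language model at some layer,
# train the SAE on them, then inspect which inputs most strongly activate each feature.
acts = torch.randn(4096, 512)                    # stand-in activations (batch, d_act)
sae = SparseAutoencoder(d_act=512, d_features=4096)
recon, feats = sae(acts)
loss = sae_loss(recon, acts, feats)
loss.backward()
```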

Robots
