Continuous Autoregressive Language Models. Instead of generating one token at a time, the model predicts a continuous semantic vector that decodes to multiple output tokens, giving a more continuous mode of “thinking”.
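The core loop can be sketched in a toy form: each autoregressive step produces one continuous vector, and a decoder expands that vector into several tokens at once. This is a minimal illustration only, with made-up dimensions and random stand-in weights, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, VOCAB = 4, 16, 100                # hypothetical sizes: tokens per vector, vector dim, vocab
W_dec = rng.normal(size=(K, VOCAB, D))  # frozen random stand-in for a learned decoder

def predict_next_vector(history):
    # Stand-in for the learned model (in practice, a transformer over continuous inputs).
    return np.tanh(np.sum(history, axis=0))

def decode_vector(z):
    # One semantic vector decodes to K discrete tokens at once.
    logits = W_dec @ z                  # shape (K, VOCAB)
    return logits.argmax(axis=1).tolist()

def generate(prompt_vec, steps=3):
    history, tokens = [prompt_vec], []
    for _ in range(steps):
        z = predict_next_vector(history)  # autoregression happens in vector space
        history.append(z)
        tokens.extend(decode_vector(z))   # each step emits K tokens, not 1
    return tokens

out = generate(rng.normal(size=D))
```

The point of the sketch is the shape of the computation: the sequence model never touches discrete tokens; discretization happens only at the decoding boundary.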
Less is More: Recursive Reasoning with Tiny Networks (blog). A small (7M-parameter) network out-reasons much larger systems by exploiting two recursive networks. The model is optimized for a specific class of puzzles, so it cannot handle general tasks (or any language task) the way an LLM can; but the work demonstrates that a small iterative system can deliver remarkably strong “reasoning” effort.
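The recursion idea can be illustrated with a toy loop: the same tiny set of weights is applied many times, alternating between updating a latent scratchpad and updating the current answer. All names and sizes below are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8                                      # hypothetical state dimension
W_z = rng.normal(scale=0.3, size=(D, D))   # stand-in "tiny network" weights
W_y = rng.normal(scale=0.3, size=(D, D))

def refine(x, y, z):
    # One recursion step: update the latent scratchpad z, then the answer y.
    z = np.tanh(W_z @ (x + y + z))
    y = np.tanh(W_y @ (y + z))
    return y, z

def solve(x, outer=6, inner=4):
    y, z = np.zeros(D), np.zeros(D)
    for _ in range(outer):                 # depth comes from iteration,
        for _ in range(inner):             # not from parameter count:
            y, z = refine(x, y, z)         # the same weights run 24 times
    return y

ans = solve(rng.normal(size=D))
```

The design point is that effective depth (here 24 applications of the network) is decoupled from parameter count, which is how a 7M-parameter model can spend substantial compute on a single puzzle.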
Nvidia announces RLP: Reinforcement as a Pretraining Objective (paper). They apply RL during pre-training (instead of only post-training), treating chain-of-thought as actions that are rewarded by information gain.
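An information-gain reward of this kind can be sketched as the log-likelihood improvement that a sampled chain-of-thought gives on the actual next token, relative to a no-CoT baseline. The function and numbers below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def info_gain_reward(p_with_cot, p_without_cot, next_token):
    # Reward the sampled chain-of-thought by how much it raises the
    # model's probability of the true next token versus a no-CoT baseline.
    return math.log(p_with_cot[next_token]) - math.log(p_without_cot[next_token])

# Toy distributions over a 3-token vocabulary (illustrative numbers only).
baseline = {"a": 0.2, "b": 0.5, "c": 0.3}   # p(next token | context)
with_cot = {"a": 0.6, "b": 0.3, "c": 0.1}   # p(next token | context + CoT)

r = info_gain_reward(with_cot, baseline, "a")  # CoT raised p("a"): positive reward
```

A reward of this shape is dense and verifier-free: every pre-training token supplies a signal, since the baseline prediction is always available for comparison.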
OpenAI announces Sora 2 (system card). More realistic, includes sound, the ability to add a specific person to a scene, and multiple aesthetics. The app is iOS-only (for now) and emphasizes social features (friend invites, etc.).
OpenAI achieved gold, getting 12/12 correct (the best human team achieved 11/12): GPT-5 solved 11 of the problems, and their experimental reasoning system also solved the last (most challenging) one.
Google Gemini 2.5 Deep Think achieved gold (10/12 correct).
Google DeepMind: Virtual Agent Economies. Sandboxed economies could allow agents to cooperate and compete, e.g. negotiating for access to resources.
Waymo released some safety data (based on 96M miles driven). The results are somewhat biased by the subset of regions/conditions in which Waymo is allowed to drive (they claim to account for this in their analysis). Nevertheless, the results are impressive, showing fewer crashes and injuries for Waymo than for human drivers.
Robots
A video from the Active Intelligent Systems (ACT) Lab at the Southern University of Science and Technology (SUSTech) shows a Unitree robot responding very nimbly to extreme perturbations. (Also, dancing.)