How Smart Will ASI Be?

The development of AI is pushing towards AGI (artificial general intelligence). To many, once you have AGI, ASI (artificial superintelligence) follows quickly and inevitably, since an AGI can do AI research, allowing AI to iteratively self-improve (at an exponential rate). Others doubt that ASI can even exist; they wonder how an AI could ever be smarter than humans.

Here, let us try to enumerate the ways in which an AI might be smarter than a human.

0. Human-like

A basic assumption herein is that human-level general intelligence can be reproduced. So a sufficiently advanced AI would be able to do anything a human can do. This captures more than just book smarts and mathematical or visual reasoning; our ability to socialize is also an aspect of intelligence (theory of mind, etc.).

1. Focused

A simple way in which a human-level AI would immediately be superhuman is focus. An AI can be more motivated, focused, attentive, and single-minded than any person. Merely removing the myriad foibles and weaknesses of humans would make such a worker superhuman in output.

2. Fast

Once we can simulate human-level thinking in silico, there’s no reason it can’t be accelerated (through better hardware, algorithmic optimizations, etc.). A single human, thinking sufficiently quickly, would already be quite superhuman. Imagine if, for every reply you need to give, you could spend endless time researching the best answer (including researching the background of the person asking the question, and tailoring the reply to their knowledge and desires). You could scour the literature for the right information. You could take your time to make sure your math is right. In fact, you could write entire new software stacks, invent new data analysis procedures, and tabulate endless statistics; whatever you need to make your answer just a little bit better.

3. Replicated

The computational form of AI makes it easy to replicate. So, in addition to “spending time thinking,” one can also launch numerous parallel copies to work on a problem. The diverse copies can test out different approaches (using different assumptions, different subsets of the data). The copies can double-check each other’s work. In principle, for any question asked of the AI, a vast hierarchy of agents can be launched: some reading sources, others analyzing data, others collecting results. Imagine that, for every decision, you could leverage a planetary scale of intellectual effort, all focused precisely on solving your task.
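As a minimal sketch of the pattern (in Python, with a toy estimation task standing in for what would really be full AI agents), the orchestration might look something like this:

```python
from concurrent.futures import ProcessPoolExecutor
from statistics import median

# Toy stand-in for one "copy" of the AI attacking the problem.
# Each copy uses a different assumption (here, just a different random
# draw, standing in for a different subset of the data) and returns
# its best-estimate answer.
def solve(seed: int) -> float:
    import random
    rng = random.Random(seed)
    data = [rng.gauss(10.0, 2.0) for _ in range(1_000)]
    return sum(data) / len(data)  # this copy's estimate

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(solve, range(32)))  # 32 parallel copies
    # The copies "double-check" each other: aggregate the answers and
    # flag copies that disagree with the consensus.
    consensus = median(estimates)
    outliers = [e for e in estimates if abs(e - consensus) > 0.1]
    print(f"consensus={consensus:.3f}, dissenting copies={len(outliers)}")
```

The aggregation step is the important part: independent attempts plus cross-checking convert many fallible workers into one more reliable answer.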

There is little doubt that human organizations exhibit some form of superhuman capability, in terms of the complexity of projects they can execute. The AI equivalent is similar, but far more efficient, since the usual organizational frictions (lack of motivation in individuals, misaligned desires among participants, mistrust, infighting, etc.) are gone.

The sheer potential scale of AI parallel thinking is a staggering form of superhuman capability.

4. Better

In principle, an AI brain could be better than a human one in a variety of ways. Our cognition is limited by the size of our working memory, by how convoluted a chain of reasoning we can hold in our heads, by the data-rates of our sensory inputs, by our fragility to distraction, etc. All of these are, in principle, improvable.

5. Tunable

Human brains are subject to various modifications, including some that can improve task performance. Certain drugs can induce relaxed or heightened states that might maximize focus, or creativity, or emotions, etc. Certain drugs (e.g. psychedelics) even seem able to “anneal” a mind and help one escape a local minimum in thought-space (for better or worse). In humans, these interventions are blunt and poorly understood; yet even so they have predictable value.

In AI, interventions can be much more precise, reproducible, controllable, and powerful. (And need not have side-effects!) One can, in principle, induce targeted mental states that maximize particular behaviors or capabilities. In this sense, an AI could always have the “ideal” mental state for any particular task.
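A concrete (and real) example from today’s AI systems is sampling temperature: a single, precise, perfectly reproducible knob that shifts a model between focused and exploratory behavior. A minimal sketch:

```python
import math

# Scaling a model's output logits before the softmax smoothly trades off
# decisive/precise behavior against exploratory/creative behavior,
# reproducibly and without side-effects.
def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores over three candidate actions
for t in (0.1, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: " + ", ".join(f"{p:.2f}" for p in probs))
# Low T -> nearly deterministic ("maximally focused");
# high T -> nearly uniform ("maximally exploratory").
```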

6. Internalized Scaffolding

It is worth noting that a large fraction of human intelligence comes not from our “raw brainpower” but from the scaffolding we have put on top of it: language, concepts, math, culture, etc. For instance, our brains are roughly equivalent to the brains of ancient humans; we are much smarter, in large part, because we have a set of heuristics (passed down through culture, books, etc.) that allow us to unlock more “effective intelligence.” Some of our most powerful heuristics (math, logic, science, etc.) do not come easily to us, and must be laboriously learned.

For AI, there is no need for this scaffolding to be external and learned. Instead, it could be more deeply embedded, and thus reflexive. Arguably, modern LLMs are some version of this: the complexity of modern concepts (encoded in language) becomes built into the LLM. More generally, an AI could have more and more of this complex scaffolding internalized (including reflexive access to myriad source documents, software, solvers, etc.).

7. Native Data Speaker

Humans can speak (and think) “natively” using language, and learn to understand certain concepts intuitively (everyday physics, “common sense,” etc.). We then attempt to understand other concepts in analogy to those that are intuitive (e.g. visual thinking for math). An AI, in principle, can be designed to “think natively” in other kinds of data spaces, including the complex data/statistics of scientific data sets (DNA sequences, CAD designs, computer binary code, etc.). And these diverse “ways of thinking” need not be separated; they could all be different modalities of a single AI (much as humans can think both linguistically and visually).

By being able to “natively” think in source code, or symbolic math, or complex, high-dimensional data structures, the AI could exhibit vastly improved reasoning and intuition in these subject areas.

8. Goes Hard on Important Tasks

Humans are, mostly, unaccustomed to what can be accomplished by truly focusing on a task. The counter-examples are noteworthy precisely because they are so rare: humans were able to design an atomic weapon, and send a person to the moon, in remarkably short times owing to focus. The organizations in question “went really hard” at the task they were assigned. Most organizations are enormously inefficient, in the sense that they are not, really, single-mindedly focused on their nominal task. (Even corporate entities trying to maximize profit at all costs are, in the details, highly fractured and inefficient, since the incentives of the individual players are not well aligned with the organizational goal. Many organizations are not, in actual fact, pursuing their nominal/stated goal.) The jump in effective capability one sees when an organization (or entity) “really goes hard” (pursues its goal with unrestrained focus) can be hard to predict, as it will exploit any and all opportunities to advance its objective.

9. Goes Hard on Small Tasks

It is also worth considering that an AI can (due to its focus) “go hard” on even the small things that we normally consider trivial. Humans routinely ignore myriad confounders, give up on tasks, or otherwise “don’t sweat the small stuff.” This was adaptive for ancestral humans (avoid wasting effort on irrelevant things) and remains so for modern humans (don’t stress out about minutiae!). But an AI could put inordinate effort into each and every task, and sub-task, and sub-sub-task. The accumulation of expert execution of every possible sub-task leads to enormous gains at the larger scale. The flip side to the curse of small errors compounding into enormous uncertainty is that flawless execution of subtasks allows one to undertake much more complex overall tasks.
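A toy calculation (illustrative numbers only) shows how strongly per-step reliability compounds over a long chain of subtasks:

```python
# Probability that a chain of n dependent subtasks all succeed,
# given a per-subtask success rate p: simply p ** n.
for p in (0.90, 0.99, 0.999):
    for n in (10, 100):
        print(f"p={p}, n={n}: chain succeeds with probability {p**n:.3f}")
# A human-like 99% per step yields only ~37% over 100 steps;
# near-flawless 99.9% yields ~90%, enabling far longer chains.
```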

A simple example is prediction. When a human tries to predict what someone else will do, their thinking quickly dissolves into fuzzy guesses, and they give up after predicting only a few moves ahead. The space of options is too large, and the error in each guess of the chain too great. But, with sufficient effort invested in each and every step of the analysis, one could push much, much further along a long predictive chain.

10. Unwaveringly Rational

Humans know that rational thinking is, ultimately, more “correct” (more likely to lead to the right answer, to achieve one’s aims, etc.). Yet even those humans most trained in rational thinking will (very often!) rely on irrational aspects of their minds (biases, reflexive considerations, intuitions, inspiration, etc.) when making decisions. Simply sticking to “known best practices” would already improve effective intelligence. An AI could go even further, exploiting rigorous frameworks (Bayesian methods, etc.) to be as rational as possible.

(This need not compromise creativity, since creativity is itself subject to rigorous analysis: the optimal amount of creativity, efficient randomization schedules, maximizing human enjoyment, etc.)
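As a minimal sketch of the kind of rigorous framework meant here, consider iterated Bayesian updating (the probabilities are hypothetical, purely for illustration):

```python
# Rigorous belief-updating via Bayes' rule.
def bayes_update(prior: float, p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Posterior P(H | evidence) from prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

belief = 0.10  # prior: hypothesis initially considered unlikely
for _ in range(3):  # three independent pieces of supporting evidence
    belief = bayes_update(belief, p_evidence_given_h=0.8,
                          p_evidence_given_not_h=0.3)
print(f"posterior after evidence: {belief:.3f}")  # ~0.678
```

An agent that always updates this way never over- or under-weights evidence the way human intuition routinely does.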

11. Simulation

With sufficient computing power, a broad range of things can be simulated as part of a thinking process. Complex physical setups, social dynamics, and even the behavior of a single person could be simulated as part of the solution to a problem. Uncertainties and unknowns can be handled by running ensembles of simulations covering the different cases. Humanity has achieved superhuman weather forecasting by exploiting dense data and complex simulations. An ASI could, in principle, deploy simulations tailored to every subject area, using similarly superhuman predictions to inform all of its decisions.
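A minimal sketch of the ensemble idea, using a toy projectile model with an uncertain drag coefficient (all numbers illustrative):

```python
import random

# Simulate one case of a toy physical setup.
def simulate(drag: float, v0: float = 50.0, dt: float = 0.01) -> float:
    """Return the horizontal range of a projectile launched at 45 degrees."""
    vx = vy = v0 * 0.7071
    x = y = 0.0
    while y >= 0.0:
        speed = (vx * vx + vy * vy) ** 0.5
        vx -= drag * speed * vx * dt                 # drag decelerates x
        vy -= (9.81 + drag * speed * vy) * dt        # gravity plus drag in y
        x += vx * dt
        y += vy * dt
    return x

# The drag coefficient is unknown: run an ensemble over plausible values
# and report the resulting distribution of outcomes.
rng = random.Random(0)
ranges = sorted(simulate(drag=rng.uniform(0.001, 0.01)) for _ in range(500))
print(f"median range: {ranges[250]:.1f} m, "
      f"90% interval: [{ranges[25]:.1f}, {ranges[475]:.1f}] m")
```

Instead of one brittle point prediction, the ensemble yields a calibrated spread of outcomes, which is exactly what good decisions need.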

12. Super-Forecasting

A combination of the features described above (rationality, going hard, simulation) should enable an ASI to be an incredible forecaster. By carefully taking account of every possible factor (large and small), and predicting the possible outcomes (using logic, detailed simulation, etc.), one can generate chains of forward-predictions that are incredibly rich. Uncertainties can be handled with appropriate frameworks (Bayesian, etc.) or compute (Monte Carlo, etc.). Of course, one is always limited by the available data; but humans are demonstrably very far from fully exploiting the data available to them. An ASI would be able to make predictions with unnerving accuracy over short timescales, and with remarkable utility over long timescales.
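As a toy illustration (a random walk standing in for real dynamics), Monte Carlo rollouts make the trade-off concrete: tight predictive intervals at short horizons, wider but still useful distributions at long horizons:

```python
import random

# Roll many forward-prediction chains through a noisy toy "world"
# and examine how the predictive interval widens with horizon.
rng = random.Random(42)
N_PATHS, HORIZON = 10_000, 50
outcomes_by_step = {1: [], 10: [], 50: []}
for _ in range(N_PATHS):
    state = 0.0
    for step in range(1, HORIZON + 1):
        state += rng.gauss(0.1, 1.0)  # small drift, noisy dynamics
        if step in outcomes_by_step:
            outcomes_by_step[step].append(state)

for step, vals in outcomes_by_step.items():
    vals.sort()
    lo, hi = vals[len(vals) // 20], vals[-(len(vals) // 20)]
    print(f"step {step:2d}: 90% of outcomes in [{lo:+.1f}, {hi:+.1f}]")
```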

13. Large Concepts

There are certain classes of concepts which can make sense to a human, but which are simply too “large” for a human to really think about. One can imagine extremely large numbers, high-dimensional systems, complex mathematical ideas, or long chains of logical inference being illegible to a human. The individual components make sense (and can be thought about through analogy), but they cannot be “thought about” (visualized, kept all in memory at once) by a human owing to their sheer size.

But, in principle, an AI could be capable (larger working memory, native visualization of high dimensions, etc.) of intuitively understanding these “large” concepts.

14. Tricky Concepts

There are some intellectual concepts that are quite difficult to grasp. For the most complex, we typically observe that only a subset of humans can be taught to meaningfully understand the concept, with an even smaller subset being sufficiently smart to have discovered it. One can think of physics examples (relativity, quantum mechanics, etc.), math examples (P vs. NP, Gödel incompleteness, etc.), philosophy examples (consciousness, etc.), and so on.

If AGI is possible, there is every reason to expect AI to eventually be smart enough to understand all such concepts, and moreover to be of a sufficient intelligence-class to discover and fully understand more concepts of this type. This is already superhuman with respect to average human intelligence.

Plausibly, as AI improves, it will discover and understand “tricky concepts” that even the smartest humans cannot easily grasp (but which are verifiably correct).

15. Unthinkable Thoughts

Are there concepts that a human literally cannot comprehend? Ideas they literally cannot think? This is in some sense an open research question. One can argue that generalized intelligence (of the human type) is specifically the ability to think about things symbolically; anything meaningfully consistent can be described in some symbolic way, hence in a way a human could understand. Conversely, one could argue that Gödel incompleteness points towards some concepts being unrepresentable within a given system. So, for whatever class of thoughts can be represented by the system of human cognition, there are some thoughts outside that boundary, which a greater cognitive system could represent.

Operationally, it certainly seems that some combination of large+tricky concepts could be beyond human conception (indeed we’ve already discovered many that are beyond the conception of median humans). So, it seems likely that there are thoughts that a sufficiently powerful mind would be able to think, that we would not be able to understand. What advanced capabilities would such thoughts enable? It’s not easy to say. But we do know, from the course of human history, that progressively more subtle/refined/complex/powerful thoughts have led to corresponding increases in capabilities (math, science, technology, control over the world, etc.).

16. Emergence

The enumerated modes of increased intelligence will, of course, interact. A motif we can expect to play out is the emergence of enhanced capabilities due to synergy between components; some kind of “whole greater than the sum of the parts” effect. We see this in humans, of course, where the synergy between “raw intelligence” and “cultural scaffolding” (education, ideas, tools, etc.) leads to greatly improved capabilities. For an ASI, advantages along multiple directions could very well lead to the emergence of surprising capabilities, such as forecasting that feels precognitive or telepathic, or intuition that feels like generalized genius.

Conclusion

The exact nature of future ASI is unknown. Which of the enumerated “advantages” will it possess? How will they interact? To what extent will capabilities be limited by available computation, or by coherence across large computational systems (e.g. lag times for communication across large/complex systems)? These are unknowns. And yet, it seems straightforward to believe that an ASI would exhibit, at a minimum, a sort of “collection of focused geniuses” super-intelligence: for any given task it seeks to pursue, it will excel at that task and accomplish it with a speed, sophistication, and efficiency that our best organizations and smartest people can only dream of.

Overall, we hope this establishes that ASI can, indeed, be inordinately capable. This makes it correspondingly inordinately useful (if aligned to humans) and inordinately dangerous (if not).
