Although nothing is inevitable, AGI at least affords a possibility of solving many otherwise intractable human problems, including those discussed below.
There will always be differences of opinion about how society should be run. But AGI could conceivably resolve many of them.
First, increased economic output could provide everyone a comfortable lifestyle, making the distribution of wealth less contentious. Eliminating work would also eliminate the standard economic justification for inequality, perhaps paving the way for a more equal and therefore less conflict-prone society.
Second, each of us could use our personal AGI assistant (and our new-found free time) to become more politically engaged, keeping better-informed about the consequences of political decisions and exerting more political influence. This, and the fact that many government functions would be run by AGI, would likely improve governance greatly.
Finally, AGIs, with their ability to communicate and negotiate quickly, could likely help us solve many of the coordination problems that otherwise make it hard for multiple parties to arrive at mutually beneficial outcomes. They could also serve as mediators, helping humans to work through disputes more productively.
Beyond just fixing those large-scale problems, AGI would also deliver a much higher quality of life on all fronts. It would handle our chores for us, provide us with much higher quality goods and services, entertain us more effectively, and so on.
It is possible that, with effort, AGI could be achieved within a 10-30 year time frame. This is not an inevitability, but it is a strong enough possibility to warrant much more effort than AGI currently gets.
The argument for why AGI may be possible in the next couple decades is as follows:
Good lines of attack include:
In many ways the symbol manipulation of classical AI was on the right track; its biggest weakness was a lack of learning. We need to circle back to this, rather than disparaging classical AI and symbol manipulation wholesale as has become common in modern AI.
Studying human behavior on intellectual tasks, but with more precision and formalism than psychologists typically apply. AI has largely avoided studying human cognition (our only example of general intelligence), while ideas developed within cognitive science have tended to be vague and woolly. There are a few exceptions to these trends, but not nearly enough. Studying human reasoning and building computational models of it is therefore a promising direction.
AGI is a sufficiently hard problem that we are unlikely to stumble across the solution. It's clear from the past 60 years of AI research that there are all kinds of AI projects that can be undertaken without yielding much progress on AGI. Yet today next to no one is articulating a clear and viable path forward on AGI, or even trying to do so. (Even within AI, only a handful of researchers have written substantively on the topic; some examples can be found in this reading list.)
We need to do better if AGI is to be achieved. We should think carefully about what sort of strategy might actually work. What hasn't been tried yet? What ideas have been dismissed too soon? What capabilities could an AGI develop for itself, so that we don't have to build them ourselves? What are the core sticking points?
On the bright side, the fact that these issues are so underexplored means that if we were to ramp up effort on them, enormous progress might happen quickly.
There is no sound chain of logic connecting neural net research to AGI. When researchers and organizations draw a link between the two, inevitably it is done in a handwavy fashion, as if it were obvious that the two are related. They are not. And with a problem as hard as AGI, one needs an explicit plan for getting from here to there; one cannot just solve a new AI problem and assume progress has been made.
An AGI needs to be able to think. Fundamentally, thinking is a matter of symbol manipulation; we can see this from the fact that every thought-like process we know how to characterize (such as language, mathematics, formal logic, and computer code) is based on symbol manipulation. (Even when we think in images, this process is best described by the use of symbol systems.)
Neural nets are fundamentally not a form of symbol manipulation. They solve a completely different problem, that of function approximation. While it's true that one can mimic symbol processing with a neural network, when one does this one is only adding an unnecessary layer of complexity. It's like building a computer out of Tinker Toys. It can be done, but it doesn't mean Tinker Toys are the next big thing in computer architecture.
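To make the distinction concrete, here is a minimal sketch (our own illustration, not drawn from any particular system) contrasting the two kinds of computation: a toy symbolic differentiator that manipulates expression structure exactly, and a toy function approximator that fits a line to data numerically.

```python
# Illustrative sketch only: toy symbol manipulation vs. toy function approximation.

# 1) Symbol manipulation: symbolic differentiation over a tiny expression language.
#    Expressions are nested tuples, e.g. ("add", "x", ("mul", "x", "x")) for x + x*x.
def diff(expr, var="x"):
    if expr == var:
        return 1
    if isinstance(expr, (int, float, str)):   # constants and other variables
        return 0
    op, a, b = expr
    if op == "add":
        return ("add", diff(a, var), diff(b, var))
    if op == "mul":
        # product rule: (ab)' = a'b + ab'
        return ("add", ("mul", diff(a, var), b), ("mul", a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

# 2) Function approximation: fit y ≈ w*x + b to samples by gradient descent.
def fit_line(xs, ys, steps=2000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

if __name__ == "__main__":
    print(diff(("add", "x", ("mul", "x", "x"))))   # exact, structural answer
    print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))     # approximate, numeric answer (~2, ~1)
```

The first routine produces an exact answer by rewriting structure; the second produces an approximate answer by adjusting numbers. These are different kinds of problems, which is the point being made above.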
The first thing to be said here is that the vast majority of research with neural nets uses networks that do not particularly resemble the human brain.
But putting that aside, trying to build systems at the neural level, mimicking human brain architecture, is probably not the path to AGI (at most, it should be a secondary thrust while more direct avenues are pursued). The problem with trying to copy humans at the neural level is two-fold. First, neuroscientific understanding of the brain is nowhere near the point where it could be implemented in code, nor is it on a trajectory to attain that level of sophistication for decades, if ever. (Much of the problem is that it's physically and/or ethically impossible to conduct many of the experiments one would like to perform.) The second issue is that there is no need to copy the brain at the neural level. Neurons are a very low level of abstraction. It is not necessary to copy the neurons to get the same high-level behavior, any more than it is necessary to copy the individual transistors in one computer to make another computer behave like the first one. Instead, we should be looking at the high-level behavior of the brain, which is much more accessible and understandable, and trying to reproduce that.
The primary reason some people cite for not pursuing AGI is the idea that it could be risky, because the AGI might have a "mind of its own" and get out of control.
The position that AGI should be deferred because of safety can be countered at two levels. We will present both, but note that you don't need to agree with Counter #1. The conclusion that we should be aggressively pursuing AGI goes through even if you only agree with Counter #2.
This argument is presented in detail here, so for now please consult that article.
In addition to the ground covered there, another reason to be wary of the "out of control AI" idea is that most AI researchers don't buy it. (The only prominent exception we're aware of is Stuart Russell, and as can be seen here, he makes the error of assuming an AGI needs to be given some "ultimate goal".) Most of the notable people raising the alarm about AI risk have no background, training, or research experience in AI.
Again, we emphasize that you don't need to believe Counter #1. If you don't, you can and should still support the idea of pursuing AGI along a research path that looks safe. Accordingly, Counter #2 is this:
There are two main ideas that tend to come up in arguments for why AGI might end up being unsafe.
One is the idea that an AGI will necessarily have a "top-level goal". This does not really make sense; there is simply no need to give an AGI a top-level goal and allow it free rein to pursue that goal however it pleases. It's far more likely that we will "micromanage" our AGIs by giving them bite-size goals and numerous restrictions to operate within. See this article for more detail on that idea.
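As a purely hypothetical sketch (the names and structure here are ours, not taken from any concrete proposal), "bite-size goals with numerous restrictions" might look something like this: each task is narrow, and every action the system proposes is checked against explicit, human-written constraints before it is allowed to run.

```python
# Hypothetical illustration of micromanaged, constrained tasking; not a real system.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    description: str                                          # narrow, human-specified goal
    constraints: List[Callable[[str], bool]] = field(default_factory=list)

def propose_actions(task: Task) -> List[str]:
    # Stand-in for the AGI's planner: here it just returns canned suggestions.
    return [
        f"draft summary of: {task.description}",
        "send email to the entire customer list",             # oversteps the task's scope
    ]

def run(task: Task) -> None:
    for action in propose_actions(task):
        if all(check(action) for check in task.constraints):
            print("allowed :", action)
        else:
            print("blocked :", action)                        # a human reviews anything blocked

no_external_comms = lambda action: "send email" not in action
run(Task("summarize quarterly sales data", constraints=[no_external_comms]))
```

The specifics are made up, but the design choice is the one described above: the system is never handed an open-ended top-level goal to pursue however it pleases.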
The second concern is that an AGI will be inscrutable: a black box whose inner workings or motivations we cannot discern. This is the kind of system that might conceivably result if AGI took the form of a large neural network. Lack of visibility into such a system could be dangerous.
However, that is not the only way an AGI could be built (in fact, as we argued earlier in this document, it is a particularly unlikely one, but that's a tangential point). The alternative would be a system more like current cognitive architectures, where reasoning and behavior are symbolic, introspectable, and understandable. Working towards an AGI along those lines (which is actually the more common vision among AGI researchers) would therefore be a safe path.
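For a rough sense of what "symbolic, introspectable, and understandable" could mean in practice, consider this toy sketch (again our own illustration, not a description of any existing cognitive architecture): a tiny forward-chaining rule engine whose every inference step is recorded in a human-readable trace.

```python
# Toy illustration of introspectable symbolic reasoning; not any real architecture.
rules = [
    ({"temperature_high", "pressure_rising"}, "open_relief_valve"),
    ({"open_relief_valve"}, "notify_operator"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(premises)} -> {conclusion}")   # human-readable step
                changed = True
    return facts, trace

facts, trace = infer({"temperature_high", "pressure_rising"})
for step in trace:
    print(step)   # the full chain of reasoning is available for inspection
```

However toy this example is, the relevant property scales: every conclusion can be traced back to explicit premises and rules, which is exactly the visibility a black-box network lacks.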
Introspectable, human-managed AGI constitutes a safe path forward.
There are three key problems with the strategy of deferring AGI on safety grounds.
First, AGI is probably inevitable, and if people concerned about AI risk don't get involved, the design and deployment of AGI will be controlled by people who are (a) not as philanthropically minded and (b) unconcerned with AI risk. That could only lead to worse outcomes.
Second, most of the people arguing that AGI would be dangerous only suggest deferring it while we work to make it safe, but working on AGI safety without working on AGI is a contradiction in terms (see here for more on that).
Third, even if AGI did pose a risk, that risk needs to be weighed against the massive benefits outlined at the beginning of this document, and against the substantial risk that our civilization will destroy itself through war or environmental destruction if we don't make some radical change (such as creating AGI).
If you only agree with Counter #2 and not Counter #1, chances are that (from a practical point of view) you are in violent agreement with many AGI researchers. Much of the work that could be done now on AGI involves no realistic risk and moves us towards systems that don't have the characteristics sometimes seen as unsafe.

Counterarguments
This is the argument made by AI risk groups like MIRI and FHI. The problem is that we have no means to "figure out how to make AGI safe". As argued here, this isn't even a real problem, at least not in the way it is usually framed. Even if it were, we would have no tools at our disposal to address it except AGI research itself. AI risk organizations like MIRI that are trying to "solve safety" without working on AGI itself are chasing a phantom, and the resulting research has no credible connection to AGI safety.
Here are three major problems with this body of work that render it inapplicable to realistic AGI safety scenarios:
In effect, AI safety researchers have set themselves a research agenda that can be pursued indefinitely without success and without any impact on the safety profile of AGI. To wait for that research agenda to be completed before starting on AGI would be absurd.
AGI is a game-changer in a way that no other foreseeable technology is. While we humans, given enough time, might be able to achieve many of the same things AGIs could, we will never be able to match the speed or efficiency artificial minds would bring.
The question we should collectively be asking ourselves is: do we want to spend a billion dollars and 10 years to develop each new pharmaceutical, or do we want to spend a billion dollars and 10 years to develop AGI, and then get an endless supply of new drugs (and a solution to every other problem) for free? The numbers are made up but qualitatively this is an accurate picture of the situation.