Event Summary

Talking AGI with Prof. Julian Togelius

How many angels can dance on the head of a pin? It's a "pointless and useless question that has no bearing on reality", claims Julian. "This is how superintelligence and AGI debates make me feel."
By Kevin Frans
|
February 26, 2021
Prof. Julian Togelius speaking via Zoom

Last month at Cross Roads #20, we had the pleasure of welcoming Prof. Julian Togelius to speak on the grand topic of "Artificial General Intelligence at the End of the Rainbow". Currently a researcher studying AI and games, Julian started out with an interest in philosophy and psychology, so of course he is often asked about the rise of superintelligence — AI that will keep improving itself. "Is it true that AI will take over the world? Like in Terminator?", Julian asks. "How will we control them? Will they treat us like flies? They must be ethical because they're so smart, right?"

Actually, to Julian, these are questions that are barely worth considering. We don't even know what we're trying to debate here, he claims. What is a superintelligence? If we can't define it, there's no point arguing about what an imaginary AI would do. Julian refers to a famous quote from the mathematician I.J. Good:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

"Makes sense, if you think a human can design an intelligent machine", Julian states. But have you considered what it takes to build a machine? Almost every appliance we have today is not the work of a single human, but the product of a complex supply chain across the world. To build a laptop, materials have to be assembled from dozens of countries, and even then the assembly can only happen in a few key factories. "A human cannot build a computer ... millions of humans in a complicated network can build a computer".

Even in software, this need for collaboration holds true. TensorFlow, the most widely used deep learning library, is maintained by a full team and spans 2.5 million lines of code. The Linux kernel, which programs depend on just to run, was built by thousands of developers. "Improvements in AI capability are dependent on improvements across the whole stack", argues Julian. Even if an AI could design a better network than itself, it would soon hit a bottleneck in GPU power or in storage systems. Julian concludes, "a human cannot develop an artificial intelligence. A civilization can."

Next on the list of myths to debunk: are human beings even general intelligences? The dream of artificial general intelligence is to develop AI that can solve all tasks, not just a narrow subset of them. We often take inspiration from human behavior. But are humans actually capable of solving any kind of task?

According to Julian, there are two ways to interpret this claim. The first is that a generally intelligent human can solve every task that another human can. This is not true — many tasks require specialization and years of learning. While as a civilization we may be able to build airplanes and submarines, a single human would be hard-pressed to do so. The second interpretation is that, in principle, a generally intelligent human can solve any cognitive task. Again, Julian claims this is false — human brains have limited capacity and are already surpassed by computers on memory-intensive tasks, such as calculating digits of pi.

The only way superintelligence works, proposes Julian, is if we define intelligence as "whatever is needed to construct artificial intelligence". In this view, our whole civilization is a superintelligence — but a single human is not. "This is why I don't fear superintelligence. The latest AI will contribute to our superintelligence as a whole ... but not in itself".

So why study AGI at all? Well, even if self-improving AGI won't magically take over the world, there's still a benefit to introducing generality into existing AI. Especially in reinforcement learning, AI-learned policies are "extremely narrow and brittle ... You think you have an AI that can play Pac-Man. But in fact, you can only play this particular level. You can only play in this screen resolution. You can't play if you shift the colors ... and so on." There's a narrower subset of artificial general intelligence — artificial game intelligence — where we try to create AI that can play any kind of game you could find online or on the App Store. Current algorithms largely fail at this, and developing a general game-playing AI is a problem that many are now hoping to solve. "I don't want to say that I won't be able to do this in a lifetime ... but I think I will have to live very long to be able to do this, so I'll have to live healthily", Julian concludes, as he raises and finishes his glass of wine.
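
To make the brittleness concrete, here is a minimal sketch of the kind of test Julian describes: score the same agent on an Atari game with its original colors, then again with the color channels shuffled. Everything here is illustrative, not from the talk: the gymnasium/ale-py setup, the MsPacman environment, and the placeholder `policy` are stand-ins you would swap for a real trained agent.

```python
# Illustrative sketch (not from the talk): evaluate one agent twice, once on
# the game's normal frames and once with the color channels permuted. A human
# barely notices the shift; a narrowly trained policy often falls apart.
# Assumes `pip install gymnasium ale-py`; `policy` below is a placeholder.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the ALE/* Atari environments

class ColorShift(gym.ObservationWrapper):
    """Permute the RGB channels of image observations (RGB -> BRG)."""
    def observation(self, obs):
        return obs[..., [2, 0, 1]]

def average_score(env, policy, episodes=5):
    """Average episode return of `policy` on `env`."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
    env.close()
    return total / episodes

policy = lambda obs: 0  # stand-in: replace with a trained agent's action function

print("original colors:", average_score(gym.make("ALE/MsPacman-v5"), policy))
print("shifted colors: ", average_score(ColorShift(gym.make("ALE/MsPacman-v5")), policy))
```

With a genuinely general game-playing agent, the two scores should be close; with the narrow policies Julian describes, the color-shifted run typically collapses.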

Julian's full Cross Roads presentation is available on YouTube now.