The Economics of Artificial General Intelligence Takeoff
There is a corner of the web where super smart people debate the future of artificial intelligence, and in that corner there is an ongoing argument about whether we will experience what is known as a fast takeoff scenario for Artificial General Intelligence (AGI), or what these folks call "foom" (as in the sound effect for something sudden).
Artificial General Intelligence, for those who aren’t familiar with the term, is the kind of AI we tend to see in movies – very human-like. It’s not the kind of narrow AI that just beat Lee Sedol in the game of Go, or the kind that will drive a car or sort your pictures on Google Photos. Artificial General Intelligence is something quite remarkable. It doesn’t exist (yet), but if and when it does, it will be a total game-changer.
The question I tend to think a lot about is what economic structures would lead to something like this. How might an AGI actually get built? Would it be one company? A guy or gal in a basement? Or would it have to be a much larger collaborative effort? There are many reasons why this matters, but the one I'm most interested in centers on the notion of what I call "the code behind the code." All designed systems have underlying assumptions, biases, and intentions baked into them, and whatever form of collaboration ends up building an AGI will bake its own code behind the code into it.
So it's interesting to me to see a debate going on right now about how localized the actual development process might be in the lead-up to an AGI:
It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.
However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.
See more at: http://www.overcomingbias.com/2016/03/how-different-agi-software.html
#agi #artificialintelligence
The money machine is an AGI with human gears and the non-human goal of counting as much money as possible, as often as possible.
Alejandro Perales Sarah Palin is proof that there is a market for word salads, but maybe you need to make an effort to route your production to where the demand is.
AGI is not easy; it's like physicists looking for a theory of everything… I would be very happy with some sort of "standard model"… I think being creative enough to build a general learner for a specific topic could be one of the most valuable things a human can do…
Between where we are, in the land of hand-crafted, application-specific machine learning, and the land of general artificial intelligence, there is a very deep gulf. I don't think that gulf is very wide, though; all it takes is a bridge that spans it to get us over there.
That bridge is not going to be some magic algorithm or simple neural-network structure like deep learning or recurrent neural networks. It's going to take structural engineering on a much broader scale.
I propose that the bridge to AGI must be a larger architecture that at least derives from the gross structure of the mammalian brain/body combination. Looking at the brain, most of us see the cortex because that’s the outer, visible part. It has a relatively simple structure, and it’s the part that gets all the attention in AI design.
In a real brain, the cortex would be nothing without the parts it encloses: the group of structures called the limbic system and the thalamus, and below them, the brainstem and cerebellum.
Those parts form a complex system of many interconnected components that carry input from the senses up to the cortex and input from the cortex down to the body (and back to the senses). The limbic system provides the emotions and modulates all brain activity with a reward mechanism that interacts with conceptual goals and autonomic homeostatic mechanisms.
An AGI will and must have corresponding structures, must have a sensate ‘body’ of some kind, and must learn from experience by ‘living’ and acting in a ‘real world’ of some sort. An AGI that is meant to function in human society must have a body with senses and neural structures that are as close to human as possible. That would enable sympathy, empathy, and most basically, mutual understanding.
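To make that loop concrete, here is a minimal, purely illustrative Python sketch of the structure described in the last few comments: a "body" that senses and acts in a toy world, a limbic-like module that turns homeostatic targets into a reward signal, and a cortex-like module whose behavior that reward modulates. Every class, interface, and number below is a made-up placeholder, not a claim about how a real AGI would be built.

```python
# Purely illustrative: the sense -> cortex -> body loop described above, with
# a limbic-like reward signal modulating what gets learned. All names here
# are hypothetical placeholders.
import random


class LimbicSystem:
    """Combines homeostatic drives (and, in principle, conceptual goals) into a scalar reward."""

    def __init__(self, goals):
        self.goals = goals  # e.g. {"energy": 0.8} as a homeostatic target level

    def reward(self, body_state):
        # Reward is higher the closer the body is to its homeostatic targets.
        return -sum(abs(body_state.get(k, 0.0) - v) for k, v in self.goals.items())


class Cortex:
    """A trivially simple 'policy': remembers which action last improved reward."""

    def __init__(self, actions):
        self.actions = actions
        self.best_action = random.choice(actions)
        self.best_reward = float("-inf")

    def act(self, sensation):
        # Ignores the sensation for simplicity: mostly exploit, occasionally explore.
        return self.best_action if random.random() < 0.8 else random.choice(self.actions)

    def learn(self, action, reward):
        if reward > self.best_reward:
            self.best_reward, self.best_action = reward, action


class Body:
    """A stand-in for an embodied agent acting in a 'real world' of some sort."""

    def __init__(self):
        self.state = {"energy": 0.2}

    def sense(self):
        return dict(self.state)

    def execute(self, action):
        # "eat" raises energy, "rest" leaves it alone, "run" burns it.
        delta = {"eat": 0.1, "rest": 0.0, "run": -0.1}[action]
        self.state["energy"] = max(0.0, min(1.0, self.state["energy"] + delta))


body = Body()
limbic = LimbicSystem(goals={"energy": 0.8})
cortex = Cortex(actions=["eat", "rest", "run"])

for _ in range(50):                   # the agent 'lives' for a while
    sensation = body.sense()          # senses go up to the cortex
    action = cortex.act(sensation)    # the cortex decides what to do
    body.execute(action)              # commands go down to the body
    r = limbic.reward(body.sense())   # the limbic system evaluates the result
    cortex.learn(action, r)           # the reward modulates what is learned

print("learned preference:", cortex.best_action)
```

Even at this toy scale, the point of the comment shows up: the "cortex" only becomes useful because the reward produced by the limbic-like module, grounded in the body's state, tells it what is worth learning.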
Joe Repka Why assume AI needs to be human-like at all when we see "intelligence" in the animals around us? In particular, dolphins and whales!
In general, we don't make that assumption at all. Only if we expect a truly social humanoid intelligence does the AGI need to be similar to humans in architecture (senses, specific brain-structure correlates, ability to act, etc.). If you want a bat AI instead, for example, then you find design inspiration in the brain/body of bats.
I suppose we could try to imagine some genuinely alien AI and construct one, but I haven’t thought much in that direction.
Great point, Joe Repka. I feel the same way about the brain; sure, the neocortex is powerful and interesting, but it wouldn't be able to do jack without all of the earlier architecture that underlies it.
And, on the question of architecture, I also agree that what comes next will be architectures that connect the various types of new intelligence we are now building. Even the techniques for combining techniques that the DeepMind folks demonstrated hint at that, I think.
Super nice!
Joe Repka I think we’re still decades away from a humanoid A.I. based on our brain. People are confusing the current progress in Intelligent Technologies with generic A.I. There is no reason to assume one leads to the other at all.
However, just as we stumbled into this era (made possible by Moore's Law and Big Data), I think we might eventually accidentally create a non-humanoid generic A.I. that leverages our technology to achieve "emergence". (I am basing this speculation on Complexity Theory.) By running millions of experiments we may actually just be accelerating Evolution!
+1
666 ~ .666 ~ 2/3 ~ 1 – 1/3
I read Asimov's stories and I believe he is right: A.I. is necessary, and humanity could benefit from it.
super cool!
Diana NN
I think language will be the key to AGI, and to any associated foom, regardless of the type of team involved. It's generally accepted that we humans made our biggest, most significant leap in intelligence after we acquired language. So I'm guessing that the first team to successfully integrate general language capabilities into an AI, such that the language, sensory, and cognitive systems feed back into each other in a constant, multi-way learning system that functions similarly to how human children learn language and learn from language, will be the team to score the biggest foom.
I think you are probably right, Shawn McClure, at least for an AGI that is human-like. Understanding language is going to be a huge learning initiative, with a great deal of feedback required (at least based on current technology). From that perspective, I give Google by far the best shot at figuring this out.
I’d lay some money on Google as well, because they’re essentially crowd-sourcing the construction of semantic AI as we speak, whether intentionally or by default. One could think of Google Search – millions of people entering real-language search phrases and then selecting the results they find most relevant – as a worldwide exercise in building the most vast and comprehensive training set possible for linguistic neural nets that could in turn become the foundation for a semantically-aware AI. SkyNet may be closer than we think… 😉
Exactly. Access to huge crowds is essential for huge training feats.
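As a deliberately toy illustration of what the last few comments are pointing at, the sketch below turns a hypothetical search log into weighted (query, clicked-document) training pairs, the kind of weak supervision a relevance or language model could be trained on. The log format, field names, and document ids are all invented for the example.

```python
# Toy sketch: search logs as a crowd-sourced training set. Each click pairs a
# natural-language query with the document the user judged most relevant,
# which can serve as weak supervision for a relevance model. The log format,
# field names, and document ids here are invented.
from collections import Counter

search_log = [
    {"query": "how do neural networks learn", "clicked": "doc_backprop"},
    {"query": "teach a computer language", "clicked": "doc_nlp_intro"},
    {"query": "how do neural networks learn", "clicked": "doc_backprop"},
    {"query": "what is gradient descent", "clicked": "doc_backprop"},
]

# Aggregate the log into (query, relevant_document) pairs, weighted by how
# often the crowd converged on the same answer.
pair_counts = Counter((row["query"], row["clicked"]) for row in search_log)
training_set = [
    {"query": query, "positive_doc": doc, "weight": count}
    for (query, doc), count in pair_counts.items()
]

for example in training_set:
    print(example)
```

At search-engine scale, the same aggregation would run over billions of such pairs, which is roughly the "worldwide exercise in building a training set" the comment above describes.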
Hi!
Did u make this
Cool