Saturday, November 1, 2008

Approaches to Artificial General Intelligence

This past Friday, I hosted Ben Goertzel as a colloquium speaker here at UMBC. Dr. Goertzel is a strong advocate for the development of artificial general intelligence (AGI). He is one of the organizers of the AGI conference, the CEO of Novamente, and works on the OpenCog project. At the talk, he discussed some background on AGI and gave a high-level overview of OpenCog. I found his presentation as interesting as the talks he gave at last year's AGI conference.

There was one slide in which Dr. Goertzel discussed what kind of system might give rise to true AGI.

Dr. Goertzel mentioned a few approaches to AGI (admitting the list wasn't exhaustive), two of which I will contrast: (1) some sort of emergent system, an accurate neural model of the brain, or a general theoretical perspective, and (2) an architecture or framework that pieces together different parts of computer science and artificial intelligence. I believe that (1) is not possible right now, due to our limited understanding of how the brain is organized and how the mind works. Path (2), on the other hand, may be able to produce AGI relatively soon, but nobody really knows. All we can do is make attempts at it and see what happens. Current architectures include OpenCog, SOAR, and LIDA. The more "single-theory" (a term I just invented) systems hold that the mind is organized according to one particular, simple principle. Two examples of these (in my opinion) are the Subsumption architecture and the Memory-Prediction Framework; a toy sketch of the subsumption idea follows below.
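
To make the subsumption idea concrete, here's a minimal toy sketch (my own illustration, not code from the talk, OpenCog, or Brooks): behaviors are stacked in priority layers, and a higher layer that has something to say subsumes, or overrides, the layers beneath it.

    # Toy subsumption-style control loop (illustrative only).
    # Layers are checked from highest to lowest priority; the first
    # behavior that fires overrides ("subsumes") everything below it.

    def avoid_obstacle(sensors):
        if sensors["obstacle"]:
            return "turn away"
        return None  # this layer has nothing to say

    def wander(sensors):
        return "move forward"  # lowest-priority default behavior

    LAYERS = [avoid_obstacle, wander]  # highest priority first

    def step(sensors):
        for behavior in LAYERS:
            action = behavior(sensors)
            if action is not None:
                return action

    print(step({"obstacle": False}))  # -> move forward
    print(step({"obstacle": True}))   # -> turn away

The appeal of this style is that the whole organizing principle fits in one loop: intelligence-like behavior is supposed to emerge from the layering, not from any single sophisticated component.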

Both general approaches may give rise to AGI eventually. However, I find it hard to believe that any of the current architectures will scale to Strong-AI. They seem to be as limited as the AI techniques they use. Even if combining lots of AI techniques makes the overall system better, it still seems limited. Major advances along this path will come from stronger AI techniques over time.

On the other hand, I think that the single-theory approach is far more robust. The downside is that the current theories are just that: theories, without much scientific backing. Some usually-intelligent fellow comes up with a system that seems like it would produce intelligence. For example, On Intelligence makes plenty of sense, but unsurprisingly hasn't given rise to Strong-AI.

I personally enjoy biologically inspired approaches to AI. I think that using the brain as a starting point for figuring out how to make intelligence is a decent strategy, and as neuroscience advances we get closer to human-level AI. I would like to believe that the brain's organization is built from hierarchical, fractal-like patterns, and that what we call intelligence emerges from them. It is much like first learning recursion: figuring out the recursive function is a difficult cognitive task, but once it is found it is elegant, short, and easy to understand. I seriously doubt that our genetic code includes a 1:1 blueprint of our brain; I'm sure it stores something more like a recursive production rule that grows our brain. These single-theory methods aren't quite possible right now, but may be in the near future. I also believe a single-theory approach will be more robust and scalable (much like nature) than the frameworks of today.
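
As a toy illustration of that point (my own sketch, not anyone's actual model of neural development), consider a Lindenmayer system: one short rewrite rule, applied recursively, produces a branching, self-similar structure far larger than the rule itself, much as a compact genetic program could grow a complex brain.

    # Toy L-system: a single production rule, expanded recursively,
    # yields a large self-similar branching string. The rule is tiny;
    # the structure it generates is not.

    RULES = {"F": "F[+F]F[-F]F"}  # brackets mark branches, +/- mark turns

    def expand(axiom, generations):
        s = axiom
        for _ in range(generations):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    for n in range(4):
        print(n, len(expand("F", n)))  # lengths: 1, 11, 61, 311

The rule itself is eleven characters, yet after a few generations it has produced a structure hundreds of symbols long, which is the kind of compression I imagine the genome achieving.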
