Wednesday, September 05, 2007

Singularity Summit 2007

The Singularity Summit 2007 is about to start. I wish I could be there, but I will have to content myself with being an observer on the sidelines.

They say in the introduction on the website:

The Singularity Institute for Artificial Intelligence presents the Singularity Summit 2007, a major two-day event bringing together 18 leading thinkers to address and debate a historical moment in humanity's history – a window of opportunity to shape how we develop advanced artificial intelligence.

The introduction goes on to explain what "The Singularity" is:

For over a hundred thousand years, the evolved human brain has held a privileged place in the expanse of cognition. Within this century, science may move humanity beyond its boundary of intelligence. This possibility, the singularity, may be a critical event in history, and deserves thoughtful consideration.

That sounds like hype, but it is not.

Artificial intelligence is a very active area of academic research, and useful software continuously spins off from this research. All such software applications are very narrow in scope, but that very specificity means that each application can excel in its own chosen area. Currently there is no general-purpose artificial intelligence software that can, as a human can, excel at a wide range of activities. The idea of The Singularity is that sometime during the 21st century artificial intelligence software (and hardware) technology will have advanced to the point where human-level performance (and beyond) becomes not only possible but highly likely.

Each generation of artificial intelligence will assist in the development of the next generation (this happens even now), and this development process will accelerate as the need for external human assistance in the design process diminishes with each advancing generation of AI, until AI can do the development entirely by itself. Once AI can develop its own next generation without human assistance, the pace of progress can become very rapid indeed, and if external resources such as materials and energy were unlimited (which, of course, they are not!) then the pace of progress would become so great that it would seem to be infinite (although, of course, it is not!).
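The word "singularity" can even be given a precise toy meaning here. If each generation of AI both improves and speeds up the development of the next, the simplest growth law one can write down is dI/dt = k·I², whose exact solution I(t) = I₀/(1 − k·I₀·t) diverges at the finite time t* = 1/(k·I₀): a genuine mathematical singularity. Here is a minimal Python sketch of this idea (the quadratic growth law, the resource ceiling and every constant are my own illustrative assumptions, not anything from the Summit material); imposing a resource ceiling turns the divergence into an S-curve, the "smoothed-out version" I mention below.

```python
# Toy model of recursive self-improvement (the quadratic growth law,
# the resource ceiling and all constants are illustrative assumptions).
#
# dI/dt = k * I^2 has the exact solution I(t) = I0 / (1 - k*I0*t),
# which diverges at t* = 1 / (k * I0): a mathematical singularity.
# The factor (1 - I / I_max) models finite materials and energy and
# turns the divergence into an S-curve that saturates at I_max.

k, I0, I_max, dt = 0.1, 1.0, 1000.0, 0.001

t_star = 1.0 / (k * I0)
print(f"idealised (unlimited-resource) model diverges at t* = {t_star:.1f}")

I, t = I0, 0.0
while t < 3.0 * t_star:
    I += k * I * I * (1.0 - I / I_max) * dt       # resource-limited growth
    t += dt
print(f"resource-limited capability at t = {t:.1f}: {I:.1f} (ceiling {I_max:.0f})")
```

Running it prints the blow-up time of the idealised model and the finite capability that the resource-limited version settles to instead.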

Hence the use of the phrase "The Singularity": it marks a fairly sharp transition between the human intelligence that dominates now and the advanced artificial intelligence that will exist (I hesitate to say "dominate"!) in the future. The AI technology that emerges from The Singularity will be thinking thoughts that we will not be able to accommodate within our limited biological brains, so making predictions about the post-singularity era is fraught with difficulty. A more detailed discussion of the term "The Singularity" is given here.

What sort of things would have to happen in order to make The Singularity (or a smoothed-out version thereof) possible?

  1. We would use an advanced form of nanotechnology to evolve & grow massively parallel fine-grain computer architectures, rather than use the current approach where we design & build every small detail of the computer architecture ourselves. This type of evolution would use an artificial form of DNA to record the long-term state of the evolutionary process, and this type of growth would make use of molecular self-organisation to assemble the computer. This is essentially a fine-grain form of artificial life. (A toy sketch of this evolutionary approach is given just after this list.)
  2. We would use external training to teach the computer what its observed behaviour should be, rather than internal programming to dictate to the computer what its internal workings should be. This training process would involve interaction of the AI with its environment via sensors (e.g. inputs such as eyes and ears) and effectors (e.g. outputs such as touch and speech), and one possible training environment might be Second Life (or something similar). (A toy sketch of this style of training also follows the list.)
  3. We would largely remove the artificial distinction that exists between software and hardware, so that each particular behaviour of electrons/molecules/etc in the computer has a unified existence rather than being split up as hardware+software. Currently, the use of programmable architectures has some of the spirit of this unified approach. A useful behaviour that is learnt by one generation could optionally be hard-wired into the next generation (i.e. Lamarckism), but it is not clear that this would be easy to do in a fine-grain architecture that arises through evolution & growth.
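Item 1 is the part of this scheme that I find most interesting, so here is the deliberately tiny genetic-algorithm sketch promised above of what "evolve & grow" might mean. Everything in it (the bit-string "DNA", the growth rule, the fitness target and all the parameters) is my own invention for illustration only; the real thing would work through molecular self-organisation rather than Python lists.

```python
import random

# Toy genetic algorithm (purely illustrative: the bit-string "DNA",
# the growth rule, the fitness target and every parameter below are
# my own inventions, standing in for molecular self-organisation).

random.seed(0)
GENOME_LEN, POP, GENERATIONS, MUT_RATE = 32, 40, 60, 0.02
TARGET = [i % 2 for i in range(GENOME_LEN)]   # an arbitrary target "architecture"

def grow(genome):
    # "Growth": the DNA is read out into a phenotype.  Here it is just
    # the identity map; in the scheme above it would be molecular
    # self-organisation assembling a fine-grain parallel computer.
    return genome

def fitness(genome):
    # Score how well the grown phenotype matches the target architecture.
    phenotype = grow(genome)
    return sum(g == t for g, t in zip(phenotype, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]           # selective breeding
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

The essential loop is the same at any scale: read the DNA out into a phenotype, score the phenotype, and breed preferentially from the high scorers.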
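For item 2, here is an equally minimal sketch of "external training": the trainer never touches the agent's internal workings, it only rewards the behaviour it observes through the agent's effectors. The two-state world, the reward rule and the learning rate are again purely my own illustrative assumptions, a crude stand-in for an environment as rich as Second Life.

```python
import random

random.seed(0)
STATES, ACTIONS = 2, 2
q = [[0.0] * ACTIONS for _ in range(STATES)]   # the agent's internal workings

def desired(state):
    # The trainer's (hidden) standard of good behaviour: in state s,
    # perform action s.  The agent is never shown this function; it
    # only ever sees the rewards.
    return state

state = 0
for step in range(2000):
    # Sensor: observe the state.  Effector: choose and perform an action.
    if random.random() < 0.1:                  # occasional exploration
        action = random.randrange(ACTIONS)
    else:
        action = max(range(ACTIONS), key=lambda a: q[state][a])
    reward = 1.0 if action == desired(state) else -1.0
    q[state][action] += 0.1 * (reward - q[state][action])
    state = random.randrange(STATES)           # the world moves on

print("learned behaviour per state:",
      [max(range(ACTIONS), key=lambda a: q[s][a]) for s in range(STATES)])
```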

An advantage of using our own technology (rather than the outcome of biological evolution) to implement an artificial intelligence is that we can optionally hard-wire some of its behaviour. This could be achieved by "steering" (e.g. selective breeding) the process of evolution & growth that gives rise to the computer architecture in the first place, which would allow us to influence the behaviour of the AI in many ways. This offers the possibility of using our human influence to create "nice" advanced AI, but unfortunately the same technique could equally be used to create "nasty" advanced AI.
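In the toy genetic algorithm sketched above, such "steering" amounts to nothing more than changing the fitness function, which is exactly why the same lever serves "nice" and "nasty" alike. A hypothetical sketch, reusing fitness() and grow() from that example:

```python
def steered_fitness(genome, niceness_weight=5.0):
    # Hypothetical composite fitness, reusing fitness() and grow() from
    # the genetic-algorithm sketch above.  The "niceness" test (a zero
    # in the first gene) is an arbitrary stand-in for some behavioural
    # property we choose to breed for; flipping the sign of the weight
    # would breed for the opposite behaviour, so the mechanism cuts
    # both ways.
    capability = fitness(genome)
    niceness = 1.0 if grow(genome)[0] == 0 else 0.0
    return capability + niceness_weight * niceness
```

Sorting the population by steered_fitness instead of fitness is the whole of the "selective breeding" in this toy setting.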

One thing that always comes up in conversations about this sort of advanced artificial intelligence is "do we need it?", "do we want it?", and so forth. I'll cut straight to the bottom line. The applications of this type of technology are so wide-ranging, the potential for enhancing the quality of our lives is so great, and the need to be able to defend ourselves against an aggressor who might deploy this technology against our interests is so compelling (yes, the "arms race" argument!), that I see no way of suppressing this technology. We have to learn to live with it. If this sort of advanced artificial intelligence is technically possible (and there is no obvious reason why it is not) then it will eventually come into existence no matter how much we try to stall the process.

So I will be watching what happens at the Singularity Summit 2007 with great interest, and I think you should too.

Update: There is now a report and discussion of the presentations at the Singularity Summit at Reason magazine here, entitled "Will Super Smart Artificial Intelligences Keep Humans Around As Pets?". I could find only one mention of "nanotechnology", and then only in the context of optimising resource usage (i.e. making things smaller to get more computing done). There was no mention of the clever use of nanotechnology, such as using synthetic DNA to evolve & grow massively parallel fine-grain computer architectures, as I discussed above. I find this omission very odd indeed.

Update: Here are links to some live-blogging on the Singularity Summit 2007 to be found on David Orban's blog, which I heard about here on Tommaso Dorigo's blog:

Singularity on the front page (of The San Francisco Chronicle)
Liveblogging the Singularity Summit 2007?
Liveblogging the Singularity Summit 2007 - Day One - morning
Liveblogging the Singularity Summit 2007 - Day One - afternoon
Liveblogging the Singularity Summit 2007 - Day Two - morning
Liveblogging the Singularity Summit 2007 - Day Two - afternoon

Well, I said "So I will be watching what happens at the Singularity Summit 2007 with great interest, and I think you should too." If what I read in the above live-blogs is even a vaguely accurate report of the sort of discussion that went on at the Singularity Summit, then I am very disappointed by the airing of so many apparently superficial opinions there. Maybe the purpose of the event was for the speakers to strut their stuff before the adoring eyes of the press, and the meat of the arguments was kept behind the scenes. What a pity.

Update: The presentations given at The Singularity Summit 2007 are now online at http://www.singinst.org/media/singularitysummit2007.
