September 18, 2005

Is there a Matrix in your future?

Today's discussion with the Rationalists was quite interesting, if at times mentally intense. The question was whether a robot can have a mind. I learned about crystalline protein structures embedded within neurons and the role they play in the mechanical production of thoughts in the human brain. Amazing stuff. Somewhat reassuringly (perhaps because I've been exposed to too many dysfunctional visions of the future involving artificial intelligence in science fiction), we were assured that true artificial intelligence is generations in the future.

Of most interest to me -- both from a "what will the future bring" perspective as well as a more humanistic concern -- was the Turing Test. Here's how the test works: imagine that you are at your computer, getting Instant Messages from two sources. You know that one set of IMs is coming from a human being, and the other set of IMs is coming from a computer responding to you autonomously (that is, without a programmer or user there; it is operating only on its own programming and algorithms). You have a certain amount of time -- five hours, say -- to ask both "persons" whatever questions you want. If, at the end of that time, you cannot distinguish between the real human being and the computer, then that computer is so functionally similar to real intelligence that, as a practical matter, there is no need to split hairs any further.
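
For the programmers in the audience, here is a rough sketch of that setup expressed as code -- purely illustrative, not anything we discussed at the meeting, and every name in it (the interrogator, human, and machine objects and their methods) is a made-up stand-in:

    import random

    def run_imitation_game(interrogator, human, machine, num_questions=10):
        # Hypothetical sketch of the blind-interrogation setup described above.
        # `human` and `machine` are callables that turn a question into a reply;
        # `interrogator` supplies questions and, at the end, a guess.

        # Randomly hide which respondent is behind label "A" and which is behind "B".
        labels = {"A": human, "B": machine}
        if random.random() < 0.5:
            labels = {"A": machine, "B": human}

        transcript = []
        for _ in range(num_questions):
            question = interrogator.ask(transcript)
            replies = {label: respond(question) for label, respond in labels.items()}
            transcript.append((question, replies))

        # The machine "passes" if the interrogator cannot correctly pick it out.
        guess = interrogator.guess_machine(transcript)  # returns "A" or "B"
        return labels[guess] is not machine

The point of the sketch is only that the test is entirely behavioral: the interrogator never sees anything but the transcript.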

There are some significant logical problems with such a test as a means of identifying whether you are dealing with true artificial intelligence (amusingly, the test may be more effective at measuring the intelligence of the interrogator than of either test subject). But I think the test does reveal something else important from a philosophical perspective.

What questions would you ask if you were interrogating a computer in order to determine if it had achieved true intelligence? What kinds of subjects would you want to discuss? Theology? Ethics and morality? Phenomenology? The kinds of questions you ask will reflect what you think gets to the essence of what it is to be human.

4 comments:

Anonymous said...

While I don't know when we will have AI close to our intelligence, two generations seems to assume that the march of AI will be an incremental process. That is not necessarily the case. A better metaphor is a phase shift, such as the one that occurs when ice turns to water or water to steam: one moment it is in one state, and with the next tiny change it is in a totally different state. That could occur with AI when the software begins to program itself. At that point, we could have AI very quickly.

I can't wait either. Just imagine: computers and robots to take care of the nuisances of life.

Burt Likko said...

Yes indeed. Colossus will take care of little things like government and law, and SkyNet will take care of troublesome military activity.

Anonymous said...

We get the SkyNet ref, but not "Colossus" ... unless you're referring to the anatomical location for which a certain combover-sporting insurance coverage partner tried to get that going as a nickname (unsuccessfully, at least during my tenure). But I doubt we'd have to worry about world domination from that entity, as it couldn't even take care of the first two Mrs. F's ...

Or maybe the Sylvia Plath poetry collection?

Burt Likko said...

No, my Norwegian friend, Colossus is a real movie reference. During the Cold War, an American super-computer built to manage nuclear weapons links to its Soviet counterpart; the two exchange enough information that they become a single sentient entity, which assumes dictatorial powers over the world, threatening nuclear annihilation if humanity fails to obey its dictates.