
I’m a regular listener to The Skeptic’s Guide to the Universe. It’s among my favorite diversions on my way to and from work. In a recent episode, the topic of AI came up, and the general sentiment was that “AI, as it is currently being approached, won’t work.” There are a few possible interpretations of this sentiment, depending on how you interpret ‘AI’ and how you interpret ‘work’. I’d filed it away in the back of my mind, telling myself I’d send an email of some sort (and knowing I’d never really do it), but I’ve since seen the phrase muttered again: “AI won’t work.” Maybe it’s the numerous acquisitions by everyone’s favorite search giant that stirred the skepticism. Whatever the root cause, I’d like to try to pick apart the assorted dismissals and jabs at the state of AI.

One of the reasons AI is associated with repeated and inevitable failure is the repeated redefinition of the goal. Some call this ‘moving the goalposts’. Academics and businessmen alike have a habit of taking a great algorithmic triumph in AI and calling it simply ‘an algorithmic solution to a problem’, separating the victory from the subfield of AI and re-branding it as a triumph of plain computer science. Google Search is a great example. Google has released various statements saying that they do very little machine learning in their search business. I think that entirely misses the point. The fact that a person can type a query in a human language and have a machine comprehend it well enough to retrieve relevant documents, prioritize them**, and return them to the end user is a fantastic achievement. “Well,” remarks the naysayer, “that’s not really human intelligence. It’s not really understanding the query.” On the contrary, this behavior requires enough understanding to return articles for ‘crop dusting’ that are relevant to the process of spreading particulate additives over one’s crops rather than the identically titled process of removing particulate matter from them. (Dust, a marvel of the English language, has two meanings as a verb that are exact opposites.) The qualm about the lack of ‘true comprehension’ of a query still exists, but this qualm is one part Nirvana fallacy (comparing something existing and imperfect to something nonexistent and perfect) and one part No True Scotsman fallacy (“It’s not _real_ comprehension.”). The fact of the matter is, if something demonstrates intelligent behavior, limited or not, that’s artificial intelligence.
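
To make that concrete, here’s a minimal sketch of the bag-of-words retrieval idea in Python. This is not Google’s actual pipeline; the toy corpus, whitespace tokenizer, and TF-IDF weighting are my own illustrative assumptions. Documents and queries become weighted term vectors, and ranking is just cosine similarity between them:

```python
from collections import Counter
import math

# Toy corpus: 'dust' the contronym makes naive keyword matching ambiguous.
documents = [
    "dusting crops with pesticide additives from a plane",
    "removing dust and particulate matter from harvested crops",
    "dusting furniture and shelves around the house",
]
tokenized = [doc.split() for doc in documents]
n = len(documents)
doc_freq = Counter(word for doc in tokenized for word in set(doc))

def vectorize(tokens):
    # TF-IDF bag of words: no grammar, no world model, just statistics.
    counts = Counter(word for word in tokens if word in doc_freq)
    return {w: (c / len(tokens)) * math.log(n / doc_freq[w])
            for w, c in counts.items()}

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norms = (math.sqrt(sum(v * v for v in a.values())) *
             math.sqrt(sum(v * v for v in b.values())))
    return dot / norms if norms else 0.0

query = vectorize("dusting crops".split())
for doc, tokens in zip(documents, tokenized):
    print(f"{cosine(query, vectorize(tokens)):.3f}  {doc}")
# The pesticide document scores highest: it matches both query terms.
```

Crude as it is, even this toy version exhibits a sliver of the behavior in question: it prefers the document about dusting crops over the one about removing dust from crops, with no grammar and no world knowledge.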

“How do we make a computer program that decides what it wants to do?” A perfectly valid question, though it evokes in me a bit of spite. It feels almost hypocritical, since no person alive can explain how he or she decides what to do. “Oh! I decided to write because I like writing!” That’s fantastic; why do you like writing? “It’s pleasant to me.” Why is it pleasant to you? I can keep asking until we finally get to, “It just is.” Eventually we’ll hit the bedrock of metacognition: the underlying wiring we can’t see into without other tools. Right now, we’re building all over the place. At the highest levels, calculation and logic, we’ve got expert systems. At the lowest levels, we’ve got randomly initialized neural networks that do things with little basis other than the random noise with which they’re initialized. In much the same way, it’s hard to get at the finer details of the random noise responsible for the things that inspire in us varying degrees of pleasure and pain. The best answer I can give to the question is this: “We create programs that decide what they want to do through tedious experimentation, reflection, and review,” just like we’re doing now.
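
As a small illustration of that lowest level, here’s a toy sketch (the layer sizes and tanh activation are arbitrary assumptions on my part): an untrained two-layer network will happily produce an output for any input, and that output is dictated entirely by its random initial weights.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A two-layer network, weights drawn from noise, never trained.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 3))   # hidden layer -> output layer

def forward(x):
    return np.tanh(x @ W1) @ W2

x = rng.normal(size=4)         # any input at all
print(forward(x))              # an 'answer', dictated entirely by the initial noise
```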

The simple reality is that AI is improving, gradually and meticulously. Each year we learn more about what works and what doesn’t. Science very rarely takes revolutionary leaps forward, and computer science, like every other field, advances through small, meticulous steps and lengthy review. A nonzero number of people have said that the AI field is going in the wrong direction, that we should be focusing on understanding a brain, or that we should be focusing on simulating one. These goals are not mutually exclusive in the least, nor are they detrimental to one another. If anything, having working models and ‘good enough’ AI systems that help us decide which movies we’d like [Netflix], help us organize pictures [Picasa], help us find photographs [Bing], or recommend music [Pandora] also helps keep alive the conviction that we can do better.

** Admittedly, PageRank is ‘just’ a Markov random walk and Latent Semantic Indexing is ‘just’ a clever matrix decomposition. Still, finding patterns across a large collection of text documents is very intelligent behavior, wouldn’t you say? Especially if it’s capable of handling synonymy and polysemy.
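
For the curious, here’s roughly what ‘just’ a Markov random walk amounts to: a sketch of PageRank via power iteration on a toy link graph. The damping factor of 0.85 and the four-page graph are illustrative assumptions, not Google’s actual configuration.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9):
    # Stationary distribution of a damped random walk on the link graph:
    # with probability `damping` the surfer follows a random outgoing link,
    # otherwise jumps to a page chosen uniformly at random.
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1, keepdims=True)
    # Pages with no outgoing links send the surfer anywhere uniformly.
    transition = np.where(out_degree > 0,
                          adjacency / np.maximum(out_degree, 1),
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * (rank @ transition)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Toy web: page 2 is linked to by everyone, page 3 by no one.
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))  # page 2 gets the highest rank
```

The resulting rank vector is the long-run fraction of time a random surfer spends on each page; the damping term keeps the walk from getting trapped in dead ends or cycles.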