2 Comments

  1. anon September 15, 2009 @ 12:38 pm

    First, when one of these super-human technologies takes off, it creates a sort of gold rush that attracts a lot of talent and resources away from the core problems of AI. In recent years, it seems that 80-90% of the people at the big AI conferences are working on super-human AI problems, not on human-like AI. So it is little wonder that progress on the core problems has slowed down.

    I don’t see how this could be a problem, as long as these narrowly-defined, optimal or near-optimal technologies are spun off to domain experts ASAP. Let optimal planning papers be published in operations research journals, papers about chess/poker-playing programs in computational game theory journals, and papers about statistical inference in statistics journals.

  2. Scott Fahlman September 15, 2009 @ 1:43 pm

    “Anon”,

    I think there are two problems with your strategy. First, I think that it would be a shame to split up AI into a lot of small sub-disciplines if we can avoid it. I think that the various communities within AI have a lot to learn from one another. These communities could be mutually supporting, as long as we can maintain some level of mutual respect and get over the idea that not-so-formal work on human-like AI is somehow confused and second-rate. Many of the specialists in super-human kinds of AI still dream about solving the larger problem, but they are frustrated and have turned to sub-areas of AI that offer more immediate practical results.

    Second, those of us working on human-like AI are not in a position to throw out all the super-human specialists, even if we wanted to. They are a large majority of the field now, and have been for some time. So all we could really do is secede. Some have done this, renaming their new group “Artificial General Intelligence” or AGI. It’s unclear whether this splinter movement will thrive. (It’s interesting to me that a number of papers at recent AGI conferences talk about the need for a new mathematical/theoretical foundation to enable forward progress in AGI. To me, that’s the sort of thinking that got us into the current situation, but I wish them well.)

    There is also a lot of work on “biologically inspired” AI. That’s not quite the tack I would take — I think it’s worthwhile to study human-like general AI without necessarily focusing on brain modeling — but it is one respectable way to rule out narrow work on super-human topics. There have been a couple of successful workshops (not narrowly focused on brain modeling, despite the label) at recent AAAI Fall Symposia.

    – Scott

Human vs. Super-Human AI

Scott Fahlman,   September 7, 2009
Categories:  AI    

Note:  A revised, updated, and slightly expanded version of this essay has been published in the inaugural issue of the new online journal, Advances in Cognitive Systems, or ACS:

Fahlman, Scott E. (2012): “Beyond Idiot-Savant AI” in Advances in Cognitive Systems 1, pages 15-22.

As for the photo, it doesn’t really have anything to do with the topic. People seem to like a bit of eye-candy in the blog, just for variety. I took this photo a few years ago in Marwood Hill Gardens, near Barnstaple in England – a highly recommended garden, by the way.

What this article is about

Despite the title, this article is not about super-intelligent, autonomous AI systems that might attempt to take over the world and that, if they succeed, might or might not decide to keep us humans around as pets. There has been a certain amount of discussion about this in recent months, triggered in part by an AAAI panel set up to look at such issues – and in part by a lot of sensationalized press accounts. Nothing sells papers like the threat of killer robots run amok.

I agree that those of us working on AI have a responsibility to consider the long-term human consequences of our work. Fortunately, we’ve got some time to think about this. AI systems are still a very long way from achieving even a child-like level of common sense and general planning ability. This article discusses one reason why progress has been so slow.

Slow progress toward the original goal of AI

If measured by the number of useful applications, tools, and vibrant spin-off fields it has produced, Artificial Intelligence has been a spectacular success. However, a lot of people (including me) believe that AI has been a disappointment in terms of achieving its original goal: to understand and, ultimately, to replicate the computational mechanisms responsible for human-like intelligence, in all its generality, flexibility, and resilience. In an earlier article I listed some of the major elements of intelligence that we still don’t understand after 50+ years of work on AI. In another article I echoed Ron Brachman’s call for continuing work on an integrated architecture for AI.

Back in the early days of the field, we seemed to be making good progress toward this goal. There were a number of key discoveries along the way: first, that computers could manipulate symbols as well as numbers; second, that search through a space of possibilities, with occasional backtracking, was a powerful and resilient way to solve many problems; third, that human-like performance is going to require a lot of knowledge, not just search-power; fourth, that it’s too tedious to assemble and organize by hand enough knowledge for broad, general intelligence, so we had better find ways to increase our store of knowledge by learning. But somehow, since the mid-1980s, progress toward this central goal of AI seems to have run out of steam.
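
To make the second of those discoveries concrete, here is a minimal sketch (in Python, purely illustrative and not from the original essay) of search through a space of states with backtracking: try a branch, and if it dead-ends, back up and try the next one.

```python
def backtracking_search(state, is_goal, successors, path=None):
    """Depth-first search with backtracking.
    Returns a list of states from `state` to a goal, or None if no path exists.
    `is_goal` and `successors` are caller-supplied; nothing here is domain-specific."""
    if path is None:
        path = [state]
    if is_goal(state):
        return path
    for nxt in successors(state):
        if nxt in path:                      # avoid going around in circles
            continue
        result = backtracking_search(nxt, is_goal, successors, path + [nxt])
        if result is not None:               # this branch worked out
            return result
    return None                              # dead end: backtrack to the caller
```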

Why is that? Well, one explanation is that the funding climate changed. In the old days, there was steady, long-term support for research on the central problems of AI – not an enormous amount of funding, but enough to enable a small community of AI researchers to focus on the most challenging fundamental problems. This effort attracted some of the most brilliant minds in the field of computing. But times changed. Sponsors lost patience with basic, long-term AI research; they began demanding a focus on specific applications, with constant benchmarks, competitions, “go/no-go” decisions, and short-term deliverables. The patient, curiosity-driven funding that characterized the early days of AI is now very rare.

But that is only a part of the story. I think that we face a more fundamental problem: in an odd way, AI has been a victim of its own success. More specifically, the field’s success in producing useful but narrow technologies in particular areas – what I call “super-human AI” – has almost completely crowded out work on our original goal of creating flexible, integrated, human-like AI. We have seen one gold rush after another to exploit new, highly specialized technologies with their roots in AI. In the short run, this may be good for the field, since it pulls in both people and money; in the long run, I think it’s a serious problem.

“Super-Human” AI

What do I mean by “super-human AI”? I wrote briefly about this in an earlier article. The idea is that intelligence is really a bundle of many capabilities. It is possible (and is now very common) to have super-human performance in one of these areas, or a few of them, without having anything that resembles the breadth, resilience, and resourcefulness of “merely human” intelligence.[1]

There are many examples of narrow super-human AI, but the story is similar for each. First, researchers grappling with some important problem within AI try a variety of approaches, inspired to some degree by the questions “How do humans perform this task?” or “What is really required to achieve human-like performance?” And then someone comes up with an elegant mathematical approach that, under certain conditions and with sufficient computing power, can produce results much better than an unaided human. In many cases, this leads to a commercially valuable technology. In some cases, it gives rise to an active field of investigation that takes on a life of its own, attracting many researchers and lots of funding, and spawning its own specialized conferences and journals.

There are many examples of these super-human AI technologies: computer algebra systems that can solve integrals that no unassisted human can handle; search-intensive chess programs that can consistently beat (almost) every human player; search engines that can browse and index the entire Internet, but without any understanding of the content; statistical machine translation systems that can produce useful (if imperfect) translations without ever considering the meaning of the text; statistical data-mining programs that can extract subtle regularities from a mountain of noisy data; poker-playing programs that employ powerful techniques from game theory; optimal or provably near-optimal planning systems; theorem-proving inference systems, with their guarantees of soundness, logical completeness, and provable consistency; and statistical inference systems that (if their models and input probabilities are correct) can very precisely infer the probabilities of various outcomes in a way that no unaided human can match.

This is great, but in every case (so far, at least) these developments contribute little or nothing toward achieving our original goal. The super-human techniques apply only to a very narrow set of problems, or the assumptions underlying the mathematical model are unrealistic in practice, or the method is too computationally demanding to be used on large problems – often problems that we humans can solve easily using our more informal approaches. Or all of the above. All of these systems are impressive, and many are commercially valuable, but none of them would be called intelligent, in the normal sense of that word. None of these systems can begin to match the common sense or flexible problem-solving ability of a young child.

An Example

To understand what’s going on here, let’s look at one of these areas – the evolution of AI planning and problem-solving systems – in a bit more detail. A lot of the early work in this area took an intuitive approach, informed to some degree by introspection about how we humans approach complex planning tasks.

The first problem is to represent the universe in which the planning is to take place, the allowable set of operations, and the preconditions and effects of each operation. (We still have not completely solved these representation problems, but that’s a topic for another article.) Given an adequate representation, the next problem is how to find a legal path from the current state to the goal. Sometimes a legal path is easily found; sometimes it requires a great deal of search and non-obvious application of the available operators.
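
As a concrete (and deliberately toy) illustration of what such a representation can look like, here is a short Python sketch in the spirit of STRIPS-style operators: each operator lists the preconditions that must hold, the facts it adds, and the facts it deletes, and a brute-force search looks for any legal path from the current state to the goal. The operator names and facts below are hypothetical, chosen only to keep the example small.

```python
from collections import namedtuple, deque

# A STRIPS-style operator: preconditions that must hold before it can run,
# facts it adds to the world state, and facts it deletes.
Operator = namedtuple("Operator", ["name", "preconds", "adds", "deletes"])

def applicable(op, state):
    return op.preconds <= state

def apply_op(op, state):
    return (state - op.deletes) | op.adds

def find_plan(start, goal, operators):
    """Breadth-first search for any legal sequence of operators reaching the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for op in operators:
            if applicable(op, state):
                nxt = frozenset(apply_op(op, state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [op.name]))
    return None

# Toy blocks-world fragment: get block A onto block B.
ops = [
    Operator("pickup_A", {"clear_A", "on_table_A", "hand_empty"},
             {"holding_A"}, {"on_table_A", "hand_empty"}),
    Operator("stack_A_on_B", {"holding_A", "clear_B"},
             {"on_A_B", "hand_empty"}, {"holding_A", "clear_B"}),
]
start = {"clear_A", "clear_B", "on_table_A", "on_table_B", "hand_empty"}
print(find_plan(start, {"on_A_B"}, ops))   # -> ['pickup_A', 'stack_A_on_B']
```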

Ideally, we would like both a reasonably efficient plan and a reasonably efficient planning process. One powerful idea is hierarchical planning: first, use high-level, abstract operators to sketch the outlines of a plan; then use more specific operators to fill in the details. Another powerful idea is to save a sequence of operations that is useful in one context, generalize it a bit, and turn the sequence into a “macro-operator” that can be used in other problems.
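
The macro-operator idea, in particular, is easy to sketch. Continuing the toy STRIPS-style notation above (everything here is illustrative, not code from any of the systems cited below), a sequence of operators can be fused into a single reusable operator whose preconditions are the facts the sequence needs from the outside world and whose effects are the sequence’s net adds and deletes:

```python
from collections import namedtuple

Operator = namedtuple("Operator", ["name", "preconds", "adds", "deletes"])

def make_macro(name, steps):
    """Fuse a sequence of STRIPS-style steps into one macro-operator."""
    preconds, adds, deletes = set(), set(), set()
    for op in steps:
        preconds |= (op.preconds - adds)     # needs not supplied by earlier steps
        adds = (adds - op.deletes) | op.adds
        deletes = (deletes - op.adds) | op.deletes
    return Operator(name, frozenset(preconds), frozenset(adds), frozenset(deletes))

# Fuse "pick up A" and "stack A on B" into a single "put A on B" macro-operator.
pickup = Operator("pickup_A", {"clear_A", "on_table_A", "hand_empty"},
                  {"holding_A"}, {"on_table_A", "hand_empty"})
stack = Operator("stack_A_on_B", {"holding_A", "clear_B"},
                 {"on_A_B", "hand_empty"}, {"holding_A", "clear_B"})
put_A_on_B = make_macro("put_A_on_B", [pickup, stack])
# put_A_on_B.preconds == {"clear_A", "on_table_A", "hand_empty", "clear_B"}
```

A generalization step (replacing the specific blocks A and B with variables) would then turn this into the kind of reusable macro-operator that the early systems saved for later problems.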

These ideas were explored extensively in the early days of AI by systems such as GPS[2], STRIPS[3], ABSTRIPS[4], SOAR[5], and many others. My own BUILD program[6] (my MIT master’s thesis from 1973) was typical of early work in this area. BUILD tried to figure out a plan by which a (simulated) one-handed robot could build a specified structure on a table, given a collection of blocks. BUILD could be quite resourceful: it would first try a straightforward approach, placing the blocks one by one, starting from the bottom of the desired structure and working upward. But if the desired structure was unstable during the construction, it would consider more complex plans. It would try to use other blocks as scaffolding or temporary counterweights, and if that didn’t work it would try to build a sub-structure on the table and then lift the whole sub-assembly into place. BUILD would do some extra work to produce good plans – for example, it would eliminate redundant steps in the plan – but its plans were by no means optimal, and were never intended to be. It just returned the first reasonably good plan that worked. In that respect, it seemed very human-like in its planning.
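
The overall control structure was simple: try the cheapest strategy first, and escalate to a more elaborate one only when the simpler plan fails. A rough Python sketch of that shape (the strategy names are placeholders of mine, not BUILD’s actual routines) might look like this:

```python
def plan_structure(goal_structure, strategies):
    """Return the first reasonably good plan that works; no claim of optimality.
    `strategies` is ordered from simple to elaborate; each returns a plan or None."""
    for strategy in strategies:
        plan = strategy(goal_structure)
        if plan is not None:
            return simplify(plan)            # e.g. drop redundant steps
    raise RuntimeError("no strategy produced a workable plan")

def simplify(plan):
    # Placeholder post-processing; a real system would cancel out redundant steps.
    return plan

# Ordered roughly as BUILD is described above (all names hypothetical):
# strategies = [place_bottom_up, add_scaffolding_or_counterweights,
#               build_subassembly_and_lift]
# plan = plan_structure(goal, strategies)
```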

Not long after BUILD was published, the AI planning field changed radically. Methods were developed that, for a certain limited class of problems, would guarantee optimal results, or results that were provably close to optimal. Other things being equal, that’s good: who wouldn’t prefer an optimal solution over one that is merely “good enough”? But, of course, other things were not equal. The optimal planning programs were very computationally demanding because the programs had to consider every possible solution – or formally exclude some parts of the search space where no optimal solution could possibly be hiding. For many problems of interest, these techniques were computationally intractable, or at least impossibly inefficient. So these techniques were limited to small problems in very clean, easy-to-model domains. In the real world, it makes little sense to spend a lot of supercomputer time seeking an optimal solution to real-world problems when a single pothole – not represented in the model – could force the whole planning process to be re-run. (If you really care about optimality, a local patch to the plan isn’t good enough.) And, as all Pittsburghers know, potholes are everywhere.
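
To see where the cost comes from, consider uniform-cost search, the simplest scheme that actually guarantees a cheapest plan when each step has a cost. The sketch below is a generic textbook algorithm of mine, not a description of any particular planner mentioned here; the point is only that the guarantee forces the search to keep expanding partial plans until nothing cheaper can possibly remain unexplored, which is exactly what becomes intractable in large or messy domains.

```python
import heapq
import itertools

def optimal_plan(start, goal_test, successors):
    """Uniform-cost search: guaranteed to return a cheapest plan, but only by
    expanding every partial plan that could still beat the best goal found."""
    counter = itertools.count()              # tie-breaker; states are never compared
    frontier = [(0, next(counter), start, [])]
    best_cost = {start: 0}
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan                      # provably optimal: nothing cheaper remains
        for action, nxt, step_cost in successors(state):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost, next(counter), nxt, plan + [action]))
    return None
```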

Given these limitations, some of us felt that the obvious move would be to continue work on flexible, resourceful, trainable, “good enough” planning systems. After all, we humans don’t worry about optimal planning in our daily lives. “Good enough” planning is good enough for us, and we can show great cleverness and resiliency when things go wrong at execution time – as they so often do – forcing us to re-plan on the fly. We can even pass partially instantiated plans from one person to another via informal high-level recipes: “To get from CMU to the airport by car, take Fifth Avenue to the Parkway East (heading west), cross the Fort Pitt Bridge, and just follow the ‘Airport’ signs from there.”

But the idea of optimal or near-optimal solutions, built on a sound and elegant mathematical foundation of theorems and lemmas, was too alluring to pass up. Since the mid-1980s, the planning field has been dominated by this approach. Most of the papers at planning conferences focus on how to deal with the resulting intractability, so that at least some problems of practical interest can be addressed. If an optimal solution is infeasible, you at least need to prove something about how close your technique can come to the optimum – impossible in most messy real-world planning domains. So it is now difficult to publish planning results that do not address optimality concerns, and several generations of students have learned to take this approach for granted. Not only has a super-human sub-field of AI been spawned, but work on more human-like approaches to planning has mostly shriveled and died, unable to thrive in the shade of this mighty oak.

The Problem

And that, I think, is the problem. AI is one field with two very different sets of goals. It would be healthy for the field if these two approaches could co-exist: one set of researchers working on various super-human areas of AI and another set working on the original core problem of broad, human-like intelligence. These efforts could reinforce one another, and some people would move back and forth between them in the course of a career. Unfortunately, it seldom works out that way, for two reasons:

First, when one of these super-human technologies takes off, it creates a sort of gold rush that attracts a lot of talent and resources away from the core problems of AI.  In recent years, it seems that 80-90% of the people at the big AI conferences are working on super-human AI problems, not on human-like AI.  So it is little wonder that progress on the core problems has slowed down.

Second, researchers in some of these super-human areas develop a certain contempt for the less elegant human-like approaches in the same or neighboring areas: their own work is based on elegant mathematics and clean abstractions – the approach is scientific and principled – while those working on less formal approaches to human-like AI are just messing around. “That’s the sort of thing we did in the old days, before we understood how to properly frame the problem. Anyone still messing with those ad hoc approaches must be doing so out of ignorance, unaware of all the amazing progress that has taken place in AI.”

Well, OK, there has been amazing progress, and we should build on that whenever we can. But in most cases, we’re not talking about the same area of research. There is a place for optimal planning, but we also need to understand human-like good-enough planning, which is faster and much more flexible. There is an important place for theorem-proving, but (as I have argued elsewhere), we need something more quick-and-dirty if we want our systems to read the daily newspaper. And so on.

Unless and until these super-human approaches can be extended to cover the kinds of large, messy, hard-to-formalize tasks that we humans handle with such aplomb, we have to keep working on these things, by whatever scruffy means are necessary. Maybe some of these problems can be handled by techniques that will ultimately be formalized and wrapped in elegant theory, or maybe they are inherently messy, but any reasonable person must admit that AI still contains many challenging problems that don’t fit into the elegant theoretical frameworks we have today.

One might argue that these super-human techniques are more valuable than understanding and emulating human-like intelligence.  After all, we already have plenty of humans, so why not just focus on the areas where machines can extend human capabilities?  I think there is some merit in that argument, but it would be a shame to let the scramble for super-human capabilities crowd out the quest for human-like AI.

The quest to understand and replicate human-like intelligence remains one of the great intellectual challenges of mankind – one of the last great mysteries. Yes, this problem has proven to be more difficult than we thought it would be, and the solution is unlikely to rest on a foundation of clean, beautiful mathematics, but that should not discourage us. If, along the way to understanding intelligence, we can create some valuable technologies that provide super-human performance in specific narrow domains, that’s a bonus. AI as a field may pause occasionally to take advantage of these new technologies, but we should not let them divert us from the ultimate goal. Better yet, we can combine these technologies to get the best of both worlds: flexible, resourceful human-like systems with a “telepathic” link to an array of super-human tools, for the times when those tools are applicable.

And then we can go back to worrying about the death robots. ;-)

---------------------------

  1. Note that the word “merely”, as used here, is meant to be read in a voice dripping with sarcasm. If we could somehow develop an artificial system with “merely human” performance, that would be one of the crowning intellectual achievements of mankind – infinitely more important than making incremental progress in some particular subfield such as data mining or optimal planning.
  2. Newell, A., Shaw, J. C., and Simon, H. A. (1959). “Report on a General Problem-Solving Program.” Proceedings of the International Conference on Information Processing, pp. 256-264.
  3. Fikes, R. and Nilsson, N. (1971). “STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving.” Artificial Intelligence, 2:189-208.
  4. Sacerdoti, E. D. (1974). “Planning in a Hierarchy of Abstraction Spaces.” Artificial Intelligence, 5:115-135.
  5. Laird, J., Rosenbloom, P., and Newell, A. (1987). “Soar: An Architecture for General Intelligence.” Artificial Intelligence, 33:1-64.
  6. Fahlman, S. E. (1974). “A Planning System for Robot Construction Tasks.” Artificial Intelligence, 5:1-49. Tech report version available online.
