
Scientific Creativity: How to Get More

Scott Fahlman, February 12, 2011
Categories: AI

In an earlier article, I sketched a mini-theory of human scientific creativity – a theory that, I believe, is in principle implementable in an AI system.  I also mentioned that, if this theory is (more or less) correct, it may suggest some techniques that we humans can employ to increase our own scientific creativity.  In this article, I will try to spell out some of these techniques.  I don’t want Knowledge Nuggets to become a “self help” blog – I just think it’s interesting to see where some of these theories lead us.

Let’s begin by reviewing the key points of the theory presented earlier:

  1. What we call “scientific creativity” is not magic.  It’s just good, effective problem solving that happens to lead to a surprising result.
  2. A “flash of inspiration” – the part that seems creative or magical to us – is basically a recognition that the problem fits (or almost fits) some representation or metaphor or recipe already stored in our memory.  This approximate matching is computationally very demanding, but it uses our parallel recognition machinery – considering many possible matches at once – so it feels like a flash.
  3. These flashes of inspiration hardly ever occur until you’ve done a lot of work to investigate and understand the structure of the problem you’re grappling with.  And, once the flash has occurred, it is of no value until you have done all the detailed “grind it out” work to fit your idea to the problem and verify that it works.  This part is perceived as hard mental work – the “99% perspiration” of which Edison spoke.
  4. For a problem that is difficult, important, and generally recognized, many smart people will already have worked on it.  So all the obvious things have been tried, and they didn’t work (or didn’t work well enough).  To solve the problem “creatively” will require you to come up with a new approach.

So, if you accept these ideas, what techniques do they suggest?

Two non-solutions

Let’s dispose of two ideas that usually don’t work:

Do what everyone else is doing, only more so. That is, work harder, work longer, explore more alternatives, or bring more resources to bear (people, computing cycles, data, or whatever).  If you really do have access to a unique level of resources, this can sometimes work.  In fact, it might be the only way to solve certain big, ugly problems.  But even if you succeed in this way, people are unlikely to recognize your solution as “creative” – it’s just “brute force”. [1]

Try a lot of things at random. Again, this may occasionally work, but the odds are very much against you.  For interesting, hard problems, the useful answers will be very sparse in the space of possibilities, and all the obvious things will have been tried already.  Probably your only hope is to use some kind of knowledge or model in choosing what new alternatives to consider, rather than wandering around without any plan or guidance.

Cultivate your stock of metaphors.

The most creative people are generally the ones who seem to be interested in everything.[2] And it’s a special kind of interest: not just collecting trivia (though some very creative people do that as well), but trying to figure out how everything works.   If the tides are caused by the pull of the moon, why are there two high tides every day instead of just one?  What is the trick that allows a stomach to digest meat, when it’s made of meat?  When a 6-way symmetrical snowflake is forming in the atmosphere, how does one branch know what pattern has been chosen by the other branches?  What’s really going on with the “casting out nines” trick for checking arithmetic?  The more you ponder such questions, the larger will be your stock of metaphors and models.
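As an aside, the answer to that last question is simple modular arithmetic: every number leaves the same remainder as the sum of its decimal digits when divided by 9, so digit sums can catch many arithmetic slips.  A minimal sketch of the check in Python (the numbers are made up purely for illustration):

```python
# "Casting out nines": a number is congruent, modulo 9, to the sum of
# its decimal digits, so digit sums can flag many arithmetic errors.

def digit_root(n):
    """Repeatedly sum decimal digits until a single digit remains."""
    n = abs(n)
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def check_product(a, b, claimed):
    """If the digit roots disagree, the claimed product is surely wrong.
    Agreement proves nothing: any error that shifts the result by a
    multiple of 9 slips through the check."""
    return digit_root(digit_root(a) * digit_root(b)) == digit_root(claimed)

print(check_product(356, 287, 102172))   # True: 356 * 287 really is 102172
print(check_product(356, 287, 102162))   # False: off by 10, caught by the check
```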

These curious-about-everything people can sometimes seem rather odd to bystanders, but most of them don’t care – some even revel in that oddness.  An example: one day, out of the blue, Marvin Minsky speculated that there are no parasites that eat hair because, after biting through the hair shaft, the parasite would have no local way of knowing which of the two pieces to hold onto if it wants to remain with the host.  Whether that’s true or not – it doesn’t really matter – that thought has stuck with me, and I’ve used it as a model a few times in thinking about data structures and distributed processing.

Collaborate. A great way to multiply your effective store of metaphors is to collaborate with someone.  Ideally, you want someone that you can communicate with easily, but whose background is different from yours: different training, different generation, different style of approaching problems, or whatever.  The history of science and technology is full of creative breakthroughs made by two people working closely together.  Larger groups provide even more diversity of ideas, but with more than two people it can be hard to maintain the very close communication that truly collaborative problem-solving requires.

One collaboration model that is widespread and often successful is the professor and the grad student.  This is the academic equivalent of the old master/apprentice model.  The conventional view is that the professor has deep knowledge of the field and its accepted ways of doing things, while the student provides enthusiasm, a fresh point of view, and often the creative spark.  As the saying goes, the student “doesn’t yet know what is impossible”.

But in my experience, it just as often works the other way.  The students have recently taken courses and are full of all the latest knowledge, but it is a rare student who has the self-confidence to venture very far out of the box.  Those students who do have that self-confidence, the skill to exercise it successfully, and the wisdom to occasionally listen to their elders, are the ones who end up as faculty in the top universities – or, in more applied fields, as successful entrepreneurs.

Relax reality – temporarily!

We hypothesized that the “flash of inspiration” is really a recognition that the problem you are grappling with matches, more or less, some metaphor, template, or recipe in your bag of tricks.  But that “more or less” can cause problems: every problem is different and you generally don’t get an exact match.  Sometimes you have to find a near-miss solution and massage it a bit to fit the problem you’re working on.  But if it’s not a very close fit, that flash of recognition may never occur.

An alternative approach that sometimes works is to modify the problem and the rules – perhaps even the laws of nature – and see if you can solve this modified problem.  Then you must find a way to massage that unrealistic solution back into something you can really use.

An example:  One day in the mid-1970s Marvin Minsky offered the following challenge to the grad students in the MIT AI Lab (I was one of them):  “Suppose you had an unlimited hardware budget.  You can have as much hardware as you want, but it has to be well-defined hardware – no magic boxes.  Your goal is to solve (or partly solve) some big problem in AI.  What would you ask for and how would you use it?”

At the time, I had been thinking about the core problem of recognition: you have a bunch of features and expectations, and you want to find the stored description that best matches these inputs.  So it occurred to me that we could build a little hardware recognition-box for each stored description.  As input features arrive, we broadcast them to all of these boxes at once.  Each box keeps score, asking “Is this me?  How well do I match?”, and one of them ultimately emerges as the winner.  I kept thinking about that model, and it gradually evolved into the NETL architecture, which could handle both this recognition task and simple inference in a knowledge base.  And that became my Ph.D. thesis.
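To make that model concrete, here is a minimal sequential sketch of the scoring scheme in Python.  The feature sets and weights are invented for illustration – the real NETL design relied on massively parallel hardware, not a loop over a dictionary – but the shape of the computation is the same: broadcast the input features, let every stored description score itself, and take the best match.

```python
# One "recognition box" per stored description; each scores itself
# against the broadcast features, and the best match wins.
# (Illustrative toy only: names, features, and weights are made up.)

STORED_DESCRIPTIONS = {
    "elephant": {"gray", "trunk", "big", "four-legs"},
    "mouse":    {"gray", "small", "four-legs", "tail"},
    "crow":     {"black", "wings", "small"},
}

def recognize(observed_features):
    """Broadcast the observed features to every stored description;
    each keeps a score of how well it matches, and the best
    (possibly imperfect) match emerges as the winner."""
    scores = {}
    for name, features in STORED_DESCRIPTIONS.items():
        hits = len(observed_features & features)     # features that match
        misses = len(observed_features - features)   # features that count against
        scores[name] = hits - 0.5 * misses
    return max(scores, key=scores.get)

print(recognize({"gray", "trunk", "big"}))    # -> elephant
print(recognize({"gray", "small", "tail"}))   # -> mouse ("four-legs" was never observed)
```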

Of course, eventually you have to get back to reality.  I was not given an unlimited hardware budget, so this model could not be implemented as it was.  But it led to a lot of knowledge-representation and recognition ideas that have been used in other systems, and that today, decades later, form the basis of my Scone implementation.  So by temporarily setting aside some real-world constraints, I was able to gain some deeper insights into the problems I was grappling with.

Scientists do something very similar when they develop a simplified model and then put in the real-world complications.  For example, Galileo and Newton developed their dynamics by postulating an ideal world without friction, and verifying their models in minimal-friction settings such as billiard tables.  (Planetary orbits are essentially friction-free so they provided another way to test the simplified theory.)  Then they and their successors put the friction back in so that they could model more complex real-world situations, such as the flight of cannon balls.

Do your homework, but don’t be captured by it.

It follows from points 1 and 3 above that it is very rare for someone to make creative discoveries in a field if they don’t have a reasonably solid knowledge of that field.  Lucky accidents and flashes of inspiration are all well and good, but you will be very inefficient if you don’t have the knowledge and skill to determine whether your brilliant (or lucky) idea has some chance of working.

Even if your idea is a great one, it is not going to change the world unless you have the skill and perseverance to work out all the details and prove that it works.  The world is full of people who “invented” something but didn’t follow up, and then had to watch while someone else reaped all the glory – and sometimes riches – for what seemed to be the same idea.  Well, if you don’t follow up (or collaborate with someone who will), it doesn’t count.  And if you don’t have the skills to follow up efficiently, you will waste a lot of time chasing wild geese.  So it’s important to master the “conventional wisdom” in a field – or a good part of it, at least – before you try to innovate.

However… If a problem has been around for a while, and it is generally understood that it’s important, then there’s probably something wrong with the “conventional wisdom”.  It might be a big flaw or a seemingly tiny one, but if the conventional approach actually worked, the problem would already have been solved.

So it’s important to learn what everyone else “knows” and what has already been tried, but not to accept it all at face value.  Be alert for things that are stated dogmatically without a good reason.

Don’t look where everyone else is looking.

If everyone else is looking at a problem in a certain way and is applying a certain set of tools and techniques, the best chance of finding a creative solution is to try something else.  Here are some techniques for doing that successfully.

Find a new problem. If you identify some new problem that needs to be solved, that may be creativity enough.  If nobody has looked at this problem before, it’s quite possible that simple, well-known techniques will be sufficient to solve it.  Many useful (and often lucrative) inventions have been created in this way.  A silly but illustrative example: someone says, “Gee, for some people it’s a real hassle to get up and turn the lights on and off – what if people could just clap their hands?”  The electronics of the day might or might not be up to the task.  Even if they are, it might require marshalling resources and conducting a program of research.  But it may not require much creative thinking to solve this problem, once some creative person has recognized the need.

Similarly, in science, a lot of discoveries have been made by people who are the first – or among the first – to ask a new question.  One of the best ways to do this is to look for anomalies – observations that don’t quite fit the current theories.  It’s easy to write off most anomalies as experimental error, or as simply being unimportant, and most busy researchers will do just that.  But until such anomalies have been explained, they may contain the seeds of important new questions.  So pay close attention to the little mysteries.

As Isaac Asimov once observed:  “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny …’ ”

Make sure the problem can be solved – and maybe that will give you some clues about how to solve it. This is an important technique in AI and in some other areas.  As I observed in an earlier article, some AI researchers focus on problems that most human brains can solve, while others focus on problems that are beyond the capacity of any unaided human.  For problems of the first kind (e.g. understanding natural language, whether spoken or written), we humans serve as a “two-legged existence proof” that the problem can be solved by some sort of physical information-processing device.  So those of us who work in this area are not wasting our time by trying to solve an inherently impossible problem, though it might be impossible with today’s technology and today’s ideas.

The more we can learn about how this “existence proof” works, the more clues we will have about how to solve the problem.   I should say “one way to solve the problem”, since there may be other ways.  But if there’s only one existence proof, it’s good to understand as much as possible about that one.  From neuroscience we know that the problem of understanding natural language and speech can be solved using a very large network of millisecond-speed components, all running in parallel – that’s what the brain seems to be.  It doesn’t seem to require nanosecond-speed logic circuits, and it probably doesn’t require a lot of floating-point arithmetic, since we see no evidence of floating-point hardware in the brain.  So the “two-legged existence proof” has given us a few clues about what a solution might look like.

Ask the question a different way. Discovering a new problem to solve is fine, but sometimes you want to solve a particular problem or answer a particular question, rather than finding a new one.  If everyone else is asking this question in a certain way, maybe that’s what is holding them back.  The way you frame a question usually carries certain hidden assumptions.

Gerry Sussman, my Ph.D. research advisor at MIT, had a favorite response: whenever I would ask him what he thought was the best way to solve problem X, he would ask, “What is the problem of which this is a sub-problem?”  In other words, stand back and think about the larger problem.  Maybe the question you’re asking is not really the one you need to answer.  If you attack the larger problem in some other way, it might be easier.  It’s very common for a field to become fixated on a certain way of posing some important problem, and never to consider whether that is really the problem they want to solve.

Think about what has changed. The world changes, and that creates new opportunities for researchers.  Sometimes an approach that was impossible a few years ago is a good approach today.  This is especially true in computer science, where Moore’s Law and a steady flow of inventions change the game every few years.  None of the apps running on your smart phone would have been possible with the technology of ten or fifteen years ago, and they certainly wouldn’t have fit into your pocket.

Faster machines, larger memories, and more pixels are one engine of change, but there are many others: new data sources, including all the information now available on the internet; new theoretical and analytical techniques; new software tools; better instruments, new materials…

Before the Wright Brothers developed their airplane, there were many false starts by others, but heavier-than-air flight was just not going to happen until there was a power source with an adequate power-to-weight ratio – the internal combustion engine finally solved that problem.  So the Wrights attacked the right problem at just the right moment – and it didn’t hurt that, as bicycle mechanics, they had the skill-set to try out their ideas.

In science, one very important kind of change is the development of a new way to look at what’s going on.  Time and again, some new visualization technology has led to an explosion of scientific discovery: the telescope, the microscope, X-rays, spectroscopy, high-speed strobe photography, satellite sensing of the earth, functional MRI of the brain in action…  The list goes on and on.  Each of these new visualization technologies created an exciting opportunity for the first researchers to exploit the tool in new ways.  Galileo didn’t invent the telescope, but he was one of the first to point it at the heavens.  That led to a number of revolutionary discoveries that changed our understanding of the universe.

Today, ever more powerful computer simulations tied to graphical displays are providing a similar opportunity.  Through simulation, we can “see” natural and synthetic phenomena that we could never visualize before.  And new analytic tools, such as plotting webs of inter-personal connections, are giving us new insight into large data sets that were just unreadable printouts in the past.  Opportunity beckons.

Take a close look at past failures. It is often worth revisiting old failures and thinking hard about what really went wrong.  When some effort succeeds, there’s not much need for post-analysis.  It worked.  Yay!  End of story.[3]  But when an effort fails or falls short of its goals, there are many possible reasons.  It’s easy for people to draw the wrong conclusions, and easy for these wrong conclusions to harden into a consensus – our old friend “conventional wisdom”.

A project might have ten good ideas and one really bad one, and it fails because of that one flaw.  This doesn’t mean that all the ideas were bad, or that the effort was hopeless and should never be tried again, but it may be hard to assign the blame correctly.  Or perhaps all the ideas were good, but the people on the project didn’t execute them well.  Perhaps the project was managed in such a way that good ideas and a good technical effort were crippled.  Perhaps funding was cut just as success was within reach.  All of these things happen, and very often the “conventional wisdom” enshrines the wrong diagnosis.  Or perhaps the project was indeed doomed to failure at the time, but the world has changed since then, and the approach that failed earlier could (with a bit of tuning) succeed now.

So old failures are a very rich source of “almost-right plans”.  Instead of abandoning these plans, it’s worth trying to debug them – that is, to figure out what went wrong, fix it, and try again.[4]  This can be a very good way of breaking away from the crowd – sometimes the creative “new” idea is actually a recycling of a creative old idea that others have given up on.  Sometimes those involved in the earlier project know very well what went wrong, but they may not be listened to; sometimes it takes an outsider, with no emotional investment in the earlier project, to understand what really happened.

Cross boundaries. One great way to break away from the pack in your own field is to sneak across the border into someone else’s field – for example, crossing from computer science to some area of biology.  A surprising number of discoveries are made by people who show up in a new field with a set of tools, skills, and ways of looking at things that are very different from those employed by the natives.  If the immigrant is trained in some scientific or engineering field, much of that general training will carry over, but the immigrant will have a rather different set of metaphors to draw upon.

But as we discussed earlier, you are unlikely to have much success until you understand the core knowledge of the field you are working in.  Some people will move into a new field that they have been interested in all along, so they will have a head start in acquiring the necessary knowledge; others succeed by just working very hard for a year or two.  One interesting shortcut is to develop a close collaboration with someone who is well established in your new field – they can serve as a guide and, for a while, as a critic.

One last thought…

The suggestions in this paper may help you to approach scientific and engineering problems more creatively.  But if you apply them too aggressively, you may be regarded as a crackpot – or you may find that you have become a crackpot.  So strive for greater creativity, but try to keep your balance.  Show some respect for those who stick to the conventional paths – they are conventional for a reason – and be sure to visit reality from time to time.

---------------------------
  1. Of course, it may require a lot of creative problem-solving to assemble an unprecedented level of resources, but that’s often not appreciated.
  2. Obviously, I’m generalizing here from the creative people I happen to have met.  But if you’ve spent your entire adult life working in places like Carnegie Mellon and MIT, you will have had the opportunity to observe a large number of scientifically creative people in action, including some whose creativity is legendary.
  3. Of course, you might want to work on ways to make a good solution better.
  4. This idea of “debugging almost-right plans” has been championed over the years by Gerry Sussman at MIT.
