Monday, October 18, 2004

The Next Breakthrough

The notion that the world around us is continuously evolving is obvious, yet we rarely grasp its full implications. We do not ordinarily think, for example, of an epidemic disease changing its character as the epidemic spreads. Nor do we think of evolution in plants and animals as occurring in a matter of days or weeks, though it does. And we do not ordinarily imagine the green world around us as a scene of constant, sophisticated chemical warfare, with plants producing pesticides in response to attack and insects developing resistance. But that is what happens, too.
In the past few years, artificial life replaced artificial intelligence as a long-term computing goal. The idea was to write programs that had the attributes of living creatures: the ability to adapt, cooperate, learn, and adjust to change. Many of those qualities were especially important in robotics, and they were starting to be realized with distributed processing or DNA computing.
Distributed processing meant that you divided your work among several processors, or among a network of virtual agents that you created in the computer. There were several basic ways this was done:
1) One way was to create a large population of fairly dumb agents that worked together to accomplish a goal, just as a colony of ants works together to accomplish one.

2) Another method was to make a so-called neural network that mimicked the network of neurons in the human brain. It turned out that even simple neural networks had surprising power: they could learn, and they could build on past experiences.

3) The third technique was to create virtual genes in the computer and let them evolve in a virtual world until some goal was attained.
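The second approach in the list above can be sketched very compactly. Here is a minimal illustration of a single artificial neuron (a perceptron) that learns the logical AND function from examples; all names, the training data, and the parameters are illustrative, not drawn from any particular library.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer ("learning")
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Train on the truth table of AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

No rule for AND is ever written down; the behavior is acquired from examples, which is the sense in which even a one-neuron network "learns."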

And there were several other procedures as well. Taken together, these procedures represented a huge change from the older notions of artificial intelligence. In the old days, programmers tried to write rules to cover every situation. For example, they tried to teach computers that if someone purchased something at a store, they had to pay before leaving. But this commonsense knowledge proved extremely difficult to program. The computer would make mistakes; new rules would be added to avoid the mistakes; then more mistakes, and more rules. Eventually the programs were gigantic, millions of lines of code, and they began to fail from sheer complexity. They were too large to debug; you simply could not figure out where the errors were coming from.
So it began to seem as if rule-based AI was never going to work. Lots of people made dire predictions about the end of artificial intelligence, and started predicting that computers would never match human intelligence.
But distributed networks of agents offered an entirely new approach, and the programming philosophy was new, too. The old rule-based programming was "top down": the system as a whole was given rules of behavior. The new programming was "bottom up": the program defined the behavior of individual agents at the lowest structural level, but the behavior of the system as a whole was not defined. Instead, the behavior of the system emerged, the result of hundreds of small interactions occurring at a lower level. Because the system was not explicitly programmed, it could produce surprising results, results never anticipated by the programmers. That was why they could seem "lifelike."
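A classic way to see "bottom up" emergence for yourself is Conway's Game of Life: each cell follows one local rule about its eight neighbors, and nothing in the code describes motion, yet a "glider" pattern travels diagonally across the grid. This is a standard textbook demonstration, not anything specific to the systems described above.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbors each nearby cell has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic five-cell glider
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four steps, the same shape reappears shifted by (1, 1):
# global motion that no rule mentions, emerging from local interactions.
```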
DNA computing involved agent-based programs that modeled biological populations in the real world: programs that mimicked ant foraging to control big communications networks, or programs that mimicked the division of labor in termite colonies to control thermostats in a skyscraper. Closely related were the programs that mimicked genetic selection, used for a wide range of applications. In one such application, witnesses to a crime were shown nine faces and asked to choose which was most like the criminal, even if none really was; the program then showed them nine more faces and asked them to choose again; and over repeated generations the program slowly evolved a highly accurate composite picture of the face, far more accurate than any police artist could make. Witnesses never had to say what exactly they were responding to in each face; they just chose, and the program evolved.
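The face-composite idea can be sketched as interactive genetic selection. In this toy version (everything here is illustrative: "faces" are stand-in feature vectors, and a simulated witness simply picks the candidate closest to a hidden target), the witness never articulates features; repeated choice alone drives the evolution.

```python
import random

random.seed(1)
TARGET = [0.8, 0.2, 0.5, 0.9]          # the remembered face (hidden)

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def witness_pick(candidates):
    """Stand-in for a human: choose the most familiar-looking face."""
    return min(candidates, key=lambda c: distance(c, TARGET))

def offspring(parent, spread):
    """Mutate each feature slightly, clamped to the [0, 1] range."""
    return [min(1.0, max(0.0, g + random.uniform(-spread, spread)))
            for g in parent]

initial = [random.random() for _ in TARGET]
best = initial
for generation in range(30):
    spread = 0.3 * (0.9 ** generation)   # mutate less as choices converge
    panel = [best] + [offspring(best, spread) for _ in range(8)]
    best = witness_pick(panel)           # nine faces shown, one chosen
```

Because the previous choice is always kept in the panel, the composite can never drift away from the witness's memory; it only sharpens.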
Most AI systems are static: they can usually solve only the one specific problem their architecture was designed for in the first place. If the given problem were somehow changed, these systems would have a hard time adapting, since the algorithm that originally arrived at the solution might become incorrect or inefficient. Genetic algorithms (GAs) were created to combat these problems. They are algorithms based on natural biological evolution, and the architecture of systems that implement them can adapt to a wide range of problems. A GA functions by generating a large set of possible solutions to a given problem. It then evaluates each of those solutions and assigns each a "fitness level" (you may recall the phrase "survival of the fittest"). These solutions then breed new solutions: parent solutions that were more fit are more likely to reproduce, while those that were less fit are less likely to do so. In essence, solutions are evolved over time, narrowing the search until a good solution is found.
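The loop just described (generate, evaluate, select, breed) fits in a few lines. Here is a minimal sketch on the classic "OneMax" toy problem, evolving a bit string toward all ones; the selection and crossover schemes (tournament selection, one-point crossover) and all parameters are illustrative choices, not tuned or taken from the text.

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 40, 60

def fitness(bits):
    """Fitness level: number of ones in the solution."""
    return sum(bits)

def select(pop):
    """Tournament selection: fitter solutions win more often."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def breed(p1, p2, mutation=0.02):
    """One-point crossover of two parents, plus rare random bit flips."""
    cut = random.randrange(1, LENGTH)
    child = p1[:cut] + p2[cut:]
    return [1 - g if random.random() < mutation else g for g in child]

# Generate a random initial population of candidate solutions
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
start = fitness(max(pop, key=fitness))
for _ in range(GENERATIONS):
    elite = max(pop, key=fitness)        # keep the best solution (elitism)
    pop = [elite] + [breed(select(pop), select(pop)) for _ in range(POP - 1)]
best = max(pop, key=fitness)
```

Because the best individual is carried into each new generation, the top fitness never decreases; selection pressure does the rest, with no problem-specific rules written anywhere.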
