On Intelligence, Hive Minds and the Internet
Steven den Beste has produced a long and thoughtful piece on the nature of intelligence and the prospects for super-intelligence and artificial intelligence within serial computers (not good), parallel computers (better), and computer-mediated human interaction (better still). It is a wonderful read and strikingly well informed.
Round about the middle of the piece, den Beste discusses the relation of inductive reasoning to intelligence and the perils of the "butterfly effect", that is:
Digital simulations of analog systems always include small initial errors, and as digital calculations iterate ever more deeply, that error grows until the error swamps the signal, at which point the digital simulation will have no greater than a random chance of being the same as the analog system it is trying to simulate.
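The error growth den Beste describes can be seen in a few lines of code. This is a sketch of my own (not from his piece), using the logistic map, a standard chaotic system: two trajectories started a hair apart diverge until the gap is as large as the signal itself.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r=4."""
    return r * x * (1.0 - x)

def divergence(x0, eps=1e-10, steps=60):
    """Largest gap observed between two trajectories started eps apart."""
    a, b, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        worst = max(worst, abs(a - b))
    return worst

# An initial error of one part in ten billion grows to order one.
print(divergence(0.3))
```

The initial error roughly doubles each iteration, so within a few dozen steps the "simulation" tells you nothing about the system it started out tracking.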
The interesting thing about inductive reasoning is that it is a learned skill. It is essentially the ability to take limited information and come to reasonable guesses as to its implications. Inherent in this process is error: sometimes in the form of demonstrably false premises, more often tiny misperceptions of particular facts. In either case, humans learn, largely by experience, to work around the errors and to frame their conclusions to acknowledge and take account of the relative certainty of the premises.
For example, the horse betting strategy of always betting on the leading apprentice jockey to show is a good way of losing less money at the track. However, after making a few of these bets a person may begin to add additional rules: in the extreme, "except where the horse he is riding has three legs"; more subtly, "except where the odds are greater than xx:1". Inductive reasoning is a process rather than a fixed algorithm and, as such, is constantly being tweaked to make it conform to experience more closely.
The great hope of artificial intelligence in the 1970's was to create expert systems which would deduce rules from experience and apply them to new experience. It did not work very well. It was a perfectly good idea but it turned out that actual experts had a "feel" for problems which was not reducible to rules. Analogous to the race track regular's "I don't like the look of that horse." Capturing the subtle effects of that "feel" was beyond the AI programmers capacity.
What strikes me as more promising is the entire notion of genetic algorithms and evolutionary programming. The basic idea is that you specify a problem and let loose a set of programs, each of which tries to solve it. They run at the problem for a while and the best programs, the ones which solve the problem most exactly or most quickly, are found. These programs are then allowed to proceed to the next generation. However, parts of these programs are recombined into new programs which are then set at the problem, or a slight variation of it. Over time a class of programs, in many cases only distantly related to the starting population, evolves to solve the set problem.
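The loop described above fits in a page of code. Here is a minimal sketch of my own (a toy, not anything from den Beste's piece) that evolves bitstrings toward a trivial target of all ones: score, keep the best, recombine survivors, mutate, repeat.

```python
import random

def evolve(seed=0, length=20, pop_size=30, generations=100):
    """Minimal genetic algorithm: evolve bitstrings toward all ones.

    Returns the fitness (number of 1-bits) of the best genome found.
    """
    rng = random.Random(seed)

    def fitness(g):
        return sum(g)  # count of correct (set) bits

    def crossover(p1, p2):
        # Splice the front of one parent onto the back of the other.
        cut = rng.randrange(1, length)
        return p1[:cut] + p2[cut:]

    def mutate(g, rate=0.05):
        # Flip each bit with small probability.
        return [1 - bit if rng.random() < rate else bit for bit in g]

    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break  # a perfect solver has evolved
        survivors = pop[:10]  # the programs that solved the problem best
        pop = survivors + [mutate(crossover(*rng.sample(survivors, 2)))
                           for _ in range(pop_size - 10)]
    return max(fitness(g) for g in pop)

print(evolve())
```

The target here is fixed and trivial; the point is only the shape of the loop, in which nothing ever tells a genome how to improve, only which genomes survive.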
This process, combined with very high-speed parallel processing, is potentially very promising when it comes to the creation of machine intelligence. It mimics "the getting of wisdom" which my three-year-old has as his full-time occupation.
I suspect that genetic algorithms will tend to solve a cluster of problems which surround programming for inductive reasoning. Induction relies, by definition, upon imperfect and incomplete information. So a strategy for genetically creating ever better inductive systems would be to have a variety of programs which attempted to extrapolate from incomplete data. The selection criterion, generation to generation, would be how well the programs in the population dealt with the incompleteness. While the quality of the extrapolations might be the primary criterion for success, the speed and robustness of the programs would matter as well. Moreover, a program for general induction would have to be challenged with a variety of incomplete data sets: a program which was very good at predicting the next number in a series might be hopeless at determining the next shape in a series.
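The selection step of that strategy can be sketched concretely. In this toy of my own devising (the candidate rules and tasks are invented for illustration), each candidate extrapolation rule is scored across several different incomplete series at once, so the pressure rewards generality rather than one lucky specialisation; a full evolutionary version would also mutate and recombine the rules themselves.

```python
# Candidate extrapolation rules: each guesses the next value of a series.
candidates = {
    "constant":   lambda s: s[-1],
    "arithmetic": lambda s: s[-1] + (s[-1] - s[-2]),
    "geometric":  lambda s: s[-1] * (s[-1] / s[-2]) if s[-2] else s[-1],
}

# Each task: the visible prefix of a series and the hidden next value.
tasks = [
    ([2, 4, 6, 8], 10),    # arithmetic series
    ([1, 3, 5, 7], 9),     # arithmetic series
    ([3, 6, 12, 24], 48),  # geometric series
    ([5, 5, 5, 5], 5),     # constant series
]

def score(rule):
    """Fitness: how many hidden values the rule predicts exactly."""
    return sum(rule(prefix) == hidden for prefix, hidden in tasks)

# Selection: the rule that generalises best across all the tasks wins.
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)
```

No rule is perfect on every task, which is the point: selection over a varied battery of incomplete data favours the rule that fails least, not one tuned to a single series.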
In financial modelling, where genetic algorithms have been used since the early 1990s, one of the most basic issues is that the evolution of the algorithms can mean the modellers have less and less of an idea why the program is making the predictions it is. Legally this is problematic; but it is a foreseeable outcome of virtually any genetic algorithm. The fact is that the end product of an evolutionary programming project is not predictable. What is predictable is that machine intelligence is unlikely to evolve in a way which humans would expect unless the "think like a human" constraint is part of the selection criteria. (Which, of course, is to presume that we can specify "how humans think".)
Building induction into machines is, I suspect, the first step towards something which looks like machine intelligence; what is not clear is that machine intelligence and consciousness are more than tangentially related. While I would not be at all surprised to see machines which exhibited intelligence over the next few years, I would be astonished if a machine exhibited a genuine preference for a blue over a red marble, again, something my three-year-old is happy to do at the slightest provocation.

Update: Strangely enough, a column by David Brooks on the vital British philosopher Michael Oakeshott catches why expert systems could not be made to work:
In his 1947 essay, "Rationalism in Politics," he distinguished between technical and practical knowledge. Technical knowledge is the sort that can be put into words and written down in books. If you pick up a cookbook, you can read about the ingredients and proportions and techniques for preparing a meal.

But an excellent cook brings some other body of knowledge to the task, which cannot be articulated. This knowledge comes from experience. It can't be taught but must be acquired through doing, by entering into the intrinsic pattern of the activity.

Update #2: David Janes is sceptical about my faith in evolutionary computing and brings expertise to bear on the actual way in which vision works in people here. He sees the role of evolutionary computing as "a useful technique, but not framework for the 'solution'." I think it is a piece of the solution and, because its randomness does, to a degree, mimic nature, it may be a vital technique. A good piece all round.