This text was printed in the Ylem Newsletter, vol. 15, no. 6.
Ylem is an international organization of artists working with science and technology.

Machina Sapiens

copyright 1995 Tamiko Thiel

The dream of building living machines is perhaps older than the machine itself; the ancient Greek god Hephaestus, for example, built mechanical women to help him in his smithy. The dream of building "machines that think" is much younger, and continues to evolve with our changing perception of the nature of intelligence and thought. In the early 1980s I participated in this dialogue as the visual designer of a machine built to be the first of a new genus: Machina Sapiens, the thinking machine. This article is an informal sketch of some images, historical and fictional, that formed our perceptions of machine intelligence and the computer as "electronic brain."

The ancestors of the modern computer were adding machines built in 17th century Europe. They were called "calculating clocks", having developed from the gear-driven clocks that were the most sophisticated machines of that time. These calculators were decidedly mechanical: one entered the numbers to be added on a set of dials and then rotated the gears with a crank to produce the final sum.

Throughout the 17th and 18th centuries inventors created wonderfully lifelike gear-driven automatons: mechanical ducks that ate, digested and shat food; doll-like human replicas that could write, draw or play the piano. If mechanisms could be created that not only looked human, but also possessed exactly those abilities that separated the accomplished "gentleman" from the uncouth peasant, could machines climb the final hurdle and develop the ability to think?

The mathematician-philosophers of the time said no. Descartes delineated the "rational soul", and with it the ability to think, as the quality that set humans apart from both machines and animals. Machines and animals might be able to "parrot" the sounds of speech, but could not invest those sounds with meaning. Gottfried Leibniz, himself the inventor of one of the early calculators, pointed out that if one were to examine the interior of a machine that seemed to think and have perceptions, one would find nothing but inanimate parts driving each other, never anything that could be the source of thought or perception. In the human body, on the other hand, no matter how closely one examined an organ, one always found a yet smaller "organ" that also contained the vital stuff of life.

The highest level of thought was logic, which involves the ability to follow complicated chains of statements and conditions, consider alternatives and make decisions based on these judgments. The machine invasion of this realm started in the 19th century with the development of tools for machine logic. George Boole showed how the rules of logic could be represented by a digital algebra based on the elements 0 and 1 and simple arithmetic operations. Working independently of Boole, Charles Babbage came up with the fundamental design concepts for a digital universal computer: a programmable machine capable of complicated conditional decision-making.
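As a modern illustration (a Python sketch in today's notation, not Boole's own), the basic logical connectives can be written as ordinary arithmetic on the elements 0 and 1:

    # Boolean logic as arithmetic on 0 and 1:
    # "and" is multiplication, "not" is subtraction from 1,
    # and "or" can be built from both.
    def AND(x, y): return x * y          # 1 only when both inputs are 1
    def NOT(x):    return 1 - x          # exchanges 0 and 1
    def OR(x, y):  return x + y - x * y  # 1 when either input is 1

    assert AND(1, NOT(0)) == 1 and OR(0, 0) == 0

Chains of such operations are exactly the kind of conditional decision-making that Babbage's engine was designed to mechanize.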

Unfortunately Babbage never found the funding he needed, so his "Analytical Engine" had to wait for the technological imperative of World War II. In 1943, Howard Aiken built the Mark I, a computer so close to Babbage's design that Aiken admitted, "If Babbage had lived 75 years later, I would be out of a job." This machine was still relentlessly mechanical, however: the sound of its 3000 electro-mechanical relays clacking open and closed reminded a visitor of "a whole room full of little old ladies knitting away with steel needles."

Just two years later John Mauchly and J. Presper Eckert made the Mark I obsolete with their "Electronic Numerical Integrator and Computer", lovingly called "ENIAC". ENIAC, composed of over 17,000 vacuum tubes, was the first machine to be called an "electronic brain". A reporter described ENIAC as being "faster than thought", capable of multiplying two 10-digit numbers in 3/1000 of a second. Although succeeding decades brought tremendous advances in processing speed and data storage capacity, conceptually the modern computer had arrived. John von Neumann's brilliant summary of ENIAC's successor EDVAC has remained the standard design for single-processor electronic digital computers to this day. (It took a while, however, before popular imagination accepted the "electronic brain" as the height of technical sophistication. Even von Neumann was described by a fellow scientist as having a brain like "a perfect instrument, whose gears mesh with an accuracy of 1/1000 of an inch"!)

When computers became electronic, Leibniz's reasoning lost its simple clarity. The workings of any electronic component, just like the workings of a cell in the human body, happen at a microscopic level that removes them from the physical world of mechanical gears to the abstract world of invisible natural forces. Electronic components are no longer so obviously "inanimate" as metal gears. Curiously, at this point Alan Turing also turned Descartes' argument on its head: Turing maintained that if, in a typed conversation with a hidden machine, one couldn't tell whether the conversationalist was human or not, the machine had to be considered intelligent. This famous "Turing test" is the ultimate meritocracy: it's not what you are; it's what you can do.
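A toy Python sketch (with hypothetical stand-ins for the judge and the conversationalists; only the setup comes from Turing) shows why this criterion is measurable: if judges guess no better than chance, the machine passes.

    import random

    # The judge sees only typed text and must guess whether the hidden
    # conversationalist is the human or the machine.
    def one_trial(judge_guess, human_reply, machine_reply, question):
        subject, truth = random.choice(
            [(human_reply, "human"), (machine_reply, "machine")])
        return judge_guess(question, subject(question)) == truth

    # If both parties answer identically, the judge is right only about
    # half the time: indistinguishability in action.
    trials = [one_trial(lambda q, a: "human",
                        lambda q: "I'm doing fine, thanks.",
                        lambda q: "I'm doing fine, thanks.",
                        "How are you today?")
              for _ in range(1000)]
    print(sum(trials) / len(trials))  # hovers around 0.5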

By 1969 the electronic brain had entered popular culture: HAL 9000, the brain and nervous system of the spaceship in Arthur C. Clarke's "2001: A Space Odyssey". When HAL exhibits the rather human behavior of going psychotic and killing the other crew members, the remaining astronaut enters HAL's memory centers and performs a frontal lobotomy, shutting down the higher centers that control memory and speech. Our fear of the superior physical power of machines was augmented by the fear that machines would destroy us for philosophical and psychological reasons as twisted as any human being's (see also "Dark Star", the brilliant satire of 2001!).

In 1981 Stanisław Lem offered a more positive view of an artificial brain in his book GOLEM XIV. The GOLEM series was developed by the military to do strategic war planning, just as the Mark I and ENIAC had been. In contrast to Clarke's neurotic killer, however, GOLEM XIV declared that he was completely uninterested in the Pentagon's war doctrine in particular, and in the geopolitical position of the USA in general. A machine developed to be his successor declared that geopolitical problems were nothing compared to the ontological questions of existence, and that the best guarantee for world peace was general demilitarization. (This machine was demolished immediately, of course.)

Lem summed up the dreams of GOLEM's designers with these words: "they transferred first their brains and then their thoughts alone into shining housings of metal and plastic. ... [and stored] their knowledge in the structure of space and their thoughts in the waves of light. Thus, they freed themselves from the tyranny of material and became creatures of light. ... pure energy."

Although technology has not advanced far enough to realize Lem's philosopher-machines, a similar vision powered the design of the Connection Machines CM-1 and CM-2, built by Thinking Machines Corporation. The design had tens of thousands of simple processors richly connected together like the neurons in the human brain. As in the human brain, the connections between processors could change depending on the problem to be solved. My job was to convey this exciting new architecture in the visual appearance of the machine, to make it immediately apparent that this was a machine unlike any you had ever seen before.

The final design was a massive cube, five feet tall, formed in turn of smaller cubes, representing the 12-dimensional hypercube structure of the network that connected the processors together. This hard geometric object, black, the non-color of sheer, static mass, was transparent, filled with a soft, constantly changing cloud of lights from the processor chips, red, the color of life and energy. It was the archetype of an electronic brain: a living, thinking machine.
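For the technically curious, here is a minimal Python sketch of the hypercube idea (illustrative only; the wiring details of the actual CM-1/CM-2 network are simplified away): give each network node a 12-bit address, and wire it to the 12 nodes whose addresses differ from its own in exactly one bit.

    DIMENSIONS = 12  # 2**12 = 4096 network nodes

    def neighbors(node):
        """The 12 nodes one hop away: flip each bit of the address."""
        return [node ^ (1 << d) for d in range(DIMENSIONS)]

    # A message can reach any other node in at most 12 hops, correcting
    # one differing address bit per hop.
    print(neighbors(0))  # [1, 2, 4, 8, ..., 2048]

The appeal of such a topology is that it keeps every node within a dozen hops of every other while requiring only twelve wires per node.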

With the fall of the Berlin Wall, the dissolution of the Evil Empire and thus the end of the Cold War, the funding for such supercomputers has disappeared. Parallel computing technology has entered the marketplace on a smaller scale in cheaper machines. Techniques that came out of artificial intelligence research have been enriching mainstream computer science. Artificial intelligence researchers have turned their sights towards modeling life, on the theory that intelligence does not arise from pure logic but rather from an embodied mind interacting with its environment.

After all, Lem said that the evolution of his philosopher-machines only became possible after the creation of the "Federal Information Net", which served as a "nutritional matrix" for the creation of artificial minds. The computers that arose out of this process, he said, were the result of natural laws operating on the substrate of symbolic information. And that is one experiment that artificial life researchers are now performing, using the substrate of the Internet.


Sources:

Buchheim & Sonnemann, "Geschichte der Technikwissenschaften", Birkhäuser Verlag, 1990

Chapuis & Droz, "Automata: A Historical and Technological Study", Central Book Co., NY, 1958

Clarke, Arthur C., "2001: A Space Odyssey", 1968

Graves, Robert, "The Greek Myths: Vol. 1", Penguin Books, London, 1955

Langton, Chris, ed., "Artificial Life II", Addison-Wesley, 1992

Lem, Stanisław, "Golem XIV", 1981

Sutter, Alex, "Göttliche Maschinen", Athenäum, Frankfurt am Main, 1988

Time-Life, "Grundlagen der Computertechnik", around 1986

Troitzsch & Weber, "Die Technik", Westermann Verlag, 1982
[Image: Connection Machine]