Wednesday, November 21, 2012

The Brain in the Machine


Half a trillion neurons, a hundred trillion synapses. I.B.M. has just announced the world's grandest simulation of a brain, all running on a collection of ninety-six of the world's fastest computers. The project is code-named Compass, and its initial goal is to simulate the brain of the macaque monkey (commonly used in laboratory studies of neuroscience). In sheer scale, it's far more ambitious than anything previously attempted, and it actually has almost ten times as many neurons as a human brain. Science News Daily called it a "cognitive milestone," and Popular Science said that I.B.M.'s "cognitive computing program… just hit a major high." Are full-scale simulations of human brains imminent, as some media accounts seem to suggest?

Compass is part of a long-standing effort known as neuromorphic engineering, an approach to building computers championed in the nineteen-eighties by the Caltech engineer Carver Mead. The premise behind Mead's approach is that brains and computers are fundamentally different, and that the best way to build smart machines is to build computers that work more like brains. Of course, brains aren't better than machines at every type of thinking (no rational person would build a calculator by emulating the brain, for instance, when ordinary silicon is far more accurate), but we are still better than machines at many important tasks, including common sense, understanding natural language, and interpreting complex images. Whereas traditional computers largely work in serial (one step after another), neuromorphic systems work in parallel and draw their inspiration as much as possible from the human brain. Where typical computers are described in terms of elements borrowed from classical logic (like "AND" gates and "OR" gates), neuromorphic devices are described in terms of neurons, dendrites, and axons.
In some ways, neuromorphic engineering, especially its application to neuroscience, harkens back to an older idea, introduced by the French mathematician and astronomer Pierre-Simon Laplace (1749-1825), who helped set the stage for the theory of scientific determinism. Laplace famously conjectured:

An intellect which at a certain moment [could] know all forces that set nature in motion, and all positions of all items of which nature is composed, [could] embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Much as Laplace imagined that we could, given sufficient data and calculation, predict (or emulate) the world, a growing crew of neuroscientists and engineers imagine that the key to artificial intelligence is building machines that emulate human brains, neuron by neuron. Ray Kurzweil, for instance (whose new book I reviewed last week), has, quite literally, bet on the neuromorphic engineers, wagering twenty thousand dollars that machines could pass the Turing Test by 2029 by using simulations built on detailed brain data (which he anticipates will be collected by nanobots). The neuroscientist Henry Markram (who collaborates with I.B.M. engineers) has gone even further, betting his entire career on "whole brain emulation."
