Hive consciousness

Peter Watts in Aeon (Illustration by Richard Wilkinson):

Rajesh Rao (of the University of Washington's Center for Sensorimotor Neural Engineering) reported what appears to be a real Alien Hand Network – and going Pais-Vieira one better, he built it out of people. Someone thinks a command; downstream, someone else responds by pushing a button without conscious intent. Now we're getting somewhere.
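
Stripped to its essentials, the sender's side of such a pipeline is a detector: watch the EEG for a motor-imagery signature, and forward a command when you see one. Here is a minimal sketch of that idea in Python, with a synthetic 10 Hz rhythm standing in for real EEG; the sampling rate, threshold and band limits are illustrative assumptions, not the lab's actual parameters.

```python
import numpy as np

# Hypothetical sketch of a Rao-style sender pipeline: flag a "move" intent
# in the sender's EEG, then forward a button-press command downstream.
# Everything here (signal, threshold, band) is an illustrative assumption.

FS = 256  # sampling rate in Hz (assumed)

def mu_band_power(eeg_window):
    """Power in the 8-12 Hz mu band, which desynchronises during motor imagery."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

def detect_intent(eeg_window, baseline_power, threshold=0.6):
    """Flag motor imagery when mu power drops well below the resting baseline."""
    return mu_band_power(eeg_window) < threshold * baseline_power

# Synthetic demo: resting EEG is a strong 10 Hz rhythm plus noise;
# "imagined movement" suppresses that rhythm.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(FS)
imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(FS)

baseline = mu_band_power(rest)
if detect_intent(imagery, baseline):
    # In the real experiment this trigger crossed the internet to a
    # stimulator over the receiver's motor cortex; here it is a stand-in.
    print("send: PRESS_BUTTON")
```

The receiver's half, in the actual experiment, was a magnetic stimulator positioned over motor cortex, which is why the downstream button press happened without conscious intent.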

There’s a machine in a lab in Berkeley, California, that can read the voxels right off your visual cortex and figure out what you’re looking at based solely on brain activity. One of its creators, Kendrick Kay, suggested back in 2008 that we’d eventually be able to read dreams (also, that we might want to take a closer look at certain privacy issues before that happened). His best guess was that this might happen a few decades down the road – but it took only four years for a computer in a Japanese lab to predict the content of hypnagogic hallucinations (essentially, dreams without REM) at 60 per cent accuracy, based entirely on fMRI data.
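
At its core, this kind of decoding is a classification problem: given a vector of voxel activations, predict what the subject was seeing. A toy sketch follows, with synthetic data standing in for preprocessed fMRI and an off-the-shelf logistic regression in place of the labs' actual models; the two categories and the voxel layout are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in for Kay-style decoding: predict which category of image a
# subject saw from voxel activity. The data are synthetic; real pipelines
# operate on preprocessed fMRI volumes, not random numbers.

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 500

# Each category nudges a different subset of voxels -- a crude stand-in
# for category-selective responses in visual cortex.
labels = rng.integers(0, 2, n_trials)  # 0 = "face", 1 = "scene" (assumed)
signal = np.zeros((n_trials, n_voxels))
signal[labels == 0, :250] = 0.5
signal[labels == 1, 250:] = 0.5
voxels = signal + rng.standard_normal((n_trials, n_voxels))

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, voxels, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.0%}")  # well above the 50% chance level
```

The Japanese dream study worked on the same principle, except the classifier was trained on reports of hypnagogic imagery rather than on images shown to an awake subject.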

When Moore’s Law shaves that much time off the predictions of experts, it’s not too early to start wondering about consequences. What are the implications of a technology that seems to be converging on the sharing of consciousness?

It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. The neuroscientist Giulio Tononi at the University of Wisconsin-Madison claims that consciousness reflects the integration of distributed brain functions. A model developed by Ezequiel Morsella, of San Francisco State University, describes it as a mediator between conflicting motor commands. The panpsychics regard it as a basic property of matter – like charge, or mass – and believe that our brains don’t generate the stuff so much as filter it from the ether like some kind of organic spirit-catchers. Neuroscience superstar V S Ramachandran (University of California, San Diego) blames everything on mirror neurons; Princeton’s Michael Graziano – right here in Aeon – describes it as an experiential map.

I think they’re all running a game on us. Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention).

But why would any of that be self-aware?

More here.