by Oliver Waters
In Part I, we explored some immediate benefits of transitioning from our organic brains to synthetic ones, perhaps the most attractive being effective immortality via ‘backing up’ our digital minds. But now I’d like to venture off into much more speculative territory: how artificial brains may enable us to meet aliens.
People have been scratching their heads about where all the aliens are for a long time. Perhaps the bleakest theory is that we are truly alone in the vast darkness of spacetime. It’s also the most boring theory. The problem with the more interesting option – that other intelligent life forms exist – is that there seems to have been more than enough time for advanced aliens to mosey on over to our neck of the woods, yet they haven’t. This is the ‘Fermi paradox.’
The most compelling answer to this paradox may be that they don’t want to visit us. After all, a highly advanced civilisation would obviously be technologically capable of venturing out into their solar system, galaxy, then eventually inter-galactic space. But would they be motivated to do so? This might seem like a stupid question, at least to any fans of Star Trek. Of course they’d be motivated to explore the universe! What kind of incurious moronic species would they have to be to decline such an exhilarating adventure!?
But the stark reality is that outer space is rather boring. It’s mostly dark and empty, and the clumps of matter and energy that punctuate its monotonous fabric are identical to those in any local environment. It’s the configurations of fundamental particles that are the truly interesting things. The most interesting of these configurations are other rational and creative beings: namely, people. We dream of conversing with aliens with richly imaginative personalities like E.T., not of scooping Martian amoebae into petri dishes.
People are unique in possessing stories, built from personal journeys experienced with psychological continuity. They remember their childhoods, the challenges they’ve overcome, and those they have loved and lost. They have hopes for the future, for themselves, their families, and their species.
As I argued in Part I, ‘you’ are ultimately the set of abstract information encoded in the neural wiring of your brain. If that data were extracted and installed into a different but suitable kind of physical brain, you would come back into the world, as though waking from a short nap. According to Derek Parfit’s reductionist view of personal identity, people are the compositions of mental parts that persist over time. ‘I’, in the most important sense of the word, therefore exist in the universe wherever my traits, beliefs and values exist.
The set of all persons therefore exists not within the physical realm of atoms and extended objects, but the abstract space of possible persons — all the combinations of the ways a self-aware agent can be. And just as with physical space, we can explore this abstract space with a mature science of how minds work. This would provide us with the fundamental principles constraining what kinds of minds can exist, and what kinds of minds cannot.
For example, I assumed in Part II that a future science of cognition will explain in detail why Hannibal Lecter is an impossible psychological being. This would be much like how chemistry today explains why certain molecules, like helium hexafluoride (HeF6), are impossible, even though they are ‘conceivable’ to us.
While such a future science would sketch the broad contours of possible minds, and empower our engineering efforts to create them, generating novel contents of minds is ultimately not a scientific pursuit. This is rather the purpose of artistic endeavours. Our novels, films, paintings, operas — they all seek to represent new and interesting characters, situations, or states of being. When a writer conjures a character, she endows him with a rich collection of traits, memories, and desires. If the writing is any good, this psychologically plausible character is not just an arbitrary creation — but a discovery of an abstract entity — much like discovering the number pi, or the concept of democracy.
Do such ‘fictional’ characters actually exist? In a common-sense way, of course they don’t. After all, they do not in fact possess the physical attributes ascribed to them.
But they do exist in the mind of the author, and then in the minds of the audience. When an author or an actor properly inhabits one of their characters, there is an alternative personality temporarily using their neural hardware to perceive, think about, and interact with the world. This is what distinguishes truly great acting from poor impersonations. When in character, Daniel Day-Lewis simply is Daniel Plainview, the protagonist in the film There Will Be Blood (2007). You can ask him any question about his life or aspirations, and he’ll give you a detailed response, drawn from his difficult life as a 19th-century ‘oil man’.
Granted, the people created in fiction are usually not as ‘fleshed-out’ as living, breathing human beings. For instance, there is a fact of the matter as to what I had for breakfast on my 27th birthday, but this is not the case for Harry Potter (although J.K. Rowling might fill us in one day). There’s no reason in principle, however, why a fictional character cannot be more richly detailed than a flesh-and-blood human being on Earth. Make a novel or television series long enough, and there are no bounds to how detailed fictional personas can become.
There is also the objection that fictional characters are not autonomous, like us. This follows from the misconception that such characters exist only in the passive letters of a page, or the bouncing colours on a TV screen. In fact, characters truly exist in our minds — the stimuli in novels and projectors merely aid our brains’ imaginative capacities. True, the characters do not take over our bodies entirely, like a body-snatching alien, but they do possess us to a very real extent. When watching Breaking Bad, we think like Walter White when he’s got his back to the wall, about to be discovered for the accidental drug-dealing killer he has become. The beads of sweat drip down our foreheads as we furiously scramble for an escape plan.
To be lost in fiction is to have the software of a character run rampant in your brain. Only when the character does something inexplicable do we ‘snap out of it’ and reach for the remote to change the channel.
To sum up so far — people exist in minds, and minds require brains. But they don’t require specific brains inhabiting specific spots in the universe. People exist within narratives — realms of cause and effect, where their intentional actions lead to some outcomes, and not others. These narratives mostly occur in front of generic backdrops — just some planet, orbiting some random star.
What on Earth does all this have to do with the Fermi Paradox? Well, if the abstract space of all interesting characters exists, and we have the capacity to explore that space through artistic creativity, this sets up an appealing alternative to exploring the physical universe for aliens.
An advanced civilisation may decide to focus its resources on creating more and better simulations of characters and stories, rather than physically venturing ever deeper into the dark abyss of inter-galactic space. Why send precious materials out on dangerous expeditions when those materials could be invested in greater computational resources to represent an ever-larger portion of the space of possible persons?
In Part II we saw the arguments for the universality of human cognition, given our ability to create ever more advanced neural hardware. In principle, any similarly universal intelligence could eventually create all kinds of minds that could possibly exist, all from the safety of its home planet. This could prove to be a lot more efficient than exhaustively trawling the mostly deserted physical universe.
This line of thinking is reminiscent of the philosopher Nick Bostrom’s intriguing hypothesis that we ourselves are simulations running on such alien hardware. But if the claim is that our entire history is a simulation on an alien machine, then this is highly unlikely.
Firstly, most of the suffering endured so far on this planet would be tedious to any advanced mind. Why bother simulating experiences of pointless plague, famine, war crimes, and torture when there are far more compelling tales to tell? Of course, this wouldn’t rule out simulating interesting and necessary forms of suffering, in the service of a novel and noble purpose. Cue Rocky style training montages. But bland, repetitive, pointless pain should always be left on the cutting room floor.
More importantly, any civilisation capable of accurately simulating the conscious states of human beings would also possess a level of moral knowledge that would inhibit them from inflicting such needless cruelty. As we discussed in Part II, moral and practical beliefs are mutually informing and constraining. Aliens with the technological prowess to computationally simulate minds will necessarily possess deep scientific theories which will inevitably inform their beliefs about who belongs in their inner circle of moral equals: namely, us.
They would recognise it as morally wrong to simulate the exact history of our species here on Earth, replete with torture, famine, rape and all. And they would recognise that simulating all this turbulent noise would not be necessary to genuinely ‘make contact’ with us. The parts of us worth simulating, worth experiencing, are separable from the evil aspects of our collective history. There is nothing stopping the most important and meaningful stories from being discovered anywhere. These might be stories of courage, of the pursuit of beauty, or the growth of friendship through grand challenges.
So far this argument suggests that any particular civilisation in the universe enjoys an unbounded scope of experiences that it can in principle discover. But in practice, infinities are unreachable, and there will forever remain valuable aspects of alien civilisations that we don’t end up discovering ourselves. This is due to the path-dependency of civilisational progress, which closes off certain options given the different routes taken. Like monkeys equally capable of climbing a tree, the different branches we choose to grasp as civilisations will expose us to different fruits and dangers along the way.
This means there would still always be great value in physically meeting other alien civilisations. The problem is that the rules of the universe seem to place major limits on our ability to do so.
Given that light, and therefore any communication, takes 100,000 years just to traverse our own galaxy, the cultures of senders and receivers would diverge so radically over such time frames that the messages would not be comprehensible. Consider how quickly languages can split into mutually indecipherable dialects over just centuries of human history – compare this to thousands or millions of years for civilisations that are advancing at a faster rate than we humans typically have to date.
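The arithmetic behind this barrier is simple enough to sketch. The figures below are rough illustrative assumptions (the Milky Way is roughly 100,000 light years across), and the function name is my own, not anything from the essay:

```python
# Rough sketch: round-trip signal delay at light speed.
# A signal covers one light-year per year, so a reply doubles the wait.
GALAXY_DIAMETER_LY = 100_000  # approximate diameter of the Milky Way

def round_trip_years(distance_ly: float) -> float:
    """Years elapsed before a light-speed message can be answered."""
    return 2 * distance_ly

round_trip_years(GALAXY_DIAMETER_LY)  # 200,000 years to hear back across the galaxy
round_trip_years(4.25)                # 8.5 years even for the nearest star system
```

Even a single question-and-answer exchange across the galaxy spans two thousand centuries, which is longer than our species has existed.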
This is a real problem. Distant parts of galaxies cannot communicate with each other in any viable timeframe to solve their imminent problems. There would be no such thing as referendums back to a central government on Earth – as any political conflicts would be ancient history by the time the votes could be counted. These communicative barriers place firm limits on how dispersed a coherent civilisation could become throughout the cosmos, and point us instead to the alternative of a virtual civilisation consisting of physical and fictional people forging narratives in their local patch of universe.
It’s a sobering thought that no two space-faring civilisations could ever truly meet each other. A group of intrepid voyagers may, of course, stumble upon an alien world and learn about their culture. But there would be no way for their home world to make timely contact and for the two cultures to interact and learn from each other. By the time the voyagers’ signals reached home, their home would be unrecognisable.
Most existing advanced alien civilisations have probably realised these points. This brings us to the Transcension Hypothesis, first proposed by John Smart in 2012.
The first key assumption of this hypothesis is that we will likely continue the historical trend of ever-denser compression of structures and energy-flows to enable exponentially more powerful computational capacities. Think of how we went from building computers out of vacuum tubes to microscopic transistors. Where does this exponential process end? Smart postulates that we will continue engineering our computational devices down below the atomic level. Eventually we will decide to orbit close to the event horizons of black holes (either by migrating to them or creating our own), as they are the densest, most powerful energy sources in the known universe. It goes without saying that we must have transitioned to suitably robust artificial brains by this point, given that our current bodies would be torn apart instantly by such violent gravitational forces.
But as we know from general relativity, time slows down for those close to a black hole, relative to observers further away. This means that from our reference frame, external events in the universe would speed up dramatically. As we approached ever closer to the black hole, the external universe would appear to leap billions of years into the future. Most galaxies would drift out of reach, pushed apart by the mysterious dark energy. But our galaxy is due to merge with the neighbouring Andromeda galaxy, and eventually all the black holes within them would merge as well. This would mean the merging of our civilisation with all the civilisations living near their own black holes. In other words, instead of travelling great distances over long periods of time to meet aliens, we may speed up the universe’s trajectory (from our perspective), bringing the aliens to us. Huge if true.
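The time dilation doing the work here can be written down directly. A simplified sketch, assuming a non-rotating (Schwarzschild) black hole and a clock hovering at fixed radius, ignoring orbital motion:

```latex
% Proper time d\tau for a clock hovering at radius r outside a
% Schwarzschild black hole with horizon radius r_s, relative to
% coordinate time dt of a distant observer:
\[
  \frac{d\tau}{dt} \;=\; \sqrt{1 - \frac{r_s}{r}}
\]
% As r approaches r_s, the factor approaches zero: a finite stretch
% of the hoverers' experienced time corresponds to an arbitrarily
% long span of external, cosmological time.
```

By stationing themselves ever closer to the horizon, a civilisation could in effect fast-forward the outside universe as far as it liked.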
At that point, when we meet those aliens within our local gravitational well, we would want our different approaches to technological development to be maximally varied, in order to learn the most from each other.
This leads us to the Transcension Hypothesis’ second key assumption. Civilisational advancement requires an evolutionary component, consisting of bottom-up variation and feedback. The problem is that the extremely slow rate of communication between distant civilisations in space would only allow for one-directional, command-like instruction. This means that any ‘help’ we received from more advanced civilisations would doom us and other recipients to adopt the identical technological path of development that they did, making us essentially clones of each other. This is not conducive to maximising the variability of ideas that each civilisation could produce on its own autonomous trajectory, and then share with others once all their respective black holes merge.
A related problem is that no single, one-way message from a more advanced civilisation to a lesser one could convey all ideas that constitute the former civilisation. This means that the latter, in their haste to ‘grow up’ by aggressively embracing all the new alien theories and technologies, will probably destroy themselves due to lacking some important implicit ideas that are necessary to avoid self-destruction.
To use a toy example, imagine we received a message from outer space with instructions to build a new type of energy reactor, which exploits the properties of dark matter. The instructions come with lots of safety warnings, but we couldn’t be certain that these instructions were exhaustive, or even that we’d fully understood the included ones. It would therefore be unwise to build this device, until such time as our fundamental scientific theories had matured to give us confidence that we understood why the device wouldn’t destroy our planet.
We could take a more parochial example from recent geopolitics here on Earth. The United States learned rather painfully that introducing ‘democracy’ to a country like Afghanistan is a perilous enterprise. Without the prior development of liberal and democratic cultural norms, it was always going to be unsustainable to merely introduce the formal institutions of elections. Even though democratic politics is superior to theocracy as a form of government in the abstract, this doesn’t mean that simply adding democratic politics to a society will make it better off.
Because deep space communications can only be ‘command like’, advanced alien civilisations should all come to recognise a ‘prime moral directive’ to not interfere with other life-forms, even if they happen to discover them. That’s why it’s so quiet. That’s why no one out there is reaching out to help us. They know that by doing so, they would just be creating clones of themselves.
The Transcension Hypothesis might make us feel a little lonely in our corner of the universe in the short term. But I think it needs a slight adjustment in the direction of optimism, given the abstract nature of personal identity. Just because aliens are not reaching out to us physically doesn’t mean they are not getting in touch. Somewhere out there, an alien civilisation is likely simulating our minds – not perfectly accurately – but enough to get to know important aspects of us. We should try to do likewise.
Separated by vast oceans of space, our only immediate option to commune together as fellow sentient beings is via feats of mutual abstract creation. At this primitive moment in our civilisational development, we’ve merely glimpsed the partial silhouettes of our celestial friends. Let’s hope we don’t destroy ourselves before we have a proper chance to greet them in future versions of our shiny artificial brains.