Before we begin, a quick note: Tim Urban (of Wait But Why and TED Talk fame) wrote an excellent article about brain-computer interfaces and Elon Musk’s latest company, Neuralink. I would happily send all my readers there, but with an important caveat: the article is 36,000 words and 81 pages long — not including any of the many labeled pictures and diagrams, nor the many attached videos, links, and parentheticals. Clearly, this is a man who is trying his best to demoralize content creators.
I am sympathetic to the fact that most people have extremely limited free time, so I have tried to condense Urban’s article down to the most necessary pieces. If any point lacks context, citations, or supporting information, please trust that it is a transcription error on my part, and not a reflection of the material.
Now, onto the lede:
Direct connections between human brains and artificially intelligent computers are coming. And I hope the transhumanists are right, because the alternatives are worse.
We are brains. From the “you” that’s looking out of your eyes, to your kids and your friends, all of us are basically just big, gross, pudding-y masses of nerve and fat cells. And, with a few limited exceptions, we don’t understand how any of it works.
That isn’t to say we have no understanding of the physical workings of the brain. We (the neuroscientists, not me) have at least a rudimentary grasp of neurons, glial cells, and all the other assorted little pieces of the pudding. And they understand the big picture a little bit: the various lobes and superstructures that are responsible for different kinds of cognition.
Just kidding, this is literally as good as it gets. Not joking.
Unfortunately for the poor neuroscientists, the human brain is fiendishly complicated. And when it comes to translating that knowledge into how the brain actually works (what turns the firing of a neuron into a memory, a thought, or any of the many operations the brain constantly carries out), forget it. We know almost nothing. Not because we haven’t tried, but because it’s a very, very difficult problem.
But when has not knowing how something works ever stopped people from tinkering? Our current technologies have all the subtlety of breaking an egg with a hammer, yet even these extremely primitive attempts at establishing direct connections with neurons have had extraordinary results (see: cochlear implants, “mind-controlled” prosthetics, etc.).
Another satisfied customer
But making babies happy and deaf people fiercely unhappy is not the only application of brain-computer interfaces. Several studies have established that direct brain-to-brain communication is possible. At the risk of understatement, this has broad implications.
I want to re-emphasize that point before
moving on, as I agree that the whole idea sounds like science fiction. And although
it may be ripped from the pages of pulp novels, I assure you, it is quite real.
Moreover, it’s possible despite our having next to no understanding of the
processes involved, using technologies only slightly more advanced than
sticking a wire in there and hoping for the best. Welcome to the far-flung
future of the 21st century.
Now, I can’t speak for everyone, but I personally find it overwhelming to imagine how society might change once we can receive other people’s thoughts, ideas, images, and experiences directly into our brains. It makes my previous fretting over Virtual Reality seem positively quaint by comparison.
But, as with many futuristic things, megalomaniacal cyber-man Elon Musk is not sitting on his hands: he’s currently hard at work, dragging us kicking and screaming into the future through the terrifying grace of his own example. Which is... good, I think? His stated values seem to align more or less with my own, though hearing tales of his miraculous works is more likely to fill me with dread than excitement. I’ll have to try to untangle that knot in a future post, because we have far bigger concerns: Artificial Intelligence.
Now, I’ve written about A.I. before but, frankly, it’s difficult to grasp the extent to which A.I. is already disrupting society, much less what’s going to happen in the next few years. The end of work, and the end of human endeavor as we know it, are not off the table: if there is a hard limit to the tasks that can be successfully outsourced to artificial minds, we have not yet seen any evidence for it.
But even a moderately intelligent A.I. could have transformative effects when connected directly to our brains. Urban and Musk imagine a Siri-esque A.I. capable of responding directly to thoughts — and communicating directly with our nervous systems.
The easy example: if you wondered for a
second what the capital of Madagascar is, the A.I. could deliver it instantly —
eliminating the tedious steps of opening your phone, typing it in, and
scrolling the search results until you find the answer (it’s Antananarivo).
You could simply “know” any knowable fact.
Musk argues that our brains are already adapting to having easy access to the “digital tertiary layer” of address books, Wikipedia, and Facebook. To Musk, we’re already cyborgs; Neuralink is just a means to increase the bandwidth.
Of course, right now they’re mostly trying to build better limbs for paraplegics but, you know, baby steps. And, as horrifying as the idea of fully wired brains might be (intrusive thoughts are the pop-up ads of the future), it does seem like a clever way to ensure that some of the worst A.I. outcomes (Skynet, paperclip maximizers, etc.) don’t come to pass. By literally making the A.I. part of us, we can be the dangerous super-intelligence we’d like to see in the world.
This is right in line with the traditional transhumanist goal of using technology to directly improve humanity, a position that has always struck me as too utopian for comfort. The promises are the classics: eternal life, enhanced experiences, happiness, etc. My natural pessimism manifests as deep skepticism that people can be meaningfully improved, with a corresponding doubt that any improvements would hold.
To put it another way: our bodies are very complicated, and there is (as yet) no evidence that people can stay emotionally or psychologically healthy in artificial bodies across hundreds of years of lifespan extension. Human beings go irrevocably insane and suffer terribly in solitary confinement, and our only way to discover comparable failure modes is to experience them first-hand. Perhaps life in artificial bodies would cause similar degradations, or exciting new ones. I have no faith in the blind idiot god to design human beings that can remain mentally healthy over millennia.
The Blind Idiot God: Not especially photogenic
There are other arguments against transhumanism, but they’re more abstract and philosophical (e.g., could we lose the essence of our humanity?), and as such are more difficult to address. But personally, I have now been convinced of its necessity. Despite any justified misgivings, the existential threat that A.I. poses to humanity justifies extraordinary (and unpleasant) measures. Hooking ourselves up to any A.I. at least gives us a chance of keeping it on a short leash, greatly improving our odds of ensuring that our created intelligences respect human wishes and desires.
And the danger is real. To belabor an overused metaphor: we’re apes on the verge of creating fire, an extremely dangerous tool with infinite applications. The difference between cooking your food and burning down a forest is a stiff wind or a poorly made campfire. Likewise, the worst-case A.I. scenarios are unimaginably bad, like “I hope there’s no other sentient life in the galaxy, because they’ll be dead too” kind of bad. That is not even an exaggeration; if an A.I. went far enough off the rails to turn the planet into computer chips, there’s little reason to think it would stop there.
Grey goo: the forest fire of the future!
Unfortunately, not inventing A.I. does not appear to be on the list of possible outcomes, so practical strategies are needed. Putting the A.I. in our heads trades one set of terrifying unknown problems for another, but in this case it seems like the better of our options. If there’s going to be a fire, we should at least keep a close eye on it.