Pick Your Brain

Looking (far) ahead to intelligence-enhancing tech: Should we take the upgrade?

It will not be we who reach Alpha Centauri and the other nearby stars. It will be a species very like us, but with more of our strengths, and fewer of our weaknesses […] more confident, farseeing, capable, and prudent […].

— Carl Sagan, Pale Blue Dot

This was supposed to be a crazy-idea article about increasing human intelligence. Then Elon Musk went and launched Neuralink, a brain-computer interface venture, and the idea went from crazy to mainstream overnight.

Now we are left to wonder not only how we could increase human intelligence but also whether we should attempt it at all, and whether such mental enhancement will save civilization or deliver a future so grim it’ll make 1984 look like Sesame Street.

Building a Better Brain

Unfortunately, we do not yet know how to noticeably increase intelligence, but we have found some avenues to explore.

Smart drugs, or nootropics, especially popular in recent decades, have been around for millennia, with people using everything from coffee and chocolate to Ritalin to give their brains a boost. It’s far too big a field to do justice to here, but suffice it to say there is plenty of information of varying provenance available and a large subculture of self-experimenters. Clinical results appear mixed and modest, with no sign of a Limitless-style pill that would transcendentally increase brainpower.

We could always try adding intelligence with gene-editing tools such as CRISPR. The problem is, we don’t know which genes to edit. While intelligence is certainly heritable, there doesn’t seem to be a single gene that controls it; instead, hundreds of genes each contribute tiny, cumulative effects.

In a 2013 paper in Global Policy, “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?” Carl Shulman and Nick Bostrom hedge on the prospect of gene manipulation improving intelligence any time soon. “[…] our understanding of the genetic correlates of cognitive ability is very limited today,” they wrote. Shulman and Bostrom go on to suggest that iterated embryo selection might eventually produce gains of as much as 300 IQ points. By definition, this is available only to future generations and, like CRISPR gene editing, requires us to know much more about what we’re selecting for.

Even if we can stomach the whiff of eugenics that genetic engineering carries, overcome the medical pitfalls, and weather social backlash from the nearly inevitable tragic mistakes, any biological improvements to intelligence necessarily work at a generational snail’s pace. That’s too slow for those who seek a quicker, technological fix.

The big fish here, of course, is Neuralink. Musk’s new company was founded with the aim of producing a brain-computer interface. Ultimately, he hopes to avoid the threat of human extinction from artificial intelligence by merging human minds with Neuralink technology. Musk may be the life of this particular party, but he’s not the first guest to arrive.

High priest of the singularity Ray Kurzweil was one of the first to propose using nanotechnology to put the machine in the consciousness rather than the consciousness in the machine. Last year, entrepreneur Bryan Johnson put $100 million of his own money into founding Kernel, a company set up to start engineering that reality.

The company is hoping to build what it calls a neuroprosthesis for cognition, an implantable chip to help victims of neurological ailments such as strokes, Alzheimer’s, and concussions. Johnson hopes that such memory-prosthetic devices will eventually enable intelligence amplification.

There’s also the U.S. military’s research into transcranial direct current stimulation. Pulsed magnetic stimulation of the brain has been shown to halve learning time for U.S. drone pilots and improve concentration and focus, and several experimenters are exploring such techniques as ways to increase cognition and fight depression, though the long-term effects are unknown. Even Facebook wants a slice of the high-tech brain interface action, as indicated by its recent announcement that it’s working on a “silent speech” brain-computer interface.

A Trillion Moving Parts

Don’t get too excited just yet, though. Neuralink’s, Kernel’s, and Facebook’s projects are barely more than aspirational. They represent basic research into the field — important, essential, perhaps even species-preserving research, but basic. The human brain is hugely, vastly, ridiculously complicated — “a machine with a trillion moving parts,” according to cognitive scientist and philosopher Daniel Dennett. We know only the rudiments of how its various bits operate; the jury is still out on how intelligence functions; and no one has a clue about consciousness.

Right now, efforts toward increasing human intelligence via machine interfaces are at the Montgolfier balloon stage of putting a man on the moon. It may not take 200 years to get usable brain augmentation technology for the masses, but it won’t be this decade and probably not the next one either.

To start with, it’s hard to resolve the small electrical signals inside a mass of squishy brain filled with other electrical signals, all housed in a thick, bony skull. That’s not to say there haven’t been successes: Monkeys teleoperate robots via brain waves; human volunteers control arm movement via brain impulses transmitted across the internet; and mind-reading computers resolve individual letters, shapes, and even faces. These are all promising but early steps, akin to hearing neural shouts when we’re trying to listen for whispers.

We can drop wires directly into the brain for (somewhat) better neural signal resolution. For a decade doctors have been doing just that, using direct brain stimulation to treat depression, Parkinson’s, epilepsy, and even obsessive-compulsive disorder, but the practice isn’t without problems. Patients have reported dissociation and reduced verbal fluency, among other side effects. There is also the risk of infection, along with the other life-threatening risks associated with any major surgery.

The aim, of course, is not to have to dig into the brain at all. Dongjin Seo, a neurotechnologist at the University of California, Berkeley, and a member of Neuralink’s core team, has suggested using “neural dust,” thousands of micrometer-size sensors in the cortex that report to transmitters outside the skull. Similarly, Kurzweil has proposed putting nanocomputers in our blood to communicate with machines. These ideas are interesting and inspiring but nowhere near even the prototype stage.

We may not need to fully understand the brain’s workings to engineer a working interface, though. We can, to some extent, treat it as a “black box” problem: Trial and error will tell us the difference between signal X and response Y. Maybe that will be good enough. On the other hand, it may well turn out that, in retrospect, there were chunks of brain science we’ll wish we had understood better before plowing ahead, as we now wish we had for asbestos, nicotine, and thalidomide.

It’s even possible that our current paradigm is completely wrong. In his cynical, everyman take on the transhumanist movement, To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death, fellow Dubliner Mark O’Connell argues that the current metaphor of the brain as a computer is the product of researchers’ IT backgrounds and may be just as misleading as earlier steam engine, clockwork, and watermill metaphors. From the simple neural nets of the first jellyfish, the brain has been coevolving with the body for hundreds of millions of years.

For example, the vast nervous system of the gut is second only to the brain in complexity and has direct connections to it. Signal traffic is two-way, and it may be that the state of our gut can affect our brain just as much as vice versa. It’s possible we are mistaken in thinking we can treat the brain as a discrete, disconnected system at all.

If we are able to create an effective brain-computer interface, it’ll need to work much more reliably than any current computing technology. How often have you had to force quit a program or restart your computer or phone to fix a problem? How many software updates break the thing they are trying to fix? Errors, bugs, and “undocumented features” are annoying today but would be far worse tomorrow should we connect our heads to machines. You don’t want some brain-computer glitch turning your mind into an unrecoverable file.

Security is a whole other issue. Massive privacy breaches, information piracy, and data thefts hardly make a ripple in the news pond anymore. It seems nobody can keep their information safe. So what happens when your brain is a readable device? Even worse, when you change your mind about some political candidate, religion, or soft drink, will you ever be able to feel fully secure that it was your own decision and not some subtle marketing virus rewriting your preferences?

Would you, in short, trust your mind to the cloud?

Money for Neurons

We’ll need a very different financial system to support technological modifications to our minds. Imagine we could produce such systems today. Who would pay for them? Maybe only the super-rich could afford the premium service, while the rest of us would use ad-supported brain mods with the Coke jingle going off in our heads every time we felt thirsty. Worse, what sort of nightmarish financial slavery would result if brain mods needed costly updates every year or if cognition were sold as a subscription service? No payee, no thinkee?

Even if you did keep up with the payments, it might turn out that your service supplier owns your thoughts. Maybe read the terms of service agreement carefully before signing up. That’s if you even have a choice whether or not to use the technology. Military personnel or even ordinary employees might find some sort of brain augmentation a mandatory part of their job in the near future — or even near present. Consider recent accusations against some Silicon Valley companies that allegedly “encouraged” staff to microdose on LSD to enhance concentration.

Freedom of thought has historically been the ultimate right, but it’s all too easy to see that freedom eroded, or voluntarily surrendered in return for convenience, much as our other privacies already have. Neuroethicists have proposed a series of new human rights for the mind, but even if such rights are universally accepted, they would probably be just as vulnerable to abuse as the rights we already have.

Kurzweil and others have suggested an analogy with cellphone technology: Wealthy early adopters get the first generation of cognitive enhancement gadgets, but we can rely on computing’s usual faster, smaller, cheaper trick to take them from expensive status symbol to ubiquitous utility in a similar time frame. Sure, I have an $80 smartphone on my table right now that’s immeasurably superior to the $4,000 DynaTAC that Michael Douglas sports in Wall Street, but maybe we need a different tech metaphor.

Twenty years ago, I watched a tsunami of newfangled DVDs sweep away the shelves of VHS cassettes at my local movie rental store. Ten years later, the even newer-fangled Blu-ray discs appeared, and I confidently expected a repeat of the previous pattern. In fact, the store got as far as two shelves of Blu-rays and then closed, because everybody was getting their movies via download instead.

New tech doesn’t necessarily scale like old tech. Cognitive enhancement isn’t Gordon Gekko with his comically large cellphone. It’s a fundamental change in the human condition, a massive upgrading of our primary survival tool. Handing it exclusively to an already entrenched elite of the wealthy and powerful — far too few of whom have ever shown any interest in the common good or a big-picture view of mankind’s future — might well be as big a threat to our species as AI overlords.

Too Clever by Half?

With all these technical hurdles and potential disasters in store, why the heck should we try to make ourselves smarter? Surely we’re as clever as we need to be, perhaps even too clever in some instances. But how curious a coincidence it would be if, out of all the levels our intelligence could have reached, it happened to stop at exactly the optimal one. More likely, that proposition is a fusion of self-justification and failure of imagination. If you ever find yourself discussing this with “too clever” proponents, you might ask them how many IQ points they could stand to lose and to what benefit.

Every new technology, from the introduction of electricity to test tube babies and probably even the discovery of fire, has faced some opposition. While there are valid concerns about the introduction of brain enhancement technology, simple novelty shouldn’t be one of them.

What about artificial intelligence? Not everybody thinks AI will be the end of us. Maybe it will be benign or at least controllable. Can’t we let AI do our thinking for us? We might get the technology to work, but morally you can no more build a conscious AI to solve your problems than you can raise a person to farm your cotton. If we do set out to build a race of self-aware machine slaves, what can we expect but revolution? You could try to employ a superintelligent AI, but what could you offer in payment? What could a tribe of chimpanzees or a pool of paramecia offer you?

What if with AI augmentation we create a race of evil geniuses? Intelligence doesn’t necessarily correlate with being the good guy, but it doesn’t correlate with being the bad guy either. A human being can be kind or cruel, gregarious or a loner, quick-tempered or calm, cheerful or depressed, brave, cowardly, empathic, selfish, or generous. There are countless traits we can have, sometimes contradictory ones. These attributes derive from our deep brain chemistry, our environment, and our psychology; they form our unique personalities. Draped over this like a blanket of snow on hills is our cerebral cortex, the part of our brain that allows us to think. How well you think is no more determined by your personality than how deep the snow falls is decided by the hills. But just as you can see the shape of the landscape under the snow, you can see the shape of the underlying personality beneath intelligence.

Personality informs and motivates your intelligence (of whatever level), deciding how, when, and why you’ll use it. People with enhanced intelligence would be just as happy, sad, evil, benevolent, obsessive, engaged, trivial, and profound as you or I are right now.

Doubtless some will misuse the new gifts that cognitive enhancement would bring. A handful of clever sociopaths in positions of power could cause damage vastly out of proportion to their numbers, but how is that any different from today? Most people aren’t sociopaths now and wouldn’t be then. If superintelligence allows these bad apples to inflict more harm than before, wouldn’t superintelligent good guys also be better equipped to mitigate the damage or, even better, address some of the underlying issues that create bad apples in the first place? How well could a Moriarty thrive in a world full of Sherlocks?

Intelligence is the one thing humans are really good at. It’s our defining trait. We’re certainly not faster or stronger or even more cooperative than other animals. Cheetahs, chimps, and termites, respectively, have us beat hands down in those departments. But we are clever. Humans might be one-trick monkeys, but our trick is a damn good trick, good enough to enable a bunch of scraggly, also-ran hominids to conquer the world. Greater intelligence won’t necessarily make us angels, but it will make us better problem solvers, better composers, better engineers, better teachers, better leaders, better planners, makers, and doers — in short, make us better at being human.

Look around you. Pretty much everything you see, hear, feel, or even think about — every aspect of your existence — is either the direct result of or mediated by other people’s clever ideas. Some of these ideas are old, such as the writing I’m doing now. Other ideas are newer, such as the computer I’m writing on. Almost all were scorned or derided until universally adopted for their obvious utility. Many are so ubiquitous that we give them no more thought than a fish does water.

But from the roof over your head to the shoes on your feet, you are literally supported by millions of clever ideas created, developed, adapted, improved, and embellished by millions of clever people. Every bolt, brick, thread, and liberty of civilization is the result of our intelligence, a general purpose problem-solving mechanism that has given us everything from knapping flint and agriculture to Beethoven’s Ninth and Ebola vaccines. Without our intelligence we are just a tribe of frightened monkeys, our destiny the jaws of some predator, our legacy a brief scream in the dark.

The Spherical Cow

Our brains got us this far, so why change a winning formula? Over the last few thousand years our intelligence might have given us unmatched power, beauty, and knowledge, but now we’re banging against the ceiling of what our brains can do.

The Spherical Cow (Source: Wikimedia Commons)

You might have heard the spherical cow joke: A farmer’s cows stop giving milk, so he asks the local university for help. The chemistry professor can’t find a solution, and the biology professor can’t find a solution, but the physics professor says, “I think I have figured out the problem. First, imagine a spherical cow in a vacuum….”

Almost every field of human endeavor has grown too complicated for us to understand directly, the mental models too complex to hold in our heads, the equations too difficult to solve. So we make simpler models, solve simpler equations, and hope that those answers apply, more or less, to the real thing. We might be missing some important details in understanding the universe around us, but how will we ever know?

It’s not just the big questions in science that are the problem either. Even on a day-to-day basis, we exist in a network of social, scientific, and economic systems that have ballooned beyond our capacity to effectively model or manage with the mental toolkit of a barely evolved plains ape. Barring catastrophe, that complexity will only grow, and the suite of brain apps that evolved to serve us so well on the savannah is a million years out of date for a technological society. Turns out, we don’t think nearly as much as we think we think.

In recent years researchers have sketched out the surprisingly narrow range of our brain’s abilities and the tricks it performs day to day, even minute to minute, to work around them. These mental foibles and cerebral sleights of hand have been popularized in many books, articles, and blogs but perhaps most clearly in Daniel Kahneman’s seminal 2011 book, Thinking, Fast and Slow.

Let’s do some mental arithmetic. What’s eight times four? Easy question, yes? OK, how about 17 times 83? For most of us, that one is quite a bit tougher (the answer, for the record, is 1,411). According to Kahneman’s research, two-digit multiplication is about the limit of the cognitive effort most people can handle. He suggests going for a walk with a friend and, in the middle of your casual conversation, throwing in this multiplication question. Your companion will slow down or stop altogether while he or she computes the problem. Our brains don’t have the spare capacity to do the walking and the calculating at the same time.

This is what Kahneman categorizes as our brain’s System 2, the system we use when we actually have to concentrate and think through a problem. Unfortunately, our brains have a tremendously slow processing speed and a tiny working memory, so System 2 is both energy and time consuming, and we are virtually blind to anything outside our areas of concentration when we’re using it. (Google “gorilla on a basketball court.”)

The brain doesn’t like to use System 2 unless it really has to; for most of our history, there was a good chance you’d get eaten while using it. What we usually use when we think we are thinking is System 1, a mishmash of cognitive shortcuts and rules of thumb that allow us to make snap decisions. They aren’t always great decisions, but they were good enough to keep our ancestors alive long enough to reproduce.

Since then we’ve used our hard-thinking System 2 to build a civilization in which, for the most part, all that’s left for our paranoid, conclusion-jumping, group-thinking System 1 to do is to make mistakes. Cognitive enhancement, through technological or biological means, aims to upgrade our System 2. The easier and more quickly we can actually think things through, the less we’ll rely on our ancient and disastrously error-prone brain shortcuts.

Certainly there will be some who prefer the comfort of business as usual, to maintain the imagined safety of the status quo. The problem with this strategy is that the status doesn’t stay quoed. We live in a dynamic world. Climates change, economies crumble, pathogens mutate, asteroids impact. There’s always some new unknown rising up to bite us on the bum, and every solution brings its own problems that require even more solutions in a never-ending cycle. Greater intelligence (and a lot of luck) is, as it always has been, our only slim hope for survival in a treacherously indifferent universe.

Of course, intelligence by itself is not sufficient. It’s an evolutionary adaptation like three-color vision or opposable thumbs, just a tool to help our monkey ancestors survive, and like all tools it can be used to build or destroy. Intelligence doesn’t guarantee wisdom, compassion, or initiative. (I’ve known very smart people you couldn’t reliably send to the store for a carton of milk.) Intelligence is, however, a necessary condition for the application of those virtues, a self-adapting tool that enables us to be not only smarter monkeys but better people.

It will not be we who reach Alpha Centauri and the other nearby stars. It will be a species very like us, but with more of our strengths and fewer of our weaknesses. Let’s get on with that, shall we?