April 19, 2014

Sign Up For My New Course On Transhumanism!

From May 1 to 31, I'll be teaching an online course on transhumanism.

This course introduces the philosophy and socio-cultural movement that is transhumanism. We will survey its core ideas, history, technological requirements, potential manifestations, and ethical implications. Topics to be discussed will include the various ways humans have tried to enhance themselves throughout history, the political and social aspects of transhumanism, the technologies required to enhance humans (including cybernetics, pharmaceuticals, genetics, and nanotechnology), and the various ways humans may choose to use these technologies to modify and augment their capacities (including radical life extension, intelligence augmentation, and mind uploading). Along the way we will discuss social and ethical problems that might be posed by human enhancement.

Register here.

20 Crucial Terms Every 21st Century Futurist Should Know

We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilization as it evolves. Here are 20 terms and concepts that you'll need to navigate our future.
Back in 2007 I put together a list of terms every self-respecting futurist should be familiar with. But now, some seven years later, it's time for an update. I reached out to several futurists, asking them which terms or phrases have emerged or gained relevance since that time. These forward-looking thinkers provided me with some fascinating and provocative suggestions — some familiar to me, others completely new, and some a refinement of earlier conceptions. Here are their submissions, including a few of my own.

1. Co-veillance

Futurist and scifi novelist David Brin suggested this one. It's kind of a mash-up between Steve Mann's sousveillance and Jamais Cascio's Participatory Panopticon, and a furtherance of his own Transparent Society concept. Brin describes it as: "reciprocal vision and supervision, combining surveillance with aggressively effective sousveillance." He says it's "scrutiny from below." As Brin told io9:
Folks are rightfully worried about surveillance powers that expand every day. Cameras grow quicker, better, smaller, more numerous and mobile at a rate much faster than Moore's Law (i.e. Brin's corollary). Liberals foresee Big Brother arising from an oligarchy and faceless corporations, while conservatives fret that Orwellian masters will take over from academia and faceless bureaucrats. Which fear has some validity? All of the above. While millions take Orwell's warning seriously, the normal reflex is to whine: "Stop looking at us!" It cannot work. But what if, instead of whining, we all looked back? Countering surveillance with aggressively effective sousveillance — or scrutiny from below? Say by having citizen-access cameras in the camera control rooms, letting us watch the watchers?
Brin says that reciprocal vision and supervision will be hard to enact and establish, but that it has one advantage over "don't look at us" laws, namely that it actually has a chance of working. (Image credit: 24Novembers/Shutterstock)

2. Multiplex Parenting

This particular meme — suggested to me by the Institute for the Future's Distinguished Fellow Jamais Cascio — has only recently hit the radar. "It's in-vitro fertilization," he says, "but with a germline-genetic mod twist." Recently sanctioned by the UK, this is the biotechnological advance where a baby can have three genetic parents via sperm, egg, and (separately) mitochondria. It's meant as a way to flush-out debilitating genetic diseases. But it could also be used for the practice of human trait selection, or so-called "designer babies". The procedure is currently being reviewed for use in the United States. The era of multiplex parents has all but arrived.
3. Technological Unemployment
Futurist and scifi novelist Ramez Naam says we should be aware of the potential for "technological unemployment." He describes it as unemployment created by the deployment of technology that can replace human labor. As he told io9,
For example, the potential unemployment of taxi drivers, truck drivers, and so on created by self-driving cars. The phenomenon is an old one, dating back for centuries, and spurred the original Luddite movement, as Ned Ludd is said to have destroyed knitting frames for fear that they would replace human weavers. Technological unemployment in the past has been clearly outpaced (in the long term) by the creation of new wealth from automation and the opening of new job niches for humans, higher in levels of abstraction. The question in the modern age is whether the higher-than-ever speed of such displacement of humans can be matched by the pace of humans developing new skills, and/or by changes in social systems to spread the wealth created.
Indeed, the potential for robotics and AI to replace workers of all stripes is significant, leading to worries of massive rates of unemployment and subsequent social upheaval. These concerns have given rise to another must-know term that could serve as a potential antidote: guaranteed minimum income. (Image credit: Ociacia/Shutterstock)

4. Substrate-Autonomous Person

In the future, people won't be confined to their meatspace bodies. This is what futurist and transhumanist Natasha Vita-More describes as the "Substrate-Autonomous Person." Eventually, she says, people will be able to form identities in numerous substrates, such as using a "platform diverse body" (a future body that is wearable/usable in the physical/material world — but also exists in computational environments and virtual systems) to route their identity across the biosphere, cybersphere, and virtual environments.
"This person would form identities," she told me. 
"But they would consider their personhood, or sense of identity, to be associated with the environment rather than one exclusive body." Depending on the platform, the substrate-autonomous person would upload and download into a form or shape (body) that conforms to the environment. So, for a biospheric environment, the person would use a biological body, for the Metaverse, a person would use an avatar, and for virtual reality, the person would use a digital form.

5. Intelligence Explosion

It's time to retire the term 'Technological Singularity.' The reason, says the Future of Humanity Institute's Stuart Armstrong, is that it has accumulated far too much baggage, including quasi-religious connotations. It's not a good description of what might happen when artificial intelligence matches and then exceeds human capacities, he says. What's more, different people interpret it differently, and it only describes a limited aspect of a much broader concept. In its place, Armstrong says we should use a term devised by the computer scientist I. J. Good back in 1965: the "intelligence explosion." As Armstrong told io9,
It describes the apparent sudden increase in the intelligence of an artificial system such as an AI. There are several scenarios for this: it could be that the system radically self-improves, finding that as it becomes more intelligent, it's easier for it to become more intelligent still. But it could also be that human intelligence clusters pretty close in mindspace, so a slowly improving AI could shoot rapidly across the distance that separates the village idiot from Einstein. Or it could just be that there are strong skill returns to intelligence, so that an entity need only be slightly more intelligent than humans to become vastly more powerful. In all cases, the fate of life on Earth is likely to be shaped mainly by such "super-intelligences".
Image credit: sakkmesterke/Shutterstock.

6. Longevity Dividend

While many futurists extol radical life extension on humanitarian grounds, few consider the astounding fiscal benefits that are to be had through the advent of anti-aging biotechnologies. The Longevity Dividend, as suggested to me by bioethicist James Hughes of the IEET, is the "assertion by biogerontologists that the savings to society of extending healthy life expectancy with therapies that slow the aging process would far exceed the cost of developing and providing them, or of providing additional years of old age assistance." Longer healthy life expectancy would reduce medical and nursing expenditures, argues Hughes, while allowing more seniors to remain independent and in the labor force. No doubt, the corporate race to prolong life is heating up in recognition of the tremendous amounts of money to be made — and saved — through preventative medicines.

7. Repressive Desublimation

This concept was suggested by our very own Annalee Newitz, editor-in-chief of io9 and author of Scatter, Adapt And Remember. The idea of repressive desublimation was first developed by political philosopher Herbert Marcuse in his groundbreaking book Eros and Civilization. Newitz says:
It refers to the kind of soft authoritarianism preferred by wealthy, consumer culture societies that want to repress political dissent. In such societies, pop culture encourages people to desublimate or express their desires, whether those are for sex, drugs or violent video games. At the same time, they're discouraged from questioning corporate and government authorities. As a result, people feel as if they live in a free society even though they may be under constant surveillance and forced to work at mind-numbing jobs. Basically, consumerism and so-called liberal values distract people from social repression.

8. Intelligence Amplification

Sometimes referred to as IA, this is a specific subset of human enhancement — the augmentation of human intellectual capabilities via technology. "It is often positioned as either a complement to or a competitor to the creation of Artificial Intelligence," says Ramez Naam. "In reality there is no mutual exclusion between these technologies." Interestingly, Naam says IA could be a partial solution to the problem of technological unemployment — as a way for humans, or posthumans, to "keep up" with advancing AI and to stay in the loop.

9. Effective Altruism

This is another term suggested by Stuart Armstrong. He describes it as
the application of cost-effectiveness to charity and other altruistic pursuits. Just as some engineering approaches can be thousands of times more effective at solving problems than others, some charities are thousands of times more effective than others, and some altruistic career paths are thousands of times more effective than others. And increased efficiency translates into many more lives saved, many more people given better outcomes and opportunities throughout the world. It is argued that when charity can be made more effective in this way, it is a moral duty to do so: inefficiency is akin to letting people die.
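As a rough illustration of how those effectiveness ratios cash out, here is a minimal sketch using made-up cost-per-outcome figures; the specific dollar amounts and charity names are hypothetical, not taken from Armstrong:

```python
# Hypothetical cost to produce one comparable outcome (e.g. one life saved).
# The figures are illustrative only.
cost_per_outcome = {
    "charity_a": 2_000,      # highly cost-effective intervention (assumed)
    "charity_b": 2_000_000,  # far less cost-effective intervention (assumed)
}

donation = 100_000
for name, cost in cost_per_outcome.items():
    outcomes = donation / cost
    print(f"{name}: {outcomes:g} outcomes per ${donation:,} donated")
# In this toy comparison the same donation does 1,000x more good via charity_a.
```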

10. Moral Enhancement

On a somewhat related note, James Hughes says moral enhancement is another must-know term for futurists of the 21st Century. Also known as virtue engineering, it's the use of drugs and wearable or implanted devices to enhance self-control, empathy, fairness, mindfulness, intelligence and spiritual experiences.

11. Proactionary Principle

This one comes via Max More, president and CEO of the Alcor Life Extension Foundation. It's an interesting and obverse take on the precautionary principle. "Our freedom to innovate technologically is highly valuable — even critical — to humanity," he told io9. "This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people's freedom to experiment, innovate, and progress."

12. Mules

Jamais Cascio suggested this term, though he admits it's not widely used. Mules are unexpected events — a parallel to Black Swans — that aren't just outside of our knowledge, but outside of our understanding of how the world works. It's named after Asimov's Mule from the Foundation series.

13. Anthropocene

Another must-know term submitted by Cascio, described as "the current geologic age, characterized by substantial alterations of ecosystems through human activity." (Image credit: NASA/NOAA).

14. Eroom's Law

Unlike Moore's Law, where things are speeding up, Eroom's Law describes — at least in the pharmaceutical industry — things that are slowing down (which is why it's Moore's Law spelled backwards). Ramez Naam says the rate of new drugs developed per dollar spent by the industry has dropped by roughly a factor of 100 over the last 60 years. "Many reasons are proposed for this, including over-regulation, the plucking of low-hanging fruit, diminishing returns of understanding more and more complex systems, and so on," he told io9.
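To put Naam's figure in perspective, here is a quick back-of-the-envelope calculation. The assumption that the decline is smooth and exponential is my simplification, not a claim from the article:

```python
import math

# Naam's figure: drugs developed per R&D dollar fell roughly 100-fold over ~60 years.
decline_factor = 100.0
years = 60.0

# If the decline were smooth and exponential, efficiency would halve every:
halving_time = years / math.log2(decline_factor)
print(f"Implied halving time: {halving_time:.1f} years")   # ~9 years

# Equivalent average annual decline in R&D productivity:
annual_decline = 1 - decline_factor ** (-1 / years)
print(f"Implied annual decline: {annual_decline:.1%}")     # ~7.4% per year
```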

15. Evolvability Risk

Natasha Vita-More describes this as the ability of a species to produce variants more apt or powerful than those currently existing within a species:
One way of looking at evolvability is to consider any system — a society or culture, for example — that has evolvable characteristics. Incidentally, it seems that today's culture is more emergent and mutable than physiological changes occurring in human biology. In the course of a few thousand years, human tools, language, and culture have evolved manifold. The use of tools within a culture has been shaped by the culture and shows observable evolvability, from stones to computers, while human physiology has remained nearly the same.

16. Artificial Wombs

"This is any device, whether biological or technological, that allows humans to reproduce without using a woman's uterus," says Annalee Newitz. Sometimes called a "uterine replicator," she says these devices would liberate women from the biological difficulties of pregnancy, and free the very act of reproduction from traditional male-female pairings. "Artificial wombs might develop alongside social structures that support families with more than two parents, as well as gay marriage," says Newitz.

17. Whole Brain Emulations

Whole brain emulations, says Stuart Armstrong, are human brains that have been copied into a computer, and that are then run according to the laws of physics, aiming to reproduce the behaviour of human minds within a digital form. As he told io9,
They are dependent on certain (mild) assumptions about how the brain works, and require certain enabling technologies, such as scanning devices to make the original brain model, good understanding of biochemistry to run it properly, and sufficiently powerful computers to run it in the first place. There are plausible technology paths that could allow such emulations around 2070 or so, with some large uncertainties. If such emulations are developed, they would revolutionise health, society and economics. For instance, allowing people to survive in digital form, and creating the possibility of "copyable human capital": skilled, trained and effective workers that can be copied as needed to serve any business purpose.
Armstrong says this also raises great concern over wages, and over the eventual deletion of such copies.

18. Weak AI

Ramez Naam says this term has gone somewhat out of favor, but it's still a very important one. It refers to the vast majority of all 'artificial intelligence' work that produces useful pattern matching or information processing capabilities, but with no bearing on creating a self-aware sentient being. "Google Search, IBM's Watson, self-driving cars, autonomous drones, face recognition, some medical diagnostics, and algorithmic stock market traders are all examples of 'weak AI'," says Naam. "The large majority of all commercial and research work in AI, machine learning, and related fields is in 'weak AI'."
Naam argues that this trend — and the motivations behind it — is one of the arguments for the Singularity being further off than it appears.

19. Neural Coupling

Imagine the fantastic prospect of creating interfaces that connect the brains of two (or more) humans. Already today, scientists have created interfaces that allow a human to move the limb — or in one case, the tail — of another animal. At first, these technologies will be used for therapeutic purposes; they could help people relearn how to use previously paralyzed limbs. More radically, they could eventually be used for recreational purposes: humans could voluntarily couple themselves and move each other's body parts.

20. Computational Overhang

This refers to any situation in which new algorithms can suddenly and dramatically exploit existing computational power far more efficiently than before. This is likely to happen when large amounts of computational power remain untapped, and when previously used algorithms were suboptimal. This is an important concept as far as the development of AGI (artificial general intelligence) is concerned. As noted by Less Wrong, it
signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.
Luke Muehlhauser from the Machine Intelligence Research Institute (MIRI) describes it this way:
Suppose that computing power continues to double according to Moore's law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a 'computing overhang': tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
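Here is a toy model of the overhang idea. Every number below is an assumption made for illustration (estimates of brain-equivalent compute vary over several orders of magnitude), not anything from Less Wrong or MIRI:

```python
# Toy model: how many AGI instances could run on hardware that already exists
# the day the right algorithms are discovered. All figures are assumptions.

total_deployed_flops = 1e21   # assumed aggregate compute already in the world
flops_per_agi = 1e15          # assumed compute needed to run one AGI in real time

max_instances = total_deployed_flops / flops_per_agi
print(f"Existing hardware could host roughly {max_instances:.0e} AGI instances")
# The "overhang" is this gap: zero instances running the day before the software
# exists, potentially millions the day after, simply by copying it across the
# hardware base.
```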

Top image via NEOGAF.
This article originally appeared at io9.

February 21, 2014

Bioengineered monkeys with human genetic diseases have almost arrived — and that's awful

Looking to create more accurate experimental models for human diseases, biologists have created transgenic monkeys with "customized" mutations. It's considered a breakthrough in the effort to produce more human-like monkeys — but the ethics of all this are dubious at best.
Yup, scientists know that mouse models suck. Though they're used in nearly 60% of all experiments, they're among the most unreliable test subjects when it comes to approximating human biological processes (what the hell is an autistic mouse, anyway?).
Great apes, like chimpanzees and bonobos, obviously make for better test subjects. But given how close these animals are to humans in terms of their cognitive and emotional capacities, they're increasingly being seen as ethically inappropriate models for experiments. Indeed, medical experiments on apes are on the way out. There's currently a great ape research ban in the Netherlands, New Zealand, the United Kingdom, Sweden, Germany, and Austria (where it's also illegal to test on lesser apes, like gibbons). In the US, where there are still over 1,200 chimps used for biomedical research, the NIH has decided to stop the practice.

Monkey in the Middle

Regrettably, all this is making monkeys increasingly vulnerable to medical testing. Given that they're primates, and that their brains and bodies are so closely related to our own, they're the logical substitute. But it's for these same reasons that they shouldn't be used in the first place.
Making matters worse, researchers are now actively trying to humanize monkeys by using gene-editing technologies, specifically the CRISPR/Cas9 system. In the latest "breakthrough," Chinese researchers successfully produced twin cynomolgus monkeys with two separate mutations, one that helps regulate metabolism, and one involved in healthy immune function.
For the most part, these monkeys are okay (setting aside the fact that they're lab monkeys who will be experimented upon for the rest of their lives). But it's an important proof-of-concept that will result in more advanced precision gene-editing techniques. Eventually, researchers will be able to create monkeys with more serious conditions. More serious human conditions — like autism, schizophrenia, Alzheimer's, and severe immune dysfunction.
"We need some non-human primate models," said stem-cell biologist Hideyuki Okano in a recent Nature News article. The reason, he says, is that human neuropsychiatric disorders are particularly difficult to replicate in the simple nervous systems of mice.
That's right — monkeys with human neuropsychiatric disorders.

Where's the Ethics?

Speaking of that Nature News article — and I'm not trying to pick on them because many science journals tend to gloss over the ethical aspects of this sort of research — their coverage of this news was utterly distasteful, to say the least. Here's how they packaged it:

Awww, so adorable. Let's gush over how cute they are, but then talk about how psychologically deranged we're going to make them.
Thankfully, this breakthrough comes at a time when it's becoming (slightly) more difficult for scientists to experiment on monkeys. Back in 2012, United Airlines announced that it would stop transporting research monkeys — eliminating the last North American air carrier still available to primate researchers. Moreover, there are other options for scientists when it comes to research. 
In closing, and in the words of animal rights advocate Peter Singer, "Animals are an end unto themselves because their suffering matters."
Image: jeep2499/Shutterstock.
This article originally appeared at io9. 

February 17, 2014

Why You Should Upload Yourself to a Supercomputer


We're still decades — if not centuries — away from being able to transfer a mind to a supercomputer. It's a fantastic future prospect that makes some people incredibly squeamish. But there are considerable benefits to living a digital life. Here's why you should seriously consider uploading.

As I've pointed out before, uploading is not a given; there are many conceptual, technological, ethical, and security issues to overcome. But for the purposes of this Explainer, we're going to assume that uploads, or digital mind transfers, will eventually be possible — whether it be from the scanning and mapping of a brain, serial brain sectioning, brain imaging, or some unknown process.

Indeed, it's a prospect that's worth talking about. Many credible scientists, philosophers, and futurists believe there's nothing inherently intractable about the process. The human brain — an apparently substrate-independent Turing machine — adheres to the laws of physics in a material universe. Eventually, we'll be able to create a model of it using non-biological stuff — and even convert, or transfer, existing analog brains to digital ones.

So, assuming you'll live long enough to see it — and muster up the courage to make the paradigmatic leap from meatspace to cyberspace — here's what you have to look forward to:

An End to Basic Biological Functions

Once you're living as a stream of 1's and 0's you'll never have to worry about body odor, going to the bathroom, or having to brush your teeth. You won't need to sleep or have sex — unless, of course, you program yourself such that you'll both want and need to do these things (call it a purist aesthetic choice).



At the same time, you won't have to worry about rising cholesterol levels, age-related disorders, and broken bones. But you will have to worry about viruses (though they'll be of a radically different sort), hackers, and unhindered access to processing power.

Radically Extended Life

The end of an organic, biological human life will offer the potential for an indefinitely long one. For many, virtual immortality will be the primary appeal of uploading. So long as the supercomputer in which you reside is kept secure and safe (e.g. by planning an exodus from the solar system when the Sun enters its death throes), you should be able to live until the universe is torn apart in the Big Rip — something that shouldn't happen for another 22 billion years.

Creating Backup Copies

I spoke to futurist John Smart about this one. He's someone who's actually encouraging the development of technologies required for brain preservation and uplift. To that end, he's the Vice President of the Brain Preservation Foundation, a not-for-profit research group working to evaluate — and award — a number of scanning and preservation strategies.



Smart says it's a good idea to create an upload as a backup for your bioself while you're still alive.

"We are really underthinking the value of this," he told io9. "With molecular-scale MRI, which may be possible for large tissue samples in a few decades, and works today for a few cubic nanometers, people may do nondestructive self-scanning (uploading) of their brains while they are alive, mid- to late-21st century."

Smart says that if he had such a backup on file, he would be far more zen about his own biological death.

"I could see whole new philosophical movements opening up around this," he says. "Would you run your upload as an advisor/twin while you are alive? Or just keep him as your backup, to boot up whenever you choose to leave biolife, for whatever personal reasons? I think people will want both choices, and both options will be regularly chosen."

Making Virtually Unlimited Copies of Yourself

Related to the previous idea, we could also create an entire armada of ourselves for any number of purposes.



"The ability to make arbitrary numbers of copies of yourself, to work on tough problems, or try out different personal life choice points, and to reintegrate them later, or not, as you prefer, will be a great new freedom of uploads," says Smart. "This happens already when we argue with ourselves. We are running multiple mindset copies — and we must be careful with that, as it can sometimes lead to dissociative personality disorder when combined with big traumas — but in general, multiple mindsets for people, and multiple instances of self, will probably be a great new capability and freedom."

Smart points to the fictional example of Jamie Madrox, aka Multiple Man, the comic book superhero who can create, and later reabsorb, "dupes" of himself, with all their memories and experiences.

Dramatically Increased Clock Speed

Aside from indefinite lifespans, this may be one of the sweetest aspects of uploading. Living in a supercomputer would be like Neo operating in Bullet Time, or like small animals who perceive the world in slow motion relative to humans. As uploads, we could do more thinking, get more done, and experience more compared to wetware organisms functioning in "real time." And best of all, this will significantly increase the amount of relative time we can have in the Universe before it comes to a grinding halt.



"I think the potential for increased clock speeds is the central reason why uploads are the next natural step for leading edge intelligence on Earth," says Smart. "We seem to be rushing headlong to virtual and physical "inner space."

Radically Reduced Global Footprints

Uploading is also environmentally friendly, something that could help us address our perpetually growing population — especially in consideration of radical life extension at the biological level. In fact, transferring our minds to digital substrate may actually be a matter of necessity. Sure, we'll need powerful supercomputers to run the billions — if not trillions — of individual digital experiences, but the relatively low power requirements and reduced levels of fossil fuel emissions simply can't compare to the burden we impose on the planet with our corporeal civilization.

Intelligence Augmentation

It'll also be easier to enhance our intelligence when we're purely digital. Trying to boost the cognitive power of a biological brain is prohibitively difficult and dangerous. A digital mind, on the other hand, would be flexible, robust, and easy to repair. Augmented virtual minds could have higher IQ-type intelligence, enhanced memory, and increased attention spans. We'll need to be very careful about going down this path, however, as it could lead to an out-of-control transcending upload — or even insanity.

Designer Psychologies

Uploads will also enable us to engineer and assume any number of alternative psychological modalities. Human experience is currently dominated by the evolutionary default we call neurotypicality, though outliers exist along the autistic spectrum and other so-called psychological "disorders." Customized cognitive processing frameworks would allow uploaded individuals to selectively alter the specific and unique ways in which they absorb, analyze, and perceive the world, allowing for variation in subjectivity, social engagement, aesthetics, and biases. These frameworks could also be swapped on the fly depending on the context, or simply to try out what it feels like to be another person.

Enhanced Emotion Control

Somewhat related to the last one, uploaded individuals will also be able to monitor, regulate, and choose their subjective well-being and emotional state, including their level of happiness.



Uploads could default to the normal spectrum of human emotion, or choose to operate within a predefined band of emotional variability — including, more speculatively, the introduction of new emotions altogether. Safety mechanisms could be built in to prevent a person from spiraling into a state of debilitating depression — or a state of perpetual bliss, unless that's precisely what the upload is seeking.

A Better Hive Mind

Linking biological minds to create a kind of technologically enabled telepathy, or techlepathy, is probably possible. But as I've pointed out before, it'll be exceptionally difficult and messy. A fundamental problem will be translating signals, or thoughts, in a sensible way such that each person in the link-up has the same mental representation for a given object or concept. This translation problem could be overcome by developing standard brain-to-brain communication protocols, or by developing innate translation software. And of course, because all the minds are in the same computer, establishing communication links will be a breeze.

Toying With Alternative Physics

Quite obviously, uploads will be able to live in any number of virtual reality environments. These digital worlds will be like souped-up and fully immersive versions of Second Life or World of Warcraft. But why limit ourselves to the physics of the Known Universe when we can tweak it any number of ways? Uploads could add or take away physical dimensions, lower the effect of gravity, increase the speed of light, and alter the effects of electromagnetism. All bets are off in terms of what's possible and the kind of experiences that could be had. By comparison, life in the analog world will seem painfully limited and constrained.

Downloading to an External Body

Now, just because you've uploaded yourself to a supercomputer doesn't mean you have to stay there. Individuals will always have the option of downloading themselves into a robotic or cyborg body, even if it's just temporary. But as portrayed in Greg Egan's scifi classic Diaspora, these ventures outside the home supercomputer will come with a major drawback — one that's closely tied to the clock speed issue: every moment a person spends in the real, analog world will be equivalent to months or even years in the virtual world. Consequently, you'll need to be careful about how much time you spend off the grid.

Interstellar Space Travel

As futurist Giulio Prisco has noted, it probably makes most sense to send uploaded astronauts on interstellar missions. He writes:

The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on board and the need to travel at extremely high speeds to ensure it's done within a human lifetime. One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars — their "software" — uploaded to advanced circuitry, augmented by AI subsystems in the starship's processing system... An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.

Tron Legacy concept art by David Levy.

This article originally appeared at io9.

Can we build an artificial superintelligence that won't kill us?


At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? We spoke to an expert to find out.

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI) — a group that's dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. Recently, Muehlhauser coauthored a paper with the Future of Humanity Institute's Nick Bostrom on the need to develop friendly AI.
io9: How did you come to be aware of the friendliness problem as it relates to artificial superintelligence (ASI)?
Muehlhauser: Sometime in mid-2010 I stumbled across a 1965 paper by I.J. Good, who worked with Alan Turing during World War II to decipher German codes. One paragraph in particular stood out:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make.
I didn't read science fiction, and I barely knew what "transhumanism" was, but I immediately realized that Good's conclusion followed directly from things I already believed, for example that intelligence is a product of cognitive algorithms, not magic. I pretty quickly realized that the intelligence explosion would be the most important event in human history, and that the most important thing I could do would be to help ensure that the intelligence explosion has a positive rather than negative impact — that is, that we end up with a "Friendly" superintelligence rather than an unfriendly or indifferent superintelligence.
Initially, I assumed that the most important challenge of the 21st century would have hundreds of millions of dollars in research funding, and that there wouldn't be much value I could contribute on the margin. But in the next few months I learned, to my shock and horror, that fewer than five people in the entire world had devoted themselves full-time to studying the problem, and they had almost no funding. So in April 2011 I quit my network administration job in Los Angeles and began an internship with MIRI, to learn how I might be able to help. It turned out the answer was "run MIRI," and I was appointed MIRI's CEO in November 2011.
Spike Jonze's latest film, Her, has people buzzing about artificial intelligence. What can you tell us about the portrayal of AI in that movie and how it would compare to artificial superintelligence?
Her is a fantastic film, but its portrayal of AI is set up to tell a good story, not to be accurate. The director, Spike Jonze, didn't consult with computer scientists when preparing the screenplay, and this will be obvious to any computer scientists who watch the film.
Without spoiling too much, I'll just say that the AIs in Her, if they existed in the real world, would entirely transform the global economy. But in Her, the introduction of smarter-than-human, self-improving AIs hardly upsets the status quo at all. As economist Robin Hanson commented on Facebook:
Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes to walk around, and gives them more welcome fresh air. The boat never sinks, and no one ever fears it might. That's how I feel watching the movie Her.
AI theorists like yourself warn that we may eventually lose control of our machines, a potentially sudden and rapid transition driven by two factors, computing overhang and recursive self-improvement. Can you explain each of these?
It's extremely difficult to control the behavior of a goal-directed agent that is vastly smarter than you are. This problem is much harder than a normal (human-human) principal-agent problem.
If we got to tinker with different control methods, and make lots of mistakes, and learn from those mistakes, maybe we could figure out how to control a self-improving AI with 50 years of research. Unfortunately, it looks like we may not have the opportunity to make so many mistakes, because the transition from human control of the planet to machine control might be surprisingly rapid. Two reasons for this are computing overhang and recursive self-improvement.
In our paper, my coauthor (Oxford's Nick Bostrom) and I describe computing overhang this way:
Suppose that computing power continues to double according to Moore's law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a 'computing overhang': tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
Another reason for a rapid transition from human control to machine control is the one first described by I.J. Good, what we now call recursive self-improvement. An AI with general intelligence would correctly realize that it will be better able to achieve its goals — whatever its goals are — if it does original AI research to improve its own capabilities. That is, self-improvement is a "convergent instrumental value" of almost any "final" values an agent might have, which is part of why self-improvement books and blogs are so popular. Thus, Bostrom and I write:
When we build an AI that is as skilled as we are at the task of designing AI systems, we may thereby initiate a rapid, AI-motivated cascade of self-improvement cycles. Now when the AI improves itself, it improves the intelligence that does the improving, quickly leaving the human level of intelligence far behind.
Some people believe that we'll have nothing to fear from advanced AI out of a conviction that something so astoundingly smart couldn't possibly be stupid or mean enough to destroy us. What do you say to people who believe an ASI will be naturally more moral than we are?
In AI, the system's capability is roughly "orthogonal" to its goals. That is, you can build a really smart system aimed at increasing Shell's stock price, or a really smart system aimed at filtering spam, or a really smart system aimed at maximizing the number of paperclips produced at a factory. As you improve the intelligence of the system, or as it improves its own intelligence, its goals don't particularly change — rather, it simply gets better at achieving whatever its goals already are.
There are some caveats and subtle exceptions to this general rule, and some of them are discussed in Bostrom (2012). But the main point is that we shouldn't stake the fate of the planet on a risky bet that all mind designs we might create eventually converge on the same moral values, as their capabilities increase. Instead, we should fund lots of really smart people to think hard about the general challenge of superintelligence control, and see what kinds of safety guarantees we can get with different kinds of designs.
Why can't we just isolate potentially dangerous AIs and keep them away from the Internet?
Such "AI boxing" methods will be important during the development phase of Friendly AI, but it's not a full solution to the problem for two reasons.
First, even if the leading AI project is smart enough to carefully box their AI, the next five AI projects won't necessarily do the same. There will be strong incentives to let one's AI out of the box, if you think it might (e.g.) play the stock market for you and make you billions of dollars. Whatever you built the AI to do, it'll be better able to do it for you if you let it out of the box. Besides, if you don't let it out of the box, the next team might, and their design might be even more dangerous.
Second, AI boxing pits human intelligence against superhuman intelligence, and we can't expect the former to prevail indefinitely. Humans can be manipulated, boxes can be escaped via surprising methods, etc. There's a nice chapter on this subject in Bostrom's forthcoming book from Oxford University Press, titled Superintelligence: Paths, Dangers, Strategies.
Still, AI boxing is worth researching, and should give us a higher chance of success even if it isn't an ultimate solution to the superintelligence control problem.
It has been said that an AI 'does not love you, nor does it hate you, but you are made of atoms it can use for something else.' The trick, therefore, will be to program each and every ASI such that they're "friendly" or adhere to human, or humane, values. But given our poor track record, what are some potential risks of insisting that superhuman machines be made to share all of our current values?
I really hope we can do better than programming an AI to share (some aggregation of) current human values. I shudder to think what would have happened if the Ancient Greeks had invented machine superintelligence, and given it some version of their most progressive moral values of the time. I get a similar shudder when I think of programming current human values into a machine superintelligence.
So what we probably want is not a direct specification of values, but rather some algorithm for what's called indirect normativity. Rather than programming the AI with some list of ultimate values we're currently fond of, we instead program the AI with some process for learning what ultimate values it should have, before it starts reshaping the world according to those values. There are several abstract proposals for how we might do this, but they're at an early stage of development and need a lot more work.
In conjunction with the Future of Humanity Institute at Oxford, MIRI is actively working to address the unfriendliness problem — even before we know anything about the design of future AIs. What's your current strategy?
Yes, as far as I know, only MIRI and FHI are funding full-time researchers devoted to the superintelligence control problem. There's a new group at Cambridge University called CSER that might hire additional researchers to work on the problem as soon as they get funding, and they've gathered some really top-notch people as advisors — including Stephen Hawking and George Church.
FHI's strategy thus far has been to assemble a map of the problem and our strategic situation with respect to it, and to try to get more researchers involved, e.g. via the AGI Impacts conference in 2012.
MIRI works closely with FHI and has also done this kind of "strategic analysis" research, but we recently decided to specialize in Friendly AI math research, primarily via math research workshops tackling various sub-problems of Friendly AI theory. To get a sense of what Friendly AI math research currently looks like, see these results from our latest workshop, and see my post From Philosophy to Math to Engineering.
What's the current thinking on how we can develop an ASI that's both human-friendly and incapable of modifying its core values?
I suspect the solution to the "value loading problem" (how do we get desirable goals into the AI?) will be something that qualifies as an indirect normativity approach, but even that is hard to tell at this early stage.
As for making sure the system keeps those desirable goals even as it modifies its core algorithms for improved performance — well, we're playing with toy models of that problem via the "tiling agents" family of formalisms, because toy models are a common method for making research progress on poorly-understood problems, but the toy models are very far from how a real AI would work.
How optimistic are you that we can solve this problem? And how could we benefit from a safe and friendly ASI that's not hell bent on destroying us?
The benefits of Friendly AI would be literally astronomical. It's hard to say how something much smarter than me would optimize the world if it were guided by values more advanced than my own, but I think an image that evokes the appropriate kind of sentiment would be: self-replicating spacecraft planting happy, safe, flourishing civilizations throughout our galactic supercluster — that kind of thing.
Superintelligence experts — meaning, those who research the problem full-time, and are familiar with the accumulated evidence and arguments for and against various positions on the subject — have differing predictions about whether humanity is likely to solve the problem.
As for myself, I'm pretty pessimistic. The superintelligence control problem looks much harder to solve than, say, the global risks from global warming or synthetic biology, and I don't think our civilization's competence and rationality are improving quickly enough for us to be able to solve the problem before the first machine superintelligence is built. But this hypothesis, too, is one that can be studied to improve our predictions about it. We took some initial steps in studying this question of "civilization adequacy" here.
Top: Andrea Danti/Shutterstock.
This article originally appeared at io9.