
Would 'artificial superintelligence' lead to the end of life on Earth? It's not a stupid question

  

Category:  Health, Science & Technology

Via:  john-russell  •  2 years ago  •  23 comments

By:   Emile P. Torres (Alternet.org)


S E E D E D   C O N T E N T



Photo by Alex Knight on Unsplash (white robot near brown wall)

Emile P. Torres, Salon  •  August 07, 2022

The activist group Extinction Rebellion has been remarkably successful at raising public awareness of the ecological and climate crises, especially given that it was established only in 2018.

The dreadful truth, however, is that climate change isn't the only global catastrophe that humanity confronts this century. Synthetic biology could make it possible to create designer pathogens far more lethal than COVID-19, nuclear weapons continue to cast a dark shadow on global civilization and advanced nanotechnology could trigger arms races, destabilize societies and "enable powerful new types of weaponry."

Yet another serious threat comes from artificial intelligence, or AI. In the near term, AI systems like those sold by IBM, Microsoft, Amazon and other tech giants could exacerbate inequality due to gender and racial biases. According to a paper co-authored by Timnit Gebru, the former Google employee who was fired "after criticizing its approach to minority hiring and the biases built into today's artificial intelligence systems," facial recognition software is "less accurate at identifying women and people of color, which means its use can end up discriminating against them." These are very real problems that affect large groups of people and require urgent attention.

But there are longer-term risks as well, arising from the possibility of algorithms that exceed human levels of general intelligence. An artificial superintelligence, or ASI, would by definition be smarter than any possible human being in every cognitive domain of interest, such as abstract reasoning, working memory, processing speed and so on. Although there is no obvious leap from current "deep-learning" algorithms to ASI, there is a good case to make that the creation of an ASI is not a matter of if but when: Sooner or later, scientists will figure out how to build an ASI, or figure out how to build an AI system that can build an ASI, perhaps by modifying its own code.

When we do this, it will be the most significant event in human history: Suddenly, for the first time, humanity will be joined by a problem-solving agent more clever than itself. What would happen? Would paradise ensue? Or would the ASI promptly destroy us?

Even a low probability that machine superintelligence leads to "existential catastrophe" presents an unacceptable risk — not just for humans but for our entire planet.

I believe we should take the arguments for why "a plausible default outcome of the creation of machine superintelligence is existential catastrophe" very seriously. Even if the probability of such arguments being correct is low, a risk is standardly defined as the probability of an event multiplied by its consequences. And since the consequences of total annihilation would be enormous, even a low probability (multiplied by this consequence) would yield a sky-high risk.
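
To make that calculation concrete, here is a minimal illustrative sketch in Python. The probabilities and loss values below are invented purely for this example (they are not estimates from the article); the point is only that a tiny probability attached to an astronomically large loss can still dominate a likely but modest harm.

```python
# Illustrative only: the probabilities and loss values are invented for the
# example, not estimates taken from the article.

def expected_risk(probability: float, consequence: float) -> float:
    """Risk as standardly defined in the article: probability times consequence."""
    return probability * consequence

# A likely but modest harm vs. an unlikely but civilization-ending one.
modest_harm = expected_risk(probability=0.5, consequence=1_000)       # 500
catastrophe = expected_risk(probability=0.0001, consequence=10**10)   # 1,000,000

print(f"modest harm: {modest_harm:,.0f}")
print(f"catastrophe: {catastrophe:,.0f}")
```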

Even more, the very same arguments for why an ASI could cause the extinction of our species also lead to the conclusion that it could obliterate the entire biosphere. Fundamentally, the risk posed by artificial superintelligence is an environmental risk. It is not just an issue of whether humanity survives or not, but an environmental issue that concerns all earthly life, which is why I have been calling for an Extinction Rebellion-like movement to form around the dangers of ASI — a threat that, like climate change, could potentially harm every creature on the planet.

Although no one knows for sure when we will succeed in building an ASI, one survey of experts found a 50 percent likelihood of "human-level machine intelligence" by 2040 and a 90 percent likelihood by 2075. A human-level machine intelligence, or artificial general intelligence, abbreviated AGI, is the stepping-stone to ASI, and the step from one to the other might be very small, since any sufficiently intelligent system will quickly realize that improving its own problem-solving abilities will help it achieve a wide range of "final goals," or the goals that it ultimately "wants" to achieve (in the same sense that spellcheck "wants" to correct misspelled words).

Furthermore, one study from 2020 reports that at least 72 research projects around the world are currently, and explicitly, working to create an AGI. Some of these projects are just as explicit that they do not take seriously the potential threats posed by ASI. For example, a company called 2AI, which runs the Victor project, writes on its website:

There is a lot of talk lately about how dangerous it would be to unleash real AI on the world. A program that thinks for itself might become hell-bent on self preservation, and in its wisdom may conclude that the best way to save itself is to destroy civilization as we know it. Will it flood the internet with viruses and erase our data? Will it crash global financial markets and empty our bank accounts? Will it create robots that enslave all of humanity? Will it trigger global thermonuclear war? … We think this is all crazy talk.

But is it crazy talk? In my view, the answer is no. The arguments for why ASI could devastate the biosphere and destroy humanity, which are primarily philosophical, are complicated, with many moving parts. But the central conclusion is that by far the greatest concern is the unintended consequences of the ASI striving to achieve its final goals. Many technologies have unintended consequences, and indeed anthropogenic climate change is an unintended consequence of large numbers of people burning fossil fuels. (Initially, the transition from using horses to automobiles powered by internal combustion engines was hailed as a solution to the problem of urban pollution.)

Most new technologies have unintended consequences, and ASI would be the most powerful technology ever created, so we should expect its potential unintended consequences to be massively disruptive.

An ASI would be the most powerful technology ever created, and for this reason we should expect its potential unintended consequences to be even more disruptive than those of past technologies. Furthermore, unlike all past technologies, the ASI would be a fully autonomous agent in its own right, whose actions are determined by a superhuman capacity to secure effective means to its ends, along with an ability to process information many orders of magnitude faster than we can.

Consider that an ASI "thinking" one million times faster than us would see the world unfold in super-duper-slow motion. A single minute for us would correspond to roughly two years for it. To put this in perspective, it takes the average U.S. student 8.2 years to earn a PhD, which amounts to only 4.3 minutes in ASI-time. Over the period it takes a human to get a PhD, the ASI could have earned roughly 1,002,306 PhDs.
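
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch. It assumes the hypothetical one-million-times speed-up and the 8.2-year PhD figure quoted above; the article's more precise count of 1,002,306 presumably reflects a slightly different day count.

```python
# Back-of-the-envelope check of the "ASI-time" arithmetic above.
# Assumes the hypothetical 1,000,000x speed-up described in the article.

SPEEDUP = 1_000_000                    # subjective minutes per real minute (assumed)
MINUTES_PER_YEAR = 365.25 * 24 * 60    # about 525,960 minutes in a year
PHD_YEARS = 8.2                        # average U.S. time to earn a PhD, per the article

# One real minute, experienced a million times faster, in subjective years (~1.9):
print(f"1 real minute is roughly {SPEEDUP / MINUTES_PER_YEAR:.1f} subjective years")

# An 8.2-year PhD compressed into ASI-time, in real minutes (~4.3):
print(f"An 8.2-year PhD is roughly {PHD_YEARS * MINUTES_PER_YEAR / SPEEDUP:.1f} real minutes")

# Subjective years available over 8.2 real years, and how many 8.2-year PhDs fit (~1,000,000):
subjective_years = PHD_YEARS * SPEEDUP
print(f"PhDs possible in 8.2 real years: about {subjective_years / PHD_YEARS:,.0f}")
```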

This is why the idea that we could simply unplug a rogue ASI if it were to behave in unexpected ways is unconvincing: The time it would take to reach for the plug would give the ASI, with its superior ability to problem-solve, ages to figure out how to prevent us from turning it off. Perhaps it quickly connects to the internet, or shuffles around some electrons in its hardware to influence technologies in the vicinity. Who knows? Perhaps we aren't even smart enough to figure out all the ways it might stop us from shutting it down.

But why would it want to stop us from doing this? The idea is simple: If you give an algorithm some task — a final goal — and if that algorithm has general intelligence, as we do, it will, after a moment's reflection, realize that one way it could fail to achieve its goal is by being shut down. Self-preservation, then, is a predictable subgoal that sufficiently intelligent systems will automatically end up with, simply by reasoning through the ways they could fail.

What, then, if we are unable to stop it? Imagine that we give the ASI the single goal of establishing world peace. What might it do? Perhaps it would immediately launch all the nuclear weapons in the world to destroy the entire biosphere, reasoning — logically, you'd have to say — that if there is no more biosphere there will be no more humans, and if there are no more humans then there can be no more war — and what we told it to do was precisely that, even though what we intended it to do was otherwise.

Fortunately, there's an easy fix: Simply add in a restriction to the ASI's goal system that says, "Don't establish world peace by obliterating all life on the planet." Now what would it do? Well, how else might a literal-minded agent bring about world peace? Maybe it would place every human being in suspended animation, or lobotomize us all, or use invasive mind-control technologies to control our behaviors.

Again, there's an easy fix: Simply add in more restrictions to the ASI's goal system. The point of this exercise, however, is that by using our merely human-level capacities, many of us can poke holes in just about any proposed set of restrictions, each time resulting in more and more restrictions having to be added. And we can keep this going indefinitely, with no end in sight.

Hence, given the seeming interminability of this exercise, the disheartening question arises: How can we ever be sure that we've come up with a complete, exhaustive list of goals and restrictions that guarantee the ASI won't inadvertently do something that destroys us and the environment? The ASI thinks a million times faster than us. It could quickly gain access to, and control over, the economy, laboratory equipment and military technologies. And for any final goal that we give it, the ASI will automatically come to value self-preservation as a crucial instrumental subgoal.

How can we come up with a list of goals and restrictions that guarantee the ASI won't do something that destroys us and the environment? We can't.

Yet self-preservation isn't the only such subgoal; resource acquisition is another. To do stuff, to make things happen, one needs resources — and usually, the more resources one has, the better. The problem is that without giving the ASI all the right restrictions, there are a seemingly endless number of ways it might acquire resources that would cause us, or our fellow creatures, harm. Program it to cure cancer: It immediately converts the entire planet into cancer research labs. Program it to solve the Riemann hypothesis: It immediately converts the entire planet into a giant computer. Program it to maximize the number of paperclips in the universe (an intentionally silly example): It immediately converts everything it can into paperclips, launches spaceships, builds factories on other planets — and perhaps, in the process, if there are other life forms in the universe, destroys those creatures, too.

It cannot be overemphasized: an ASI would be an extremely powerful technology. And power equals danger. Although Elon Musk is very often wrong, he was right when he tweeted that advanced artificial intelligence could be "more dangerous than nukes." The dangers posed by this technology, though, would not be limited to humanity; they would imperil the whole environment.

This is why we need, right now, in the streets, lobbying the government, sounding the alarm, an Extinction Rebellion-like movement focused on ASI. That's why I am in the process of launching the Campaign Against Advanced AI, which will strive to educate the public about the immense risks of ASI and convince our political leaders that they need to take this threat, alongside climate change, very seriously.

A movement of this sort could embrace one of two strategies. A "weak" strategy would be to convince governments — all governments around the world — to impose strict regulations on research projects working to create AGI. Companies like 2AI should not be permitted to take an insouciant attitude toward a potentially transformative technology like ASI.

A "strong" strategy would aim to halt all ongoing research aimed at creating AGI. In his 2000 article "Why the Future Doesn't Need Us," Bill Joy, cofounder of Sun Microsystems, argued that some domains of scientific knowledge are simply too dangerous for us to explore. Hence, he contended, we should impose moratoriums on these fields, doing everything we can to prevent the relevant knowledge from being obtained. Not all knowledge is good. Some knowledge poses "information hazards" — and once the knowledge genie is out of the lamp, it cannot be put back in.

Although I am most sympathetic to the strong strategy, I am not committed to it. More than anything, it should be underlined that almost no sustained, systematic research has been conducted on how best to prevent certain technologies from being developed. One goal of the Campaign Against Advanced AI would be to fund such research, to figure out responsible, ethical means of preventing an ASI catastrophe by putting the brakes on current research. We must make sure that superintelligent algorithms are environmentally safe.

If experts are correct, an ASI could make its debut in our lifetimes, or the lifetimes of our children. But even if ASI is far away — or even if it turns out to be impossible to create, which is a possibility — we don't know that for sure, and hence the risk posed by ASI may still be enormous, perhaps comparable to or exceeding the risks of climate change (which are huge). This is why we need to rebel — not later, but now.


 
JohnRussell
Professor Principal
1  seeder  JohnRussell    2 years ago

What is the benefit to humanity of super artificial intelligence?  A million times faster thinking than human beings?  This just sounds like begging for trouble. 

 
 
 
Drinker of the Wry
Senior Expert
1.1  Drinker of the Wry  replied to  JohnRussell @1    2 years ago

Fear of the unknown has always been a human trait.  Will it prevent the next wave of technological and human progress?

 
 
 
JohnRussell
Professor Principal
1.1.1  seeder  JohnRussell  replied to  Drinker of the Wry @1.1    2 years ago

Imagine that instead of being created by human beings, this hyper-algorithm-driven entity of superintelligence came from another planet and arrived on Earth able to think a million times faster than us. We would be terrified, would we not? 

 
 
 
Thomas
Masters Guide
1.1.2  Thomas  replied to  JohnRussell @1.1.1    2 years ago

I don't see why you are so scared, John.  If humankind can create it, then humankind can control it.  

Just look at how well we relate to money 

 
 
 
Gordy327
Professor Guide
1.2  Gordy327  replied to  JohnRussell @1    2 years ago

And in unrelated news, the military is developing a new advanced computer system they dub 'Skynet.' 😉

 
 
 
Perrie Halpern R.A.
Professor Expert
1.2.1  Perrie Halpern R.A.  replied to  Gordy327 @1.2    2 years ago

I think that is a smart use of tech, Gordy. 

 
 
 
Drinker of the Wry
Senior Expert
1.2.2  Drinker of the Wry  replied to  Gordy327 @1.2    2 years ago

China's PLA will think of a different name, maybe Long Wang, Dragon King.

 
 
 
TᵢG
Professor Principal
1.3  TᵢG  replied to  JohnRussell @1    2 years ago
What is the benefit to humanity of super artificial intelligence? 

For one, cancer research.   If there is a cure for a form of cancer, then how valuable would it be for a superintelligent cyber researcher to be poring through tons of biological data, medical records, etc., armed with the expertise of a team of seasoned PhD researchers but doing in short order work that would take them centuries?

 
 
 
JohnRussell
Professor Principal
1.3.1  seeder  JohnRussell  replied to  TᵢG @1.3    2 years ago

I think we will solve cancer without going to machines a million times faster than human thought. 

 
 
 
TᵢG
Professor Principal
1.3.2  TᵢG  replied to  JohnRussell @1.3.1    2 years ago

I hope so.   Note however that cancer is a category of diseases, not just one.   So that is quite a tall order.   

Regardless, you asked for the benefit and I gave you one example.   

Here is another benefit.   An ASI can be put to work to solve complex engineering problems like figuring out the optimal use of materials and structure with minimal cost and time for a sophisticated space station design (or, if you prefer, a deep-sea exploration facility).

Or this:  use ASI to pore through all available human knowledge (books, papers, etc.) and produce the means for human beings to query world knowledge.   Imagine, if you will, a super Google that truly understands your questions at a semantic level, with world context, and can answer them from a corpus with a world scope covering all recorded history (as much as is available ... knowledge that is trapped in books that have not been scanned is obviously out of reach).

 
 
 
Thomas
Masters Guide
1.3.3  Thomas  replied to  TᵢG @1.3.2    2 years ago

Ahhhh, Hari Seldon and those damn Foundation people..... 

 
 
 
Perrie Halpern R.A.
Professor Expert
2  Perrie Halpern R.A.    2 years ago

I think we have to be careful about how we proceed with AI. While supercomputers are wonderful devices for mankind, as this article (and many sci-fi writers) points out, there is a danger when the creation is smarter than the designer.

 
 
 
JohnRussell
Professor Principal
2.1  seeder  JohnRussell  replied to  Perrie Halpern R.A. @2    2 years ago

I think there is a good probability that the future will be dystopian at some point, largely because of over reliance on things like artificial intelligence. I think it is almost virtually certain it will be misused. 

 
 
 
Drinker of the Wry
Senior Expert
2.1.1  Drinker of the Wry  replied to  JohnRussell @2.1    2 years ago
I think it is almost virtually certain it will be misused. 

Good pun.

 
 
 
Gordy327
Professor Guide
2.2  Gordy327  replied to  Perrie Halpern R.A. @2    2 years ago
I think we have to be careful about how we proceed with AI. While supercomputers are a wonderful device for mankind, as this article points out (and many sci-fi writers), there is a danger when the creation is smarter than the designer.

The lesson to be had here is, always make sure there is an 'off' switch to your would-be superintelligent, humanity-killing AI.

 
 
 
JohnRussell
Professor Principal
2.2.1  seeder  JohnRussell  replied to  Gordy327 @2.2    2 years ago

Would someone really design an artificial super intelligence that could be disabled by an off switch? I don't see it. If anything there will be many redundancies that will allow the AI to survive sabotage and if necessary power itself. 

 
 
 
Gordy327
Professor Guide
2.2.2  Gordy327  replied to  JohnRussell @2.2.1    2 years ago
Would someone really design an artificial super intelligence that could be disabled by an off switch?

If they're smart, they would. Or at least have some kind of fail-safe built in.

I don't see it. If anything there will be many redundancies that will allow the AI to survive sabotage and if necessary power itself. 

Depends on how it's designed I suppose. Maybe a fail safe cutoff from a power source, just in case.

 
 
 
TᵢG
Professor Principal
2.2.3  TᵢG  replied to  JohnRussell @2.2.1    2 years ago

Yes.   Not literally a switch like a light switch of course.   But every good design provides a means to shut it down.

 
 
 
Perrie Halpern R.A.
Professor Expert
2.2.4  Perrie Halpern R.A.  replied to  Gordy327 @2.2    2 years ago
The lesson to be had here is, always make sure there is an 'off' switch to your would-be superintelligent, humanity-killing AI.

I would have to agree, but if our AI is mobile that could prove to be harder than anticipated. 

 
 
 
Gordy327
Professor Guide
2.2.5  Gordy327  replied to  Perrie Halpern R.A. @2.2.4    2 years ago
but if our AI is mobile that could prove to be harder than anticipated. 

The internet becomes sentient. Actually, that was something of the plot to the classic PS2 game, Metal Gear Solid 2: Sons of Liberty.

 
 
 
TᵢG
Professor Principal
4  TᵢG    2 years ago
Although there is no obvious leap from current "deep-learning" algorithms to ASI, there is a good case to make that the creation of an ASI is not a matter of if but when: Sooner or later, scientists will figure out how to build an ASI, or figure out how to build an AI system that can build an ASI, perhaps by modifying its own code.

Agreed.  If human beings do not destroy ourselves before then, computer science will eventually produce ASI.    That does not mean that the ASI would be out of control, but if ASI technology exists then it is possible to remove the controls (what is secure can be hacked).   Expect ASI to be a highly secured resource (like a nuclear plant) rather than armies of super intelligent robots or a sci-fi entity that crawls through cyberspace infecting hardware throughout.

I do not expect that any of us will live to see ASI.   We are very far away from getting close to human-level intelligence and currently nobody knows how that would even be accomplished.   Modern AI is largely still brute force (albeit extremely sophisticated and clever).   Machine learning is indeed simply an algorithm that is designed to iterate over incomprehensible amounts of data and slowly detect patterns that seem to meet the objectives it was given.   ASI would need to go well beyond sophisticated pattern detection into the kind of thought we call 'critical thinking', 'judgement', 'intuition', 'insight', etc. and do so dynamically (without being pre-programmed or even directed/tuned as we see with modern AI).   No worries, it is unlikely any of us will live long enough to see that.

If experts are correct, an ASI could make its debut in our lifetimes, or the lifetimes of our children. 

Who are these experts, what did they actually say, and how did they back up what they said?

 
 
 
Tacos!
Professor Guide
5  Tacos!    2 years ago

The AI would also have to have the tools to take action. Think about something like the movie War Games. I can see creating AI to monitor the global situation and national security. But why would you make autonomous AI with the power to launch missiles? I know there is an in-story explanation, but it’s really not necessary. Why would you also give it the power to resist reprogramming or the shutting down of its systems? Same with the Skynet idea. 

Now if you’re going to go create AI and give it the tools to do everything necessary to destroy the human race, then yeah, I guess we’re screwed.

 
 
 
Thomas
Masters Guide
6  Thomas    2 years ago

We already know that the answer to life, the universe and everything is 42 thanks to Douglas Adams.... 

To bring about this calamitous and dystopian future ASI, the "mind" would necessarily need a "body." Whoops, power supply went down. Damn, tripped a breaker. Hey, could you please scratch under my RAM?

 
 
