
Leading experts warn of a risk of extinction from AI

  
Via:  Buzz of the Orient  •  11 months ago  •  18 comments

By: Vanessa Romo, NPR


Great NON-POLITICAL Articles


S E E D E D   C O N T E N T


Leading experts warn of a risk of extinction from AI




The welcome screen for the OpenAI ChatGPT app is displayed on a laptop screen in February in London.    Leon Neal/Getty Images

AI experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than us, and it is time to impose limits to ensure they don't take control over humans or destroy the world.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a group of scientists and tech industry leaders said in a  statement  that was posted on the Center for AI Safety's website.

Sam Altman, CEO of OpenAI, the Microsoft-backed AI research lab behind ChatGPT, and Geoffrey Hinton, the so-called godfather of AI who recently left Google, were among the hundreds of leading figures who signed the we're-on-the-brink-of-crisis statement.

The call for guardrails on AI systems has intensified in recent months as public and profit-driven enterprises are embracing new generations of programs.

In a separate statement published in March and now signed by more than 30,000 people, tech executives and researchers called for a six-month pause on training of AI systems more powerful than GPT-4, the latest version of the ChatGPT chatbot.

An  open letter  warned: "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."

In a recent  interview with NPR , Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a  Twitter thread  that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."

He added that society should endeavor to address all of the risks posed by AI simultaneously. "Societies can manage multiple risks at once; it's not 'either/or' but 'yes/and,'" he said. "From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well."

NPR's Bobby Allyn contributed to this story.



 
Buzz of the Orient
Professor Expert
1  seeder  Buzz of the Orient    11 months ago

Comments are subject to this group's RED BOX RULES which can be accessed by clicking on this link -> or by clicking on this group's avatar at the top right of the article page above, either of which will open this group's home page.

 
 
 
Buzz of the Orient
Professor Expert
2  seeder  Buzz of the Orient    11 months ago

You need not be a science fiction fan to be afraid of what a rogue programmer or scientist could create and the result of it.  We would have to have been born yesterday to think that "regulations" are going to prevent a Frankenstein from creating a monster.

 
 
 
GregTx
PhD Guide
2.1  GregTx  replied to  Buzz of the Orient @2    11 months ago

Or a rogue nation...

 
 
 
Buzz of the Orient
Professor Expert
2.1.1  seeder  Buzz of the Orient  replied to  GregTx @2.1    11 months ago

Quite true, but it isn't going to require a nation; any individual anywhere in the world could do it.  After all, has the USA been able to stop gun violence?

 
 
 
TᵢG
Professor Principal
3  TᵢG    11 months ago
In a recent  interview with NPR , Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated. "I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Utter nonsense.  This guy has an ulterior motive.   He knows better than to suggest that artificial general intelligence superior to human intelligence is a mere five years away.   We do not have any idea how our conscious minds work.  Current AI is very impressive but is still mechanical and based heavily on human beings designing and tuning the model.

Five years away is clearly fear mongering.   I just do not know what motivates Hinton to fear monger.

 
 
 
Ronin2
Professor Quiet
3.1  Ronin2  replied to  TᵢG @3    11 months ago

If hackers can make a virus that self-replicates and spreads, then AI could become smart enough to do the same. The programming language isn't even that difficult. It doesn't need to be smarter than a human; it just needs to be everywhere that uses electronics.

Imagine if a worldwide AI simply decided to shut everything down. Nothing more than that. The sheer amount of chaos that would cause. No electricity; no heating; no communications; no transportation network. People loot just for the hell of it now. Imagine no working security systems; no way to make sure police, fire, and medical personnel get to where they are needed. How long would it take for humans to end themselves?

 
 
 
evilone
Professor Guide
3.1.1  evilone  replied to  Ronin2 @3.1    11 months ago
How long would it take for humans to end themselves?

It would only take days for people to adapt. Things might go tribal in areas, but we are harder to kill than cockroaches. 

 
 
 
TᵢG
Professor Principal
3.1.2  TᵢG  replied to  Ronin2 @3.1    11 months ago
If hackers can make a virus that self replicates and spreads; then AI could become smart enough to do the same.

No.   The AI would need to be built with the capability.  In that case, it is no different from any other software.   That is, we can today build software that will (based on conditions) insert a virus.   The thinking, however, is human ... not artificial.   AI currently does not have the capability to understand the real world and to creatively develop a virus with malicious intent.    Nowhere close.   Not a chance.

Imagine if a worldwide AI simply decided to shut everything down.

Easy to imagine, but it is science fiction.   This is something that could happen given a sufficiently uncontrolled artificial general intelligence but this is nowhere close to the capabilities today.     

 
 
 
bccrane
Freshman Silent
3.2  bccrane  replied to  TᵢG @3    11 months ago
I just do not know what motivates Hinton to fear monger.

Making these claims gets him money-making appearances and keeps him relevant. And does he happen to have any upcoming book deals?

Also, is he interested in being named the head of a governmental program to regulate that which he helped to create?  Wow, think of the money-making possibilities; he and any heirs would be set for life.

 
 
 
evilone
Professor Guide
4  evilone    11 months ago

So what are these people talking about?

Artificial Intelligence (AI) as we currently know it is not actually intelligent. Current AI like ChatGPT is programmed to predict what the next word might be in a search request. It is not programmed to find what someone searches for and therefore largely makes stuff up. What current AI is good for is coding and digital (photo, video and audio) manipulation. Writers, photographers and coders are currently worried about AI intrusions into the workspace and are trying to take legal steps to mitigate it. Companies like Apple, Google and Microsoft are working on AI programs that can solve complex problems. They have the baby, but it isn't even at the crawling stage of development yet.
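The "predict the next word" idea above can be pictured with a toy model. This is a minimal sketch of the general principle (counting which words follow which), not how ChatGPT actually works; real systems use large neural networks, and the example text and function names here are made up for illustration:

```python
# Toy next-word predictor: count which word most often follows each
# word in a training text, then predict by looking up the counts.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count the words that immediately follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" ("the cat" occurs twice)
```

A model like this happily chains plausible-looking words together with no notion of whether the result is true, which is one way to picture why such systems "largely make stuff up."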

What AI cannot do is think. First, we don't understand consciousness, thus it would be difficult, if not impossible, for anyone to launch a program that would result in a fully sentient AI. Second, there is no infrastructure for it to work in. No one is actually trying to program a sentient AI. Not even rogue actors. It's counterproductive.

In the strictest sense of logic there is a possibility, but the probability is so close to zero as to be discarded. There is no logical reason to conclude sentient AI is anywhere close to being a reality in our lifetimes, if ever. 

So again what are these people talking about?

AI is a simple word for Large Language Models: neural networks loosely based on what we know of human brains. These eat through so much data so fast that they "learn" at an exponential rate. The people saying we are 5, 10, 50 years away are working on the theory that the infinitesimal probability I mentioned above also grows exponentially and spontaneous sentience will appear. They say it and it makes for great clicks, but there is no data to support a theory for something that has never happened, isn't being actively worked on and doesn't have the infrastructure to "live".

We have a larger chance of being destroyed by a rogue asteroid than a rogue AI. We have a larger chance of being wiped out by a rogue virus than a rogue AI. We have a greater chance of being wiped out by WW3 nuclear destruction than a rogue AI.

 
 
 
TᵢG
Professor Principal
4.1  TᵢG  replied to  evilone @4    11 months ago
There is no logical reason to conclude sentient AI is anywhere close to being a reality in our lifetimes, if ever. 

Exactly.   Except for "if ever" since I certainly see Artificial General Intelligence with a model of reality to be a possibility albeit well into the future.

 
 
 
evilone
Professor Guide
4.1.1  evilone  replied to  TᵢG @4.1    11 months ago
 Except for "if ever"...

All things considered, at present the concept of sentient AI is more Schrodinger's cat than anything else. I do think the possibility can be greater in some distant future IF current trends continue. One concept that was being worked on (I don't know the current status) was personal assistant AIs. Everyone would have their own personal AI assistant to keep track of and organize their lives. It would use facial recognition to remember who contacts are, too.

One of the concepts bandied about with sentient AI is a sentience of pure logic. Would something sentient run on pure logic or would it also develop emotions? Fear, empathy, love? More philosophical than scientific.

 
 
 
TᵢG
Professor Principal
4.1.2  TᵢG  replied to  evilone @4.1.1    11 months ago

Yes, we are very much in agreement.  The difference, if any, is that I do not see human equivalent intelligence as a hard limit.   I will never see this manifest, but I see no reason why Artificial General Intelligence cannot be achieved in the future and indeed surpass the capabilities of human beings.   

Our brains are incredibly complex (and mostly a mystery to us as to how the mind actually works) but ultimately our brains are bio machines.

 
 
 
evilone
Professor Guide
4.1.3  evilone  replied to  TᵢG @4.1.2    11 months ago

Here is an article on neural network building - 

We’re often reminded of the fact that biological evolution is guided entirely by random mutations in our genetic code. We were not designed to have intelligence — it just sort of  happened . There was no path to follow, and yet we still ended up right where we are today.
 
 
 
Buzz of the Orient
Professor Expert
5  seeder  Buzz of the Orient    11 months ago

Maybe AI will not, at least for the foreseeable future, be the cause of universal destruction, but I believe it already has the capability of causing mass damage and chaos.  If humans are capable of causing disruptions, as they already have done, not much is needed to expand such a situation.

 
 
 
TᵢG
Professor Principal
5.1  TᵢG  replied to  Buzz of the Orient @5    11 months ago
but I believe it already has the capability of causing mass damage and chaos

AI is currently a set of algorithms.  These are sophisticated and impressive, but they are simply algorithms.  Conventional software has far more capability of causing disruptions than AI.  The malicious acts of software (AI or otherwise) stem from human beings.

We should be concerned about the power of software today given the incredible power of extant hardware (storage, processing and communications).  Focusing simply on the AI subset of algorithms is misguided (and wasteful).

 
 
 
Buzz of the Orient
Professor Expert
6  seeder  Buzz of the Orient    11 months ago

I have to confess something.  I have no idea what algorithms are, and to me clouds are something that Joni Mitchell sings about.  I haven't a clue about bitcoins and I don't want to know.  I think my father had it right and now I follow his advice that if you don't have the cash in your pocket you can't afford it.  To me the word woke is a word used in the expression "I woke up" and I'm not going to stop saying he and him, she and her even if it's frowned upon.  I'm convinced that the only reason the insane are locked up and we're not is because we're the majority. 

 
 
 
TᵢG
Professor Principal
6.1  TᵢG  replied to  Buzz of the Orient @6    11 months ago

An algorithm is, in essence, a detailed procedure for accomplishing some task.   Algorithms are the essence of computer programs.  They are akin to engineering specifications.   An engineering spec when reified is some physical object.   A reified algorithm is often a computer program (software).
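As a concrete illustration (a toy example of my own, not one from the article): Euclid's method for finding the greatest common divisor is an algorithm you can carry out by hand with pencil and paper, and here is the very same procedure reified as a short Python program:

```python
# Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
# until b reaches 0; the remaining a is the greatest common divisor.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

The written recipe and the program embody the same algorithm; the program is just the recipe made executable.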

Anyway, the point is that the state of the art of AI today is impressive but is still merely at the tool level.   Human beings are still doing all the heavy lifting with AI  and we have no clue how to make a sentient intelligence (an Artificial General Intelligence).    We can make some extremely sophisticated behavior and it can do great harm, but that is not a problem with AI but rather the problem of technology growing increasingly powerful and accessible to smaller groups of people.   Much like weapons continue to grow in sophistication and become more accessible to individuals (and not just to states). 

So instead of fear-mongering AI (a class of software algorithms), these experts should be warning about the need to control access to the increasingly powerful computer technology (the hardware).

 
 
