
Artificial intelligence pioneer leaves Google and warns about technology's future

  

Category:  News & Politics

Via:  perrie-halpern  •  last year  •  21 comments

By:   Brahmjot Kaur


S E E D E D   C O N T E N T


The "godfather of AI" is issuing a warning about the technology he helped create.

Geoffrey Hinton, a trailblazer in artificial intelligence, has joined the growing list of experts sharing their concerns about the rapid advancement of artificial intelligence. The renowned computer scientist recently left his job at Google to speak openly about his concerns over the technology and where he sees it going.

"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said in an interview with The New York Times.

Hinton is worried that future versions of the technology pose a real threat to humanity.

"The idea that this stuff could actually get smarter than people — a few people believed that," he said in the interview. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Geoffrey Hinton at Google's Mountain View, Calif., headquarters in 2015. (Noah Berger / AP file)

Hinton, 75, is best known for his pioneering work on deep learning, which uses mathematical structures called neural networks to pull patterns from massive sets of data.
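To make that concrete, here is a minimal sketch of the idea (a toy example for illustration, nowhere near the scale of the systems Hinton pioneered): a small set of weights, nudged repeatedly by backpropagation, pulls the XOR pattern out of four data points.

```python
import numpy as np

# Toy training data: inputs and the XOR pattern we want the net to find.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network's current predictions
    # Backpropagation: push the prediction error back through the
    # layers and nudge every weight to reduce it slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

The "learning" here is nothing but repeated numerical adjustment, which is the core of what deep learning does at vastly larger scale.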

Like other experts, he believes the race among Big Tech companies to develop ever more powerful AI will only escalate into a global one.

Hinton tweeted Monday morning that he felt Google had acted responsibly in its development of AI, but that he had to leave the company to speak out.

Jeff Dean, senior vice president of Google Research and AI, said in an emailed statement: "Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I've deeply enjoyed our many conversations over the years. I'll miss him, and I wish him well! As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly."

Hinton is a notable addition to a group of technologists who have been speaking out publicly about the unbridled development and release of AI.

Tristan Harris and Aza Raskin, the co-founders of the Center for Humane Technology, spoke with "Nightly News" host Lester Holt in March about their own concerns around AI.

"What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions," Harris said during the interview. "We can do that. We can have AI and research labs that's applied to specific applications that does advance those areas. But when we're in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that's not an equation that's going to end well."


An open letter from the Association for the Advancement of Artificial Intelligence, signed by 19 current and former leaders of the academic society, was released last month, warning the public about the risks of AI and urging collaboration to mitigate those concerns.

"We believe that AI will be increasingly game-changing in healthcare, climate, education, engineering, and many other fields," the letter said. "At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs."

Hinton, along with scientists Yoshua Bengio and Yann LeCun, won the Turing Award, known as the tech industry's version of the Nobel Prize, in 2019 for their advancements in AI.

Hinton, Bengio and LeCun were open about their concerns with AI but remained optimistic about the technology's potential, including detecting health risks earlier than doctors can and delivering more accurate warnings of earthquakes and floods.

"One thing is very clear, the techniques that we developed can be used for an enormous amount of good affecting hundreds of millions of people," Hinton previously told The Associated Press.


 
Vic Eldred
Professor Principal
1  Vic Eldred    last year

Great article.

The obvious problem will be who controls it and programs it.

Btw unlike previous tech advances this one threatens to replace a lot of our professional class rather than service sector workers.

 
 
 
TᵢG
Professor Principal
1.1  TᵢG  replied to  Vic Eldred @1    last year
Btw unlike previous tech advances this one threatens to replace a lot of our professional class rather than service sector workers.

That threat is real and basically unstoppable.   Lower level professional jobs are actively being replaced with cyber workers that learn the job by observing the workers and then, when trained, take over.   This is called Robotic Process Automation (RPA).   It is not malicious, not illegal, and basically it is simply a new form of automation like all others.   But it absolutely will replace / modify extant professional jobs.

The difference between the current AI-based automation and past automation (e.g. the use of robots and advanced machinery in factories) is that many of these professional jobs will go away and not open new opportunities.   RPA is essentially cleaning up wasteful processes.   Thus we are almost certainly going to see a shrinking pool of available jobs.   The workforce is also shrinking, so that will absorb some of the change, but I think we have a very serious problem going forward worldwide (especially in advanced nations like ours) where unemployment could greatly increase.
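A minimal sketch of the record-and-replay idea behind RPA (the steps and names below are hypothetical; real products are far more sophisticated):

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "open", "copy", "paste"
    target: str   # the field, file, or app the action touches

# Hypothetical steps captured while watching a worker process one invoice.
recorded = [
    Step("open", "invoices.csv"),
    Step("copy", "amount_due"),
    Step("paste", "accounting_system.total"),
]

def replay(steps):
    for s in steps:
        # A real bot would drive the UI or call an API here.
        print(f"{s.action} -> {s.target}")

# Once "trained", the cyber worker repeats the task unattended.
for _ in range(3):
    replay(recorded)
```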

 
 
 
JohnRussell
Professor Principal
1.1.1  JohnRussell  replied to  TᵢG @1.1    last year

I read something once by an expert in this field who said unemployment in America could be as high as 70% by the end of this century. 

 
 
 
TᵢG
Professor Principal
1.1.2  TᵢG  replied to  JohnRussell @1.1.1    last year

It is a problem that we need to take seriously.   I do not know what the actual number might be, but the trend of substantially higher unemployment seems inevitable.

 
 
 
JohnRussell
Professor Principal
2  JohnRussell    last year

7 or 8 years ago someone posted a video here which argued (persuasively, I thought) that robotics or AI would come for everyone's job, and that eventually there would be a future where almost no one works. There are two ways to look at this possibility: the bright side is that people would be free to pursue their passions without having to trudge to make a living, but the dark side is that wealth will be concentrated in fewer and fewer and fewer hands, with a huge part of the population dependent on "welfare".

How far AI and robotics go in taking over our everyday lives, for bad and good, will depend on who is making all the money off it and how much control they have over the government. Sometime in the relatively near future, though, this will become the biggest issue in the developed world.

 
 
 
Nerm_L
Professor Expert
3  Nerm_L    last year

Fossil fuels were used for an enormous amount of good affecting billions of people.  We just ignored the consequences of obtaining that good.  So, proponents of artificial intelligence seem to be following the same path and making the same mistakes.

If artificial intelligence is so powerful then why can't artificial intelligence be used to forecast consequences, develop guidelines for use, and impose regulations on itself?  That's been the impetus for creating artificial intelligence, to solve complex problems.  According to what has been presented, artificial intelligence has become the tool to address the problems associated with artificial intelligence.   

Seems as though the ultimate motivation for creating artificial intelligence has really been exploitation rather than doing good.  Artificial intelligence is following the fossil fuel model.

 
 
 
TᵢG
Professor Principal
3.1  TᵢG  replied to  Nerm_L @3    last year
If artificial intelligence is so powerful then why can't artificial intelligence be used to forecast consequences, develop guidelines for use, and impose regulations on itself? 

Because the AI you have envisioned does not exist (yet).   I doubt any of us will see such advanced artificial cognition in our lifetimes.

Current AI is awesome at recognizing complex patterns in an enormous corpus of data and using that to learn behavior.   It is not, in any way, able to engage in highly cognitive activities per your example.   And any AI that appears to have high cognitive abilities is simply a convincing facade.    For example, the AI that can beat the best Go players in the world has zero higher cognitive abilities.   It is, in effect, an impressively complex multivariate, nonlinear equation that has learned the best Go moves by playing countless millions of games against itself.
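A minimal sketch of that self-play idea, scaled down to the game of Nim with a simple value table instead of a neural network (an illustrative toy, not how the Go systems actually work):

```python
import random

# Nim: 21 sticks, take 1-3 per turn, whoever takes the last stick wins.
Q = {}  # (sticks_remaining, move) -> estimated value for the mover

def pick_move(sticks, eps=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < eps:                  # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

for _ in range(50_000):                        # self-play games
    sticks, history = 21, []
    while sticks > 0:
        m = pick_move(sticks)
        history.append((sticks, m))
        sticks -= m
    # Credit moves backward: the last mover won (+1), the one before
    # lost (-1), alternating up the game history.
    reward = 1.0
    for state in reversed(history):
        old = Q.get(state, 0.0)
        Q[state] = old + 0.1 * (reward - old)
        reward = -reward

print(pick_move(21, eps=0))  # should settle on 1, leaving a multiple of 4
```

Nothing in the resulting table "understands" the game; it has simply recorded which moves tended to win, which is the sense in which the Go system is a facade of cognition.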

 
 
 
Nerm_L
Professor Expert
3.1.1  Nerm_L  replied to  TᵢG @3.1    last year
Because the AI you have envisioned does not exist (yet).   I doubt any of us will see such advanced artificial cognition in our lifetimes.

I disagree.  At its present state of development, artificial intelligence creates its own capabilities through 'deep learning' (which, as I understand it, is a correlative process).  Artificial intelligence currently lacks independence (or cognitive ability, as you put it) thus can only develop capabilities from information it is provided.  Limiting and arbitrarily selecting the sources of information used for 'deep learning' allows exploitation.

So, artificial intelligence at its current state should be quite capable of forecasting and developing guidelines for any complex system, including itself.  But artificial intelligence is not being used to address itself.  The motivation for exploitation circumvents the power of the tool to forecast and avoid potential problems associated with artificial intelligence.

 
 
 
TᵢG
Professor Principal
3.1.2  TᵢG  replied to  Nerm_L @3.1.1    last year
I disagree. 

What a surprise.

So, artificial intelligence at its current state should be quite capable of forecasting and developing guidelines for any complex system, including itself.

Well, Nerm, I think the only way you will understand the state of the art is to spend some very serious time (as I have) trying to learn exactly how deep learning is achieved.   It is incredibly impressive, but is still ultimately pattern recognition and that is simply nowhere close to the kind of thinking you are envisioning.

So disagree to your heart's content.   But note that you are doing so out of ignorance.

 
 
 
Nerm_L
Professor Expert
3.1.3  Nerm_L  replied to  TᵢG @3.1.2    last year
Well, Nerm, I think the only way you will understand the state of the art is to spend some very serious time (as I have) trying to learn exactly how deep learning is achieved.   It is incredibly impressive, but is still ultimately pattern recognition and that is simply nowhere close to the kind of thinking you are envisioning.

Yes, as I pointed out, I understand 'deep learning' is a correlative process.  What you don't seem to grasp is that policy development utilizes a process very much like 'deep learning'.  

So disagree to your heart's content.   But note that you are doing so out of ignorance.

A simple google search suggests that artificial intelligence is already moving into policy making.  Ready or not, here it comes.  Perhaps I have a better understanding of how these technologies are exploited.

 
 
 
TᵢG
Professor Principal
3.1.4  TᵢG  replied to  Nerm_L @3.1.3    last year
A simple google search suggests that artificial intelligence is already moving into policy making. 

Nerm, your link is to a system that does this:

Konfer automates mapping of your AI systems so all stakeholders can easily determine and evaluate the various facets of AI that can generate business risks and manage their business metrics with confidence.

Not even remotely close to the human skill of creating policy.

Good grief man.

 
 
 
Nerm_L
Professor Expert
3.1.5  Nerm_L  replied to  TᵢG @3.1.4    last year
Nerm, your link is to a system that does this:

My link is to a google search.  The results vary each time you click the link.  I'm linking to a body of knowledge; not to any specific piece of knowledge.  So, your comment really doesn't address the link.

The google search shows increasing interest for using artificial intelligence in policy making at the current state of development.  That shouldn't be surprising.  Ready or not, artificial intelligence is going to be a tool for policy making in the near term; likely less than a decade.

 
 
 
TᵢG
Professor Principal
3.1.6  TᵢG  replied to  Nerm_L @3.1.5    last year
I'm linking to a body of knowledge; not to any specific piece of knowledge.

You are simply engaging in bullshit.   Not interested.

 
 
 
Nerm_L
Professor Expert
3.1.7  Nerm_L  replied to  TᵢG @3.1.6    last year
You are simply engaging in bullshit.   Not interested.

How is that comment different from the reality of what happened with the adoption of fossil fuels?  People weren't interested in the consequences of using fossil fuels because of all the good they provided.  People could use fossil fuels in the now because future advances and developments would supposedly avoid the consequences.  And then it was too late.

 
 
 
TᵢG
Professor Principal
3.1.8  TᵢG  replied to  Nerm_L @3.1.7    last year
How is that comment different than the reality of what happened with adoption of fossil fuels? 

Which comment are you referring to?

 
 
 
Nerm_L
Professor Expert
3.1.9  Nerm_L  replied to  TᵢG @3.1.8    last year
Which comment are you referring to?

The comment I quoted.  

 
 
 
TᵢG
Professor Principal
4  TᵢG    last year

The 'problem' with AI is the problem of all technology (not just cyber technology).   That problem is misuse.   We definitely need to learn how AI can be misused and effect legislation to mitigate this.

AI is no different from any other powerful tool.   It can be used for great good and great harm.   Nothing new here.

One more thing, AI is NOT going to magically become self-aware and control the internet and take over mankind.   At least not in our lifetimes.

 
 
 
evilone
Professor Guide
4.1  evilone  replied to  TᵢG @4    last year
The 'problem' with AI is the problem of all technology (not just cyber technology).   That problem is misuse. 

We are seeing that now with photography, video and audio. This is an area that will become more problematic sooner rather than later. The other is when people don't know they are using an AI. The current browser AIs like ChatGPT lie. They are created to predict the next word, not retrieve accurate information. If one doesn't know where their information comes from and assumes it's correct...
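A toy version of "predict the next word" makes the point (a bigram counter, vastly simpler than ChatGPT, but the same basic objective): the model only tracks which word tends to follow which, so fluent output and true output are unrelated.

```python
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1          # count: word b seen right after word a

def next_word(word):
    options = follows[word]
    # Sample in proportion to how often each continuation was seen.
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-looking, but chosen by frequency, not truth
```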

One more thing, AI is NOT going to magically become self-aware and control the internet and take over mankind.

No, not yet. 

At least not in our lifetimes.

There are no bets on this. It's as likely to happen as not happen if multiple companies are playing around with AI learning that runs at petaflops.

 
 
 
TᵢG
Professor Principal
4.1.1  TᵢG  replied to  evilone @4.1    last year
There are no bets on this.

The current state of AI has no clue on how to make an Artificial General Intelligence.   It has no clue on how to even create an artificial consciousness.   Given we are at square 0 in these areas, it is a very good bet that we will not ever see this in our lifetimes.

 
 
 
evilone
Professor Guide
4.1.2  evilone  replied to  TᵢG @4.1.1    last year
The current state of AI has no clue on how to make an Artificial General Intelligence.   It has no clue on how to even create an artificial consciousness.

True. 

Given we are at square 0 in these areas, it is a very good bet that we will not ever see this in our lifetimes.

I'm only saying that we can go from square 0 to 11 in a very short span of time. It will most likely be by accident if it does happen. 

Again I'm not saying it's likely. 

 
 
 
TᵢG
Professor Principal
4.1.3  TᵢG  replied to  evilone @4.1.2    last year
I'm only saying that we can go from square 0 to 11 in a very short span of time.

I very, very much doubt that.   The problem of AGI is profound.   We are not, IMO, going to accidentally solve it.   I envision an incremental journey that progressively increases perceived intelligence to the point where we have an AGI.   I, however, do not expect to ever see it in my lifetime.

 
 
