
The tech watchdog that raised alarms about social media is warning about AI

  

Category:  Health, Science & Technology

Via:  perrie-halpern  •  last year  •  1 comment

By:   Jason Abbruzzese


S E E D E D   C O N T E N T



One of tech's most vocal watchdogs has a warning about the explosion of advanced artificial intelligence: We need to slow down.

Tristan Harris and Aza Raskin, two of the co-founders of the Center for Humane Technology, discussed with "NBC Nightly News" anchor Lester Holt their concerns about the emergence of new forms of AI that have shown the ability to develop in unexpected ways.

AI can be a powerful tool, Harris said, as long as it's focused on particular tasks.

"What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions," Harris said. "We can do that. We can have AI and research labs that's applied to specific applications that does advance those areas. But when we're in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that's not an equation that's going to end well."

Harris, who previously worked at Google as a design ethicist, has emerged in recent years as one of Big Tech's loudest and most pointed critics. Harris started the Center for Humane Technology with Raskin and Randima Fernando in 2018, and the group's work came to widespread attention for its involvement in the documentary "The Social Dilemma," which looked at the rise of social media and the problems that came with it.

Harris and Raskin each emphasized that the AI programs recently launched — most notably OpenAI's ChatGPT — are a significant step beyond previous AI that was used to automate tasks like reading license plate numbers or searching for cancers in MRI scans.

These new AI programs are showing the ability to teach themselves new skills, Harris noted.

"What's surprising and what nobody foresaw is that just by learning to predict the next piece of text on the internet, these models are developing new capabilities that no one expected," Harris said. "So just by learning to predict the next character on the internet, it's learned how to play chess."

Raskin also emphasized that some AI programs are now doing unexpected things.

"What's very surprising about these new technologies is that they have emergent capabilities that nobody asked for," Raskin said.

AI programs have been developed for decades, but the introduction of large language models, often shortened to LLMs, has sparked renewed interest in the technology. LLMs like GPT-4, the newest iteration of the AI that underpins ChatGPT, are trained on massive amounts of data, most of it from the internet.

At their simplest level, these AI programs work by generating text in response to a prompt based on statistical probabilities: they produce one word at a time, then predict the most likely word to come next based on their training.
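That word-by-word loop can be sketched in a few lines of Python. The toy model below is purely illustrative: it is a simple bigram frequency table, not a neural network like GPT-4, and the tiny corpus and the generate helper are invented for this example. But it follows the same procedure the article describes: look at the text so far, pick a statistically likely next word, and repeat.

    # Illustrative sketch of next-word prediction, not how GPT-4 works internally.
    # A real LLM learns its probabilities with a neural network over vast data;
    # here the "training" is just counting which word follows which in a tiny corpus.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # "Training": tally how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(prompt_word, length=5):
        # Generate one word at a time, sampling each next word in
        # proportion to how often it followed the current word.
        words = [prompt_word]
        for _ in range(length):
            counts = following.get(words[-1])
            if not counts:
                break  # no observed continuation for this word
            candidates, weights = zip(*counts.items())
            words.append(random.choices(candidates, weights=weights)[0])
        return words

    print(" ".join(generate("the")))  # e.g. "the cat sat on the mat"

Note that the model has no notion of truth, only of what tends to follow what: it will happily produce a fluent continuation whether or not the result is accurate, which is also why the hallucination problem described below arises.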

That has meant LLMs can often repeat false information or even invent falsehoods of their own, something Raskin characterized as hallucinations.

"One of the biggest problems with AI right now is that it hallucinates, that it speaks very confidently about any topic and it's not clear when it is getting it right and when it is getting it wrong," Raskin said.

Harris and Raskin also warned that these newer AI systems have the capability to cause disruption well beyond the internet. A recent study conducted by OpenAI and the University of Pennsylvania found that about 80% of the U.S. workforce could have at least 10% of their work tasks affected by modern AI. Almost one-fifth of workers could see at least half their work tasks affected.

"The influence spans all wage levels, with higher-income jobs potentially facing greater exposure," the researchers wrote.

Harris said that societies have long adapted to new technologies, but many of those changes happened over decades. He warned that AI could change things quickly, which is cause for concern.

"If that change comes too fast, then society kind of gets destabilized," Harris said. "So we're again in this moment where we need to consciously adapt our institutions and our jobs for a post AI world."

Many leading voices in the AI industry, including OpenAI CEO Sam Altman, have called for the government to step in and come up with regulation. Altman told ABC News that even he and others at his company are "a little bit scared" of the technology and its advancements.

There have been some preliminary moves by the U.S. government around AI, including an "AI Bill of Rights" released by the White House in October and a bill put forward by Rep. Ted Lieu, D-Calif., to regulate AI (the bill was written using ChatGPT).

Harris stressed that there are currently no effective limitations on AI.

"No one is building the guardrails," Harris said. "And this has moved so much faster than our government has been able to understand or appreciate."


 
CB
Professor Principal
1  CB    last year

It can't be slowed down. As one commentator on a news show pointed out recently, if the U.S. or the West does not utilize the brilliance of the mind to create, the East will. That is, wherever light is seen to shine, darkness is its equal and opposite constituent. AI cannot be slowed, because 'all the world' will never ever agree to slow down. Thus, no one can safely take the risk of lagging behind.

 
 
