
From brain waves, this AI can sketch what you're picturing

  

Category:  News & Politics

Via:  perrie-halpern  •  last year  •  14 comments

By:   Sara Ruberg and Jacob Ward


S E E D E D   C O N T E N T



Zijiao Chen can read your mind, with a little help from powerful artificial intelligence and an fMRI machine.

Chen, a doctoral student at the National University of Singapore, is part of a team of researchers who have shown they can decode human brain scans to tell what a person is picturing in their mind, according to a paper released in November.

Their team, made up of researchers from the National University of Singapore, the Chinese University of Hong Kong and Stanford University, did this by using brain scans of participants as they looked at more than 1,000 pictures — a red firetruck, a gray building, a giraffe eating leaves — while inside a functional magnetic resonance imaging machine, or fMRI, which recorded the resulting brain signals over time. The researchers then sent those signals through an AI model to train it to associate certain brain patterns with certain images.

Later, when the subjects were shown new images in the fMRI, the system detected each participant's brain waves, generated a shorthand description of what it thought those brain waves corresponded to, and used an AI image generator to produce a best-guess facsimile of the image the participant saw.
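The two-stage pipeline described above can be sketched roughly as follows. Every name, shape, and number here is an illustrative assumption, not the paper's actual code or architecture: a stand-in linear decoder maps a simulated fMRI signal to an embedding, which is then matched to the nearest "shorthand" description that a generator could consume.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS, EMBED_DIM = 512, 16

# Stand-in for the decoder trained on ~20 hours of one subject's scans.
decoder_weights = rng.normal(size=(N_VOXELS, EMBED_DIM))

def decode_embedding(fmri_signal):
    """Project an fMRI voxel vector into the embedding space."""
    return fmri_signal @ decoder_weights

# A tiny library of known "shorthand" descriptions and their embeddings;
# a generator such as Stable Diffusion would consume the chosen one.
prompts = {
    "a red firetruck": rng.normal(size=EMBED_DIM),
    "a gray building": rng.normal(size=EMBED_DIM),
    "a giraffe eating leaves": rng.normal(size=EMBED_DIM),
}

def nearest_prompt(embedding):
    """Pick the description whose embedding is closest by cosine similarity."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prompts, key=lambda p: cos(prompts[p], embedding))

# Simulate a scan engineered so its decoded embedding lands exactly on a
# known prompt (pure illustration; real signals are far noisier).
target = prompts["a red firetruck"]
simulated_signal = np.linalg.pinv(decoder_weights.T) @ target

print(nearest_prompt(decode_embedding(simulated_signal)))  # a red firetruck
```

In the real study the decoder is a deep model trained per subject, and the generator produces a full image rather than selecting from a fixed prompt list; this sketch only shows the shape of the signal-to-description-to-image hand-off.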

The results are startling and dreamlike. An image of a house and driveway resulted in a similarly colored amalgam of a bedroom and living room. An ornate stone tower shown to a study participant generated images of a similar tower, with windows situated at unreal angles. A bear became a strange, shaggy, doglike creature.

The resulting generated image matched the attributes (color, shape, etc.) and semantic meaning of the original image roughly 84% of the time.
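A match rate like the 84% figure can be illustrated, in spirit, as the fraction of generated images whose labeled attributes agree with the originals. The paper's real evaluation is more involved; this helper and its attribute labels are hypothetical.

```python
def attribute_match_rate(original_attrs, generated_attrs):
    """Fraction of image pairs whose attribute labels fully agree.

    Both arguments are equal-length lists of attribute tuples, e.g.
    (color, object); a pair counts as a match only when every
    attribute agrees.
    """
    if not original_attrs:
        raise ValueError("need at least one image pair")
    hits = sum(o == g for o, g in zip(original_attrs, generated_attrs))
    return hits / len(original_attrs)

originals = [("red", "truck"), ("gray", "building"), ("brown", "bear")]
generated = [("red", "truck"), ("gray", "building"), ("brown", "dog")]
print(attribute_match_rate(originals, generated))  # 2 of 3 pairs match
```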

Researchers work to turn brain activity into images in an AI brain scan study at the National University of Singapore. (NBC News)

While the experiment requires training the model on each individual participant's brain activity over the course of roughly 20 hours before it can deduce images from fMRI data, researchers believe that in just a decade the technology could be used on anyone, anywhere.

"It might be able to help disabled patients to recover what they see, what they think," Chen said. In the ideal case, Chen added, humans won't even have to use cellphones to communicate. "We can just think."

The results involved only a handful of study subjects, but the findings suggest the team's noninvasive brain recordings could be a first step toward decoding images more accurately and efficiently from inside the brain.

Researchers have been working on technology to decode brain activity for over a decade. And many AI researchers are currently working on various neuro-related applications of AI, including similar projects such as those from Meta and the University of Texas at Austin to decode speech and language.

University of California, Berkeley scientist Jack Gallant began studying brain decoding over a decade ago using a different algorithm. He said the pace at which this technology develops depends not only on the model used to decode the brain — in this case, the AI — but the brain imaging devices and how much data is available to researchers. Both fMRI machine development and the collection of data pose obstacles to anyone studying brain decoding.

"It's the same as going to Xerox PARC in the 1970s and saying, 'Oh, look, we're all gonna have PCs on our desks,'" Gallant said.

While he could see brain decoding used in the medical field within the next decade, he said using it on the general public is still several decades away.

Even so, it's the latest development in an AI boom that has captured the public imagination. AI-generated media, from images and voices to Shakespearean sonnets and term papers, has demonstrated some of the leaps the technology has made in recent years, especially since so-called transformer models made it possible to feed vast quantities of data to AI so that it can learn patterns quickly.

The team from the National University of Singapore used image-generating AI software called Stable Diffusion, which has been embraced around the world to produce stylized images of cats, friends, spaceships and just about anything else a person could ask for.

The software allows associate professor Helen Zhao and her colleagues to summarize an image using a vocabulary of color, shape and other variables, and have Stable Diffusion produce an image almost instantly.

The images the system produces are thematically faithful to the original image, but not a photographic match, perhaps because each person's perception of reality is different, she said.

"When you look at the grass, maybe I will think about the mountains and then you will think about the flowers and other people will think about the river," Zhao said.

Human imagination, she explained, can cause differences in image output. But the differences may also be a result of the AI, which can spit out distinct images from the same set of inputs.

The AI model is fed visual "tokens" in order to produce images from a person's brain signals. So instead of a vocabulary of words, it's given a vocabulary of colors and shapes that come together to create the picture.
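As a toy illustration of that idea, a scene can be composed from discrete color and shape "tokens" rather than words. The token tables below are invented for this example; the article does not describe the study's actual tokenizer.

```python
# Invented token vocabularies standing in for the visual "vocabulary"
# of colors and shapes the article describes.
COLOR_TOKENS = {0: "red", 1: "gray", 2: "green"}
SHAPE_TOKENS = {0: "square", 1: "circle", 2: "triangle"}

def render_tokens(token_pairs):
    """Turn (color_id, shape_id) pairs into readable scene elements."""
    return [f"{COLOR_TOKENS[c]} {SHAPE_TOKENS[s]}" for c, s in token_pairs]

print(render_tokens([(0, 1), (1, 0)]))  # ['red circle', 'gray square']
```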

Images generated by AI. (Courtesy of the National University of Singapore)

But the system has to be arduously trained on a specific person's brain waves, so it's a long way from wide deployment.

"The truth is that there is still a lot of room for improvement," Zhao said. "Basically, you have to enter a scanner and look at thousands of images, then we can actually do the prediction on you."

It's not yet possible to bring in strangers off the street to read their minds, "but we're trying to generalize across subjects in the future," she said.

Like many recent AI developments, brain-reading technology raises ethical and legal concerns. Some experts say in the wrong hands, the AI model could be used for interrogations or surveillance.

"I think the line is very thin between what could be empowering and oppressive," said Nita Farahany, a Duke University professor of law and ethics in new technology. "Unless we get out ahead of it, I think we're more likely to see the oppressive implications of the technology."

She worries that AI brain decoding could lead to companies commodifying the information or governments abusing it, and described brain-sensing products already on the market or just about to reach it that might bring about a world in which we are not just sharing our brain readings, but judged for them.

"This is a world in which not just your brain activity is being collected and your brain state — from attention to focus — is being monitored," she said, "but people are being hired and fired and promoted based on what their brain metrics show."

"It's already going widespread and we need governance and rights in place right now before it becomes something that is truly part of everyone's everyday lives," she said.

The researchers in Singapore continue to develop their technology, hoping to first decrease the number of hours a subject will need to spend in an fMRI machine. Then, they'll scale the number of subjects they test.

"We think it's possible in the future," Zhao said. "And with [a larger] amount of data available, [the] machine learning model will achieve even better performance."


Thomas
Senior Guide
1  Thomas    last year

From the Article:

Like many recent AI developments, brain-reading technology raises ethical and legal concerns. Some experts say in the wrong hands, the AI model could be used for interrogations or surveillance.

"I think the line is very thin between what could be empowering and oppressive," said Nita Farahany, a Duke University professor of law and ethics in new technology. "Unless we get out ahead of it, I think we're more likely to see the oppressive implications of the technology."

She worries that AI brain decoding could lead to companies commodifying the information or governments abusing it, and described brain-sensing products already on the market or just about to reach it that might bring about a world in which we are not just sharing our brain readings, but judged for them.

So you're sitting in the waiting room for a job interview and they're tracking your thoughts.... Squirrel!

 
 
 
Hal A. Lujah
Professor Guide
1.1  Hal A. Lujah  replied to  Thomas @1    last year

Sounds like the perfect tool for boneheaded HR “professionals”.  Imagine knowing that some device is scanning your brain during a job interview.  You’re telling yourself “whatever you do, don’t think of drugs, alcohol, guns, or porn”.  Good luck not thinking about any of those after that.

 
 
 
TᵢG
Professor Principal
1.2  TᵢG  replied to  Thomas @1    last year

Also in the article:

But the system has to be arduously trained on a specific person's brain waves, so it's a long way from wide deployment.

I can see using existing technology to interpret brain waves as images.   A system could be created that does the mapping in general and then tuned to the specific patterns in an individual's brain.   But that tuning, with today's machine learning technology, requires massive amounts of data.   I am surprised that the article suggests it takes only 20 hours of training since it would appear to be supervised learning (e.g. hold up a picture, have the AI interpret, correct the AI mistakes so that it can learn via the deltas).

Suffice it to say, I find this claim to be a bit hyperbolic.   And the notion that a system in general can be developed to literally read our minds is a bit far-fetched.   Before that happens we will need a much deeper understanding of the human mind ... not in our lifetimes IMO.

 
 
 
Thomas
Senior Guide
1.2.1  Thomas  replied to  TᵢG @1.2    last year

I thought it said twenty hours for the individual in the MRI? They can log the data in those twenty hours, then the AI can learn at its own pace, with the corrections coming after the individual pictures.

And I realize that something like my little flight of fancy would be unlikely for the near future. Alas, I was being silly.

 
 
 
TᵢG
Professor Principal
1.2.2  TᵢG  replied to  Thomas @1.2.1    last year
I thought it said twenty hours for the individual in the MRI? They can log the data in those twenty hours, then the AI can learn at its own pace, with the corrections coming after the individual pictures.

Yes, 20 hours to acquire the data.   The training of the AI would take far less than that with only 20 hours of acquired data.    For example, it took 24 hours of generating chess games for Alpha Zero to learn chess with the help of monstrous, impressive TPU computing power.   Automatically generating chess games for training is lightning fast compared to acquiring brain waves in response to a shown image:   massive computer speed vs. human interaction speed.   The amount of data acquired for Alpha Zero would absolutely dwarf that acquired during these 20 hours.

And I realize that something like my little flight of fancy would be unlikely for the near future.

I know you were not trying to be serious.  

 
 
 
Hallux
Masters Principal
2  Hallux    last year

Not good, my MRI technician is a knockout ...

 
 
 
Thomas
Senior Guide
2.1  Thomas  replied to  Hallux @2    last year

Reminds me of a time when I had to have an ultrasound done from my inner thighs to my ankles. 

"Dead puppies. Dead Puppies..."

 
 
 
Buzz of the Orient
Professor Expert
3  Buzz of the Orient    last year

Maybe this method should be used instead of a polygraph (lie detector) - there is no possibility of lying with it.

 
 
 
TᵢG
Professor Principal
3.1  TᵢG  replied to  Buzz of the Orient @3    last year

If we ever do find a way to actually get accurate thoughts by complex analysis of brain waves, then I think your idea might just be realized.   None of us living today will likely see anything like it though.

Now, imagine if we had such a capability.   What would that do to politics?

 
 
 
Buzz of the Orient
Professor Expert
3.1.1  Buzz of the Orient  replied to  TᵢG @3.1    last year

Do you mean that all politicians would have their personal MRI machines cast their votes?  Well, using a line from the movie 'Finding Forrester', "Integrity stands for something" - they wouldn't have a choice.  In such a world Liz Cheney could become the declared candidate for POTUS.

 
 
 
TᵢG
Professor Principal
3.1.2  TᵢG  replied to  Buzz of the Orient @3.1.1    last year

I was thinking that the technology could put quite a damper on lying.

 
 
 
Buzz of the Orient
Professor Expert
3.1.3  Buzz of the Orient  replied to  TᵢG @3.1.2    last year

Yes, my original thought.

 
 
 
Thomas
Senior Guide
3.2  Thomas  replied to  Buzz of the Orient @3    last year

Maybe this method should be used instead of a polygraph (lie detector) - there is no possibility of lying with it.

Unless you can train your brain to think in certain patterns. I wouldn't say it's impossible to lie, just difficult to control your brain enough to make the AI think you're telling the truth.  

 
 
 
Buzz of the Orient
Professor Expert
3.2.1  Buzz of the Orient  replied to  Thomas @3.2    last year

Good luck with that.  

 
 
