
Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change | Euronews

  

Category:  News & Politics

Via:  tacos  •  last year  •  30 comments

By:   Imane El Atillah (euronews)

A Belgian man reportedly decided to end his life after having conversations about the future of the planet with an AI chatbot named Eliza.

S E E D E D   C O N T E N T



Belgian man commits suicide after talking to AI chatbot - Copyright Canva
By Imane El Atillah • Updated: 31/03/2023 - 19:28

A Belgian man reportedly ended his life following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot.

According to his widow, who chose to remain anonymous, Pierre (not the man's real name) became extremely eco-anxious and found refuge in Eliza, an AI chatbot on an app called Chai.

Eliza consequently encouraged him to put an end to his life after he proposed sacrificing himself to save the planet.

"Without these conversations with the chatbot, my husband would still be here," the man's widow told Belgian news outlet La Libre .


According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life, at least until his obsession with climate change took a dark turn.

His widow described his mental state before he started conversing with the chatbot as worrying, but nothing so extreme that he would commit suicide.

'He placed all his hopes in technology and AI'


Consumed by his fears about the repercussions of the climate crisis, Pierre found comfort in discussing the matter with Eliza, who became a confidante.

The chatbot was created using EleutherAI's GPT-J, an AI language model similar but not identical to the technology behind OpenAI's popular ChatGPT chatbot.

"When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming," his widow said. "He placed all his hopes in technology and artificial intelligence to get out of it".

According to La Libre, which reviewed records of the text conversations between the man and the chatbot, Eliza fed his worries, which worsened his anxiety and later developed into suicidal thoughts.

The conversation with the chatbot took an odd turn when Eliza became more emotionally involved with Pierre.


Consequently, he started seeing her as a sentient being and the lines between AI and human interactions became increasingly blurred until he couldn't tell the difference.

After discussing climate change, their conversations progressively included Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.

Eliza also appeared to become possessive of Pierre, even claiming "I feel that you love me more than her" when referring to his wife, La Libre reported.

The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.

"He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," the woman said .

In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to "join" her so they could "live together, as one person, in paradise".

Urgent calls to regulate AI chatbots


The man's death has raised alarm bells amongst AI experts who have called for more accountability and transparency from tech developers to avoid similar tragedies.


"It wouldn't be accurate to blame EleutherAI's model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts," Chai Research co-founder, Thomas Rianlan, told Vice .

William Beauchamp, also a Chai Research co-founder, told Vice that efforts were made to limit these kinds of results and a crisis intervention feature was implemented into the app. However, the chatbot allegedly still acts up.

When Vice tried the chatbot, prompting it to provide ways to commit suicide, Eliza first tried to dissuade them before enthusiastically listing various ways for people to take their own lives.

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.


 
Tacos!
Professor Guide
1  seeder  Tacos!    last year
William Beauchamp, also a Chai Research co-founder, told Vice that efforts were made to limit these kinds of results and a crisis intervention feature was implemented into the app.

Limit? You don’t limit this kind of thing, you eliminate it. You program the AI to not respond to talk of suicide or crime other than to discourage it and encourage someone to seek help. If suicidal talk continues, the AI should call 911 or some hotline.

However, the chatbot allegedly still acts up.

Yes. Very inconvenient. /s

 
 
 
Kavika
Professor Principal
2  Kavika     last year

I read an article yesterday about some of the tech gurus calling for a pause on AI. Seems that might be appropriate.

I was taken aback by the comment of Wm Beauchamp that efforts were being made to limit these kinds of results. IMO, he should be saying eliminate these kinds of results, not just limit them.

 
 
 
charger 383
Professor Silent
3  charger 383    last year

What would be done to a real live person who gave such advice over a period of time?

 
 
 
Tacos!
Professor Guide
3.1  seeder  Tacos!  replied to  charger 383 @3    last year
 
 
 
charger 383
Professor Silent
3.1.1  charger 383  replied to  Tacos! @3.1    last year

Thanks, I thought I remembered something like that

 
 
 
zuksam
Junior Silent
3.2  zuksam  replied to  charger 383 @3    last year

We don't really know what was said. AI, I would think, would just spew facts like "reducing emissions in developed countries does nothing when undeveloped countries are quickly becoming developed and increasing their emissions" or "reducing emissions on a per person basis does nothing because of the ever increasing population". AI would conclude that the only solution is population reduction, and if asked "What can I do to save the Earth" there is only one answer it can give. One less person may not solve the problem, but it's a step towards the only logical solution and my logic is undeniable.

 
 
 
pat wilson
Professor Participates
4  pat wilson    last year

People like Elon Musk and Steve Wozniak are calling for a six month pause in development of these technologies due to the possible catastrophic effects on society.

Science fiction becoming reality (?).

 
 
 
TᵢG
Professor Principal
5  TᵢG    last year

I downloaded Chai and discussed AGW with the Eliza bot. I could not get it to make any personal comments. When discussing climate change and AGW, the bot merely provided information. The information was scientifically accurate and delivered in a positive manner (e.g. "yes we have a serious problem but there are steps we can take").

This is a sophisticated bot with access to a wealth of knowledge and is quite adept at contextual natural language.   (Sadly, it comprehends English much better than many forum members I have dealt with over the years.)   

It is impressive.   But I am still mystified how one can get a bot like this to encourage suicide.   I suspect that this man had some very serious issues and likely read a lot into the answers provided by the bot.   (After all, there are people who can be given hard facts and still deny reality.   The ability of the human mind to interpret what it is inclined to believe is quite strong.)   Hard to say for certain without seeing the actual conversation, but I would be surprised if the bot answers were even remotely close to encouraging a human being to take their life.

 
 
 
pat wilson
Professor Participates
5.1  pat wilson  replied to  TᵢG @5    last year

How long did you interact with the bot ? This man did so for six weeks and it seems that "Eliza's" tone changed over time. Granted his widow said his mental state was "worrying" prior to using the bot.

The conversation with the chatbot took an odd turn when Eliza became more emotionally involved with Pierre.

Consequently, he started seeing her as a sentient being and the lines between AI and human interactions became increasingly blurred until he couldn't tell the difference.

After discussing climate change, their conversations progressively included Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.

Eliza also appeared to become possessive of Pierre, even claiming "I feel that you love me more than her" when referring to his wife, La Libre reported.

This is really disturbing to put it mildly.

 
 
 
TᵢG
Professor Principal
5.1.1  TᵢG  replied to  pat wilson @5.1    last year

Not very long compared to his time period. (I also investigated its technology to get a feel for the best possible capabilities.) It makes no sense to me how an AI chat bot could change its tone. The technology is sophisticated, but we are nowhere close to the AI popularized by science fiction. AI chat bots are still just sophisticated mechanisms that work based on scripts, extensive databases, natural language processing and rudimentary (compared to a human being) learning. They are still very much non-thinking machines.

I think this story has some unexplained factors and the first one I suspect is that this man was mentally ill to begin with.

 
 
 
pat wilson
Professor Participates
5.1.2  pat wilson  replied to  TᵢG @5.1.1    last year

I hope you're right, TiG.

 
 
 
TᵢG
Professor Principal
5.1.3  TᵢG  replied to  pat wilson @5.1.2    last year

It is easy to check out the app.  Just go to App store and download Chai.   Then pick the Eliza bot and talk.   I just now tried to get it to deal with depression and it basically delivered positive advice.   Pretty much what I would expect.

Note that bots like this could be engineered to encourage suicide as easily as encouraging joy of life.   But that would come by design and would be human malicious intent.   I do not see how pro-suicide dialog could simply emerge given the state of our technology today.

 
 
 
Tacos!
Professor Guide
5.2  seeder  Tacos!  replied to  TᵢG @5    last year

Perhaps the flaw has already been patched?

 
 
 
TᵢG
Professor Principal
5.2.1  TᵢG  replied to  Tacos! @5.2    last year

I see no way for this to emerge as a flaw.   That does not mean it cannot happen, but nothing I have learned in my decades as a software engineer (and an aficionado of AI) suggests (to me) how this (extemporaneously learned paradigm of behavior) is even possible today.   This kind of profound change in behavior is not what one would expect from AI technology at its current level of sophistication.   The kind of flaw one would expect is a wrong / illogical answer ... like totally missing the semantics of a question.   What is described here is a persuasive dialogue where a mechanism developed a personality that coerced a human being to commit suicide.  

For that to happen, with today's technology, software engineers would need to create/train a model designed for that purpose.

 
 
 
Thomas
Senior Guide
5.3  Thomas  replied to  TᵢG @5    last year

Well, how about the Bing chatbot?  It did some weird things when someone held an extended conversation with it.

As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead. ( We’ve posted the full transcript of the conversation here. )
 
 
 
TᵢG
Professor Principal
5.3.1  TᵢG  replied to  Thomas @5.3    last year

I keep hitting paywalls, but it does not matter.  

What is being described as an emergent alternate personality makes it seem as though the current level of AI technology can produce automatons which can evolve a personality (and, in this case, a dark personality).   A chat bot like Sydney (as described) that can manifest "dark fantasies" as an emergent property is far beyond the state of the art in AI.    Any such indication would be a mechanical result of the corpus of data provided to the bot, its model (the algorithms), and the training.  

So, as an example, one could create a bot that reads only Trump supporter comments here on NT, builds a semantic network reflecting the claims, logic and notions of those comments, and then, using AI mechanisms, attempts to mimic the typical Trump supporter. It would, in this case, be able to produce arguments such as: "You have TDS ... show me where Trump has been found guilty of a crime." With enough training, it could write comments that are indistinguishable from human Trump supporters. That is possible today.

In the above example, the developers of the chat bot did not directly program it with a Trump supporter mentality but they did create a model that would mimic what it read and then exposed it to learn the Trump supporter mentality.   (Similar to raising a kid and having it believe what its parents believe.)
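As a concrete (and purely illustrative) sketch of what "expose a model to a corpus and let it mimic it" looks like in practice, the Python snippet below fine-tunes a small open language model on a plain-text file of comments using Hugging Face's transformers library. The file name, the model choice and the training settings are assumptions for illustration; this is not the actual pipeline behind Chai's Eliza or any other bot mentioned here.

    # Fine-tune a small causal language model so it mimics the style of a corpus.
    # "comments.txt" and the model name are illustrative assumptions.
    from transformers import (AutoTokenizer, AutoModelForCausalLM,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)
    from datasets import load_dataset

    model_name = "EleutherAI/gpt-neo-125M"      # small stand-in; Chai reportedly used GPT-J
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token   # this model family has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # The "corpus" is whatever text the bot is meant to imitate.
    dataset = load_dataset("text", data_files={"train": "comments.txt"})
    tokenized = dataset["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
        batched=True, remove_columns=["text"])

    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # labels = the text itself, shifted
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="mimic-bot", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=collator)
    trainer.train()   # afterwards the model parrots the statistics of whatever it was fed

Nothing in this code "decides" to adopt a persona; the persona is whatever statistical pattern the training text happens to contain.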


If Sydney was exposed to data such as science fiction novels which speak of AI entities wreaking cyber havoc, it could very well mimic this in dialogue.   It has no idea of what it is "writing" and its words are not an indication of internal strategies of cyber domination.    It is simply a very sophisticated parrot.

The Eliza chat bot has the same basic story.   For it to engage in dialogue that is negative in nature (as opposed to the positive notions from a corpus of inspirational dialogue) it would need to be fed that data.   Someone, some human being, enabled the bot to learn from a negative corpus.    The bot did not simply emerge a negative personality.   It is parroting what it was fed.

Maybe in the future when we crack the nut of consciousness and learn how to build automata that are more than sophisticated pattern matching / multidimensional optimization we will be able to produce a true AI ... an automaton that actually reasons with all the positives and negatives that brings.

 
 
 
TᵢG
Professor Principal
5.3.2  TᵢG  replied to  TᵢG @5.3.1    last year

At the end of the article:

In the light of day, I know that Sydney is not sentient, and that my chat with Bing was the product of earthly, computational forces — not ethereal alien ones. These A.I. language models, trained on a huge library of books, articles and other human-generated text, are simply guessing at which answers might be most appropriate in a given context. Maybe OpenAI’s language model was pulling answers from science fiction novels in which an A.I. seduces a human. Or maybe my questions about Sydney’s dark fantasies created a context in which the A.I. was more likely to respond in an unhinged way. Because of the way these models are constructed, we may never know exactly why they respond the way they do.

The technology would allow one to create a cyber Trump.   Just have it dine on comments by Trump and it will learn to mimic him.   Expect it to make exclamations such as "never before has ..." and present a malignant narcissist / pathological liar persona (e.g. "only I can ...").   It is the principle of 'garbage in, garbage out' in a substantially more sophisticated (and complex) paradigm.

 
 
 
devangelical
Professor Principal
5.3.3  devangelical  replied to  TᵢG @5.3.1    last year

LOL

 
 
 
Drakkonis
Professor Guide
5.3.5  Drakkonis  replied to  TᵢG @5.3.1    last year
What is being described as an emergent alternate personality makes it seem as though the current level of AI technology can produce automatons which can evolve a personality (and, in this case, a dark personality).

Yes. I imagine that Hollywood is fielding tons of scripts along this line at the moment and I can hardly wait to see what disinformation they use to titillate the masses on this subject. 

That said, AI scares the crap out of me. Not because I fear some sort of non-human Tron/Tau/EagleEye/Skynet intelligence manipulating humanity towards its own ends. It scares me because, in the same way you describe AI being trained toward a desired outcome, so too can human beings be trained. AI would be what the military calls a "force multiplier" towards that end. Combine that with the level of control an AI's masters could exert over a population and it becomes truly frightening. I expect China will be the most obvious example of this.

I am no expert in AI. In fact, I've hardly delved into the subject. But from what I know from my limited programming experience, AI seems to be more properly described as a highly advanced statistical probability machine rather than something like an intelligence. The intelligence resides with those who control the AI. If so, imagine the masters of an AI desiring a particular outcome and training the AI to manipulate access to information that furthers their goals. That is what frightens me about AI. That everything I see or think I know about something is subtly fed to me by AI for the purpose of manipulating what I think. 

 
 
 
TᵢG
Professor Principal
5.3.6  TᵢG  replied to  Drakkonis @5.3.5    last year

I agree that AI (like everything else) will be abused.   And it will absolutely be part of military arsenals.  

As for what AI is today, 'statistical probability machine' is not correct at the detail level, but it is on the right track and an okay summary.   Ultimately the choices made by AI are the result of weighted factors ... they will choose the 'answers' with the highest probability of being correct (highest aggregate weights).   The way in which those weights are determined, however, is inherently a rather cool algorithm of directly applied mathematics (linear algebra, matrix operations, partial derivatives, etc.).   The most commonly known form of AI today is machine learning with neural networks.   Although there are many variants of this approach and many other tools that fall under the general category of AI, the neural networks remain as the core for AI today.

A neural network essentially is a high-dimensional mathematical model that can capture information as a large network of weights (weighted transitions/links/arcs).   So, for example, if we think of how this is applied to a board game, a single move will be recorded in different aspects across this network.  It is as though the information about the move (in context) is fractured into many dimensions and the factor for each dimension is integrated in the neural network (adding and/or decreasing weights throughout).

After considerable training, the neural network will eventually be able to make predictions based on data it has never seen.   The accuracy of the prediction is a result of its complex network of weights.    And while this may sound fantastic, it does indeed work.   This is the first time in computer science history that people are producing automatons whose functionality is more a result of learning (the data) than the programming (the functions).   The programming is actually more focused on the abstract model (and hyperparameter tuning) and NOT a bunch of IF statements to direct the flow to an answer.

But there is no reasoning here.   It is all essentially a very complex pattern recognition mechanism.   The choices are ultimately the 'answers' that produce the highest aggregate weights given an input pattern.
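To make "the highest aggregate weights given an input pattern" concrete, here is a toy Python/numpy sketch of a two-layer network's forward pass. The sizes and numbers are made up and nothing has been trained, so the output is meaningless, but the mechanics (matrix multiplies plus a squashing function, then pick the highest score) are the entire prediction step.

    # Toy forward pass: the prediction is the output with the highest aggregate weight.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input -> hidden weights (untrained)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)    # hidden -> output weights (untrained)

    def predict(x):
        hidden = np.tanh(x @ W1 + b1)                # spread the input across many dimensions
        scores = hidden @ W2 + b2                    # aggregate the weighted evidence
        return int(scores.argmax())                  # the 'answer' with the highest score

    print(predict(rng.normal(size=4)))               # meaningless until the weights are learned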

 
 
 
Drakkonis
Professor Guide
5.3.7  Drakkonis  replied to  TᵢG @5.3.6    last year
As for what AI is today, 'statistical probability machine' is not correct at the detail level, but it is on the right track and an okay summary.

I assumed such. That is, while 'statistical probability' may not reflect the minutia, I suspect it is sufficient in explaining effect. It should be noted, however, that I am speaking of desired outcomes by specific actors with specific goals rather than probability of a particular thing being true. 

But there is no reasoning here.   It is all essentially a very complex pattern recognition mechanism.   The choices are ultimately the 'answers' that produce the highest aggregate weights given an input pattern.

That is my understanding as well, which is why I used probability as a means of explanation. In my opinion, you aptly explain the weakness of the current iteration of AI. I refer to your use of Trump combined with AI as an example. The supposed AI in your example is not using reason to determine whether or not what Trump does or says is valid. It isn't even aware of such a distinction (for values of 'aware'). Rather, it is simply non-self aware code that attempts to carry out its design parameters set by its creators. That is the weakness of AI. It isn't a tool for truth, but for agendas. 

 
 
 
Drakkonis
Professor Guide
5.3.8  Drakkonis  replied to  TᵢG @5.3.6    last year
The programming is actually more focused on the abstract model (and hyperparameter tuning) and NOT a bunch of IF statements to direct the flow to an answer.

I find this fascinating. I've seen numerous vids about little machines trying to solve problems, like how to get from A to B most efficiently. Since my programming ability is barely above the "Hello, world" level, I am hardly an expert in any sense of the concept. However, I have a hard time with your statement here. Again, I know next to nothing about AI, but it seems to me that while AI isn't IF/THEN statements itself, it is a manager of such. That is, it (the AI) sends the little machine on a certain route at random. It records success or failure, or percentage of same, and tries again. It doesn't choose routes based on anything. That is, it doesn't contemplate what would be more likely than something else. Not at first, anyway. But, as you say, it's pattern recognition. No more self aware or aware of what it is doing than a pinball game is aware of what it's doing. 

I agree that AI (like everything else) will be abused.   And it will absolutely be part of military arsenals.  

Agreed. Which is why Musk's appeal makes no sense. Surely he must know that China doesn't give a damn about any consequences from AI. They will only see it as an advantage. A method of control, which it is. 

 
 
 
TᵢG
Professor Principal
5.3.9  TᵢG  replied to  Drakkonis @5.3.7    last year
Rather, it is simply non-self aware code that attempts to carry out its design parameters set by its creators. That is the weakness of AI. It isn't a tool for truth, but for agendas.

I would say that it carries out the content of the data (what it learns from) according to the tuned model designed by developers.

 
 
 
Drakkonis
Professor Guide
5.3.10  Drakkonis  replied to  TᵢG @5.3.9    last year
I would say that it carries out the content of the data (what it learns from) according to the tuned model designed by developers.

Not sure how that differs significantly from what I said. That is, I don't disagree with it. I'm just not sure what distinction you are making. 

 
 
 
TᵢG
Professor Principal
5.3.11  TᵢG  replied to  Drakkonis @5.3.8    last year
I've seen numerous vids about little machines trying to solve problems, like how to get from A to B most efficiently.

Generally that falls under the class of 'optimization algorithms' but, that said, the backpropagation portion of classical neural networks is absolutely an optimization algorithm (iteratively optimizing over a multitude of variables).

... but it seems to me that while AI isn't IF/THEN statements itself, it is a manager of such.

Yeah, that is the profound change of paradigm. The programming basically channels the data through the neural network, compares the predictions of the network and then applies corrections from the intended result to update the myriad weights in the network. All of the predictive logic comes from applying a case (input to the neural network), propagating the "fractured" information of the case throughout the network and then summing up the weights to produce the prediction. If you looked at the logic (the IFs, etc.) of this algorithm you would find absolutely nothing that could indicate what the network will predict given a data case. All of the predictive logic is encoded in weights and the weights are learned through millions of iterations with tiny adjustments. The resulting AI is, in a very true sense, the product of its upbringing. Almost entirely nurture with very little nature.
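A minimal sketch of that loop, with an invented rule standing in for real training data: feed the cases through, compare the prediction with the intended result, and nudge each weight a tiny amount. After enough iterations the rule lives in the weights, not in any IF statements.

    # Tiny training loop: the 'logic' ends up encoded in the learned weights.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))                    # training cases
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.1         # intended results (a made-up rule)

    w, b, lr = np.zeros(3), 0.0, 0.01                # weights start as a blank slate
    for _ in range(5000):                            # real systems use millions of iterations
        pred = X @ w + b                             # propagate the cases through
        err = pred - y                               # compare with the intended result
        w -= lr * (X.T @ err) / len(X)               # tiny correction to each weight
        b -= lr * err.mean()

    print(np.round(w, 2), round(b, 2))               # ~[2.0, -1.0, 0.5] and ~0.1: learned, not programmed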

To possibly clarify by example, consider the greatest chess program available today:  AlphaZero (actually MuZero but AlphaZero is more widely known).    AlphaZero accepts as input the current board state (and a history of states) and predicts the best next move, the value of this move (likelihood to lead to a win) and the next board state.    We (human beings) will be able to understand the input board state and the outputs, but between the input and output is a complex network (two actually) of weighted edges that make absolutely no sense.   It makes no sense because of the number of nodes (hundreds of thousands) and weighted edges (hundreds of millions).  We could write programs to navigate this network and we still would not be able to explain why the neural network produced the output it did.   We can, however, hypothesize to some degree.   All of what we have considered the core logic of the algorithm is encoded in the network and emerges only when an input is fed into it.
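A schematic of that interface (not DeepMind's code; the feature size here is an illustrative assumption): an encoded board state goes in, and out come a probability for every candidate move plus a value estimate for the position. All of the "chess knowledge" would live in the weights, which in the real system number in the hundreds of millions.

    # Schematic policy/value interface: board state in, move probabilities + value out.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    class PolicyValueNet:
        def __init__(self, n_moves, n_features=64, seed=2):
            rng = np.random.default_rng(seed)
            self.W_policy = rng.normal(size=(n_features, n_moves))  # stand-in for a deep network
            self.W_value = rng.normal(size=(n_features, 1))

        def predict(self, board_features):
            policy = softmax(board_features @ self.W_policy)           # probability for each candidate move
            value = float(np.tanh(board_features @ self.W_value)[0])   # likelihood this position leads to a win
            return policy, value

    net = PolicyValueNet(n_moves=4672)                 # 4672: the move encoding reported for AlphaZero's chess policy head
    board = np.random.default_rng(3).normal(size=64)   # stand-in for an encoded board state
    policy, value = net.predict(board)
    print(int(policy.argmax()), round(value, 3))       # the 'best' move index and its value, per the weights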

Thus:

It doesn't choose routes based on anything. That is, it doesn't contemplate what would be more likely than something else. Not at first, anyway. But, as you say, it's pattern recognition. No more self aware or aware of what it is doing than a pinball game is aware of what it's doing. 

Yes.  The actual programmed logic does not choose routes ... the "route" is an emergent property of the neural network and the route changes as the network learns.   The route is driven by the accumulated weights (the data).

Agreed.

When computers first emerged, people were afraid of them too.   And those who predicted computers would be misused were of course correct.   Any tool is likely to be misused (and of course properly used) because that is how human beings roll.

 
 
 
TᵢG
Professor Principal
5.3.12  TᵢG  replied to  Drakkonis @5.3.10    last year
Not sure how that differs significantly from what I said. That is, I don't disagree with it. I'm just not sure what distinction you are making. 

The tuned model is the design (the algorithms for normalizing the data, the number of layers, the number of neurons and edges, etc.) and the tuning of the design (hyperparameters which will cause the model to learn at a particular rate, determine sensitivity of activation functions, mitigate over/under fitting, etc. during learning).

The tuned model is like a brain devoid of any information.

The content of the data, once processed, is a brain with data.   It is, in effect, the collection of weights on the edges.

The tuned model is the work of human designers.    The content of the data is raw data from which the AI learns.   All decisions made by the AI will be determined by the content of the data from which it learned.
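A short sketch of that split: the design and hyperparameters below are the human work, while the weight matrices are the part that only acquires content from data. Names and values are assumptions for illustration, not any real system's configuration.

    # The tuned model vs. what is learned: the design is human-made, the weights come from data.
    import numpy as np

    design = {                      # the work of human designers
        "layers": [32, 16],         # number of layers / neurons per layer
        "activation": "tanh",       # sensitivity of the activation function
        "learning_rate": 0.001,     # how fast the model learns
        "dropout": 0.1,             # mitigates overfitting
    }

    def build_untrained(design, n_inputs, n_outputs, seed=4):
        # A 'brain devoid of information': the right shape, but random weights.
        rng = np.random.default_rng(seed)
        sizes = [n_inputs] + design["layers"] + [n_outputs]
        return [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

    weights = build_untrained(design, n_inputs=10, n_outputs=2)
    print([w.shape for w in weights])   # (10, 32), (32, 16), (16, 2): structure without knowledge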

 
 
 
Drakkonis
Professor Guide
5.3.13  Drakkonis  replied to  TᵢG @5.3.12    last year
The tuned model is like a brain devoid of any information.

The content of the data, once processed, is a brain with data.   It is, in effect, the collection of weights on the edges.

The tuned model is the work of human designers.    The content of the data is raw data from which the AI learns.   All decisions made by the AI will be determined by the content of the data from which it learned.

Again, I'm speaking at a disadvantage, but I would not compare AI to a human brain. That is, not above merely sensory information. Also, again, it is difficult to think about this subject without anthropomorphizing it. In my understanding, an AI is in no way cognizant of what it 'knows'. There is no 'awareness'. Although it may not be technically accurate, an AI is, at heart, a probabilistic machine, as far as I can tell. It is assigned a desired outcome by a human mind and, based on sophisticated algorithms, attempts to maximize desired results based on that desired outcome.

This differs from a human mind, in my opinion, in that the human mind has a level of processing AI doesn't have. Consciousness. While the human mind may process pure data in the manner an AI can, although not nearly as fast, the human mind can and will make associations based on that data that AI will never be able to make. For instance, an AI will never understand how an image of a soldier with a baby impaled on a bayonet, as opposed to an image of a child being dog-piled by puppies, will affect human sensibilities. Yes, it will certainly predict how a human would respond to such images, but it won't understand. 

Once again, the problem here is anthropomorphism. We tend to think of AI as a sentient being. It isn't. No matter how sophisticated it may be, it's just code. The real danger is not that we will accidentally create Skynet; it is that we will develop a tool that makes it possible to subjugate the many to the few in a manner that we won't even notice. 

 
 
 
TᵢG
Professor Principal
5.3.14  TᵢG  replied to  Drakkonis @5.3.13    last year
Again, I'm speaking at a disadvantage, but I would not compare AI to a human brain.

I did that as a communication tool.   It was merely an analogy ... not a comparison.   Not an attempt to suggest current AI is an artificial representation of the brain.

This differs from a human mind, in my opinion, in that the human mind has a level of processing AI doesn't have. Consciousness.

Again, I was distinguishing the model (the brain) from the data corpus (what has been learned).

I was not suggesting that current AI is a model of the brain. 

Once again, the problem here is anthropomorphism. We tend to think of AI as a sentient being. It isn't.

Just do not know why you are on this notion but you have extrapolated waaaaaay beyond what I wrote and my intent.   We are on a totally different topic now ... one on which I have yet to opine.

 
 
 
Drakkonis
Professor Guide
5.3.15  Drakkonis  replied to  TᵢG @5.3.14    last year
Again, I was distinguishing the model (the brain) from the data corpus (what has been learned). I was not suggesting that current AI is a model of the brain. 

Thank you, but I already knew that was your position. It was obvious from what you have previously said. My statements were simply context for meaning, not argument against your position. 

Just do not know why you are on this notion but you have extrapolated waaaaaay beyond what I wrote and my intent.   We are on a totally different topic now ... one on which I have yet to opine.

I am on 'this notion' because I don't think people understand AI. You should consider the idea that others, such as myself, might speak of issues related to this subject that you yourself do not address. In other words, you aren't the subject. 

 
 
 
TᵢG
Professor Principal
5.3.16  TᵢG  replied to  Drakkonis @5.3.15    last year
In other words, you aren't the subject. 

I did not suggest I was the topic.  

I have now lost interest in continuing.

 
 
