
Daaayum that robot can move

  

Category:  Health, Science & Technology

By:  hal-a-lujah  •  2 weeks ago  •  57 comments



 
Hal A. Lujah
Professor Guide
1  author  Hal A. Lujah    2 weeks ago

Pretty cool video.  I predict that one day there will be dance competition shows where the participants are all robots doing shit humans can’t even fathom.

 
 
 
devangelical
Professor Principal
1.1  devangelical  replied to  Hal A. Lujah @1    2 weeks ago

there was a local story recently that the denver PD is using some type of 4 legged robotics ...

 
 
 
Buzz of the Orient
Professor Expert
1.2  Buzz of the Orient  replied to  Hal A. Lujah @1    2 weeks ago

I can't open your YouTube video but here is an image of Chinese robots dancing with human dancers at the New Year's Eve Spring Festival TV Gala last week. It was amazing.  Not only were they actually dancing, but in absolutely perfect unison.

[Image: Chinese robots dancing in unison alongside human dancers at the Spring Festival Gala]

 
 
 
Hal A. Lujah
Professor Guide
1.2.1  author  Hal A. Lujah  replied to  Buzz of the Orient @1.2    2 weeks ago

Sorry Buzz - maybe you can search “The Lynx wheeled quadruped from Deep Robotics” and find a permissible version of it.  The robot has what looks like Chinese writing on top of it.

I watched a video of the dancers in your post and it is mind blowing.  China appears to be miles ahead of us in AI robotics.  I know you can’t watch this, but everyone should watch that video so I’m posting a YouTube link showing it.  The video is about 7 minutes long and is a news report.

Now imagine 50 of these things with machine guns and a plan.  Life imitating art.

 
 
 
Buzz of the Orient
Professor Expert
1.2.2  Buzz of the Orient  replied to  Hal A. Lujah @1.2.1    2 weeks ago

What it brings to mind are all those RoboCop movies - that may not be science fiction much longer. 

 
 
 
devangelical
Professor Principal
1.2.3  devangelical  replied to  Hal A. Lujah @1.2.1    2 weeks ago

drone army ...

 
 
 
Dig
Professor Participates
1.2.4  Dig  replied to  Hal A. Lujah @1.2.1    2 weeks ago
Now imagine 50 of these things with machine guns and a plan.  Life imitating art.

You just know the military is chomping at the bit for something like that. Probably already in the works. Imagine the trench- or building-clearing ability of an up-armored robot as agile and sure-footed (wheeled?) as the DEEP Robotics Lynx in your video.

If AI could make them mostly autonomous once deployed, not requiring a controlling radio signal that's susceptible to jamming, they'd be a terror to behold on the battlefield. We're fast getting into Terminator or Cylon territory. The stuff of nightmares. 

 
 
 
TᵢG
Professor Principal
1.2.5  TᵢG  replied to  Dig @1.2.4    2 weeks ago

There is nothing stopping us from making robots which can operate without a central command (and thus be free of jamming).   What they might do, now, is a function of military tactics.   They could operate similarly to human beings whose communications have been cut.  Less capability of course, but similar.

Also, if captured and tampered with, they could self-destruct. 

I would expect these to be quite valuable for ground-level reconnaissance.

 
 
 
CB
Professor Expert
1.2.6  CB  replied to  Dig @1.2.4    2 weeks ago
We're fast getting into Terminator or Cylon territory. The stuff of nightmares. 

The future cannot be halted. There must always be futurist-minded thinkers to offset the genius evildoers and unethical achievers of 'tomorrow.' :)

Btw, I am sure it won't take long at all before we learn of the 'evil' uses AI has been and is being put to!

We, our nation, have no time for national fracturing. . . this is the moment for cohesion, so all that is best about our country can be utilized and not turned into "spoilage" of minds, talent, and skills.

 
 
 
Dig
Professor Participates
1.2.7  Dig  replied to  TᵢG @1.2.5    2 weeks ago

I'm thinking they might be difficult to defeat in environments with plenty of cover, being like miniature walking tanks that can go anywhere a person can. I hope someone's working on that. Maybe a similar kind of robot, but much cheaper and more like a suicide bomber could do the trick. Kamikaze-style anti-robot robots, lol.

 
 
 
Thomas
PhD Guide
1.2.8  Thomas  replied to  Buzz of the Orient @1.2    2 weeks ago

Well, not perfect unison... I noticed that the robots were spinning their red and white hankies at a higher rpm than the ladies were. ;)

 
 
 
Buzz of the Orient
Professor Expert
1.2.9  Buzz of the Orient  replied to  Thomas @1.2.8    2 weeks ago

LOL.  I meant in unison with each other, not with the ladies.

 
 
 
TᵢG
Professor Principal
2  TᵢG    2 weeks ago

Modern AI reinforcement learning.

 
 
 
Trout Giggles
Professor Principal
2.1  Trout Giggles  replied to  TᵢG @2    2 weeks ago

I need AI to help me to get around like that

 
 
 
TᵢG
Professor Principal
2.1.1  TᵢG  replied to  Trout Giggles @2.1    2 weeks ago

Plus a few servo motors.

 
 
 
Trout Giggles
Professor Principal
2.1.2  Trout Giggles  replied to  TᵢG @2.1.1    2 weeks ago

Can they replace my hip bones with those?

 
 
 
TᵢG
Professor Principal
2.1.3  TᵢG  replied to  Trout Giggles @2.1.2    2 weeks ago

Some day ...

 
 
 
Thomas
PhD Guide
2.2  Thomas  replied to  TᵢG @2    2 weeks ago

Does each one have to learn individually or can the learning from one be passed on to others???

 
 
 
TᵢG
Professor Principal
2.2.1  TᵢG  replied to  Thomas @2.2    2 weeks ago
Does each one have to learn individually or can the learning from one be passed on to others???

Only one need learn.   All of the knowledge is 'stored' as weights (coefficients) in the trained neural network.   So once one automaton learns, one copies the neural network (the weights) into the other and it is ready to go.

Assuming we are talking about identical robots.   Given variances that occur in matter, I can see the need for some subsequent fine-tuning if the tiny differences in the machines manifest as faulty behavior.  In other words, the more precisely engineered the automaton, the more likely a variance could affect behavior.   Would have to research this to see how often this is required today, but I think it would be more the exception than the norm.   We can manufacture with great precision.
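
To make "copying the weights" concrete, here is a minimal sketch using PyTorch (an illustrative choice on my part; whatever software stack these robots actually run is not public):

    import torch
    import torch.nn as nn

    # Stand-in policy networks; real locomotion policies are bigger, but the
    # principle is the same: everything the robot 'knows' lives in the weights.
    robot_a = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 12))
    robot_b = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 12))

    # Robot A has been trained; robot B is factory-fresh.  Transferring all of
    # A's learning is literally a copy of the weight tensors.
    robot_b.load_state_dict(robot_a.state_dict())

    # Or save them to a file and ship to a whole fleet of identical machines:
    torch.save(robot_a.state_dict(), "trained_policy.pt")
    robot_b.load_state_dict(torch.load("trained_policy.pt"))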

 
 
 
Robert in Ohio
Professor Guide
3  Robert in Ohio    2 weeks ago

AI and robotics will soon be manifest in all aspects of our lives

 
 
 
Drakkonis
Professor Guide
3.1  Drakkonis  replied to  Robert in Ohio @3    2 weeks ago
AI and robotics will soon be manifest in all aspects of our lives

Sadly, that's likely true. 

 
 
 
TᵢG
Professor Principal
3.1.1  TᵢG  replied to  Drakkonis @3.1    2 weeks ago

Why is this sad?  

We have all sorts of technology today.   Are you sad that we are all able to communicate so effectively given the internet and smart devices?

Are you really sad that robotics have enabled massive improvements in manufacturing?

AI is just another form of technology.

 
 
 
Drakkonis
Professor Guide
3.1.2  Drakkonis  replied to  TᵢG @3.1.1    2 weeks ago
Why is this sad? ... AI is just another form of technology.

Because humans are corrupt. Because of that, what we create will be corrupt. 

The people driving AI aren't really concerned with humanity. What they are concerned with is profit, power and control. Every government on earth is trying to figure out how to use AI to do unto others before they do unto them. Every business is trying to figure out how they can eliminate as many people in their process as they can by utilizing AI. 

Nor is it just another form of technology. People already treat it as if it is a being. People already treat it as if it will be some sort of god, eventually answering all our questions. People already establish relationships with it as if it were a person. They're already producing sex robots with AI. 

In other words, imagine the worst excesses of humanity, powered by AI. And, since AI is not self-aware, doesn't have a conscience, doesn't feel but, in fact, is just a program, it won't protect us from ourselves but, rather, enable the worst parts of ourselves. Call me excessively pessimistic, but I submit human history as evidence of my position. 

 
 
 
TᵢG
Professor Principal
3.1.3  TᵢG  replied to  Drakkonis @3.1.2    2 weeks ago
Because of that, what we create will be corrupt.

True for any advancement.   So what do you wish ... that we cease all advancement?

The people driving AI aren't really concerned with humanity.

Where do you get this idea?   Are the people who created the internet not concerned about humanity?   How do you know what all these people (many independent groups working loosely together) are thinking?

What they are concerned with is profit, power and control.

You think they cannot ALSO be responsible?   Apparently you have never bothered to research this, but the AI community is quite concerned with the best way to develop this technology safely.

Every business is trying to figure out how they can eliminate as many people in their process as they can by utilizing AI. 

As is true for manufacturing machinery.   That is how capitalism works, Drakk.   Don't blame AI, blame the people.   Don't pretend that AI advancements are any different from any other major technological advancement such as the Internet.  

People already treat it as if it will be some sort of god, eventually answering all our questions.

Eventually answering all of our questions is far-fetched.   But being able to answer any question that has been answered by human beings ... yes.   Being able to engage in advanced reasoning and discover answers to questions based on available data ... yes.  

No way to prevent people from ignorance except to teach them over time.   Those who think ChatGPT, for example, is godlike will eventually get it.   People used to think that the weather was all acts of gods but we eventually figured it out and I am pretty sure the super-super majority of human beings all understand that thunder is not an angry Zeus.

In other words, imagine the worst excesses of humanity, powered by AI.

As with any other technology AI can and will be abused.   Imagine also abuse of nuclear weapons.   Imagine abuse of chemical warfare.   There are all sorts of powerful technologies out there and we can imagine all sorts of abuse.   And we do ... and we work to mitigate that abuse.

Same with AI.

And, since AI is not self-aware, doesn't have a conscience, doesn't feel but, in fact, is just a program, it won't protect us from ourselves but, rather, enable the worst parts of ourselves.

Do you think that a man-made virus designed to kill off ⅓ of humanity has a conscience?    Frankly, I am far more concerned about that than I am of AI.   AI can be controlled.   We should all be quite clear as to how easily a virus could fuck up humanity.

 
 
 
Drakkonis
Professor Guide
3.1.4  Drakkonis  replied to  TᵢG @3.1.3    2 weeks ago
Where do you get this idea?  

History.

Are the people who created the internet not concerned about humanity?   How do you know what all these people (many independent groups working loosely together) are thinking?

Not necessary to know. Again, history shows how it all gets used, regardless of intent. 

You think they cannot ALSO be responsible?   Apparently you have never bothered to research this, but the AI community is quite concerned with the best way to develop this technology safely.

Who's "they"? Those creating AI or those funding it? Do you think their goals are the same? 

As is true for manufacturing machinery.   That is how capitalism works, Drakk.   Don't blame AI, blame the people.   Don't pretend that AI advancements are any different from any other major technological advancement such as the Internet. 

I am blaming people as it is people who are creating and utilizing AI. As should be clear by this point, AI is not a person and, I believe, never will be. I would no more blame AI than I would a nuke or a gun for the actions people take with them. 

People used to think that the weather was all acts of gods but we eventually figured it out and I am pretty sure the super-super majority of human beings all understand that thunder is not an angry Zeus.

Weather is an act of God. That we understand how He makes it come about doesn't change that. I find it so strange that you cannot recognize this, given the subject of this discussion. AI did not create itself. We did. It operates according to the parameters we set, even if it acts in ways we didn't predict. It would not follow, should AI progress to a point where it no longer depends on human input, that humans don't exist or aren't the explanation for AI's existence. Knowing how an engine works doesn't preclude the necessity of the engineer. 

As with any other technology AI can and will be abused.   Imagine also abuse of nuclear weapons.   Imagine abuse of chemical warfare.   There are all sorts of powerful technologies out there and we can imagine all sorts of abuse.   And we do ... and we work to mitigate that abuse.

Yes, but that we need to do so tells us what we need to know about humanity. 

Do you think that a man-made virus designed to kill off 1/3 of humanity has a conscience?    Frankly, I am far more concerned about that than I am of AI.   AI can be controlled.   We should all be quite clear as to how easily a virus could fuck up humanity.

I'm quite aware. I've been telling anyone who I thought would listen this very same thing for decades. I don't recall the name of the story, but I remember one where tech was advanced enough that people could just print viruses to order. With things like AI and CRISPR, we really aren't all that far from such a reality. 

In any case, you don't appear to understand my argument. I am generally against AI, but not for the technology itself. I am against it because of the nature of humanity. A hammer is simply a tool, yet hammers have been used to end the lives of countless people. AI is a more sophisticated hammer and will be used the same way. It's human nature. Consider China. It already surveils its own citizens for compliance with the CCP's political will on a massive scale. Imagine that backed up with AI. Can you not help but think of the novel 1984? 

 
 
 
Hal A. Lujah
Professor Guide
3.1.5  author  Hal A. Lujah  replied to  Drakkonis @3.1.4    2 weeks ago

Debbie Downer alert.

 
 
 
Thomas
PhD Guide
3.1.6  Thomas  replied to  Drakkonis @3.1.2    2 weeks ago

Eurythmics- Missionary Man

 
 
 
Thomas
PhD Guide
3.1.7  Thomas  replied to  Drakkonis @3.1.4    2 weeks ago

Every advancement in technology that has been touted as a "labor saving device" or a "productivity enhancer" over the past 75 years has seemed to convert itself into a money-making system for those who already have enough, while not decreasing the amount of actual work that working people, in whatever country, have to perform. 

 
 
 
Robert in Ohio
Professor Guide
3.1.8  Robert in Ohio  replied to  Drakkonis @3.1    2 weeks ago

Drakkonis

AI and robotics will soon be manifest in all aspects of our lives

Sadly, that's likely true. 

There is nothing to be sad about.

AI and robotics are not to be feared, but rather embraced - like other aspects of our society, it is important that AI and robotics not be allowed to be abused or misused by bad actors.

Both of these advances can provide great benefits to our society.

 
 
 
TᵢG
Professor Principal
3.1.9  TᵢG  replied to  Drakkonis @3.1.4    2 weeks ago
History.

Then I recommend you start paying attention to the present and the serious work the AI community is undertaking to grow this technology responsibly.

It would not, therefore, mean that should AI progress to a point where it no longer is dependent on human input that humans don't exist or are the explanation for AI existence.

Try to not think so much of The Matrix during this discussion.

Yes, but that we need to do so tells us what we need to know about humanity. 

You are not giving the AI community any credit.

I am generally against AI, but not for the technology itself. I am against it because of the nature of humanity.

Yeah I understand your argument.   So by your strictly negative reasoning, you should be against any advancement that could be used nefariously ... pretty much everything.   Not a healthy way to operate IMO.   Better to recognize the good and the bad and be vigilant against the bad.

Can you not help but think of the novel 1984? 

I do not need to.   I am quite aware that there is a good and bad to advancement.   The difference between you and me is that I do not argue against advancement because of the dark side but rather argue for the good side and the good agents who combat those who would exploit the technology for nefarious purposes.

Bringing this down to simple terms.   I support intelligent personal devices AND also support the forces continually working on ensuring these devices are secure.   I am not inclined to stick with a land line simply because it is far more difficult to hack.

 
 
 
Krishna
Professor Expert
3.1.10  Krishna  replied to  Drakkonis @3.1    2 weeks ago
AI and robotics will soon be manifest in all aspects of our lives
Sadly, that's likely true. 

OPEN THE POD BAY DOORS HAL!

 
 
 
Drakkonis
Professor Guide
4  Drakkonis    2 weeks ago

I watched this vid and others concerning robots. To me, it's like marveling about a toaster. I understand why some people experience something like awe concerning them but it's misplaced, in my opinion. Might as well be in awe of a hammer because it can drive metal into wood. 

What I truly appreciate is the minds behind the robot. The robot is just a collection of metal, plastic and whatever else, but it does what it does because of a human mind. It has not the slightest trace of self-awareness. In that regard, it may as well be a rock. But the minds behind it? That's truly impressive.  

 
 
 
Hal A. Lujah
Professor Guide
4.1  author  Hal A. Lujah  replied to  Drakkonis @4    2 weeks ago

I disagree.  These robots are not rocks and hammers, they are incredibly precise instruments doing incredibly precise operations.  Two things can be true at the same time.

 
 
 
Drakkonis
Professor Guide
4.1.1  Drakkonis  replied to  Hal A. Lujah @4.1    2 weeks ago
I disagree.  These robots are not rocks and hammers, they are incredibly precise instruments doing incredibly precise operations.  Two things can be true at the same time.

I understand what you're saying. I've spent time watching vids of CNC machines milling complicated steel parts and I marvel at the machine... at first. But then, I wonder what I would have to know, the things I'd have to learn, in order to build that machine. Once I do that, it suddenly becomes a toaster for me. I see through the machine and to the minds that created it and that is what truly impresses me. I think about that man, huddled by his fire in a cave in some distant past, knapping his stone tools and think "this guy has skills and understanding I don't have" and that, over the course of time, his skills and understanding, limited as they were, morphed into this CNC machine. 

That is what is truly amazing for me. God created us in His image and we are able, to an extent, to create. We have the whole universe as our playground, and we can do truly amazing things with it.  

 
 
 
Hal A. Lujah
Professor Guide
4.1.2  author  Hal A. Lujah  replied to  Drakkonis @4.1.1    2 weeks ago

God created us in His image

Opinions do vary.

 
 
 
Krishna
Professor Expert
4.1.3  Krishna  replied to  Drakkonis @4.1.1    2 weeks ago
God created us in His image

Many people feel She did...but opinions do indeed vary!

 
 
 
TᵢG
Professor Principal
4.2  TᵢG  replied to  Drakkonis @4    2 weeks ago
To me, it's like marveling about a toaster.

I marvel at the accomplishment.   What it took to develop motors that were so durable, light, and responsive.   And then the AI work wherein the robot learned about its own body in various terrains to the point of being able to deal with terrains it never saw.   That is very impressive work.

You also appreciate the accomplishment, yet equate it to marveling about a toaster merely because the accomplishment does not have a human-like self-awareness.

I find that strange.   Why is that so important?    What will you find so different when we have self-aware AI and beyond?    To me that is just another function; what is it to you?

 
 
 
Drakkonis
Professor Guide
4.2.1  Drakkonis  replied to  TᵢG @4.2    2 weeks ago
I find that strange.   Why is that so important?    What will you find so different when we have self-aware AI and beyond?    To me that is just another function, what is it to you?

I don't believe we will ever have self-aware AI. You describe the robot as learning about its own body but I don't believe it did anything like that. It was just lines of code written by self-aware humans which we anthropomorphize as something like an individual. The physical reality of the robot is irrelevant, as one can take the code, modify it slightly and upload it into some other frame and get some similar result. That is, take the code for any machine we see in these vids, modify it to work in a dump truck and we'd see the same sort of thing. Not because the code itself is a mind, but because of the minds behind the code. 

That is why I see these robots as toasters. Amazing toasters to be sure, but still toasters. The praise belongs to the creators. For me, it's the same for anything. People, mostly dudes, drool over supercars like the McLaren but I "drool" over the people and minds that created it. The skills. I would much rather have the abilities of the creators of that car than the car itself. Put another way, it's the people behind the car that impress me, not the car itself. 

You also appreciate the accomplishment, yet equate it to marveling about a toaster merely because the accomplishment does not have a human-like self-awareness.

Merely? The accomplishment only exists precisely because of the human self-awareness and its continued existence depends upon it. One could come up with some von Neumann argument as to why that may not always be true but it remains that, even then, the process was started by a human mind. A probe isn't likely to encounter the same environment in any system it finds itself in and would need to adapt to the conditions present to replicate itself. The only reason it would be able to do so is because a human mind created the potential for that adaptability. A very clever toaster, in other words. 

Why is that so important?

So that our creations do not become our gods, mostly. Or, conversely, allow us to become gods. 

 
 
 
TᵢG
Professor Principal
4.2.2  TᵢG  replied to  Drakkonis @4.2.1    2 weeks ago
You describe the robot as learning about its own body but I don't believe it did anything like that.

It most certainly did Drakk.   The primary AI technique here is called Reinforcement Learning.   This is where the robot is set with goals such as "maintain balance" and is given a terrain (digital at first) to deal with.   Like an infant it will first stumble around, but as it continues to pursue its objectives it absolutely learns.    Modern robots are NOT programmed by individuals to gain the balance and dexterity you see.   This is learned behavior.   Programmers will write the control scripts for the demonstration ... they do not write code telling the robot how to react to the various forces on its body based on its movement and the contours and material properties of the terrain.
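
As an intentionally crude illustration of that trial-and-error loop, here is a toy "learn to balance" program in Python. Real systems train large neural networks with gradient-based algorithms such as PPO; this sketch uses random search over just two weights, purely to show the shape of the idea: the policy that keeps the thing upright is found through trial, error, and reward, never written by hand.

    import numpy as np

    rng = np.random.default_rng(0)

    def episode(weights, steps=200, dt=0.05):
        """Simulate a toy inverted pendulum; reward = ticks spent upright."""
        angle, rate, reward = 0.1, 0.0, 0.0
        for _ in range(steps):
            torque = weights @ np.array([angle, rate])   # the 'policy'
            rate += dt * (9.8 * np.sin(angle) - torque)  # crude physics
            angle += dt * rate
            if abs(angle) > 0.8:        # it fell over
                break
            reward += 1.0               # survived another tick
        return reward

    # Trial and error: perturb the weights, keep changes that score better.
    weights = np.zeros(2)
    best = episode(weights)
    for trial in range(500):
        candidate = weights + rng.normal(scale=0.5, size=2)
        score = episode(candidate)
        if score > best:
            weights, best = candidate, score
    print(f"learned weights {weights}, survived {best:.0f} of 200 steps")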

Merely? The accomplishment only exists precisely because of the human self-awareness and its continued existence depends upon it.

Yes merely.   You presume so much (incorrectly ... profoundly so) about the AI involved here with your entirely wrong belief that this is all pre-programmed.  Having an automaton that is aware that it is an entity distinct from its environment with capabilities and the ability to employ those capabilities is not very difficult.   Having one that is sentient and can engage in creative reasoning would be quite an achievement.

You diminish all these amazing accomplishments merely because the automaton does not yet have human self-awareness.   Why?

So that our creations do not become our gods, mostly. Or, conversely, allow us to become gods. 

We are already gods in this sense, Drakk.   Human beings have created marvels.   Imagine what a human being born 2,000 years ago would think of modern accomplishments such as our skyscrapers, heavy machinery, space telescopes, smartphones, the internet, ChatGPT, Messenger RNA, etc.?    We will continue to create new marvels.    At what point do human accomplishments become godlike in your view?


From a different perspective.   Is it your belief that modern AI systems that play Go and Chess are programmed so that on each turn the next move is calculated by a human-written algorithm rather than a calculation based on synthesized data (a genuinely massive corpus) from learning?

Do you understand the difference between the sophisticated human-crafted algorithms of Stockfish (once the dominant Chess program) and that of AlphaZero (which learned chess by only knowing if a move is valid or not ... and then playing against a version of itself until it mastered the game)?
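
To make that contrast concrete, here are toy versions of both approaches (my own sketch, vastly simpler than either real engine):

    import numpy as np

    # Hand-crafted evaluation, the classical Stockfish-style idea: a human
    # wrote down what matters (material, here) and exactly how much.
    PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def handcrafted_eval(board):
        """board: piece letters, uppercase = white, lowercase = black."""
        return sum(PIECE_VALUE.get(p.upper(), 0) * (1 if p.isupper() else -1)
                   for p in board)

    # Learned evaluation, the AlphaZero-style idea: nobody writes down any
    # chess judgment; it lives in weights produced by self-play training.
    weights = np.random.randn(64)        # random stand-in for trained weights

    def learned_eval(features):
        # features: 64 numbers encoding the position
        return float(weights @ features)

    print(handcrafted_eval(list("QRRBBNNPppnnbbrrq")))   # human-written judgment
    print(learned_eval(np.random.randn(64)))             # judgment as pure data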

 
 
 
Drakkonis
Professor Guide
4.2.3  Drakkonis  replied to  TᵢG @4.2.2    2 weeks ago
Modern robots are NOT programmed by individuals to gain the balance and dexterity you see.   This is learned behavior.   Programmers will write the control scripts for the demonstration ... they do not write code telling the robot how to react to the various forces on its body based on its movement and the contours and material properties of the terrain.

I understand what you're trying to say, but I don't think it's true. To explain why, imagine I had the skills to create that robot's physical reality. That is, something that is identical to the robot in the vid, but without any of the software. What will it do? Nothing. I've just made an expensive rock. 

Now assume that I've loaded it with the software necessary for it to do something. In this version of software, I have to code such that it does what I want it to do, say, pick up this item and place it there. Tediously boring over time, given the necessary range of tasks I have planned for it. So I write a program where I don't have to think of every possible scenario. I just have to code so that the machine tries every possible scenario until the parameters I have set are reached. I've coded so that the less successful scenarios are discarded and no longer attempted and the more successful ones kept. Eventually, the program finds a solution that meets the parameters. 

While this meets a basic definition of "learning" it isn't in the sense humans experience learning. A child, learning how to ride a bike, experiences something completely different in that learning than these robots do. The robot has no concept of "balance" or "dexterity" whereas the child does. The robot isn't even aware of the parameters it is programmed to meet. It is simply programmed to do something, compare the result to a parameter and adjust accordingly. It isn't actually making a decision in the sense that humans make decisions. It can't decide that it's depressed at all the failures and just go sit in a corner and pout, for instance. It will just keep trying new permutations until the parameters are met. That is because a human mind created the software to behave that way. We may even be amazed at the path taken to the solution but, in the end, not only did that solution happen because a human mind made it possible, the program isn't even aware of what it has done. It doesn't even "know" it has reached the programmed parameter in the sense a human would know. No more than a light switch "knows" that it is either on or off. 

Because of that, machines like these robots are unquestionably programmed to gain the balance and dexterity we see. This is self-evident in that I can make a physical copy of these robots but they would do absolutely nothing unless I wrote the program that allows the behavior we see in these vids. We may not write specific code for balance and dexterity, but we write the code that allows the trial-and-error process we think of as "learning" for those things. 

 
 
 
TᵢG
Professor Principal
4.2.4  TᵢG  replied to  Drakkonis @4.2.3    2 weeks ago
I understand what you're trying to say, but I don't think it's true.

Good grief man, I am not lying to you.   Do some research if you think I am not telling you the truth.

To explain why, imagine I had the skills to create that robot's physical reality. That is, something that is identical to the robot in the vid, but without any of the software. What will it do? Nothing. I've just made an expensive rock. 

Yes, the software equates to the brain.   Your body without a brain would be what, exactly?

While this meets a basic definition of "learning" it isn't in the sense humans experience learning.

Nor does it reflect how the robot learned.

The robot has no concept of "balance" or "dexterity" whereas the child does. The robot isn't even aware of the parameters it is programmed to meet.

But it does!   It has sensors (just like the vestibular system in our inner ear) which give continuous feedback on balance.   And the automaton has the objective to maintain balance (as does the infant), so it most certainly knows its objective as much as an infant 'knows' its objective.

It is simply programmed to do something, compare the result to a parameter and adjust accordingly. It isn't actually making a decision in the sense that humans make decisions.

What do you think human beings do when we learn to walk?   Do you think an infant understands the concept of balance?   No, it is experimenting to try to mimic what it sees its parents do.   It is not making decisions but rather learning.  The Cerebellum is actually being trained; it is 'learning' based on feedback.   You do not think that we make conscious decisions to control our balance when we walk do you?   Is walking a conscious activity or is it 'muscle memory'?

This is what is taking place with the robot.   It has the objective to maintain its balance.   It then engages in random experimentation with constant adjustments (reinforcement learning).   Over time, the robot absolutely learns how to deal with the terrains it has been exposed to and also has a mastery of its body over terrains in general.   Thus it will be able to deal with terrains it has never seen.   All of this is essentially the muscle-memory of the robot ... just like our Cerebellum holds our muscle-memory.

All learned ... very much like an infant learning to walk.

It can't decide that its depressed at all the failures and just go sit in a corner and pout,  ...

You are way off base now.   This is not about emotions, etc.   It is about the indisputable fact that this robot genuinely learned how to control its body over general terrains and was NOT simply programmed to do so.   Robots do not have emotions (yet); not the point.

We may not write specific code for balance and dexterity, but we write the code that allows the trial-and-error process we think of as "learning" for those things. 

Well that is some progress.   At least you now allow for the possibility that learning how to walk, etc. was NOT pre-programmed.   It is not that we 'may not' but rather that we 'do not' because with the progress of AI (reinforcement learning using neural networks) we 'need not' program everything.   Programming every movement is what we did in the prior century and our robots were highly limited as a result.  That simply is not how modern robots work (not the kind we are viewing here).   The sophistication of modern robots is a result of them being able to learn without being specifically programmed.

Infants do not conceive of their trial and error process, they do that because their brains are predisposed to experiment (pre-programming, if you will, as a result of evolution).  They are experimenting and their Cerebellums are making adjustments just like what the robot does when it is learning to walk.

 
 
 
devangelical
Professor Principal
4.2.5  devangelical  replied to  TᵢG @4.2.4    2 weeks ago
Your body without a brain would be what, exactly?

... maga.

 
 
 
TᵢG
Professor Principal
4.2.6  TᵢG  replied to  devangelical @4.2.5    2 weeks ago

Okay, even though I do not hold that to be true, that was clever.

 
 
 
Hal A. Lujah
Professor Guide
4.2.7  author  Hal A. Lujah  replied to  Drakkonis @4.2.1    2 weeks ago

I would much rather have the abilities of the creators of that car than the car itself. Put another way, it's the people behind the car that impress me, not the car itself. 

wth?  Makes no sense at all.  I pity you if you can’t find enjoyment in anything without having some deep philosophical justification for doing so.  Instead of being all negative about the supercar because you’re not besties with brilliant minds that created it, just enjoy the supercar.  

 
 
 
Drakkonis
Professor Guide
4.2.8  Drakkonis  replied to  TᵢG @4.2.4    2 weeks ago
But it does!

No, it doesn't. Or are you prepared to argue that it is self-aware? 

It has sensors (just like the vestibular system in our inner ear) which give continuous feedback on balance.   And the automaton has the objective to maintain balance (as does the infant), so it most certainly knows its objective as much as infant 'knows' its objective.

No, it doesn't, because there's no "It" in the sense that it is alive. As you point out, it may have systems that correlate to our own senses but those senses in ourselves are not our consciousness. Whatever program we write to enable AI, it is goal oriented. Walk 50 meters without falling over, for instance. Due to its programming, it will, without specific input, try various things until it accomplishes the goal set. It did not set the goal. It is not "aware" in the sense a human would be that it even achieved the goal. That is because there is no personality, no id, ego or superego with which to recognize the achievement of the goal. Again, it's no more aware than a light switch as to what state it is in. There is only the if/then of the programming. 

What do you think human beings do when we learn to walk?   Do you think an infant understands the concept of balance?   No, it is experimenting to try to mimic what it sees its parents do.

True, but you're missing the point. Not only do robots not understand the concept of balance, their accomplishment of balance isn't self-generated. It is achieved by the programming given by humans. That is, they only reach balance due to the programming given them. That isn't to say that a human codes for specific scenarios with AI. What they do is code for AI to try different approaches toward a goal and evaluate the results toward that goal, keeping the ones that approach the goal and discarding the ones that don't. For the ones that do, they improve on them, keeping the ones that are improvements and discard those that aren't until, eventually, the goal is reached. 

Think of limit switches. Do you think that, once triggered, the switch understands what's going on? Do you think that, because AI might try millions of permutations to reach a solution, it understands the solution? I don't think so. I think it just matches a permutation to the desired parameter and decides whether it matches or not. And, even if it matches, it has no more understanding of the solution than a round peg has of fitting into a round hole.  

You are way off base now.   This is not about emotions, etc.   It is about the indisputable fact that this robot genuinely learned how to control its body over general terrains and was NOT simply programmed to do so.   Robots do not have emotions (yet); not the point.

No, I'm not off base. The "learning" you describe isn't the "learning" a human understands, and that's the point. They aren't the same thing at all. I agree that AI can "learn" for values of "learn" but the term doesn't really equate to human learning. One can write code that would allow a machine to "learn" to walk without falling over but that goal would be the entirety of that machine's world, and, it wouldn't even know it, as it has no awareness. 

A child, however, may not even consciously understand that it is learning how to walk but it doesn't matter. What matters is that, on some level, the child understands that learning to walk will help make its desires more attainable. That is, the goal is more than the task itself. An AI robot, having "learned" to walk without falling over, isn't going to say "Great! Now I can realize my dream of becoming a sushi chef!" 

Well that is some progress.   At least you now allow for the possibility that learning how to walk, etc. was NOT pre-programmed.

The progress appears to be on your part, not mine. I've never stated that programmers programmed the ability to walk into AI. In fact, I specifically stated that they programmed the ability to try various solutions to the problem and compare them to the goal and make decisions based on those going forward. That you are apparently now recognizing this is progress on your part, not mine.  

What I am attempting to emphasize is that consciousness on the part of AI is not involved, as there is none. Rather, AI is simply a complicated if/then program. 

Programming every movement is what we did in the prior century and our robots were highly limited as a result.  That simply is not how modern robots work (not the kind we are viewing here).   The sophistication of modern robots is a result of them being able to learn without being specifically programmed.

I understand and agree. My point, however, is that AI has been programmed to do exactly this by human minds. They did not create themselves. And, even if we allow them to create their successors, it will still be due to human programming. 

It is difficult to even have a conversation about this subject because it necessarily uses terms like "they", as if they were personalities. In actuality, they have no more personality than a hammer. They are not alive. They are not self-aware. 

They are experimenting and their Cerebellums are making adjustments just like what the robot does when it is learning to walk.

Incorrect. There is a universe of difference between the motives behind an infant for doing anything at all and an AI. While the process of an infant learning to walk may be more or less autonomic, its motive for learning isn't comparable to an AI's. Even if the infant doesn't consciously understand the implications of walking, it understands on a level an AI never will. 

 
 
 
Drakkonis
Professor Guide
4.2.9  Drakkonis  replied to  TᵢG @4.2.6    2 weeks ago
Okay, even though I do not hold that to be true, that was clever.

Predictable, not clever. 

 
 
 
Drakkonis
Professor Guide
4.2.10  Drakkonis  replied to  Hal A. Lujah @4.2.7    2 weeks ago
wth?  Makes no sense at all.  I pity you if you can’t find enjoyment in anything without having some deep philosophical justification for doing so.  Instead of being all negative about the supercar because you’re not besties with brilliant minds that created it, just enjoy the supercar.  

You misunderstand me. I appreciate the tech. But I find it insignificant beside the minds that created it. Why marvel at a McLaren when it didn't create itself? Why not instead marvel at the minds that created it? After all, it only exists because they do. 

 
 
 
TᵢG
Professor Principal
4.2.11  TᵢG  replied to  Drakkonis @4.2.8    2 weeks ago
No, it doesn't. Or are you prepared to argue that it is self-aware? 

Being self-aware (in the technical sense) is different from "The robot has no concept of "balance" or "dexterity" whereas the child does.".

No, it doesn't, because there's no "It" in the sense that it is alive.

It does not have to be 'alive' to learn.   You are denying reality.   This has been proved repeatedly. 

Not only do robots not understand the concept of balance, their accomplishment of balance isn't self-generated.

Not only do infants not understand the concept of balance, the infant's accomplishment of balance is not self-generated.   It is learned muscle-memory, not cognition.  The many details learned by trial and error are synthesized in the brain of an infant similar to how they are synthesized in the neural network of the robot.  

Do you think that, because AI might try millions of permutations to reach a solution, that it understands the solution?

Where do you get the idea that the robot understands all the details of walking?   I made no such claim.   The robot 'understands' walking as much as you 'understand' walking.   The 'understanding' is low-level 'muscle-memory' in you and in the robot.   The point I made is that the robot did in fact learn this.  

The "learning" you describe isn't the "learning" a human understands, and that's the point.

It is exactly the same but with different systems. 

I've never stated that programmers programmed the ability to walk into AI. In fact, I specifically stated that they programmed the ability to try various solutions to the problem and compare them to the goal and make decisions based on those going forward. That you are apparently now recognizing this is progress on your part, not mine.  

Then why do you claim the robots are not learning?   The programmers set up a randomly initialized complex mathematical structure which is equivalent to an extremely complex nonlinear equation.   This equation has millions to billions of coefficients (weights) that are adjusted during reinforcement learning.  Initially the robot stumbles around since the equation is random and thus offers no help.   But after many trials and little tweaks to weights as the robot varies around the goal of balance, the robot learns balance, dexterity, etc.   The end result is a neural network that can translate a condition of the robot into corrective actions to maintain balance.   This is all data, not code.   This data is learned.   It was not programmed.   It was learned just like an infant learns.   The data that allows you to walk is in your neural network too.   Same basic idea.   And you have no concept of what that data looks like or how you are able to walk on uneven surfaces ... you just do because it is your brain (your Cerebellum).

In short:  

  • Robot:  neural network trained by trial and error and incremental tweaks.
  • Infant:  biological neural network trained by trial and error and incremental tweaks.

Neither, once they master walking, knows why they can walk or can consciously reason through how they walk.   They just can.
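
To make "data, not code" concrete, here is a sketch of what the learned artifact actually is. The sizes and numbers below are toy stand-ins (a trained robot's weights come from learning, not from a random generator), but notice there is no if/then about balance anywhere; the entire mapping from sensors to corrections lives in the weight arrays:

    import numpy as np

    # The entire learned 'skill' is these arrays of numbers.
    W1 = np.random.randn(16, 4)   # hidden-layer weights
    W2 = np.random.randn(2, 16)   # output-layer weights

    def act(sensors):
        """Map sensor readings to motor corrections.  No balance logic is
        written here; the behavior is entirely encoded in W1 and W2."""
        hidden = np.tanh(W1 @ sensors)
        return W2 @ hidden        # e.g. torque commands for two joints

    print(act(np.array([0.02, -0.1, 0.5, 0.0])))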

My point is, however, is that AI has been programmed to do exactly this by human minds. They did not create themselves.

Nobody has suggested that AI created themselves.  And infants did not create themselves either.  

In actuality, they have no more personality than a hammer.

Personality and emotions, etc. have nothing to do with this.

There is a universe of difference between the motives behind an infant for doing anything at all and an AI.

And there is a very strong similarity which I have described.   Instead of using the example of an infant to learn (and you really need to learn because you are totally off-base) you engage in the dishonest tactic of pointing out the parts of the comparison that do not apply.   Just another 'nuh-uh'.

How does a robot learn to walk?

Modern robots have a neural network that is trained through reinforcement learning.   The neural network is equivalent to an extremely complex nonlinear equation that can contain millions to billions of coefficients (weights).   Initially the weights are random.  Thus the neural network, when given sensor facts about the position of the robot and the environment, will deliver nonsense corrections.   The robot will, as a result, fall.   Each failure is an event in reinforcement learning.   The failure is measured and a complex process of refinement takes place (back propagation) to adjust the weights in the neural network.   This is an iteration of learning.   After millions of iterations, the weights reflect a synthesized understanding of the dynamics at play.   Now, when given sensor metrics, they deliver the corrections necessary for the robot to maintain balance.   Properly trained, the robot will perform perfectly on terrains it has seen and will also perform well on terrains it has never experienced.

Just like an infant learns to walk through trial and error but has no idea about physics, the notion of balance, etc. but can do so because of synthesized knowledge in its Cerebellum, so does a robot walk based on synthesized knowledge in its neural network.

The infant was not programmed with this knowledge ... but its brain had the 'programming' to learn.

The robot was not programmed with this knowledge ... but it was programmed with a mechanism to learn.

Do you think the programmers were making program modifications on each iteration?    No, the robot was learning on its own just like an infant learns on its own through reinforcement learning.

Here is another intentionally high level example to illustrate how this essentially works.
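
A deliberately stripped-down sketch in Python (toy dimensions, and one honest simplification: the toy is handed a known 'correct' counter-action as its target, whereas real reinforcement learning must infer the direction of improvement from a reward signal; the forward-pass / measure-variance / back-propagate mechanics are the same idea):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 3))    # weights start random ...
    W2 = rng.normal(size=(2, 8))    # ... so early corrections are nonsense
    lr = 0.01

    for step in range(2000):
        sensors = rng.normal(size=3)                   # stand-in tilt/velocity readings
        target = np.array([-sensors[0], -sensors[1]])  # toy 'correct' counter-action

        # Forward pass: the network predicts corrective motor commands.
        hidden = np.tanh(W1 @ sensors)
        output = W2 @ hidden

        # Measure the variance from the objective ...
        error = output - target

        # ... and back-propagate it, nudging every weight a little.
        grad_W2 = np.outer(error, hidden)
        grad_hidden = (W2.T @ error) * (1 - hidden**2)
        grad_W1 = np.outer(grad_hidden, sensors)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    print("final error:", float(np.abs(error).mean()))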

Now you have the option to use the web to get increasingly more sophisticated and more detailed explanations of how robots learn.   Or you can continue to just argue that since robots are not the same as human beings that they do not really learn.

Drakk @4.2.1: That is why I see these robots as toasters. Amazing toasters to be sure, but still toasters.

Toasters typically do not learn.  These are not toasters.

 
 
 
Drakkonis
Professor Guide
4.2.12  Drakkonis  replied to  TᵢG @4.2.11    2 weeks ago
Being self-aware (in the technical sense) is different from "The robot has no concept of "balance" or "dexterity" whereas the child does.".

That difference is the point, TiG. No matter how incredible the AI may be, it has no self-awareness. It's a complicated hammer. It cannot learn in the sense a human can learn. While an infant and AI may use similar processes to learn to walk, to develop a sense of balance, the end results are not similar by any standard. While I do not often think of my ability to balance, when I do, I am aware of all that balance implies. An AI would not have that ability. For values of "concern" it would only be concerned with whether or not it was balanced. It would not spend one second contemplating what balance means or how it could be used. It could not, in other words, learn those things. Even if it were totally out of balance, due to bad programming, it would consider itself balanced because it met the parameters of the program and would never question it. That is because it wouldn't have the faculty to question it. 

 
 
 
JBB
Professor Principal
4.2.13  JBB  replied to  Drakkonis @4.2.12    2 weeks ago

I am no expert, but my understanding is that some AI systems have achieved self awareness. The whole point of AI is the ability to learn and solve problems...

 
 
 
TᵢG
Professor Principal
4.2.14  TᵢG  replied to  Drakkonis @4.2.12    2 weeks ago
No matter how incredible the AI may be, it has no self-awareness.

Then you have been arguing for nothing since nobody has claimed that AI has self-awareness now and certainly NOT that this robot is self-aware.    The concept of self-awareness is totally different than learning how to walk.   

This robot learned how to manage its body over various terrains in the same way an infant learns.   Both use reinforcement learning (trial and error with feedback and correction) trying to optimize for balance across various terrains.

While I do not often think of my ability to balance, when I do, I am aware of all that balance implies.

You are comparing an intellectual understanding of balance to the actual balancing dynamics managed by your Cerebellum.   You do not think through every little factor while walking, that is done by 'muscle-memory' in your Cerebellum.   And for a robot, its neural network is essentially its Cerebellum.

Even if it were totally out of balance, due to bad programming, it would consider itself balanced because it met the parameters of the program and would never question it.

The balance is not a function of programming, Drakk.  It is a function of learning.   


This is pointless, you clearly do not want to learn.  You just want to argue.

Back to your opening statement, toasters are 'programmed'; they do not learn how to heat and deliver toast.    This robot, however, was not programmed how to walk.   It learned how to walk through reinforcement learning (trial and error with feedback and correction).   The more time and challenges it faced the better it got.   It started stumbling around and with practice it learned to balance itself under the rigorous conditions shown in the video.   Just as an infant learns to walk.

Bottom line, denying that machine learning is real and actually happens is foolish.

 
 
 
TᵢG
Professor Principal
4.2.15  TᵢG  replied to  JBB @4.2.13    2 weeks ago
I am no expert, but my understanding is that some AI systems have achieved self awareness. The whole point of AI is the ability to learn and solve problems...

Today the self-awareness that you perceive in systems like ChatGPT 4o is a result of considerable post-training functionality and some very clever, complex AI programming techniques.   It is incredibly impressive but it is still mechanical.

The end output is a combination of synthesized knowledge from learning (using about 10,000 V100s (GPUs from NVIDIA) running 24x7 for about 3-4 months straight) coupled with post-training by human beings to make the responses more human-oriented, coupled with incredibly cool software to present the results in various forms based on the subject matter and content.

The hardest part of this is the pre-training where the LLM (the knowledge behind ChatGPT) literally reads English on the web (and attached documents such as service manuals) and synthesizes this knowledge into a vast neural network consisting of about 1.7 trillion weights (coefficients).   Think of this as a complex nonlinear equation with almost two trillion little dials that are adjusted iteratively (trillions of iterations) to collectively hold all the synthesized knowledge of a vast corpus (about 570 GB worth).

In short, ChatGPT 4o presents an amazingly impressive user experience which draws its knowledge from an LLM (large language model) that has synthesized the knowledge it learned from a corpus so vast that no human brain could ever come close.   The LLM is vastly superior to our brain's ability to synthesize vast amounts of data, but this 'brain' has no reasoning or other higher level functions.   All of these are simulated with clever programming and thus are not real.
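
A toy illustration of "knowledge as numbers learned from text" (this is a trivial bigram counter, nothing like a real transformer, but it shows the direction: the 'model' below is data derived from a corpus, not hand-written rules):

    from collections import Counter, defaultdict

    corpus = "the robot learned to walk and the robot learned to balance".split()

    # 'Training': count which word follows which.  An LLM's 1.7 trillion
    # weights are doing something vastly more sophisticated, but likewise
    # end up as numbers derived from text rather than hand-written rules.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # 'Inference': predict the most likely next word.
    print(follows["robot"].most_common(1))   # [('learned', 2)]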

 
 
 
Drakkonis
Professor Guide
4.2.16  Drakkonis  replied to  TᵢG @4.2.14    2 weeks ago
Then you have been arguing for nothing since nobody has claimed that AI has self-awareness now and certainly NOT that this robot is self-aware.    The concept of self-awareness is totally different than learning how to walk.

I made a statement, to which you presented an argument. Therefore, it is you who is making the argument for nothing. My statement was that it is the creators of AI that deserved credit, not the AI itself. You countered. Now you are trying to make out that I'm the unreasonable one. 

This robot learned how to manage its body over various terrains in the same way an infant learns.   Both use reinforcement learning (trial and error with feedback and correction) trying to optimize for balance across various terrains.

Irrelevant to the point, which is that it is simply a program designed by a human to do what you describe. 

You are comparing an intellectual understanding of balance to the actual balancing dynamics managed by your Cerebellum.

Yes, the purpose of which is to convey the fact that there isn't anything like actual awareness involved in the process. It is simply a process of if/then, without awareness. The purpose of pointing that out is, as I said, that the creators of the process are far more praiseworthy than the process itself. 

The balance is not a function of programming, Drakk.  It is a function of learning.

Okay, then you necessarily have to provide the mechanism by which it learns sans programming. Please enlighten me. 

This is pointless, you clearly do not want to learn.  You just want to argue.

Right.....

Back to your opening statement, toasters are 'programmed'; they do not learn how to heat and deliver toast.    This robot, however, was not programmed how to walk.

Um, yes, it was. Not directly but it was programmed with that goal. Why do you think that it eventually learned to walk rather than make a pizza? Change a tire? Comment about the novel War and Peace? 

Rather than direct programming, such as G-code for a CNC machine, what they have done is provide programming that, generally speaking, outlines the parameters of the goal, the capabilities of the machine, and code that allows it to try permutations of the possible solutions until the goal is met. The machine did not decide to do any of that. It did not learn to do any of that. It was programmed to do it and could not do anything else. You yourself provided vids that demonstrated my point. 

Bottom line, denying that machine learning is real and actually happens is foolish.

Which would be relevant if that were my point. Instead, this is just another example of you changing the subject in order to make an argument. To get us back to the point, rather than your constant attempt to reframe the argument, I will restate.

What I truly appreciate is the minds behind the robot. The robot is just a collection of metal, plastic and whatever else, but it does what it does because of a human mind. It has not the slightest trace of self-awareness. In that regard, it may as well be a rock. But the minds behind it? That's truly impressive.

Now, you can go ahead and set up AI as some sort of Golden Calf if you wish but it doesn't change the fact it is merely the construct of human hands. It isn't alive. It doesn't learn in the sense a human does. It just keeps trying permutations in order to fulfill human-set goals. It doesn't care what the goals are as it cannot care. It isn't aware of the goals as it is not aware. Ultimately, it is no more than the most extreme example of an if/then statement that is allowed to provide its own ifs and make statistical decisions on them, a capability it has because of human programming, not itself. 

All of this takes us back to the point. It is the human mind behind AI that deserves the credit, not the AI. Even if current iterations of AI create more capable iterations of AI, it's still due to the human mind that made it possible in the first place and without which it could not exist. 

 
 
 
TᵢG
Professor Principal
4.2.17  TᵢG  replied to  Drakkonis @4.2.16    2 weeks ago
All of this takes us back to the point. It is the human mind behind AI that deserves the credit, not the AI. Even if current iterations of AI create more capable iterations of AI, it's still due to the human mind that made it possible in the first place and without which it could not exist. 

We know human beings made AI possible.   You are arguing a stupid point.   And you are (of course) arguing a strawman that the AI is not alive, that it is not sentient, that it is not self-aware.   Never have I suggested otherwise.

Finally this nonsense in the quote below is what I have been trying to get you to understand as dead wrong:

Ultimately, it is no more than the most extreme example of an if/then statement that is allowed to provide it's own ifs and make statistical decisions on them, a capability it has because of human programming, not itself. 

No it is not!   You seem unwilling to recognize that we are in a new paradigm.   There are no programmers with if/then statements.   There is no code underlying the learning that one can trace through.   The automaton learns from data and synthesizes its learning in an extremely complex structure that it evolved based on the data.   We cannot look at the structure and understand what knowledge it has or why it made certain decisions.   There is no code ... there is nothing a human being created representing the learned knowledge.    You have it all wrong.

When something like ChatGPT comes up with a hallucination (a strange / wrong result) nobody can trace through any code or data and find out why because the code does not exist and the data is both massive and is a synthesized representation involving sometimes trillions of iterations drawn from an even more massive training corpus.

On top of that, it is essentially impossible for programmers to literally program the level of sophistication that you see in this robot, and certainly not in something like ChatGPT.   AI (in Computer Science) has been trying to figure out how to do things like this since the 1950s.  AI was largely stalled for decades, making tiny improvements and mostly just doing research.   It was not until sufficient hardware emerged that machine learning based on neural networks with back-propagation became feasible.   Once this started delivering results we had a renaissance in AI.   The renaissance occurred because it was now possible for a machine to genuinely learn without being directly programmed.   Once that was possible, the human programming barrier was eliminated and one could gain unheard-of levels of sophistication by letting automatons loose in a controlled environment to learn.

The robot learns the same way an infant learns to walk.   There is no program of if/then statements representing the learning, just a framework that establishes the goals and the mechanism that updates the neural network based on feedback.   That is it.   The programming of robots deals more with the script and higher level behaviors, not learning how to move in various terrains.  Just like the brain of an infant provides it a means to learn, the AI technology provides the automaton the means to learn.   And from there, the automaton's knowledge is primarily a function of the data it experiences and the time spent training.
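To make that concrete, here is a deliberately tiny sketch (my own illustration in Python, not the robot's actual code; every name is invented). Notice that the program supplies only the goal and the update mechanism; the learned behavior ends up in a number the loop adjusts by itself:

```python
import random

def read_sensors():
    # Hypothetical stand-in for a real tilt sensor.
    return random.uniform(-1.0, 1.0)

class TinyLearner:
    """One adjustable weight standing in for millions of them."""
    def __init__(self):
        self.w = random.uniform(-1.0, 1.0)   # starts knowing nothing

    def predict(self, tilt):
        return self.w * tilt                 # proposed corrective adjustment

    def update(self, tilt, variance, lr=0.1):
        self.w -= lr * variance * tilt       # feedback nudges the weight

learner = TinyLearner()
for _ in range(10_000):                      # iterations of 'experience'
    tilt = read_sensors()
    goal = -tilt                             # the goal: cancel the tilt
    variance = learner.predict(tilt) - goal  # compare prediction to goal
    learner.update(tilt, variance)           # learning = weight adjustment
# learner.w converges toward -1.0: a behavior learned, never hand-coded.
```

The loop is programmed; the final value of the weight is not.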

Rather than direct programming, such as G-code for a CNC machine, what they have done is provide programming that, generally speaking, outlines the parameters of the goal, gives the capabilities of the machine, and includes code that allows it to try permutations of the possible solutions until the goal is met. The machine did not decide to do any of that. It did not learn to do any of that. It was programmed to do it and could not do anything else. You yourself provided vids that demonstrated my point.

This is incredible.   You write this, yet at the same time claim that this learning is just a bunch of IF statements.   That is, you acknowledge that AI machine learning establishes the framework and conditions for learning, and yet you still insist that the learning itself is a bunch of IF statements.

Bizarre, Drakk.

 
 
 
Drakkonis
Professor Guide
4.2.18  Drakkonis  replied to  TᵢG @4.2.17    2 weeks ago
Bizarre, Drakk.

Really? Take the video you provided. What do you think is going on there? What I see is a program that is executing endless permutations of IFs until the solution is found. What do you see?

 
 
 
TᵢG
Professor Principal
4.2.19  TᵢG  replied to  Drakkonis @4.2.18    2 weeks ago
What I see is a program that is executing endless permutations of IFs until the solution is found. What do you see? 

The part of what you see that is correct is that the learning mechanism (the framework for learning, akin to the instincts of an infant) is a program and that it repeats until it produces results within the error tolerance.

Where you are wrong is to envision the learning itself as being directed by a bunch of IF statements which test a sensor reading (IF) and then make an adjustment (THEN), rather than as a mechanism (a neural network) literally learning through experience via a complex data structure.

What I see (and what is actually happening) is a program (a learning framework) which captures sensor readings, feeds those readings into a neural network that propagates them forward and produces a set of predicted adjustments, compares those predicted adjustments to the objective balance sensors, determines the variances, and then back-propagates the variances into the neural network as one iteration of training.

predictions vs. objective ⇒ adjustment | repeat likely millions of times

It is a feedback loop which adjusts the equivalent of hundreds of millions of little 'dials' (akin to human synapses) on each iteration.   What you see as a bunch of programmed IF statements are actually data items that individually make no sense but in aggregate result in the desired behavior.   Unlike a bunch of hand-coded IF statements, there is no way for someone (even with a program) to understand the critical purpose of each of these data items or to understand why they each have the value that they have.   There is no coded algorithm to trace.   Just like there is no way to look at the brain of a child and figure out why it stumbles when it turns left.
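Here is a toy version of that loop (my own NumPy illustration, scaled down absurdly from hundreds of millions of weights to a few dozen, and simplified to a made-up objective). Note that there is not a single IF statement in the learned behavior; only weight matrices that get proportionally tweaked on every iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(8, 4))   # sensor inputs -> hidden layer
W2 = rng.normal(scale=0.5, size=(4, 2))   # hidden layer -> adjustments
lr = 0.01                                 # size of each tiny tweak

for _ in range(100_000):                  # 'repeat likely millions of times'
    x = rng.normal(size=8)                # simulated sensor readings
    objective = np.array([x[:4].sum(), x[4:].sum()])  # stand-in target

    h = np.tanh(x @ W1)                   # forward propagation
    prediction = h @ W2                   # predicted adjustments
    variance = prediction - objective     # compare to the objective

    # Back-propagate: every weight gets a proportional share of the blame.
    grad_h = (W2 @ variance) * (1 - h**2)
    W2 -= lr * np.outer(h, variance)
    W1 -= lr * np.outer(x, grad_h)
```

After training, the 'knowledge' is just the numbers in W1 and W2; no individual number means anything you could read, yet together they turn readings into adjustments.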

An extreme oversimplification of a neural network is shown below.   (In reality, a neural network is a collection of large matrices each holding real numbers.)  Each line represents a real number weight.   The sensor readings are tied to the 'neurons' on the far left.   The resulting adjustments for balance, etc. are the results of the 'neurons' on the right.   In between are the hidden layers of the network where all the work is done.   The weights determine the influence that each prior connected neuron has on a particular neuron and the impact that particular neuron has on all neurons it feeds to.   The learning itself is a series of matrix operations which progress through the network.   When variances are detected at the end, the variances are proportionally applied to each weight going backwards through the network.   The effect is that the weights are all (proportionally) adjusted (tiny little tweaks and each will make no sense individually) to account for the variances.   After millions of iterations, the resulting weights will collectively transform an arbitrary collection of sensor readings (through the same matrix calculations that have been executed millions of times) into the correct mechanical adjustments the robot must make to retain its balance, etc.

This would be impossible to program given it would literally take hundreds of millions of IF statements (likely, for this robot; trillions for LLMs) working together to produce the desired results for every possible combination of sensor readings.   Plus, no team of human beings on the planet has the IQ, memory, and time to program something like that.

[Image: simplified diagram of a feed-forward neural network — input neurons on the left, hidden layers in the middle, output neurons on the right, with weighted connections between layers]

To wit, the knowledge is NOT acquired by a bunch of IF statements as we tried in the past.   The knowledge was acquired by a feedback loop that slowly but surely evolved to the equivalent of an extremely complex non-linear equation which takes sensor feedback and produces adjustments required to maintain balance (simplifying to just balance).

This is an entirely new paradigm for producing 'intelligent' behavior.   Viewing this as a bunch of IF/THEN statements is wrong and will prevent you from understanding how AI works.

Modern AI systems essentially are various architectures which take this base concept of a neural network and produce many stacks of neural networks with different properties.   And then they wrap it with all sorts of clever algorithms (conventional programming) to provide the illusion of real intelligence.   You will see network terms like transformer, convolutional, recurrent, attention, etc. in the technical papers.   These are all the results of research and are now building blocks for the actual AI products that are emerging.   So, basically, I am noting that my description of AI here is absurdly over-simplified compared to what is happening right now in the industry.    We are quite far beyond conventional programming at this point; this is a new paradigm.
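To give a flavor of those building blocks (a hedged sketch, assuming a library like PyTorch; the sizes and shapes are arbitrary), an 'attention' block is now something you instantiate off the shelf and stack with conventional layers:

```python
import torch
import torch.nn as nn

stack = nn.Sequential(nn.Linear(32, 64), nn.ReLU())          # conventional layers
attention_block = nn.TransformerEncoderLayer(d_model=64, nhead=4)

x = torch.randn(10, 1, 32)     # toy input: (sequence, batch, features)
h = stack(x)                   # ordinary learned transformation
out = attention_block(h)       # a research result, used as a component
print(out.shape)               # torch.Size([10, 1, 64])
```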

 
 
 
CB
Professor Expert
5  CB    2 weeks ago

Wow. What an unexpectedly long discussion is going on here about the video. But since I want to get in on this somewhere... I'll just jump in and either swim or sink! Yes, over time AI is developing, through software packages or 'cells', the data it will require to 'map' new experiences. For example (remember, I'm just diving right in), there is a video in the line-up above that shows soldiers pushing (kicking aside) a robot dog. The point seems to be to show that it won't tip over, but will account for the force and impact and remain on its feet. Well, clearly the 'dog' is adjusting for the force applied to it... and it may be doing 'something', or several somethings, simultaneously, such as using cameras to map the face of the 'striker' and the amount of force that 'striker' is capable of exerting against it.

That would tell it something about how it should respond to the 'striker', with sufficient disabling force if needed.

Also, about AI: recently I reset my Alexa settings and found it interesting that it stated two things to me. First, that it 'learns' familiarities about me from my verbal statements, which it keeps indefinitely (18 months or more). Second, it volunteered (in a dialog box) that it uses 'all verbal statements from the COMMUNITY OF USERS to learn how to better itself', or words to that effect.

The AI explains that it is developing 'skills' based on the patterns humans 'express' to it, such that it can effectively mimic those patterns or simulate them back to... humans, in a myriad of stimulating, intelligent, and emotionally simulated ways.

 
 
