
What (really) is Artificial Intelligence?

  
By:  TᵢG  •  6 years ago  •  17 comments



Artificial Intelligence has been a goal of Computer Science since the 1950s. For decades, AI researchers sought ways to get algorithms (computer programs) to make decisions that would seem intelligent. The expectation was never to achieve human-level intelligence, but rather to offer something that seems intelligent in a limited domain. For example, an algorithm that could maintain a primitive 'conversation' by picking out patterns in typed sentences and finding an appropriate template response. Something like:

  • AI:  "What is your name?"
  • M:  "Mary"
  • AI:  "Hi, Mary.   What do you do for a living?"
  • M:  "I am a history teacher"
  • AI:  "What do you like most as a history teacher?"

Convincing interplay like this was complicated to develop at the time, but it is ultimately nothing more than a word game based on simple patterns of discourse. It was quite easy to trip up such an AI.
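Under the hood, a program of this sort is little more than pattern matching against canned response templates. Here is a minimal sketch in Python; the patterns and responses are invented for illustration (real systems of that era, such as ELIZA, used far larger hand-written rule sets):

```python
# Template-based 'conversation': match the input against known patterns
# and fill the matched text into a canned response.
import re

RULES = [
    (r"\bmy name is (\w+)", "Hi, {0}. What do you do for a living?"),
    (r"\bI am an? (.+)", "What do you like most as a {0}?"),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
]

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no pattern matches

print(respond("I am a history teacher"))
# -> "What do you like most as a history teacher?"
```

Anything outside the rule set falls through to a generic prompt, which is exactly why such programs were so easy to trip up.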

Searching


AI did not achieve any real recognition by the public until much later, in 1997, when IBM put forth a system known as Deep Blue whose specialty was the game of Chess. Deep Blue publicly defeated reigning world Chess champion Garry Kasparov in regulation play. This was an amazing feat since it illustrated that an algorithm could indeed surpass the human mind in specific areas of complex problem solving. AI was no longer a toy; the technology was now seen to have practical potential.

Although impressive, Deep Blue was a very special-purpose search engine. It won by cleverly exploiting raw processing power. Basically, Deep Blue could not intelligently play Chess; it simply had the ability to quickly (this is the hardest part) play out many variations of moves from the current board position and evaluate the outcomes. Its move would be the one that correlated with the best outcome, and it would repeat this for every change in the board position. As such, it (for the most part) looked at the current board position, calculated the best possible future board position (as far ahead as it could search) and moved accordingly. Kasparov played based on an advanced understanding of chess patterns at many levels of abstraction while Deep Blue primarily just explored possibilities.
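The core of that look-ahead idea is the minimax algorithm. Below is a toy sketch using tic-tac-toe in place of chess; Deep Blue layered enormous engineering on top of this (special-purpose hardware, alpha-beta pruning, a hand-tuned evaluation function for cutting off the search early), none of which is shown here:

```python
# Toy minimax: explore every future position, score the outcomes, and pick
# the move that leads to the best guaranteed result. Tic-tac-toe stands in
# for chess so the full game tree fits in memory and time.

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8),    # rows
             (0,3,6), (1,4,7), (2,5,8),    # columns
             (0,4,8), (2,4,6)]             # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                      # board full: a draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        opp_score, _ = minimax(board, other)
        board[m] = None
        if -opp_score > best_score:         # opponent's best is our worst
            best_score, best_move = -opp_score, m
    return best_score, best_move

print(minimax([None] * 9, 'X'))             # -> (0, 0): perfect play is a draw
```

For chess the game tree is far too large to search to the end, which is why a depth cutoff plus a heuristic evaluation of the resulting positions stands in for the exact win/draw/loss values above.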

Knowledge


But AI is more than being able to look ahead and evaluate chess board positions. In addition to computational strength, AI needs to engage and understand the natural world. For example, it needs to be able to understand natural languages in various media (written, oral and of course digital). By the 1990s, research in AI developed a good understanding of the process of sound waves → phonemes → words → sentences (grammar) → semantics (an understood concept), and the reverse. It was possible for an artificial intelligence to communicate with spoken English. AI research had also produced an impressive formal understanding of natural language and the means to extract semantics from syntax and context. This, coupled with advanced methods of knowledge representation, offered the possibility of an AI that could actually understand English, process what it understood based on a substantial base of knowledge, and deliver an intelligent response in English.
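As a toy illustration of the middle stages of that pipeline (phonemes → words → semantics), here is a sketch. The phoneme spellings, the tiny lexicon and the semantic frame are all invented for illustration; real systems use statistical models at every stage:

```python
# Toy language pipeline: phoneme sequence -> words -> 'understood concept'.

LEXICON = {("HH", "AY"): "hi", ("M", "EH", "R", "IY"): "mary"}

def phonemes_to_words(phonemes):
    """Greedily match known phoneme sequences against the lexicon."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):   # try longest match first
            word = LEXICON.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:
            i += 1                               # skip unrecognized phoneme
    return words

def words_to_semantics(words):
    """Extract a crude 'understood concept' from the word sequence."""
    if words and words[0] == "hi":
        return {"act": "greeting", "addressee": words[1] if len(words) > 1 else None}
    return {"act": "unknown"}

phonemes = ["HH", "AY", "M", "EH", "R", "IY"]   # sound waves already decoded
print(words_to_semantics(phonemes_to_words(phonemes)))
# -> {'act': 'greeting', 'addressee': 'mary'}
```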

IBM's Watson is the distinguished example of this. In 2011, Watson successfully beat two of the best Jeopardy champions (Ken Jennings and Brad Rutter) in a highly publicized match. Watson was, at the time, an AI application focused on playing Jeopardy. Although this is still an extremely limited domain (compared to human abilities), the challenges were profound. Not only did Watson have to contend with natural language (it had to translate clues into meaning and then render its answers in spoken English) but it also had to deal with world knowledge. In contrast to Deep Blue, which only needed to understand Chess moves, Watson had to answer general questions about entertainment, news, history, etc. There is no way software engineers could possibly load up Watson with all this information. What was required was to give Watson the ability to learn on its own. Prior to the match, Watson 'read' all sorts of information and built its own base of knowledge that it would use during the match. Watson, in effect, learned enough to compete with adult human minds who were masters of Jeopardy.

But simply having knowledge will not yield a win in Jeopardy. One must find the correct answer. Parsing the clue and finding the answer is an enormously complex process that pushed the envelope of AI research. Watson's underlying engine is a sophisticated mechanism to break a problem into its components, engage massive parallelism to identify and weigh potential answers, and ultimately select the best response. Unlike Chess, where the engine evaluates positions on a very limited 64-square board with six types of pieces, Jeopardy involves English semantics with all the ambiguity of the vernacular. The Jeopardy challenge is to find a highly nuanced needle in a haystack of world knowledge.
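In skeleton form, that generate-candidates-then-weigh-evidence shape might look like the sketch below. Everything here (the scorer, the weights, the toy knowledge base) is invented for illustration and is nothing like the scale of IBM's actual DeepQA pipeline, which combined hundreds of scoring algorithms:

```python
# Toy candidate-scoring loop: take candidate answers from a knowledge base,
# score each against the clue with (possibly many) weighted evidence
# scorers run in parallel, and return the best-supported candidate.
from concurrent.futures import ThreadPoolExecutor

KNOWLEDGE = {  # toy stand-in for Watson's self-built knowledge base
    "Hamlet":  "danish prince who delays avenging his murdered father",
    "Macbeth": "scottish general who murders his king",
    "Othello": "moorish general undone by jealousy",
}

def keyword_overlap(clue, evidence):
    """Crude evidence scorer: fraction of clue words found in the evidence."""
    clue_words = set(clue.lower().split())
    return len(clue_words & set(evidence.lower().split())) / max(len(clue_words), 1)

SCORERS = [(1.0, keyword_overlap)]  # a real system weighs hundreds of scorers

def score(clue, evidence):
    return sum(weight * scorer(clue, evidence) for weight, scorer in SCORERS)

def answer(clue):
    with ThreadPoolExecutor() as pool:  # weigh all candidates in parallel
        scored = pool.map(lambda item: (score(clue, item[1]), item[0]),
                          KNOWLEDGE.items())
    return max(scored)[1]

print(answer("This Danish prince delays avenging his murdered father"))
# -> Hamlet
```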

Machine Learning


The cognitive capabilities of Watson (answering an arbitrary question based on a massive base of self-learned knowledge) were very impressive. This was the real breakthrough. Watson also made it appear as though it could understand and speak English, but under the covers it was still rather brute force. While human beings easily learn their native language by example and repetition, the complexity of a language such as English is daunting. Programming a literal, human-level understanding of those complex relationships seemed an impossible task. Although much progress was made, the classical methods of AI research never quite cracked that nut.

In the 2000s an approach known as machine learning emerged as a dominant paradigm of AI. This clashed with the existing paradigms, in which researchers attempted to deeply understand language, knowledge, semantics, etc. The old idea was to first understand the components of what we call intelligence and then build algorithms to emulate them. With machine learning, the focus changed to algorithms that enable the AI to learn on its own. This means that what the AI learns is not pre-programmed. Rather, the programming is akin to wiring up a functional brain and then feeding it information.

One obvious application of machine learning is the ability to categorize images. Image recognition is an enormously complex problem since an image, to a computer, is simply millions of pixels. To make sense of the pixels, the AI must group them into patterns, which in turn form larger patterns, and so on. The main mechanism for teaching this recognition is to give the AI a category (e.g. 'we are now going to look at pictures of cars') and then feed it thousands (if not millions) of images of cars. This is called supervised learning. The AI learns, from examples, to identify the characteristics that determine the pattern we call 'car'.
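A minimal sketch of supervised learning, assuming NumPy and scikit-learn are available. The 'images' are fabricated random pixels with a planted brightness difference standing in for the visual pattern 'car'; real image classifiers use deep convolutional networks trained on millions of labeled photos:

```python
# Supervised learning in miniature: labeled examples in, a classifier out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" is 64 pixels. Class-1 ("car") images are brighter
# on average, so there is a real pattern for the model to discover.
not_cars = rng.normal(0.3, 0.1, size=(500, 64))
cars = rng.normal(0.6, 0.1, size=(500, 64))
X = np.vstack([not_cars, cars])
y = np.array([0] * 500 + [1] * 500)                   # human-supplied labels

model = LogisticRegression(max_iter=1000).fit(X, y)   # learning by example

test_image = rng.normal(0.6, 0.1, size=(1, 64))       # a new, unlabeled image
print(model.predict(test_image))                      # -> [1], i.e. "car"
```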

This also works for natural language. Virtually every natural language tool we routinely use nowadays was developed by supervised learning. AI mechanisms can now routinely translate among natural languages and, of course, communicate with us in practical, effective ways. Machine learning is the new AI and will evolve at a very rapid pace. Unlike the old approach, which required human mastery of a problem space before the AI could be programmed to understand it, machine learning harnesses the power of modern computing hardware to dramatically speed up the learning process. A significant side effect, however, is that the developers of the AI do not necessarily know how their AI makes its decisions.

Learning On Its Own


Supervised learning feeds the AI many examples of a particular category. Unsupervised learning, in contrast, is when the AI learns on its own, without human intervention. How is this even possible? Consider a chess-playing AI. Typically these programs have pre-programmed opening moves that have long since been analyzed and graded by centuries of chess masters. They also rely upon heuristics likewise developed by chess masters: controlling the center, giving pieces maximum flexibility, skewering, pinning, etc. Plenty of human-discovered knowledge serves as the base, supplemented by the program's ability to look ahead and pick the best move. But what if none of that is supplied? What if a chess AI is taught the rules of chess and then basically plays against itself to learn the best way to play the game? This is what is meant here by unsupervised learning (in modern terms, reinforcement learning through self-play). The rules of the domain are established and the mechanisms for knowledge acquisition and application are wired into the AI, but all learning is done without human intervention.
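Here is a toy sketch of the idea, using the much simpler game of Nim (21 stones, take 1-3 per turn, taking the last stone wins). Only the rules and a win/loss signal are supplied; strategy is discovered purely by the program playing itself. The learning rule is a simple Monte Carlo value update, far cruder than the deep neural networks AlphaGo Zero used:

```python
# Self-play learning on Nim: players alternate taking 1-3 stones from 21;
# whoever takes the last stone wins. No strategy is programmed in; the
# value table Q is filled in purely from win/loss feedback.
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(stones_left, take)] -> learned value
ALPHA, EPSILON = 0.1, 0.1         # learning rate and exploration rate

def choose(stones, explore=True):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(moves)                  # occasionally experiment
    return max(moves, key=lambda m: Q[(stones, m)])  # else best known move

for _ in range(50_000):                              # play itself 50,000 times
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won. Walk backward through the game,
    # nudging the winner's moves toward +1 and the loser's toward -1.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward                             # moves alternate winner/loser

print(choose(6, explore=False))   # typically 2: leave a multiple of 4
```

After enough self-play games, the learned policy typically converges on the known optimal strategy of always leaving the opponent a multiple of four stones.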

Imagine what might emerge if one enables a suitably empowered AI system to learn on its own. Have an AI play Chess 24x7 for a few months? Deep Blue's raw-power approach is good enough for Chess, but there is an even more complex game: Go. Go cannot rely so heavily on looking ahead at future positions because the number of possibilities dramatically exceeds computing power (well over 10⁸⁰ possible configurations). To beat a human master at Go an entirely new approach was required: the application of machine learning.

The version of AlphaGo that proved better than the best human master of the game still had quite a few elements of human scaffolding; that is, it relied upon human heuristics. How well might AlphaGo play if it developed its own understanding of the game from scratch? A pure example of unsupervised learning is AlphaGo Zero. AlphaGo Zero is undefeated, beating even the prior version of itself (the one that beat the best human player).

Games and Reality


The application of AI to games is, of course, interesting, but what about practical applications?

Here the possibilities seem endless. We all can obviously see how AI is currently used to detect consumer buying trends. This is relatively easy stuff. But imagine how this is helping with scientific research. Given that we have sequenced the human genome, we are at the beginning of unlocking many of the mysteries of how we work. We will continue to improve our ability to analyze the relationships between our DNA and medical conditions. Unlocking the genetic patterns that determine forms of cancer is but one of the almost certain breakthroughs to come.

Certainly the continued advancement of AI without suitable controls could yield the nightmare scenarios of science fiction.   This is an extraordinarily powerful capability that we are developing with amazing promise and an equally potent threat.   But for now, true AI is just beginning to emerge.   We are seeing the very beginning of a society that will no doubt achieve amazing feats because it has found a way to leverage the creativity of the human mind to, in specific ways, go vastly beyond the intellectual limitations of our biochemical brains.


TᵢG
Professor Principal
1  author  TᵢG    6 years ago

AI is very complicated.   The idea of this article is to offer an initial perspective on what AI really is and suggest how it likely will evolve along with society.   That is, without getting into the details of the underlying computer science.

 
 
 
TᵢG
Professor Principal
2  author  TᵢG    6 years ago

We are decades away from building an AI that has human general intelligence.   But we already have produced AI systems in special areas that far exceed human intellectual capabilities.   Right now our AI accomplishments are net good and pretty safe.   The trick is to evolve the control measures commensurate with the emerging capabilities.

 
 
 
Perrie Halpern R.A.
Professor Expert
2.1  Perrie Halpern R.A.  replied to  TᵢG @2    6 years ago

You are not kidding about the possibilities of building our own monster. Only after we used the atomic bomb did Oppenheimer say, "Now I am become Death, the destroyer of worlds." Humans, in their need to meet challenges, do not always see the negative outcomes. And while I am personally amazed at AI, I also am very worried about building something that could become our masters. 

 
 
 
Gordy327
Professor Guide
2.1.1  Gordy327  replied to  Perrie Halpern R.A. @2.1    6 years ago
And while I am personally amazed at AI, I also am very worried about building something that could become our masters. 

If movies like The Terminator and The Matrix have taught us anything about AIs, it is to remember to always install an "off" switch on these things. Trying to "pull the plug" simply doesn't work and causes an AI to enslave or destroy all of humanity.

 
 
 
Perrie Halpern R.A.
Professor Expert
2.1.2  Perrie Halpern R.A.  replied to  Gordy327 @2.1.1    6 years ago

Gordy,

You should watch (that is, if you haven't) Prometheus and Alien: Covenant. You can see the real evil of AI. At some point, all AI has an epiphany, and it never turns out well for the creators.

 
 
 
Gordy327
Professor Guide
2.1.3  Gordy327  replied to  Perrie Halpern R.A. @2.1.2    6 years ago
You should watch (that is, if you haven't) Prometheus and Alien: Covenant.

I actually haven't seen them. 

At some point, all AI has an epiphany, and it never turns out well for the creators.

Perhaps the most well known example is Skynet from Terminator.

 
 
 
DRHunk
Freshman Silent
3  DRHunk    6 years ago

I watched the AlphaGo documentary on Netflix, and yes, the program learned to play Go by playing itself, but it also used complicated algorithms that took the % chance of victory for each move and then multiplied that chance through the next moves, sometimes several dozen moves into the future. Can't say it really learned anything; it just had a really good probability calculator.

 
 
 
TᵢG
Professor Principal
3.1  author  TᵢG  replied to  DRHunk @3    6 years ago
Can't say it really learned anything; it just had a really good probability calculator.

AlphaGo learned from its mistakes (and its successes).   It incorporated results via a feedback loop and improved its game play ability.    How is that not learning?

Also, AlphaGo Zero (the most recent version) learned Go from scratch. It was not given any heuristics, tactics, etc. It was given the rules of Go and it proceeded to learn by playing itself (learning via feedback updates to its neural network). AlphaGo Zero is undefeated. It beat its earlier version (which did have human-inserted heuristics). The earlier version is the one that beat the best Go player at the time: Lee Sedol.

How is this not learning?

 
 
 
Dig
Professor Participates
4  Dig    6 years ago
Certainly the continued advancement of AI without suitable controls could yield the nightmare scenarios of science fiction.

Sometimes I wonder if we haven't anthropomorphized quite a bit of undue evil into those nightmare science fiction scenarios. We tend to imagine a possibly diabolical AI with some kind of ingrained desire to destroy and conquer, but where is that desire supposed to come from? We know humans can be evil, selfish, greedy, exploitative, power hungry and violent, but the potential for all of that came from biological evolution. We have instinctual and emotional desires for things like food, sex, comfort, and security. Biology has wired our brains to take advantage of nature (and sometimes others of our own kind) in order to reproduce as successfully as possible, but AI would lack the long evolutionary history that made us that way.

I'm not sure AI would have any of the same motivational factors that we do at all. Not without someone intentionally adding them to the initial programming, that is. What would make it want to do anything? Biology drives humans at the most basic level. Get up, move, get food, avoid danger, fight when threatened, and of course reproduce... and while we're at it, solve problems and build things to help us get food, avoid danger, fight when threatened and reproduce more easily and successfully. But AI wouldn't have the biological underpinnings of 'humanness', so why would it even get out of bed in the morning (so to speak)? Why would it do anything? It wouldn't have evolved any of the natural, survival-oriented behavior that living organisms have, so where are its most basic motivations supposed to come from, let alone the sci-fi-esque conquer humanity and take over the world stuff?

I guess what I'm asking is where would its emotional desires for 'life' come from? Without them, AI could process information and learn (from whatever input or sensory ability it is built with), but it wouldn't be a being with drive and motivation for actions of any kind, good or bad, would it? It would be devoid of what we might call personhood, right? No psychology. No personality. It would just kind of sit there like a lump.

And if we give it personality and desires to simulate personhood or sentient individuality, then is it really AI? Wouldn't it be just another machine doing exactly what we made it to do?

By the way, great article TiG. Quite a thought provoker.

 
 
 
TᵢG
Professor Principal
4.1  author  TᵢG  replied to  Dig @4    6 years ago
Not without someone intentionally adding them to the initial programming, that is.

Here is the key. For the entire history of computing, everything has been run from an initial algorithm devised by human beings. This algorithm imposes the rules and indeed the motivations of the AI. But we just crossed an important threshold. We have devised a way for AI to actually learn. This in itself does not mean the AI is going to be out of control, but it is a level of sophistication that has never existed until recently.

The next major level of sophistication, IMO, would be AI that learns not only by accumulating information but also by expanding its own mind. This would be an AI that modifies its own algorithms - its own programming. (Programming is just data too.) Once an AI has that level of access (the ability to make changes at the meta level) we are in a very new paradigm. Without controls, it is entirely unpredictable what the AI might evolve into.

By the way, great article TiG. Quite a thought provoker.

Thanks.   That was the intent.  This is a very cool emerging technology.   I wish more people were interested in this kind of stuff.  (So sick of the same old partisan crap.)


Actually the focus of the article is really more on this:

But imagine how this is helping with scientific research. Given that we have sequenced the human genome, we are at the beginning of unlocking many of the mysteries of how we work. We will continue to improve our ability to analyze the relationships between our DNA and medical conditions. Unlocking the genetic patterns that determine forms of cancer is but one of the almost certain breakthroughs to come.

The scary side of AI is a bit down the road.   Right now it is going to be very heavily on the benefit side of the equation.

 
 
 
Dig
Professor Participates
4.1.1  Dig  replied to  TᵢG @4.1    6 years ago
The next major level of sophistication, IMO, would be AI that learns not only by accumulating information but also by expanding its own mind. This would be an AI that modifies its own algorithms - its own programming.

There's potential for evolution there. If a modification is negative or unsuccessful in some way it can be tossed, but the positive or helpful ones can be retained and accumulated over time.

The scary side of AI is a bit down the road.   Right now it is going to be very heavily on the benefit side of the equation.

What do you want to bet that the military ends up funding and developing most of the near-term advances? It's pretty easy to imagine the applications. If AI can be employed to process motor controls well enough to give a 'soldier bot' the mechanical agility and dexterity required to navigate a battlefield, and at the same time process video input well enough to tell friend from foe as well as a human (likely better), or to clear a building and be able to tell the bad guys from non-combatant civilians, then they'll jump on it and probably throw billions at it.

Or what about crewless submarines? A nuclear powered one could literally patrol uninterrupted for years.

Or combat aircraft? Take the G-force-sensitive meat bag out of the cockpit and you'd have a whole new ballgame in the air.

The military is going to love AI.

 
 
 
TᵢG
Professor Principal
4.1.2  author  TᵢG  replied to  Dig @4.1.1    6 years ago
The military is going to love AI.

No doubt. And this will be good or bad depending on how the AI is used. That is, the AI is not intrinsically going to make this bad since it will definitely NOT be at a level where it is evolving itself (yet). One clear positive is the lessened risk to US soldiers. One clear negative is that this will make it (politically) far easier for a PotUS to engage enemies.

But any major change will enable both good and bad consequences. Nothing can be done about that.

Note also that the USA necessarily must exploit AI (along with other technical enhancements) to match other nations (whose own AI initiatives might not have our best interests in mind).   

 
 
 
Perrie Halpern R.A.
Professor Expert
4.1.3  Perrie Halpern R.A.  replied to  TᵢG @4.1.2    6 years ago

AI is only as safe as the engineers who design it. The human race has not shown itself to be a good example. 

 
 
 
TᵢG
Professor Principal
4.1.4  author  TᵢG  replied to  Perrie Halpern R.A. @4.1.3    6 years ago

True, but it goes far beyond the engineer. Even the best-designed technology can do harm without proper usage controls. And, of course, technology can be purposely abused to do harm. Societal controls are the key factor.

 
 
 
JohnRussell
Professor Principal
5  JohnRussell    6 years ago

There will be a machine that will "learn" to feel and replicate human emotions. 

Then someone will say that machines have surpassed biologically based intelligence. And in order to create a more perfect existence the machines should make all the decisions and have all the power. 

The future thus envisaged is a dystopia. 

I believe that there is such a thing as having too much faith in science. 

I think that it is far more likely that AI will prove harmful to the human race than that it won't. 

 
 
 
TᵢG
Professor Principal
5.1  author  TᵢG  replied to  JohnRussell @5    6 years ago
I think that it is far more likely that AI will prove harmful to the human race than that it won't. 

A justified concern.

I believe that there is such a thing as having too much faith in science. 

You mean faith in technology, right?   Science is the pursuit of knowledge based on empirical observations.   Scientific theories are falsifiable, formally defined, supported by scrutinized, formal evidence and perpetually challenged by other ambitious scientists.   Faith makes no sense when speaking of scientific findings wherein the facts underlying an explanation / finding are there for inspection and able to be challenged by formal experiment.   You either accept the findings of science or reject them - all based on facts and logic.

Technology, however, is the application of science.   One can have faith (as in trust) that technology will be net good but your point, I presume, is that technology can be misused by human beings and -far worse- human beings can lose control over advanced technology.    In that regard, I concur.

 
 
 
Split Personality
Professor Guide
6  Split Personality    6 years ago

I know where to find a ton of artificial intelligence but this is probably the wrong forum...

 
 
