Knowing The Difference Between Strong AI and Weak AI Is Useful And Applies To AI Autonomous Cars 

We are a long, long, long, long way from crafting AI systems that can exhibit human-level intelligence in any genuine meaning of the range, scope, and depth of human intelligence. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

Strong versus weak AI. Or, if you prefer, weak versus strong AI (it’s okay to list them in either order; the meaning stays the same). If you’ve read much about AI in the popular press, the odds are that you’ve seen references to so-called strong AI and so-called weak AI, and yet both of those phrases are frequently used wrongly, offering misleading and confounding impressions. 

Time to set the record straight. 

First, let’s consider what is being incorrectly stated. Some speak of weak AI as though it is AI that is wimpy and not up to the capabilities of strong AI, implying that weak AI is decidedly slower, much less optimized, or otherwise inevitably and unarguably feebler in its AI capacities. 

No, that’s not it. 

Another form of distortion is to equate weak AI with “narrow” AI, which generally refers to AI that works only in a narrowly defined domain, such as a specific medical use or a particular financial analysis use, while presuming that strong AI is broader and more all-encompassing. 

No, that’s not it either. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Meaning Of Strong AI And Weak AI   

Hark back to an earlier era of AI, around the late 1970s and early 1980s, a period often characterized as the first era of AI flourishing, which you might know as the time when Knowledge-Based Systems (KBS) and Expert Systems (ES) were popular. 

The latest era, today, which some consider the second era of AI flourishing, seems to have become known as the time of Machine Learning (ML) and Deep Learning (DL). 

Using a season-oriented metaphor, the current era is depicted as the AI Spring, while the period between the first era and this second era has been called the AI Winter (suggesting that things were dormant or slowed down, much as a winter season clamps down via snow and other dampening weather conditions). 

The first era involved quite a bit of hand-wringing about whether AI was going to become sentient and, if so, how we would get there. 

Even during this second era, similar discussions and debates are still taking place, though the first era really seemed to take the matter fully in hand, and slews of philosophers joined the AI bandwagon to ponder what the future might hold and whether AI could ever become truly intelligent. 

Into that fray came the birth of the monikers of weak AI and strong AI. 

Most would agree that the terminology originated, or at least was solidified, in a 1980 paper by philosopher John Searle entitled “Minds, Brains, and Programs” (see link: http://cogprints.org/7150/1/10.1.1.83.5248.pdf). 

So what is weak AI, and what is strong AI? 

They are philosophically distinct positions on how AI might ultimately be achieved, assuming you agree on what it means to achieve AI (more on this in a moment). 

Let’s see what Searle said about defining the terminology of weak AI: “According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.” 

And, furthermore, he indicated this about strong AI: “But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” 

With this added clarification: “In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.” 

The rest of his famous (now infamous) paper then proceeds to indicate that he has “no objection to the claims of weak AI,” and thus he doesn’t particularly tackle the weak AI side of things; instead, his focus goes mainly toward the portent of strong AI. 

In short, he doesn’t have much faith or belief that strong AI is anything worth writing home about either. He says this: “On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.” 

Here’s what that signifies, at least as has been interpreted by some. 

Conventional AI is presumably doomed in trying to reach true AI if you stick with “computer programs,” since those programs aren’t ever going to cut it and lack the needed capabilities to embody the things we associate with thinking and sentience. 

Humans and animals have a kind of intentionality, somehow arising from the use of our brains, and for those who believe true AI requires that intentionality, pursuing “computer programs” is barking up the wrong tree (they are the wrong stuff and can’t climb that high up the intelligence ladder). 

All of this presupposes two key assumptions or propositions that Searle lays out: 

  1. “Intentionality in human beings (and animals) is a product of causal features of the brain…” 
  2. “Instantiating a computer program is never by itself a sufficient condition of intentionality.”   

If your goal, then, is to devise a computer program that can think, you are on a fool’s errand and won’t ever get there. The errand isn’t completely foolish, because you might well learn a lot along the way and could gain some cool results and insights, but the program isn’t going to be a thinker. 

I believe it is self-evident that this is a deeply intriguing philosophical consideration, one worthy of scholars and others pontificating about. 

Does this make a difference for everyday AI work? Are those building AI-based systems such as Alexa or Siri, or robots that function on a manufacturing line, going to be worrying about it and losing sleep over it? 

No.  

To clarify, we are a long, long, long, long way from crafting AI systems that can exhibit human-level intelligence in any genuine meaning of the range, scope, and depth of human intelligence.  

That’s a shocker to some who keep hearing about AI systems that are purportedly as adept as humans. 

Take a slow and measured breath and keep reading herein. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Achieving True AI Is The Hearty Question 

I had earlier mentioned narrow AI. 

Some AI applications do seemingly well in narrow domains, though perhaps they should carry Surgeon General-style small print identifying the numerous caveats and limitations of what that AI can do. 

AI systems today cannot undertake or showcase common-sense reasoning, which I believe we all agree humans generally have (for those snickering about whether humans have common-sense reasoning: yes, we all know people who at times seem to lack common sense, but that’s not the same as what overall is considered common-sense reasoning, so don’t conflate the two into meaninglessness). 

To AI insiders, today’s AI applications are narrow AI, and not yet AGI (Artificial General Intelligence) systems. AGI is yet another term, used to get around the fact that “AI” has been watered down as terminology and applied to anything that people want to say is AI; meanwhile, others are striving mightily to reach the purists’ version of AI, which would be AGI. 

The debate about weak AI and strong AI is aimed at those who wonder whether we will someday be able to achieve true AI. 

True AI is a loaded term that needs some clarification. 

One version of true AI is an AI system that can pass the Turing Test, a simple yet telling kind of test in which an interrogator poses questions to both an AI system and a human being. They are essentially two distinct players in a game of wielding intelligence, of sorts, and if the interrogator cannot tell which is which, presumably the AI is the “equivalent” of human intelligence, since it was indistinguishable from a human exhibiting intelligence. 
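
To make the protocol concrete, here is a minimal sketch in Python of the imitation game; everything in it is a hypothetical stand-in invented for illustration (the respondents, the judge, and the questions are not any real system or test battery):

```python
import random

# Hypothetical stand-ins: each answers the interrogator's question with text.
def human_respondent(question: str) -> str:
    return "Hmm, let me think about that for a moment."

def ai_respondent(question: str) -> str:
    return "Hmm, let me think about that for a moment."

def imitation_game(questions, judge) -> bool:
    """Bare-bones Turing Test: the judge sees two unlabeled transcripts
    and guesses which one came from the AI. The AI 'passes' the round
    if the judge fails to pick it out."""
    players = [("ai", ai_respondent), ("human", human_respondent)]
    random.shuffle(players)  # hide which transcript is which
    transcripts = [[(q, respond(q)) for q in questions]
                   for _, respond in players]
    guess = judge(transcripts)  # the judge returns index 0 or 1
    return players[guess][0] != "ai"  # True means the AI went undetected

# A judge that guesses at random stands in for a human interrogator.
passed = imitation_game(
    ["What is your earliest memory?", "Why is a raven like a writing desk?"],
    judge=lambda transcripts: random.randrange(2),
)
print("AI escaped detection this round:", passed)
```

Notice that the verdict rests entirely on outward indistinguishability of behavior; nothing in the protocol peers inside the machine, which is precisely the aspect Searle presses on.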

Though the Turing Test is handy, and a frequently invoked tool for judging AI’s efforts to become true AI, it does have its downsides and problematic considerations (see my analysis at: https://www.aitrends.com/ai-insider/turing-test-ai-self-driving-cars/). 

Anyway, how can we craft AI to succeed at the Turing Test, and have AI be ostensibly indistinguishable from human intelligence? 

One belief is that we’ll need to embody into the AI system the same kind of intentionality, causality, thinking, and essence of sentience that exists in humans (and to some extent, in animals). 

As a side note, the day that we reach AI sentience is often referred to as the singularity. Some believe it will inevitably be reached and we’ll then have the equivalent of human intelligence, whilst others believe the AI will exceed human intelligence and we will arrive at a form of AI super-intelligence. 

Keep in mind that not everyone agrees with the precondition of needing to discover and reinvent artificial intentionality; some assert that we can nonetheless arrive at AI that exhibits human intelligence without tossing into the cart this squishy stuff referred to as intentionality and its variants. 

Anyway, setting aside that last aspect, the other big question is whether “computer programs” will be the appropriate tool to get us there (wherever “there” might be). 

This brings up another definitional consideration. What do you mean by computer programs? 

At the time when this debate first flourished, computer programs generally meant hand-crafted coding using both conventional and somewhat unconventional programming languages, exemplified by programs such as ELIZA by Weizenbaum and SHRDLU by Winograd. 
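
To give a flavor of what that hand-crafted style looked like, here is a tiny ELIZA-like sketch in Python (a toy homage only and not Weizenbaum’s actual program, which was far richer and written in the mid-1960s in MAD-SLIP). A handful of hand-written pattern rules rearrange the user’s words with no understanding behind them:

```python
import re

# A few hand-written pattern -> response rules, in the spirit of ELIZA.
# Real ELIZA had a much richer script; this is purely illustrative.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(utterance: str) -> str:
    """Return a canned reflection of the user's input.
    There is no model of meaning anywhere, just string rearrangement."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I am worried about my job"))
# -> Why do you say you are worried about my job?
print(eliza_reply("My car keeps breaking down."))
# -> Tell me more about your car keeps breaking down.
```

Exchanges like these could seem surprisingly conversational, which is exactly why such programs fueled the debate: the code shuffles symbols around without anything resembling comprehension, the very crux of the weak-versus-strong dispute.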

Today, we are using Machine Learning and Deep Learning, so the obvious question, for those still mulling over weak AI and strong AI, is whether the use of ML/DL constitutes “computer programs” or not. 

Have we progressed past the old-time computer programs and advanced into whatever ML/DL is, such that we no longer seemingly have this albatross around our neck that computer programs aren’t the rocket ship that can get us to this desired moon? 

Well, that opens another can of worms, though most would agree that ML/DL is still a “computer program,” even in the 1980s meaning of the expression. So, if you buy into the argument that any use of, or variant of, computer programs is insufficient to arrive at thinking AI, we are still in a doom-and-gloom state of affairs. 

Searle, though, does cover this territory to some degree, since he addresses whether a man-made machine could think: 

“Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obvious, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.”  

Please be aware that today’s ML/DL is a far cry from being the same as human neurons and a human brain.   

At best, it is a crude and extremely simplified simulation, usually deploying Artificial Neural Networks (ANNs), way below anything approaching a human biological equivalent. We might someday get closer, and indeed some believe we will achieve the equivalent, but don’t hold your breath for now. 
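
To make concrete just how pared-down the artificial version is, here is a minimal sketch in Python of a single artificial “neuron” of the kind that ANNs stack in great numbers (the inputs, weights, and bias are made-up illustrative values): the electrochemical machinery of a biological neuron gets reduced to a weighted sum pushed through a squashing function:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The entire 'neuron': multiply, add, squash. The dendrites, axon,
    ion channels, and neurotransmitter chemistry of a biological neuron
    are all collapsed into this handful of arithmetic operations."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing function

# Made-up illustrative values: three input signals and three learned weights.
output = artificial_neuron(inputs=[0.5, 0.1, 0.9],
                           weights=[0.4, -0.7, 0.2],
                           bias=0.05)
print(f"neuron output: {output:.3f}")
```

Deep Learning amounts to composing many layers of such units and tuning the weights from data, which is impressive, yet still a long way from axons, dendrites, and the rest of it.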

Bringing us home to the argument about weak and strong AI: no matter what you do, in either the weak AI case or the strong AI case, here is where you’ll land, as per Searle: “But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?” 

And his clear-cut answer is: “This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.” 

Ouch! That smarts.

There is nonetheless a glimmer of hope for strong AI, in that it could potentially be turned into something able to achieve the thinking brand of AI (says Searle): “Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.” 

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Practical Significance For Today 

I hope it is obvious that the original meaning associated with weak and strong AI is far afield of what the popular press tends to use those catchy phrases for today. When you try to point out to people that their use of weak AI and strong AI is not aligned with the original meanings, they usually get huffy and tell you not to be such a stickler. Or, they tell you to knock the cobwebs out of your mind and get hipper with the present age. 

Fine, I suppose; you can change up the meaning if you want, just please be aware that it is not the same as the original.

This comes up in numerous applied uses of AI. For example, consider the emergence of AI-based true self-driving cars. True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems). 
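
As a rough schematic of that split (a sketch only; the actual SAE J3016 standard defines Levels 0 through 5 with many qualifications omitted here), one could encode the distinction like this in Python:

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    """Simplified sketch of the levels discussed above, not the full
    SAE J3016 definitions, which carry many caveats."""
    LEVEL_2 = 2  # semi-autonomous: human co-shares the driving task (ADAS)
    LEVEL_3 = 3  # semi-autonomous: human must stand ready to take over
    LEVEL_4 = 4  # self-driving, but only within a limited operational domain
    LEVEL_5 = 5  # self-driving anywhere a human could drive (not yet attained)

def requires_human_driver(level: DrivingLevel) -> bool:
    # Per the split above: Levels 2 and 3 co-share, Levels 4 and 5 do not.
    return level < DrivingLevel.LEVEL_4

for level in DrivingLevel:
    role = ("human co-shares the driving" if requires_human_driver(level)
            else "the AI drives entirely on its own")
    print(f"{level.name}: {role}")
```

The hard boundary sits between Level 3 and Level 4, mirroring the dividing line between semi-autonomous cars and true self-driving cars.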

There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there. 

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out). 

Some media describe the semi-autonomous ADAS as weak AI, while calling the autonomous AI strong AI. Well, that’s not aligned with the original definitions of weak AI and strong AI. You have to be willing to put aside the original definitions if you are seeking to use those terms in that manner. 

Personally, I don’t like it. Similarly, I don’t like it when weak AI and strong AI are used to characterize differences among the levels of autonomous AI. 

For example, some say that Level 4 is weak AI while Level 5 is strong AI, but this once again is nonsensical given what those terms were intended to signify. 

If you genuinely want to try and apply the argument to true self-driving cars, there is an ongoing dispute as to whether driverless cars will need to exhibit “intentionality” to be sufficiently safe for our public roadways. 

In other words, can we craft AI without any seeming embodiment of intentionality and yet nonetheless have that AI be good enough to trust AI-based self-driving cars cruising around on our highways, byways, and everyday streets? 

It’s a complex debate. No one yet knows whether the driving domain can be considered limited enough in scope that such intentionality is not a necessity. Plus, the question within the question is what might be rated as safe, or safe enough, for society to accept self-driving cars as fellow drivers. 

Conclusion 

For those of you wanting to get further into the weeds on this topic, you’ll also want to get introduced to the Chinese Room Argument (CRA), the thought experiment at the heart of Searle’s case and something that has become a storied punching bag in the halls of AI and philosophy. 

That’s a story for another day.   

Practitioners of AI might see this whole discussion about weak AI and strong AI as academic and much ado about nothing. 

Use those phrases whatever way you want, some say. 

Hold your horses. 

Perhaps we ought to heed the words of William Shakespeare: “Words without thoughts never to heaven go.”   

The words we use do matter, and especially in the high stakes aims and outcomes of AI. 

 Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/ and his podcast: http://ai-selfdriving-cars.libsyn.com/website] 
