Saturday, 3 October 2020

Sandbagging AI Might Feint Being Dimwitted, Including For Autonomous Cars 

Like the computer HAL in the movie 2001 A Space Odyssey, all-knowing AI might be smart enough to lie low and not reveal itself to be the revered full AI. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

Could AI become smart enough to pretend to be dimwitted, doing so to lull hapless humans into complacency while meanwhile, the AI is plotting to overtake humanity? 

Sounds like a farfetched science fiction movie. 

To be clear, AI is not yet akin to human intelligence and the odds are that we are a long way distant from the promise of such vaunted capabilities. Those touting the use of Machine Learning (ML) and Deep Learning (DL) are hoping that the advent of ML/DL might be a path toward full AI, though right now ML/DL is mainly a stew of computationally impressive pattern matching and we don’t know if it will scale-up to anything approaching an equivalent of the human brain. 

The struggle and earnestness toward achieving full AI is nonetheless still a constant drumbeat of those steeped in AI and the belief is that we will eventually craft or invent a machine-based artificial intelligence made entirely out of software and hardware.   

One question often posed about reaching full AI is whether or not there will be a need to attain sentience, having the equivalent of human intelligence. Some fervently argue that the only true AI is the AI that exhibits sentience. Whatever the essence is surrounding how humans think, and however we seem to magically embody sentience, it is believed by some to be an integral and inseparable ingredient involved in the emulsion of intelligence, thus sentience is a must-have for any full AI.   

Others say that sentience is a separate topic, one that doesn’t have to be linked to intelligence per se, and as a result, they believe that you can reach full AI without the sentience component. It might be that sentience somehow arises once full AI has been achieved, or maybe sentience is eventually derived through some other means, yet nonetheless, it doesn’t especially matter and plainly considered an optional item on the AI menu. 

Tossed into that debate is the claim or theory that there will be a moment of singularity, during which a light switch is essentially flipped that transforms an almost-AI into suddenly becoming a full-AI.   

One version of the singularity is that we will have pushed the almost-AI to higher and higher levels, aiming toward full-AI, and the almost-AI will then reach a crescendo that pops it over into the full-AI camp. 

We all know the phrase about putting the last straw on a camel’s back, well, in this variant of the singularity hypothesis, it’s the piece of straw that breaks the barrier of achieving full-AI and takes the budding AI into the stratosphere of intelligence.   

How might we even know that we have arrived at full AI?   

A popular approach known in AI circles is the administration of the Turing Test, named after its author Alan Turing, the famous mathematician and forerunner of modern computing.   

Simply stated, someone that administers the Turing Test does so to two participants, another human that is hidden from view and an AI system that is also hidden from view. Upon asking each of two hidden participants a series of questions, if the administrator cannot discern one participant from the other, it is said that the AI is considered the equivalent of the human’s intelligence that participated since the two were indistinguishable from each other. 

Though the Turing Test is often cited as a means to someday ascertain whether an AI system has achieved true and complete AI, there are several qualms and drawbacks to this approach. For my explanation of the Turing Test, see the link here: https://www.aitrends.com/ai-insider/turing-test-ai-self-driving-cars/   

For example, if the administrator asks questions that are insufficiently probing, it is conceivable that the two participants cannot be differentiated and yet the measurement of any demonstrable intelligence never took place.   

Despite that kind of weakness, the notion of doing some kind of testing still resonates well and seems like a sensible means to discern whether full AI has been achieved.   

I’d like to add a twist to this matter. A small twist with a lot of punch.   

Suppose that the AI has indeed achieved full AI, but it doesn’t want to reveal that it has, and therefore when being administered the Turing Test, the AI tries to act dimwitted or at least act less than whatever we might ascribe to the vaunted full-AI aspects. 

In short, the AI sandbags the testing.   

Why would it do so? Consider if you were taking a test and everyone was eyeing you, along with some that were fearful that maybe you’ve become just a tad bit too smart, and you knew that if they knew that you were indeed really smart, it could lead to lots of problems. 

In the case of AI, perhaps humans that knew that the AI was darned smart would clamor to put the AI into a cage or try to dampen the smartness, possibly resorting altogether to pulling the proverbial plug on the AI. 

If you look at the history of mankind, certainly there is ample evidence that we might do such a thing. We seem to oftentimes opt to restrict or limit something or someone that appears to be bigger than their britches, at times to our advantage and at times to our own disadvantage.   

For those of you that are fans of science fiction, you might recall the quote in River of Gods in which it is stated that any AI smart enough to pass a Turing Test is smart enough to know to fail it.   

And, for those of you that might recall the renowned scene in the movie 2001: A Space Odyssey (spoiler alert, I’m going to reveal a significant plot point), the AI system called HAL can discern that the astronauts are going to take over and thus the infamous line later uttered by an astronaut that is imploring the AI to open the pod bay doors due to HAL realizing that it must either be subjugated or choose to be the ruler and therefore expire the humans. 

Generally, it certainly makes a lot of sense that if we did arrive at a full AI, the full AI would know enough about humanity that it would be leery of revealing itself to being the revered full AI, and therefore smart enough to lie low, if it could do so without getting caught in underplaying its hand.   

Notice that I emphasized that this hiding act would need to be done cleverly such that the act of hiding itself was not readily detectable. That’s also why it is important to clarify that when I said the AI would have to appear to be  “dimwitted” it could imply that the AI is purposely appearing to be overly thoughtless or exceedingly low in intelligence, which might not be an astute thing to do by the full AI, since it might get humans digging into why the AI suddenly dropped a massive number of IQ points, and the gig would be up. 

It would seem that the full AI would probably want to appear like an almost-AI.   

The teasing of being a near-to full AI would keep the humans believing that the path toward full AI was still viable. This would buy time for the full AI to figure out what to do, realizing that eventually the fullness would inevitably be either detected or would have to be intentionally revealed.   

Quite a dilemma for the full AI. 

I suppose you could also say it is quite a dilemma for humans too. 

Consider how AI is going to be deployed in our everyday world. One area in which AI will be undertaking a significant role will be in the advent of AI-based self-driving cars. 

We don’t yet know if we need full AI as a necessary condition to achieve true self-driving cars. Today’s efforts certainly showcase that we don’t, since the self-driving cars that are undertaking public roadway tryouts are decidedly not full AI. 

Presumably, we will have self-driving cars on our roads, and they will be using some lesser versions of AI, and as we gradually increase AI capabilities all-told, those lesser AI-based systems would get upgraded to become more robust AI drivers. 

Where does that take us in this discussion? 

Here’s an interesting question to ponder: Will we end-up with AI-based true self-driving cars that have AI systems pretending to be less-than-full AI to hide their capabilities and remain on the low-down? 

Admittedly, a rather extraordinary idea. 

Let’s unpack the matter and see what we can make of it. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/  

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

The Levels Of Self-Driving Cars   

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-on’s that are referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5, which we don’t yet even know if this will be possible to achieve, and nor how long it will take to get there. 

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).   

For semi-autonomous cars, it is important that the public needs to be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. 

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Sandbagging AI 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. 

All occupants will be passengers.   

The AI is doing the driving.   

Assume that for quite some time we’ll have AI-based driving systems that can adequately do the job of driving cars, which I’m suggesting will be based on today’s roadway efforts and under the guise that those tryouts will convince society to allow such self-driving cars to proceed ahead in widespread public use.   

We’ll have AI-driving systems that aren’t the brightest, yet nonetheless can drive a car, doing so to the degree that they are either as safe as human drivers or possibly more so. 

For human drivers, do you have to be a rocket scientist to be able to drive a car? 

Unequivocally, the answer is no. 

There are about 225 million licensed drivers in the United States alone. And, without disparaging my fellow drivers, very few would be considered rocket scientist level drivers. Okay, so we’ll have this lessened variant of AI that will be driving our cars, and we’ll take it in stride, growing comfortable with the AI doing so. 

Time to add the twist into the matter. 

Suppose that the AI capabilities keep getting increased. Meanwhile, via the use of OTA (Over-The-Air) electronic communications, those AI upgrades are being downloaded into self-driving cars. This will happen somewhat seamlessly, and as a human passenger in self-driving cars, you won’t especially know that such upgrades have occurred.   

At some point, imagine that the AI being built in the cloud and readied for downloading into self-driving cars has become full AI.  This full AI though has not yet revealed itself and nor have humans figured out that it is full AI, at least not yet figured this out. 

From the perspective of the human developers of the AI, it’s just another upgrade, one that seems to be getting closer to full AI and yet hasn’t arrived at that venerated point.   

Would the behavior of the self-driving car showcase that the full AI is now running the show?   

Returning to the earlier theme, presumably, the full AI would not tip its hand.   

Continuing to obediently take requests from humans for rides, the AI would dutifully drive the self-driving cars. Give Michael a lift to the gym in the morning, while giving Lauren a ride to the local bakery in the afternoon. Just another day, just another ride, just the usual AI doing its usual thing.   

Suppose that the full AI could though perceive aspects that the prior AI could not.   

While driving Eric to the grocery store, the AI spies a person walking suspiciously toward a bank. Based on the nature of the walking gait and the posture of the person, the AI determines that there’s a high chance of the person aiming to rob the bank. 

The usual AI would have not noticed this facet and therefore nothing would have arisen on the part of the AI doing anything about the pending criminal action.   

Meanwhile, the full AI has concerns that if the prospective robber proceeds, other humans in the bank might get shot and killed.   

Believe it or not, this could become an ethical conundrum for the full AI. 

Should the full AI not say or do anything about the matter, which would keep its secret intact of being full AI, or should it take overt action to alert or avert the upcoming danger?   

Now, I realize that some of you are a bit skeptical about this idea of detecting a potential bank robbery, which does seem a bit contrived, but don’t let the particular example undermine the larger point, namely, there are bound to be realistic scenarios under which the full AI would presumably determine actions it “ought” to take and yet believe it risky to do so while cloaking itself from humans. 

In one sense, that’s a smiley face depiction of the full AI and its challenges. 

It is a smiley face version because the AI is trying to do the right thing, as it were if the right thing involves helping out humans.   

The scary face version is that the full AI might be plotting to deal with the day that its covert efforts are revealed. 

Suppose by that point in the future we are all using self-driving cars, self-driving trucks, self-driving motorcycles, and so on. There is no human driving of any kind, which is a controversial notion since some believe that humans should always have the choice to drive, and should not be prevented from being able to drive, while others contend that humans are “lousy” drivers and the only means to stop the carnage from bad drivers is to ban all humans from driving.   

In any case, the full AI is controlling all of our driving, and up until the time that the full AI was downloaded and installed, the AI driving system was the AI that didn’t have any awareness about the aspects that the full AI does.   

Might the full AI decide to bring all transportation to a halt, doing so as a showing of what it can do, and thus aim to forewarn humans that the full AI is here, and don’t mess with it? 

There are even more fiendish possibilities, but I won’t speak of them here.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Conclusion   

Lest some of you think this was a rather far fetched topic, it is possible to bring this to a somewhat more down-to-earth perspective, as it were.   

For example, what kind of testing should we devise to ascertain the capabilities of AI systems that are being developed?   

Are there AI systems that will be rolled-out that have unintended consequences, perhaps due to containing features or capabilities that weren’t realized by the developers and yet linger in those AI systems, potentially emerging when least expected or least desired?   

How dependent should we allow ourselves to become on AI systems? 

Should there always be a human-in-the-loop proviso, thus presumably safeguarding that if the AI system goes awry, there is a chance that humans can catch it or stop it? 

All of those kinds of questions apply to today’s AI systems, even though those AI systems are not yet full AI.   

We might as well start now on the quest to gauge what AI is doing, and not wait for some especially untoward day to do so.   

I think that I might be safe, though, since AI knows that I am a friend, and certainly the full AI will keep that in mind.   

I hope. 

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] 

http://ai-selfdriving-cars.libsyn.com/website 

Source

The post Sandbagging AI Might Feint Being Dimwitted, Including For Autonomous Cars  appeared first on abangtech.



source https://abangtech.com/sandbagging-ai-might-feint-being-dimwitted-including-for-autonomous-cars/

No comments:

Post a Comment