Wednesday, 30 December 2020

AI Autonomous Cars Might Not Know They Were In A Car Crash 

Even if it is only a fender bender, the AI self-driving car needs the features and functions required to detect whether it has been in a car crash. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

I’d bet that most of us would agree that if you have ever been in a car crash, you know that you were in a car crash.   

This seems perhaps absurdly obvious, but do not let your gut instincts lead you astray. In theory, you could have a fender bender and be oblivious to it, though that raises eyebrows as to how distracted a driver would have to be not to feel the impact.   

All in all, it is relatively sensible to assert that people would usually know when they’ve been in a car crash. I’ll clarify in a moment why that’s an important point, so hang onto that thought.   

When referring to car crashes, for convenience’s sake herein, let’s agree that a car crash is widely defined and can encompass a variety of car-related adverse incidents, including outright head-on collisions, minor fender-benders, sideswipes, frightening rollovers, being rear-ended, destructive pile-ups, and even dents and dings from striking objects or being struck by flying debris, and so on.   

I hesitate to admit that I’ve logged several of my own sad tales of car crash woes.   

In one especially scary instance, a car rammed into me from behind, shoving my car into the vehicle ahead of me. I suppose this is an example of a twofer, namely two car crashes for the price of one. On another occasion, while I was underway in my prized automobile, a car sideswiped me, tearing off the side-view mirror completely and leaving lengthy scratches and dents along the entire side. I’ve also had a golf ball skyrocket from out of nowhere (actually, while driving past a golf course), busting up my front windshield.   

Perhaps the most unusual circumstance involved a pickup truck ahead of me on the freeway that opted to absentmindedly let a half-dozen cans full of paint flop off the back of the truck. In this situation and with no time to react and no viable options to veer away, I drove directly over those scattering cans of paint. They bounced up and down at 65 miles per hour under my car, damaging some of the underbody areas of the vehicle. Also, they cracked open and spilled white paint all over the underside and somewhat onto the bottom exteriors of my car.   

Later on, some pals chided me for being upset about the incident and said there was no reason to cry over spilled (paint) milk. Of course, it is easier to laugh about it now, though at the time it decidedly did not seem to be a laughing matter. 

Anyway, we all inevitably seem to have our tales of woe about car crashes. 

Sometimes referred to formally as Motor Vehicle Accidents (MVA) or Motor Vehicle Collisions (MVC), we get into them quite a bit. According to statistics by the National Highway Traffic Safety Administration (NHTSA), there are approximately six million car crash incidents each year in the United States. Think of this as being about 16,000 such incidents per day, taking place somewhere perhaps near to you as you are driving around for your daily commute or heading to a grocery store.   

Some detest calling these “accidents” since the explicit wording of considering something as an accident seems to imply that the collision or crash just somehow magically and mysteriously happened, accidentally so. It is seemingly a roll of the dice or bad karma. This phrasing is criticized as letting people off the hook for their responsibility while driving. An alternative is to refer to these as car incidents, though the problem with that wording is that an incident seems somewhat inconsequential, while at least the word “accident” has a semblance of something more harrowing.   

Whether they are called car accidents or car incidents, automobiles have gradually been improved over the years to make them better able to handle various kinds of crashes and collisions. There are impact-absorbing exterior panels on today’s cars. Special deformable front ends are pretty much standard these days. Bumpers are made to take a beating. And so on.   

How do you know when you’ve been in a car crash or collision? 

Generally, it is fair to say that anyone inside a car involved in a substantive car crash is likely going to feel it, hear it, see it, and possibly even smell it.   

Yes, the sense of smell is included in the list, which might at first seem an odd inclusion. In the case of my getting bumped and shoved during my car crash, the blow from behind was so severe that it ruptured the gas tank of my car, instantly filling the vehicle with an overpowering and terrifying odor of gasoline. That detail still stands out in my mind; thus, the smell associated with a car crash can certainly count too.   

It would seem, though, that “feeling” the incident is probably the more frequent sensation. We usually feel the car getting smashed or pushed, including our bodies and limbs flying back and forth as the physical forces tug at us. It could be a rocking sensation or a flinging motion that happens without our being able to directly control or stop it. Fortunately, seat belts are usually able to prevent our bodies from becoming unguided missiles within the vehicle. Sadly, some riders do not use their seatbelts and get flung fully out of the car and into added danger.   

You probably hear crash-related sounds too. There is the crunch of metal upon metal. Perhaps the glass of the car windows crackles as it shatters. And so on. Things tend to happen so fast that the sounds do not necessarily register at the moment of impact; instead, you tend to remember and make sense of the sounds after the impact has occurred. 

And then there is the eyeing of what happens. You can see yourself or others as they get shoved, and watch as the car crumples or breaks in various spots. Again, this oftentimes occurs so quickly that we seemingly do not register what we see as the incident plays out, and instead have a kind of mind’s eye afterward that recalls the incident.   

As a driver, you likely saw that a car incident was imminent, sometimes tensing up and freezing at the wheel, while other times making rapid driving maneuvers to try and avert the crash. Passengers in your car might not have any forewarning as they were watching a video on their smartphone or idly looking out a side window and not watching the roadway traffic, getting caught utterly off-guard. 

A car crash can end up wrecking a car entirely, or it can be much less brutal and merely bash the car without destroying it. When my car windshield got hit by an errant golf ball, the vehicle itself was generally untouched, other than the windshield splintering and making things harder to see as a driver. In the incident where I got rear-ended and was rammed into the car ahead of me, the car was essentially totaled. 

The resulting status of the car involved in a car crash can be one of these three major types: 

  • Inoperable 
  • Operable, but not safely so 
  • Operable   

You could somewhat classify the car driver in the same manner. Someone driving a car that gets into a car crash could become “inoperable” due to being harmed in the car incident, or they might be operable but are no longer considered a safe driver per se (perhaps due to the initial shock of the crash), or might be just fine and able to proceed to drive normally.   

Now that we’ve laid out the core facets covering the onset of a car crash, let’s ponder a noteworthy viewpoint about the future of cars. 

Here’s today’s intriguing question: Will an AI-based true self-driving car be able to discern that it has been in a car crash, and what does this foretell about the advent of self-driving cars?   

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. 

  

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Car Crashes 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.  

Realize that the AI is not a robot sitting in the driver’s seat of the car. There are various research efforts underway to create a walking, talking, driving robot, though that is generally not how we will witness the emergence of self-driving cars. The AI system is under-the-hood, as it were, running on computer processors that might be hidden inside the body of the vehicle or placed in the trunk, etc.   

In short, you will not see the driver of the self-driving car. 

The reason this is worthwhile to point out is that, unlike a human driver who can “feel” a car crash, there isn’t a robotic body sitting in the vehicle that equally ascertains the sensations arising when a crash occurs (though, via the Inertial Measurement Unit (IMU) and the potential addition of other special tactile-related sensors, this might be possible to detect).   
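To give a flavor of what that IMU-based detection might look like, here is a minimal sketch in Python. Be forewarned that the sample class, function names, and threshold values are my own illustrative assumptions rather than anyone’s production crash-detection code; a real system would filter noise over time and distinguish a pothole from a genuine impact.

```python
# Minimal sketch of IMU-based impact detection.
# All names and thresholds are illustrative assumptions, not a vendor's API.

from dataclasses import dataclass

GRAVITY_G = 1.0  # IMU readings assumed to be normalized to g-units


@dataclass
class ImuSample:
    ax: float          # lateral acceleration (g)
    ay: float          # longitudinal acceleration (g)
    az: float          # vertical acceleration (g)
    timestamp_s: float


def detect_impact(sample: ImuSample, threshold_g: float = 4.0) -> bool:
    """Flag a suspected collision when the net acceleration spike exceeds a threshold.

    A deployed system would fuse many samples and rule out potholes or curb
    strikes; this captures only the core idea of 'feeling' the impact.
    """
    net_g = (sample.ax ** 2 + sample.ay ** 2 + (sample.az - GRAVITY_G) ** 2) ** 0.5
    return net_g >= threshold_g


# Example: a sudden 6g lateral jolt would be flagged for the crash-assessment routine.
if detect_impact(ImuSample(ax=6.0, ay=0.2, az=1.1, timestamp_s=12.34)):
    print("Suspected impact detected")
```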

Nor can it smell the car crash (well, just a heads-up, some are adding e-nose features to their self-driving cars; see my coverage on the topic). 

We’ll get in a moment to seeing or hearing the crash. 

In theory, a self-driving car could get sideswiped by another car, for example, and the AI might be oblivious that this kind of car collision has even occurred. 

Upping the ante, it is conceivable that a self-driving car might strike a small animal head-on and not detect the collision. There is a chance, too, of rolling over something or someone (heaven forbid, say a child lying in the street) and not detecting that this has happened. 

To be clear about this, there are ways for the AI to detect a car crash, but of course that requires that the AI developers crafting the self-driving car have made sure to include the needed features and functions to do so. Depending upon what sensors are used on the self-driving car, and how the AI has been crafted, there can inevitably be loopholes or blind spots in its ability to discern a car crash incident involving the self-driving car.   

As they say, your mileage may vary, meaning that the nature of the self-driving car, the AI developed for it, the sensors used, and other such factors all determine how well or how poorly a particular brand or make of a self-driving car can do at ascertaining that a car crash has taken place. 

Some self-driving cars are being devised to figure this out extensively, while others are pursuing this angle less rigorously at this time, and the capability of gauging all possible car crashes gets labeled as an edge or corner case. Generally, the automakers and self-driving car tech firms have to prioritize what they are building, such that foundational aspects like getting safely from point A to point B are needed sooner than other “extraordinary” or long-tail elements (which are classified as being at the edges or far corners of what is needed right away). 

Next, let’s take a closer look at the detection facets.   

An obvious way for the AI to realize that a car crash has occurred would be via the deployment of airbags. Typically, there are crash sensors on the exterior of the car that are activated when a crash happens. Those sensors then signal the airbags to deploy. Since these capabilities are usually already built into a conventional car, the AI-based self-driving mechanisms can tap into those same systems and utilize those sensors as an indicator of a car crash.   

There is a small chance that an airbag will deploy when it is not supposed to, such as when smacking against a curb or perhaps when driving across a nasty pothole. Even if the airbag deployment and the sensors are somewhat incorrect, it is nonetheless sensible for the AI to assume that something has gone awry. All told, as a human passenger, imagine your chagrin if the airbag deployed and meanwhile the AI kept driving along without trying to come safely to a stop. It would be quite troubling, for sure. 
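As a rough illustration of that reasoning, here is a short Python sketch of how a driving system might react to a restraint-system signal. The enum, the function, and the notion of a “minimal risk maneuver” trigger are my hedged simplification; actual vehicles surface airbag events through the restraint control module and the integration details vary by platform.

```python
# Sketch of treating an airbag or crash-sensor signal as a crash indicator.
# The interface shown here is hypothetical; real vehicles expose these events
# via the restraint control module, and integrations differ by platform.

from enum import Enum, auto


class DrivingMode(Enum):
    NORMAL = auto()
    MINIMAL_RISK_MANEUVER = auto()  # pull over and come safely to a stop


def on_restraint_event(airbag_deployed: bool, crash_sensor_triggered: bool,
                       current_mode: DrivingMode) -> DrivingMode:
    # Even if the sensors are occasionally wrong (curb strike, nasty pothole),
    # the prudent assumption is that something has gone awry.
    if airbag_deployed or crash_sensor_triggered:
        return DrivingMode.MINIMAL_RISK_MANEUVER
    return current_mode
```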

Recall that earlier I had mentioned that a human driver will frequently spot an impending car crash. 

Presumably, the AI can do likewise. The video cameras that are streaming the street scene are examined by the AI driving system to ascertain what is taking place outside the vehicle. Based on the interpretation of detected objects, a virtual world model is usually kept up-to-date in real-time. Overall, the AI would be calculating the chances of a car crash. Not only is visual data being used, but by-and-large self-driving cars also use additional sensors such as radar, LIDAR, ultrasonic, thermal imaging, etc.   
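To make that concrete, here is a simplified sketch of how an imminent-collision estimate could be derived from the tracked objects in such a virtual world model. The data fields, the two-second horizon, and the time-to-collision math are illustrative assumptions on my part, not a depiction of any particular automaker’s planner.

```python
# Sketch of estimating imminent-collision risk from tracked objects in a
# virtual world model. Fields and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class TrackedObject:
    object_id: int
    distance_m: float         # distance along the predicted conflict path
    closing_speed_mps: float  # positive when the gap is shrinking


def time_to_collision(obj: TrackedObject) -> Optional[float]:
    """Seconds until predicted contact, or None if the object is not closing."""
    if obj.closing_speed_mps <= 0:
        return None
    return obj.distance_m / obj.closing_speed_mps


def most_imminent_threat(tracks: List[TrackedObject],
                         horizon_s: float = 2.0) -> Optional[Tuple[float, TrackedObject]]:
    """Return (ttc, object) for the most pressing threat under the horizon, if any."""
    threats = []
    for obj in tracks:
        ttc = time_to_collision(obj)
        if ttc is not None and ttc < horizon_s:
            threats.append((ttc, obj))
    return min(threats, key=lambda pair: pair[0], default=None)
```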

Some pundits insist a self-driving car will never get into a car crash.   

This is hogwash. 

Undeniably, self-driving cars will get into various kinds of car crashes and collisions. It will happen. The good news is that hopefully the odds of those occurrences will decrease significantly, and we’ll end up substantively reducing the 40,000 car crash fatalities nationally each year and the 2.5 million related injuries. Despite outsized claims of reaching zero fatalities, there is zero chance of that happening; instead, the number of incidents will assuredly be non-zero. See my columns for more background about why this will be a non-zero matter.   

Via the array of sensors and the programming of the AI, encompassing Machine Learning (ML) and Deep Learning (DL), many car crashes might indeed be detected. Furthermore, if the vehicle has been damaged such that it can no longer steer properly, for example, this is something that the AI is programmed to try and discern.   
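One plausible way to “discern” that kind of post-crash damage is to watch for a persistent mismatch between what the AI commands and what the vehicle actually does. The following Python sketch shows that idea for steering; the tolerance, window size, and class name are assumptions for illustration only.

```python
# Sketch of inferring possible crash damage from a persistent mismatch between
# commanded and measured steering angles. Thresholds are illustrative assumptions.

from collections import deque


class SteeringHealthMonitor:
    """Flags trouble when the steering rack keeps failing to track commands."""

    def __init__(self, tolerance_deg: float = 5.0, window: int = 20):
        self.tolerance_deg = tolerance_deg
        self.errors = deque(maxlen=window)

    def update(self, commanded_deg: float, measured_deg: float) -> bool:
        """Returns True when every recent sample shows an out-of-tolerance error."""
        self.errors.append(abs(commanded_deg - measured_deg))
        window_full = len(self.errors) == self.errors.maxlen
        return window_full and all(err > self.tolerance_deg for err in self.errors)
```

A persistent flag from a monitor like this would feed into the same safe-stop reasoning as the airbag indicator described earlier.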

One somewhat vexing question is whether the AI should try to warn passengers when a suspected car crash is about to occur. Via the Natural Language Processing (NLP) capability, the AI could tell the occupants to brace themselves. This might be useful and prepare the passengers for a bruising, or it might unduly get the passengers agitated and cause them to take adverse actions that will cause even greater injury or harm to themselves than if they did not realize a crash was looming.   
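If a designer did want the AI to issue such a warning, the judgment call might be reduced to something like the sketch below: warn only when a collision is both likely and far enough out that bracing helps rather than panics. The probability and timing thresholds here are purely my hedged assumptions, not established human-factors guidance.

```python
# Sketch of the brace-warning judgment call. Thresholds are illustrative assumptions.

def should_warn_passengers(collision_probability: float,
                           time_to_collision_s: float,
                           min_probability: float = 0.8,
                           min_lead_time_s: float = 0.5,
                           max_lead_time_s: float = 3.0) -> bool:
    """Issue a spoken 'brace' warning only for likely, imminent-but-not-instant impacts."""
    likely = collision_probability >= min_probability
    actionable = min_lead_time_s <= time_to_collision_s <= max_lead_time_s
    return likely and actionable
```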

Here’s another twist related to the role of passengers. 

Suppose a passenger is watching traffic and believes that a car crash involving their self-driving car is going to occur. It seems likely that a passenger in that setting might scream at the AI and urge the driving system to take evasive action.   

Should the AI take into account what a passenger reports about a potential car crash?   

We would expect a human driver to be listening. At the same time, we would also expect that a human driver would make a judgment about whether the passenger was right or wrong in their assessment. If a toddler in the vehicle makes such a claim, an adult driver might give the remark a lesser weighting than if it was a fellow adult.   

For some AI driving systems, it makes no difference what a passenger says about an impending car crash and there is no provision for receiving or analyzing any such advice offered by a passenger (this is, again, considered an edge or corner case). The viewpoint is that right now, the AI is the driver, the sole driver, and no kind of so-called backseat driving is being sought, encouraged, or included.   

In terms of hearing a car crash, the odds are that self-driving cars will have microphones inside the vehicle, used to aid in undertaking the NLP aspects. Thus, yes, it is feasible that the AI could “hear” during a car crash. Some self-driving cars are adding microphones on the exterior of the car too, allowing for hearing street sounds, perhaps the sound of a police car or ambulance siren, and therefore could play a role in detecting a pending car accident (potentially picking up the sound of screeching tires, that kind of thing).   
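Here is a rough sketch of how such microphone data could be folded into crash detection. In practice this would involve a trained audio-event classifier; the label set, loudness check, and threshold below are stand-ins that I am assuming purely for illustration.

```python
# Sketch of flagging crash-related sounds from the vehicle's microphones.
# The classifier labels and loudness threshold are illustrative assumptions;
# a deployed system would rely on a trained audio-event model.

import numpy as np

CRASH_SOUND_LABELS = {"metal_impact", "glass_break", "tire_screech"}


def loudness_dbfs(frame: np.ndarray) -> float:
    """Approximate loudness of one audio frame in dB relative to full scale."""
    rms = float(np.sqrt(np.mean(np.square(frame)))) + 1e-12
    return 20.0 * np.log10(rms)


def suspected_crash_sound(frame: np.ndarray, classifier_label: str,
                          loudness_threshold_dbfs: float = -10.0) -> bool:
    """Combine a loud transient with a crash-like label from the audio classifier."""
    return (loudness_dbfs(frame) > loudness_threshold_dbfs
            and classifier_label in CRASH_SOUND_LABELS)
```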

Would a self-driving car realize that a golf ball has landed on the windshield and made a large crack? 

Maybe yes, maybe no. 

It is hard to say whether the sensors would have detected the ball while it was in the air. Once the golf ball struck the windshield, there might not be any sensors that would raise an alarm, since the cameras used by the AI are generally not dependent on seeing through the windshield. The vehicle would still be fully drivable, and thus no indication would arise from how the vehicle controls were responding to the AI.   

Would a self-driving car realize that the vehicle was sideswiped by another car?   

Assuming that none of the sensors mounted in the side of the car were damaged, this again is a questionable incident in terms of being detected.   

What about the cans of paint that fell off the truck?   

Well, this one brings up some important points. Yes, the odds are that the sensors would have detected the paint cans. It seems a lot less likely that the AI would “sense” the paint cans bouncing off the underbody of the car. The rattling and ping-ponging might be detected via the interior microphones, but that’s a stretch for most of today’s AI self-driving systems. And let’s assume that the driving controls are not damaged, ergo that’s not a telltale clue either. 

The interesting point here is that if the AI determines that a car crash or collision of some kind was likely imminent, and yet there does not seem to be any demonstrable result or outcome, what should the AI do?   

You could argue that the AI can just continue driving along.   

If the passenger speaks up and complains or raises an issue, perhaps the AI would then come to a safe stop, or the passenger might use an in-car OnStar-like feature to make contact with the fleet operator to seek assistance.   

On the other hand, you could also assert that if a car crash or incident was expected, and yet no apparent damage was detected, the AI nonetheless ought to come to a safe stop, doing so out of an abundance of caution.   
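Putting those competing positions together, the policy boils down to a small decision table. Here is a minimal Python sketch of that “suspected crash, no confirmed damage” logic; the action names and the bias toward a precautionary stop reflect the cautious stance described above rather than any standardized industry behavior.

```python
# Sketch of the "suspected crash, no confirmed damage" policy. Action names
# are hypothetical; the cautious bias mirrors the discussion above.

from enum import Enum, auto


class PostEventAction(Enum):
    CONTINUE = auto()
    SAFE_STOP = auto()
    SAFE_STOP_AND_NOTIFY_FLEET = auto()


def post_event_policy(impact_predicted: bool, damage_detected: bool,
                      passenger_reported_issue: bool) -> PostEventAction:
    if damage_detected:
        return PostEventAction.SAFE_STOP_AND_NOTIFY_FLEET
    if impact_predicted or passenger_reported_issue:
        # Out of an abundance of caution, stop even without confirmed damage.
        return PostEventAction.SAFE_STOP
    return PostEventAction.CONTINUE
```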

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my detailed elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

A human driver would seemingly know when their car has been involved in a car crash or collision. Also, they can stop the car, get out, walk around the vehicle, and upon inspection ascertain what, if any, damage might have been incurred. In some cases, you don’t even need to get out of the car and can lean over to take a look or otherwise do the inspection while still inside the vehicle.   

Currently, few self-driving cars are being outfitted with cameras that look at the self-driving car itself. In other words, the cameras are aimed outward, rightfully so, seeking to detect what is around or coming at the vehicle. 

Do we need cameras that can look at the self-driving car, doing so to aid in ascertaining and assessing the status of the vehicle? 

One approach being considered is to use a drone that is associated with a self-driving car. The drone might be used for a variety of purposes, such as bringing an item to the car while the vehicle is in transit, or taking something to someone else. This drone capability could also be used to get a bird’s-eye view of the self-driving car.   
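For what it’s worth, a drone-assisted self-inspection might look something like the following sketch. Everything here, including the drone object, its methods, and the damage-assessment step, is a hypothetical placeholder, since no standard API exists for this today.

```python
# Entirely speculative sketch of a drone-assisted vehicle self-inspection.
# The drone interface and damage-assessment model are hypothetical placeholders.

from typing import List


def inspect_vehicle_with_drone(drone, vehicle_id: str) -> List[str]:
    """Orbit the stopped vehicle, capture exterior images, and return any
    suspected-damage findings produced by an image-assessment model."""
    findings: List[str] = []
    for viewpoint in drone.orbit_waypoints(vehicle_id):   # hypothetical API
        image = drone.capture_image(viewpoint)            # hypothetical API
        findings.extend(drone.assess_damage(image))       # hypothetical model
    return findings
```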

All told, I am abundantly hoping that I will not end up with any additional stories about being in a car crash, regardless of whether I’m doing the driving or one day riding routinely inside ubiquitous self-driving cars.   

Copyright 2020 Dr. Lance Eliot. This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] 

