
Why self-driving cars aren’t safe yet: rain, roadworks and other obstacles

Driverless technology remains a work in progress, as the fatal crash of a Tesla Model S tragically showed. Here are some of the flaws that persist in autopilot technology

Last week's fatal crash involving a Tesla Model S offers a startling reminder that driverless technology is still a work in progress.

As Tesla's own blog post on the tragic loss points out, the autopilot technology that was controlling Joshua Brown's car when it ploughed into a truck is in a "public beta phase". That means the software has been released into the wild to be stress-tested by members of the public so that bugs can be flushed out. It's the kind of approach we are used to seeing when we gain early access to new email applications or virtual reality headsets. As Apple co-founder Steve Wozniak told the New York Times: "Beta products shouldn't have such life-and-death consequences."

Until there's been a full investigation into the tragic incident, we won't know whether a software glitch or human error (particularly with reports suggesting the driver may have been watching a Harry Potter DVD) was at fault. All we know for now is that neither autopilot nor the driver noticed the white side of the tractor trailer against the brightly lit sky, so the brake was not applied.

Tesla's autopilot uses both cameras and radar to detect and avoid obstacles. In this case we know there must have been a double failing: the cameras struggled with the glare from the sun, while the radar – according to Musk – "tunes out what looks like an overhead road sign to avoid false braking events".

It's not just direct sunlight that messes with the sensors that power self-driving systems. Here are some other challenging obstacles for the technology.

Sensor fusion

When you have multiple sensors giving conflicting information, which one do you defer to? This seemed to be an issue at play in the fatal Tesla crash, where the one sensor that did spot the truck discounted it, assuming it was an overhead road sign.

The big question for driverless car makers is: how does the intelligence of the machine know that the radar sensor is the one to believe? "That's the secret sauce," says Sridhar Lakshmanan, a self-driving car specialist and engineering professor at the University of Michigan-Dearborn.
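To get a feel for the problem Lakshmanan describes, consider a highly simplified, hypothetical fusion rule – not Tesla's or anyone else's actual logic – in which the car only triggers a braking event when the combined confidence of agreeing sensors crosses a threshold. A lone, low-confidence radar return (something that looks like an overhead sign) is then quietly ignored.

```python
# Hypothetical, much-simplified sensor-fusion check, for illustration only.
# Production autopilot systems are far more sophisticated than this.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "radar" or "camera"
    obstacle: bool     # did this sensor report an obstacle ahead?
    confidence: float  # the sensor's own certainty, 0.0 to 1.0

def should_brake(detections, threshold=1.0):
    """Brake only when the summed confidence of sensors that agree
    there is an obstacle exceeds the threshold."""
    score = sum(d.confidence for d in detections if d.obstacle)
    return score >= threshold

# Loosely modelling the crash scenario: the camera sees nothing against
# the bright sky, the radar sees something but treats it as a probable
# overhead road sign and reports low confidence.
readings = [
    Detection("camera", obstacle=False, confidence=0.0),
    Detection("radar", obstacle=True, confidence=0.4),
]

print(should_brake(readings))  # False - no braking event is triggered
```

The sketch shows why such a rule suppresses false alarms from overhead signs and bridges, but also why it can suppress a genuine obstacle when the corroborating sensor is blinded.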

Roadworks

When Delphi sent an autonomous car 3,400 miles across the United States in April 2015, engineers had to take control of the car only for a 50-mile stretch. The reason? Unpredictable urban conditions with unmarked lanes and heavy roadworks. So your average city commute then.

Sandbags (and assumptions)

One of Google's self-driving cars collided with a public bus in Mountain View in February 2016 as it tried to get around some sandbags on the street. In attempting to navigate around the sandbags, the car's left front struck the right side of a bus that was trying to overtake. The car had detected the bus but predicted it would yield. The test driver behind the wheel made the same assumption.

"Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day," said Google of the incident.

The weather

Adverse weather conditions create visibility problems for both people and the sensors that power driverless technology. Rain can reduce the range and accuracy of laser-based lidar sensors, obscure the vision of on-board cameras and create confusing reflections and glare. In a bid to improve the performance of driverless technology in soggy conditions, Google has started testing its cars on public roads near Seattle, where regular rain is guaranteed.

Hacking

As cars become more hi-tech they become more vulnerable to hacking. With driverless vehicles, the extra computers, internet connectivity and sensors increase the possible vulnerabilities. In a proof-of-concept attack, security researcher Jonathan Petit showed that lidar can be easily fooled into detecting a non-existent obstacle using a handheld laser pointer. This can force the car to slow down, stop or swerve.
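Petit's demonstration works because a spoofed echo looks like a genuine lidar return. One commonly discussed mitigation, sketched below in deliberately naive form, is to ignore a lidar "obstacle" unless it persists across several consecutive frames and is corroborated by another sensor. The class and its behaviour here are illustrative assumptions, not taken from any production system.

```python
# Illustrative plausibility filter for lidar detections, assuming a simple
# frame-by-frame update loop. Not a description of any real vehicle's defences.

from collections import deque

class ObstacleFilter:
    def __init__(self, frames_required=3):
        self.frames_required = frames_required
        self.history = deque(maxlen=frames_required)

    def update(self, lidar_sees_obstacle, radar_sees_obstacle):
        """Return True only if lidar has reported the obstacle for
        several consecutive frames AND radar agrees it is there."""
        self.history.append(lidar_sees_obstacle)
        persistent = (len(self.history) == self.frames_required
                      and all(self.history))
        return persistent and radar_sees_obstacle

f = ObstacleFilter()
# A single spoofed lidar frame (e.g. from a laser pointer) with no radar return:
print(f.update(True, False))   # False - treated as noise, the car does not react
```

Of course, this kind of filtering trades responsiveness for robustness: waiting for corroboration also delays the reaction to a real, fast-appearing obstacle.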

Humans behind the wheel of cars with self-driving tech

Just as humans are at fault in more than 90% of car accidents, so too can they be the weakest link in semi-autonomous vehicles. Particularly when a feature has been labelled "autopilot", it can be all too easy to place trust in the machine prematurely. "Maybe these intermediate levels [of automation] are not a viable consumer product," says Richard Wallace, the director of the Transportation Systems Analysis group within the Center for Automotive Research. "They go a little too far in encouraging drivers to check out, and yet they aren't ready to take control."

Other people on the road

It's not just the humans inside cars with self-driving technology, but those in other vehicles, who need to be vigilant. Accident rates involving driverless cars are twice as high as for regular cars, according to a study by the University of Michigan's Transportation Research Institute, which looked at data from Google, Delphi and Audi. However, the driverless cars weren't at fault: they are typically hit from behind by inattentive or aggressive humans unaccustomed to self-driving motorists being such sticklers for the rules of the road.

To address this, Google has programmed its cars to behave in more human, familiar ways, such as inching forward at a four-way stop to indicate they're going next. However, the cars' super-quick reaction times when faced with an obstacle can still take human drivers by surprise.

Read more: https://www.theguardian.com/technology/2016/jul/05/tesla-crash-self-driving-car-software-flaws