Ensuring Safe Autonomous Vehicles: Strategies and Challenges

Researchers have shown that understanding what you don’t know can help reduce mishaps as computer-driven vehicles and aircraft grow more prevalent.

[Illustration: green cars driving on multilane highways, with one central yellow car surrounded by a glowing field]

Researchers are developing strategies to quantify the uncertainty in the perception systems and machine-learning algorithms behind autonomous vehicles. Understanding what these systems don’t know, they argue, is the key to making self-driving technology safer.

The original version of this story appeared in Quanta Magazine.

Autonomous vehicles and aircraft are no longer a thing of the future. As of August 2023, two taxi companies in San Francisco alone had amassed 8 million kilometers of autonomous driving. And more than 850,000 drones are registered in the US, not counting those owned by the military.

However, there are legitimate concerns about safety. For instance, the National Highway Traffic Safety Administration recorded roughly 400 crashes involving cars using autonomous driving systems over a 10-month period that ended in May 2022. Those incidents resulted in six fatalities and five serious injuries.

The standard approach to this problem is to test these systems until you’re confident they’re safe, a process often called “testing by exhaustion.” But this technique can never guarantee that every possible flaw has been found. “People carry out tests until they’ve exhausted their resources and patience,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, is not a guarantee.

Mitra and his colleagues, however, can offer such guarantees. His group has demonstrated the safety of autonomous aircraft landing systems and of lane-tracking capabilities for cars. Their approach is already being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year. “Their method of providing end-to-end safety guarantees is very important,” said Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA’s Ames Research Center.

Those guarantees concern the outputs of the machine-learning algorithms that feed information to autonomous vehicles. Many autonomous vehicles have two main components: a perception system and a control system. The perception system tells you, for instance, how far your car is from the center of its lane, or what direction a plane is heading and what its angle is with respect to the horizon. It works by feeding raw data from cameras and other sensors into machine-learning algorithms, based on neural networks, which reconstruct the scene outside the vehicle.

Those assessments are passed to a separate system, the control module, which decides what to do: whether to steer around an obstacle, for instance, or apply the brakes. The control module relies on well-established technology, but, according to Luca Carlone, an associate professor at the Massachusetts Institute of Technology, “it is making decisions based on the perception results, and there’s no guarantee that those results are correct.”
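
To make the division of labor concrete, here is a minimal sketch of a single step of such a pipeline, with hypothetical names and numbers (a noisy stub stands in for the neural perception module; this is not the actual systems described here):

```python
import random

def perceive(true_offset_m: float) -> float:
    """Stand-in for the neural perception module: estimate the car's
    lateral offset from the lane center, with bounded estimation error.
    (A real system would run a trained network on camera frames.)"""
    return true_offset_m + random.uniform(-0.2, 0.2)

def control(estimated_offset_m: float) -> float:
    """Control module: a simple proportional steering rule. It acts on
    the estimate, so any perception error propagates directly into
    the steering decision."""
    GAIN = 0.5
    return -GAIN * estimated_offset_m

# One step of the sense-then-act loop.
steering = control(perceive(true_offset_m=0.8))
print(f"steering command: {steering:+.2f}")
```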

To provide a safety guarantee, Mitra’s team focused on ensuring the reliability of the vehicle’s perception system. They started by assuming that safety can be guaranteed when a perfect rendering of the outside world is available. Then they determined how much error the perception system introduces into its reconstruction of the vehicle’s surroundings.

The key to this strategy is to quantify the uncertainties involved, known as the error band, or the “known unknowns,” as Mitra put it. That calculation rests on what he and his team call a perception contract. In software engineering, a contract is a guarantee that, for a given input, a program’s output will fall within a specified range. Figuring out this range is hard. How accurate are the car’s sensors? How much fog, rain, or solar glare can a drone tolerate? But if you can keep the vehicle within a specified range of uncertainty, and if the determination of that range is sufficiently accurate, Mitra’s team proved that you can ensure its safety.
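
In code, such a contract can be expressed as pre- and postconditions around the estimator. The sketch below uses invented numbers for the operating range and error band; in practice the bound would be established offline by verification or testing, since ground truth is not available at run time:

```python
import random

def perceive(true_offset_m: float) -> float:
    """Noisy stand-in for a neural lane-offset estimator (illustrative)."""
    return true_offset_m + random.uniform(-0.2, 0.2)

def checked_perceive(true_offset_m: float) -> float:
    """The estimator wrapped in a hypothetical perception contract:
    for inputs within the operating range (precondition), the estimate
    is promised to lie within ERROR_BAND_M of the truth (postcondition)."""
    OPERATING_RANGE_M = 2.0   # conditions the contract claims to cover
    ERROR_BAND_M = 0.25       # the promised error band

    assert abs(true_offset_m) <= OPERATING_RANGE_M, "outside operating range"
    estimate = perceive(true_offset_m)
    assert abs(estimate - true_offset_m) <= ERROR_BAND_M, "contract violated"
    return estimate

print(f"estimated offset: {checked_perceive(0.8):.2f} m")
```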


Anyone who has driven with an inaccurate speedometer is familiar with this situation. If you know the device is never off by more than 5 miles per hour, you can still avoid speeding by always staying 5 mph below the speed limit (as indicated by your unreliable speedometer). A perception contract provides a similar guarantee of the safety of an imperfect machine-learning-based system.
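
That margin logic can be written out directly. A toy sketch, assuming (as above) a 5 mph error bound and, say, a 65 mph speed limit:

```python
SPEED_LIMIT_MPH = 65.0
MAX_SPEEDO_ERROR_MPH = 5.0  # the known error bound: a "known unknown"

def safe_indicated_speed() -> float:
    """Keep the *displayed* speed a full error band below the limit,
    so the true speed can never exceed it."""
    return SPEED_LIMIT_MPH - MAX_SPEEDO_ERROR_MPH

def worst_case_true_speed(displayed_mph: float) -> float:
    """Highest true speed consistent with the displayed reading."""
    return displayed_mph + MAX_SPEEDO_ERROR_MPH

target = safe_indicated_speed()
# Safety follows from the error bound, not from a perfect sensor.
assert worst_case_true_speed(target) <= SPEED_LIMIT_MPH
print(f"drive at an indicated {target:.0f} mph or less")
```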

“You don’t need perfect perception,” Carlone said. “You just want it to be good enough that safety is not at risk.” The team’s biggest accomplishments, he said, are “introducing the entire idea of perception contracts” and providing the methods for constructing them. They did this by drawing on techniques from the branch of computer science called formal verification, which provides a mathematical way of confirming that a system’s behavior satisfies a set of requirements.
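
One common ingredient in formally verifying neural networks is interval arithmetic: propagate the entire range of possible inputs through each layer, yielding a range the output provably falls inside. The toy scalar sketch below is illustrative only, not the team’s actual method:

```python
def interval_affine(lo: float, hi: float, w: float, b: float) -> tuple[float, float]:
    """Propagate the interval [lo, hi] through y = w*x + b."""
    a, c = w * lo + b, w * hi + b
    return min(a, c), max(a, c)

def interval_relu(lo: float, hi: float) -> tuple[float, float]:
    """Propagate an interval through ReLU(x) = max(x, 0)."""
    return max(lo, 0.0), max(hi, 0.0)

# A tiny two-layer "network" with scalar weights, for illustration.
lo, hi = -1.0, 1.0                               # all inputs we must handle
lo, hi = interval_affine(lo, hi, w=-2.0, b=0.5)  # layer 1
lo, hi = interval_relu(lo, hi)                   # activation
lo, hi = interval_affine(lo, hi, w=0.3, b=-0.1)  # layer 2
print(f"output provably lies in [{lo:.2f}, {hi:.2f}]")
```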

“Even though we don’t know exactly how the neural network does what it does, we showed that it’s still possible to prove numerically that the uncertainty of the neural network’s output lies within certain bounds,” Mitra said. And in that case, the system will be safe. “We can then provide a statistical guarantee as to whether (and to what degree) a given neural network will actually meet those bounds.”
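
A simple way to picture the statistical side (a sketch under assumed noise, not the team’s actual procedure) is to sample the estimator on many scenes with known ground truth and measure how often its error stays inside the claimed band:

```python
import random

def perceive(true_offset_m: float) -> float:
    """Noisy stand-in for a neural lane-offset estimator (illustrative)."""
    return true_offset_m + random.gauss(0.0, 0.08)

ERROR_BAND_M = 0.25   # the bound the perception contract promises
N_TRIALS = 100_000

# Empirical frequency with which the estimate respects the band.
hits = sum(
    abs(perceive(x) - x) <= ERROR_BAND_M
    for x in (random.uniform(-2.0, 2.0) for _ in range(N_TRIALS))
)
print(f"error stayed within the band in {hits / N_TRIALS:.2%} of trials")
```

A real guarantee would attach a confidence bound to that frequency, for example via a concentration inequality, rather than report the raw sample rate.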

The aerospace company Sierra Nevada is testing these safety guarantees while landing a drone on an aircraft carrier, a problem that is in some ways more complex than driving a car because of the extra dimension of flight. “In landing, there are two main tasks: aligning the plane with the runway and making sure the runway is clear of obstacles,” said Dragos Margineantu, chief technologist for artificial intelligence at Boeing. “We are working with Sayan on obtaining guarantees for those two tasks.”

“Simulations using Sayan’s algorithm show that the alignment does improve prior to landing,” he said. The next step, scheduled for later this year, is to deploy these systems during the actual landing of a Boeing experimental aircraft. One of the biggest challenges, Margineantu noted, will be figuring out what we don’t know (“determining the uncertainty in our estimates”) and seeing how that affects safety. “Most mistakes happen when we do things that we think we understand, and it turns out that we don’t.”

This article was originally published in Quanta Magazine, an editorially independent journal of the Simons Foundation. Its goal is to improve public understanding of science by reporting on trends and developments in mathematics, the physical and biological sciences, and research advances in these fields. Reprinted with permission.

Conclusion

As autonomous vehicles become more prevalent, addressing safety concerns is paramount. By quantifying the uncertainty in perception systems and machine-learning algorithms and turning it into provable safety guarantees, researchers are paving the way for self-driving technology that can be trusted with confidence.
