Self-Driving Cars: The Good, the Bad & the Ugly
Is Better than Humans Enough?
Missy Cummings
Director
Autonomy and Robotics Center
George Mason University
Sponsored by PSW Science Members Erica & Bruce Kane
About the Lecture
Self-driving cars have been a dream almost since the automobile was invented. With the rise of artificial intelligence (AI), this dream has seemingly become reality, with driverless commercial operations already taking place in a handful of cities around the world. However, the recent tragic accident involving a pedestrian and a Cruise self-driving car, together with a number of high-profile Tesla crashes, raises the possibility that such systems may not actually be as capable as envisioned, and questions have arisen about their safety both nationally and internationally. Given these concerns, it is important to step back and analyze both the actual safety records of these vehicles and just why AI struggles to operate safely under all conditions in autonomous vehicles. This talk will highlight the strengths and weaknesses of AI in self-driving cars, as well as in all safety-critical applications, and lay out a roadmap for safe integration of these technologies on public roadways.
Selected Reading & Media References
https://spectrum.ieee.org/self-driving-cars-2662494269
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/7394
About the Speaker
Mary (Missy) Cummings is Professor of Mechanical Engineering, Electrical and Computer Engineering, and Computer Science, and Director of the Mason Autonomy and Robotics Center (MARC) at George Mason University. Before joining the GMU faculty, she served as the Senior Safety Advisor to the US National Highway Traffic Safety Administration and held positions at Duke University and MIT. Prior to her academic career, Missy served as a naval officer and Navy fighter pilot, one of the first female fighter pilots in Navy history.
Missy’s research interests focus on the application of artificial intelligence in safety-critical systems, assured autonomy, human-systems engineering, and the ethical and social impacts of technologies.
Among other honors and awards, Missy is a Fellow of the American Institute of Aeronautics and Astronautics (AIAA). She serves on several national and international committees including co-chairing the World Economic Forum’s Global Future Council of Autonomous Mobility, and she is a member of the National Academies Committee for AI for Scientific Discovery.
Missy earned a BS in Mathematics at the US Naval Academy, an MS in Space Systems Engineering at the Naval Postgraduate School, and a PhD in Systems Engineering at the University of Virginia.
Minutes
On December 1, 2023, in the Powell Auditorium of the Cosmos Club in Washington, D.C., President Larry Millstein called the 2,486th meeting of the Society to order at 8:15 p.m. ET. He began by welcoming attendees, thanking sponsors for their support, and announcing new members. Cameo Lance then read the minutes of the previous meeting, which included the lecture by Robert Truog on “Defining Death.” The minutes were approved, subject to a minor addition suggested by President Millstein.
President Millstein then introduced the speaker for the evening, Missy Cummings, of George Mason University. Her lecture was titled “Self-Driving Cars: The Good, the Bad & the Ugly.”
The speaker began by briefly sharing her background, not only as a fighter pilot but also as a researcher focused on unmanned aerial vehicles. She described her move into work on self-driving cars, noting the similarities between the sensors used in drones and those in autonomous vehicles, and her subsequent shift to self-driving car policy, including her work with the National Highway Traffic Safety Administration.
The speaker then discussed the workings of self-driving cars, the digital nature of their controls, and the importance of building a world model for navigation. She discussed the sensor technologies used in self-driving cars, such as radar, LiDAR, and computer vision, while addressing their limitations and challenges. She emphasized the importance of combining multiple sensors for comprehensive coverage.
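As a rough illustration of the sensor-fusion idea described above (not drawn from the lecture itself), the following Python sketch combines hypothetical radar, LiDAR, and camera detections into a single confidence-weighted range estimate; the sensor names, fields, and confidence values are illustrative assumptions only.

```python
# Minimal sketch: merging detections from complementary sensors into one
# world-model entry. All names, fields, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar", "lidar", or "camera"
    distance_m: float  # estimated range to the object
    confidence: float  # 0.0 to 1.0

def fuse(detections: list) -> dict:
    """Combine per-sensor detections into one confidence-weighted estimate."""
    if not detections:
        return {"tracked": False}
    total_conf = sum(d.confidence for d in detections)
    fused_range = sum(d.distance_m * d.confidence for d in detections) / total_conf
    return {
        "tracked": True,
        "range_m": round(fused_range, 1),
        "sensors_used": sorted({d.sensor for d in detections}),
    }

print(fuse([
    Detection("radar", 42.0, 0.9),   # radar: robust in rain, coarse shape
    Detection("lidar", 40.5, 0.8),   # lidar: precise geometry, weather-sensitive
    Detection("camera", 39.0, 0.6),  # camera: rich semantics, lighting-sensitive
]))
```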
Cummings then explained the challenges of sensor technologies, especially in adverse weather conditions, and offered a cautionary note about the limitations of computer vision based on neural networks, which, she said, can have "hallucinations" when encountering unforeseen scenarios.
She addressed the misconception that internet connectivity is essential for autonomous vehicles and highlighted the potential dangers of relying on internet signals for critical decisions. The speaker then discussed the concept of the "stack" in self-driving cars, representing the layers of information processing. She explained that the stack takes in data from sensors, formulates driving strategies, plans maneuvers, and executes actions, drawing parallels with human decision-making.
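The layered stack can be pictured as a simple pipeline. The sketch below is a minimal, hypothetical illustration of that flow; the function names, thresholds, and speeds are assumptions, not any manufacturer's actual software.

```python
# Minimal sketch of the layered "stack": sense -> build world model ->
# decide strategy -> plan a maneuver -> act. Values are illustrative only.

def perceive(sensor_frame):
    """Turn raw sensor data into a world model (objects, lanes, free space)."""
    return {"obstacle_ahead": sensor_frame.get("radar_range_m", 999) < 30}

def decide(world_model):
    """High-level driving strategy, loosely analogous to human judgment."""
    return "slow_down" if world_model["obstacle_ahead"] else "maintain_speed"

def plan(strategy):
    """Translate the strategy into a concrete maneuver."""
    return {"target_speed_mps": 5.0} if strategy == "slow_down" else {"target_speed_mps": 15.0}

def act(maneuver):
    """Send commands to the drive-by-wire controls."""
    return f"commanding {maneuver['target_speed_mps']} m/s"

frame = {"radar_range_m": 22.0}            # example sensor reading
print(act(plan(decide(perceive(frame)))))  # -> "commanding 5.0 m/s"
```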
The speaker then discussed disengagements: instances where humans take control (human disengagement) or autonomous systems relinquish control (autonomy-initiated disengagement). She presented disengagement data reported by a number of companies, focusing in particular on data from Waymo and Cruise, and she expressed concerns about the transparency with which companies report disengagements.
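Disengagement figures are usually normalized by mileage so fleets of different sizes can be compared. The short sketch below shows one common normalization, miles per disengagement, using made-up counts rather than Waymo or Cruise figures.

```python
# Illustrative only: normalizing disengagements by miles driven.
# The counts below are placeholders, not any company's reported data.

def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    if disengagements == 0:
        return float("inf")  # no disengagements reported over this mileage
    return miles_driven / disengagements

# Hypothetical fleet: 1.2 million test miles, 40 reported disengagements
print(f"{miles_per_disengagement(1_200_000, 40):,.0f} miles per disengagement")
```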
The lecture continued with a brief overview of crash odds ratios, comparing the crash rates of self-driving cars to those of human drivers in urban environments. She highlighted the limited data available and the need for far more extensive testing, on the order of 250 million miles, to draw statistically significant conclusions. She noted that she and others have not been able to access the raw data that Tesla has collected on accidents and incidents involving its vehicles with autonomous driving features.
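As a purely hypothetical illustration of the kind of rate comparison involved, the sketch below computes crashes per million miles for an autonomous fleet and a human baseline and takes their ratio; all counts and mileages are placeholders, and with only a few million autonomous miles the uncertainty around such a ratio remains large.

```python
# Illustrative only: a simple crash-rate ratio (AV vs human), per million miles.
# Every number below is a made-up placeholder, not real crash data.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

av_rate = crashes_per_million_miles(crashes=12, miles=3_000_000)             # hypothetical AV fleet
human_rate = crashes_per_million_miles(crashes=4_200, miles=1_000_000_000)   # hypothetical human baseline

print(f"AV: {av_rate:.2f} vs human: {human_rate:.2f} crashes per million miles")
print(f"rate ratio: {av_rate / human_rate:.2f}")  # >1 would mean AVs crash more often per mile
```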
The speaker emphasized the importance of learning from mistakes, drawing parallels between human errors in driving and potential human errors in coding for autonomous systems. She recounted an incident in which a self-driving car was involved in a crash with a semi-tractor trailer because of a coding error: a failure to clear a memory buffer.
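The class of bug she described can be illustrated with a short, purely hypothetical sketch of a perception buffer that is not cleared between cycles, so the planner keeps reasoning on stale detections; this is a reconstruction of the failure mode, not the actual vendor code.

```python
# Minimal sketch of a stale-buffer bug: detections accumulate across
# perception cycles when the buffer is never cleared. Hypothetical code.

class PerceptionBuffer:
    def __init__(self):
        self.detections = []

    def update(self, new_detections, clear_first=True):
        if clear_first:
            self.detections.clear()  # the step missing in the buggy version
        self.detections.extend(new_detections)

buggy = PerceptionBuffer()
buggy.update(["car_ahead_40m"], clear_first=False)  # cycle 1
buggy.update([], clear_first=False)                 # cycle 2: road is actually clear
print(buggy.detections)   # stale 'car_ahead_40m' persists; planner uses an outdated world model

fixed = PerceptionBuffer()
fixed.update(["car_ahead_40m"])  # cycle 1
fixed.update([])                 # cycle 2
print(fixed.detections)          # [] -> world model reflects current sensor data
```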
Cummings then discussed an incident in San Francisco’s Bay Bridge tunnel, where a car abruptly slammed on its brakes, causing a multi-vehicle pile-up. This phenomenon, called phantom braking, can occur when the car’s system misinterprets its surroundings, she said, and it is a widespread problem in autonomous vehicle systems.
She went on to provide a detailed example of a planning problem in autonomous driving, in which a self-driving car misinterpreted a right-turn-only lane and stopped abruptly, leading to a collision with a rideshare car. The speaker noted that the company involved had not embedded physics-based models that might have prevented the crash.
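A physics-based sanity check of the kind she alluded to might be as simple as a stopping-distance test for trailing traffic before commanding an abrupt stop. The sketch below uses the constant-deceleration formula v^2 / (2a); the speeds, gap, braking limit, and reaction time are assumed values, not parameters from the incident.

```python
# Illustrative only: a constant-deceleration stopping-distance check for a
# vehicle following behind, before an abrupt stop is commanded.

def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance to stop from speed_mps under constant braking (v^2 / 2a)."""
    return speed_mps ** 2 / (2 * decel_mps2)

def abrupt_stop_is_risky(follower_speed_mps: float, gap_m: float,
                         reaction_time_s: float = 1.0) -> bool:
    """True if a trailing driver likely cannot stop before closing the gap."""
    needed = follower_speed_mps * reaction_time_s + stopping_distance_m(follower_speed_mps)
    return needed > gap_m

# ~30 mph follower with a 15 m gap -> the check flags the stop as risky
print(abrupt_stop_is_risky(follower_speed_mps=13.4, gap_m=15.0))  # True
```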
The speaker emphasized the importance of updating models as the environment changes. She discussed an incident in which a self-driving car hit an articulated bus; the crash was attributed to the computer vision system’s reliance on training data from standard buses, which led to a miscalculation when the car encountered the accordion-style bus.
Cummings discussed the limitations of systems that rely on artificial intelligence, particularly in the context of self-driving cars. She emphasized that systems like ChatGPT operate on statistical inference and lack sentience or the ability to discern truth from fiction. She ended the lecture by calling for better collaboration between humans and AI, leveraging their respective strengths.
The lecture was followed by a Question and Answer session.
During the session, questions were raised about the regulatory landscape for autonomous vehicles and, in particular, why they are still allowed on the roads. The speaker indicated that the main reason is that these systems are considered driver assists rather than full self-driving technology. Tesla’s Autopilot, for example, requires drivers to sign statements acknowledging their responsibility to maintain control and comply with warnings. But people often become complacent and overly reliant on these systems, she said.
A member asked about standards and testing for self-driving cars, particularly regarding non-deterministic AI. The speaker acknowledged the lack of standards and emphasized the challenges of testing non-deterministic AI, pointing out the absence of requirements for companies to demonstrate their testing.
Another question addressed Tesla’s decision not to use LiDAR technology and inquired about the potential benefits of vehicle-to-vehicle communication. The speaker explained that Tesla’s initial avoidance of LiDAR was due to cost concerns. She described vehicle-to-vehicle communication as a "nice to have" rather than a necessity.
The Q&A session concluded with a question about the positive aspects of self-driving cars, to which the speaker mentioned their potential effectiveness in specific applications, such as slow-speed shuttles and last-mile food delivery. However, she expressed her overall skepticism about the feasibility of widespread adoption of fully autonomous vehicles for long-distance travel in the near future.
After the question and answer period, President Millstein thanked the speaker and presented her with a PSW ribbon, a signed copy of the announcement of her talk, and a signed copy of Volume 1 of the PSW Bulletin. He then announced the speakers of upcoming lectures, made a number of housekeeping announcements, and invited guests to join the Society. He adjourned the 2,486th meeting of the Society at 10:10 p.m. ET.
Temperature in Washington, DC: 8° C
Weather: Overcast and light rain
Audience in the Powell auditorium: 49
Views of the video in the first two weeks: 380