Lines of Sight

Alex Hanna, Emily Denton, Razvan Amironesei, Andrew Smart, and Hilary Nicole in Logic:

On the night of March 18, 2018, Elaine Herzberg was walking her bicycle across a dark desert road in Tempe, Arizona. She had crossed three lanes of a four-lane highway when a “self-driving” Volvo SUV, traveling at thirty-eight miles per hour, struck her. Thirty minutes later, she was dead. The SUV was operated by Uber, part of a fleet of self-driving car experiments running across the state. A report by the National Transportation Safety Board determined that the car’s sensors had detected an object in the road six seconds before the crash, but that the software “did not include a consideration for jaywalking pedestrians.” In the moments before the car hit Elaine, its AI software cycled through several potential identifiers for her, including “bicycle,” “vehicle,” and “other,” but it was ultimately unable to recognize her as a pedestrian on an imminent collision course with the vehicle.

How did this happen? The particular kind of AI at work in autonomous vehicles is called machine learning. Machine learning enables computers to “learn” certain tasks by analyzing data and extracting patterns from it. In the case of self-driving cars, the main task that the computer must learn is how to see. More specifically, it must learn how to perceive and meaningfully describe the visual world in a manner comparable to humans. This is the field of computer vision, and it encompasses a wide range of controversial and consequential applications, from facial recognition to drone strike targeting.
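
Concretely, “learning to see” here usually means image classification: a model maps pixels to a label drawn from a fixed set of categories. Below is a minimal sketch, assuming a pretrained torchvision ResNet; it is illustrative only, not the system described in the article, and "street_scene.jpg" stands in for whatever image you want to classify.

```python
# Minimal image-classification sketch (illustrative only; not the
# article's system). Assumes torchvision >= 0.13 and a hypothetical
# input file "street_scene.jpg".
import torch
from PIL import Image
from torchvision import models

# Load a model whose weights were learned from millions of labeled photos.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Preprocess the input the same way the training images were preprocessed.
preprocess = weights.transforms()
image = Image.open("street_scene.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

# The model scores every category it was trained on and reports its best
# guess; anything outside those categories is effectively invisible to it.
with torch.no_grad():
    scores = model(batch).softmax(dim=1)
top = scores.argmax(dim=1).item()
print(weights.meta["categories"][top], float(scores[0, top]))
```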

Unlike in traditional software development, machine learning engineers do not write explicit rules that tell a computer exactly what to do. Rather, they enable a computer to “learn” what to do by discovering patterns in data. The information used for teaching computers is known as training data. Everything a machine learning model knows about the world comes from the data it is trained on.
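
To make the contrast with explicit rules concrete, here is a minimal sketch using scikit-learn. The feature columns (width, height, speed) and the handful of labeled rows are invented for illustration and bear no relation to Uber’s actual software; the point is only that no if-then rule is written by hand, and that the model’s entire “knowledge” is whatever patterns those training rows contain.

```python
# A sketch of "learning from data" rather than writing explicit rules.
# The features, labels, and numbers below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row describes a detected object as
# [width_m, height_m, speed_m_per_s]; each label says what it was.
X_train = [
    [0.5, 1.7, 1.4],   # walking person
    [0.6, 1.8, 1.2],   # walking person
    [0.6, 1.1, 6.0],   # cyclist
    [0.7, 1.2, 5.5],   # cyclist
    [1.9, 1.5, 13.0],  # car
    [2.0, 1.4, 15.0],  # car
]
y_train = ["pedestrian", "pedestrian", "bicycle", "bicycle", "vehicle", "vehicle"]

# No engineer writes "if speed < 2 then pedestrian"; the model extracts
# its own decision boundaries from the labeled examples.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Everything the model "knows" comes from those six rows.
print(model.predict([[0.6, 1.7, 1.3]]))  # -> ['pedestrian']
```

An object unlike anything in the training rows, say a person walking a bicycle across a dark road at night, is still forced into one of the learned categories, or mislabeled entirely.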

More here.