“The Automatic Emergency Braking (AEB) or Autopilot systems may not function as designed, increasing the risk of a crash.” It's a simple sentence, delivered with the calm finality of bureaucratic certainty. It is a literal post-mortem, the bottom-line-up-front from the National Highway Traffic Safety Administration's investigation into the first fatal crash involving a semi-autonomous car—one made by Tesla Motors. The investigation into the crash opened today, and it will likely cast a long shadow over the future of self-driving cars, which have long been heralded as potentially life-saving devices.
Human drivers are imperfect pilots, placed in command of several thousand pounds of fast-moving metal. We're just not equipped for the task: eyes that evolved to point forward (the better to navigate our ancestral home in the trees) leave us with a narrow field of view. Even with well-positioned mirrors, a human driving a car through three-dimensional space is bound to have blind spots. But what if the car itself didn't? What if cars could sense where they were, and then communicate that information to other cars? Suddenly, a dense road of imperfectly piloted vehicles would become a smart, safe network, with the cars themselves constantly pinpointing one another in space and time.
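To make that idea a bit more concrete, here is a minimal sketch, in Python, of the kind of message such cars might broadcast and how a receiving car might use it. Everything here is an illustrative assumption of this sketch—the field names, the equirectangular distance shortcut, the 50-meter "nearby" radius—not a feature of any real vehicle-to-vehicle standard.

```python
import math
from dataclasses import dataclass

@dataclass
class PositionBroadcast:
    """One car's self-reported state, shared with nearby vehicles.

    A hypothetical message format for illustration only; real
    vehicle-to-vehicle protocols carry far richer payloads.
    """
    car_id: str
    latitude: float     # degrees
    longitude: float    # degrees
    speed_mps: float    # meters per second
    heading_deg: float  # 0 = north, clockwise

def distance_m(a: PositionBroadcast, b: PositionBroadcast) -> float:
    """Approximate ground distance between two cars, in meters.

    Uses an equirectangular approximation, which is good enough at
    the short ranges that matter for collision awareness.
    """
    earth_radius_m = 6_371_000
    lat_mid = math.radians((a.latitude + b.latitude) / 2)
    dx = math.radians(b.longitude - a.longitude) * math.cos(lat_mid)
    dy = math.radians(b.latitude - a.latitude)
    return earth_radius_m * math.hypot(dx, dy)

def nearby_cars(me: PositionBroadcast, others: list[PositionBroadcast],
                radius_m: float = 50.0) -> list[str]:
    """Return the ids of broadcasting cars within radius_m meters."""
    return [o.car_id for o in others
            if o.car_id != me.car_id and distance_m(me, o) <= radius_m]

# Example: one car a few meters away, another a few kilometers off.
me = PositionBroadcast("car_a", 37.77490, -122.41940, 13.0, 90.0)
fleet = [
    PositionBroadcast("car_b", 37.77492, -122.41935, 12.5, 90.0),
    PositionBroadcast("car_c", 37.80000, -122.40000, 20.0, 180.0),
]
print(nearby_cars(me, fleet))  # -> ['car_b']
```

In a real network, of course, every car would both broadcast its own state and listen for everyone else's, refreshing this picture many times per second.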
Long before we have fully autonomous cars – ones in which we can kick back and catch up on Westworld as our faithful robot delivers us safely to work – we'll live in a world of shared responsibility. Tesla Autopilot and the other advanced semi-autonomous systems on the way require the driver to stay alert, in case a situation arises in which they need to step in and take the wheel.
In 1957, automakers simply did not make the engineering of their race cars available to the average (even supremely wealthy) driver. Jaguar did. That year, it let loose the classic XKSS, a road-going model based on the D-Type that won Le Mans three years running. It was the world's first supercar.