Who do you trust more, humans or machines? Think for a second: what was your first response to yesterday’s news that Uber would put autonomous cars on the streets of Pittsburgh to pick up real live passengers? Was it, oh yeah! or was it, oh no!?
I felt a twinge of both, to be honest. The idea that we’re approaching the age of autonomous transportation more quickly than many had speculated is exciting. But then I pulled back as I imagined putting my own safety or that of my family into the hands of Uber’s early-stage AI technology, even with a human in the driver’s seat. After all, this is Uber we’re talking about, the company notorious for bullying its way into local markets and running roughshod over regulations. A company that asks forgiveness, not permission. A company that adds safety features only after something horrible happens. On one hand it seems reckless, on the other it seems bold. Let’s break down the two arguments.
First up, the “Uber is reckless” camp, worried that Uber is putting passengers inside 2020 technology four years too soon.
Why put passengers at risk? Uber doesn’t need live passengers to test its self-driving technology on Pittsburgh’s streets. If it’s looking to test real-world use cases, it can retrace the exact routes previously driven by Uber drivers. After all, Uber is a technology company that can track its drivers’ every movement.
Autopilot technology is so capable and works so well most of the time that it’s natural for drivers to get lulled into a false sense of security. So, despite having an Uber driver “with their fingertips on the wheel,” it’s no guarantee the driver will override an errant vehicle. Google’s autonomous car suffered a low-speed fender bender in February despite having a driver behind the wheel.
Why isn’t Uber talking about the max speed of its autonomous passenger service? Google’s cars are limited to just 25 mph — unthinkably slow for a taxi. Yet high speeds mixed with autonomy can be deadly, even with a human in the driver’s seat.
Still, someone has to be first. Someone has to approach that future cash cow, get down on their knees and suckle at the teat of innovation. So why not Uber, a company known for its bold behavior?
Isn’t imperfect driverless technology better than none at all given that 94 percent of auto accidents are caused by human error?
Uber (and Volvo) is motivated to be safe. Imagine the impact of this headline: “Child dies in Uber autonomous car crash.” The company can’t afford major missteps along the road to replacing its 1 million drivers. Instead, Uber needs early success to open more doors with local governments around the world: “Look what we did in Pittsburgh, now let us do it in [insert city] too.”
What’s so great about human drivers anyway? Uber drivers aren’t immune to accidents, some of which have resulted in deaths. And it’s not like all human drivers are stable. Robotic cars don’t rape, and they don’t pick up passengers in the middle of a murder spree.
For many, I’m guessing the ultimate litmus test of their convictions will be money. Uber’s self-driving taxis will be free instead of charging the local rate of $1.05 per mile. And they’ll arrive at random, slathered in lots of gee-whiz technology. I know I’d climb into one if given the chance, despite my misgivings — but I’m enamored with smart home technology, so I can’t be trusted on this.
What about you? Is Uber’s self-driving passenger service too reckless for 2016, or is it a bold attempt to make our streets safer?