Another position to take against driverless cars (or rather, against their proliferation and eventual obligation) is the loss of freedom and control. The fault of a human causing harm to another human is replaced with the fault of an opaque, massively complex algorithm, and that is rather unsettling to some people.
There's also the potential for bad actors to plant dangerous behavior into these algorithms through poisoned training. Hypothetically, someone could turn an entire fleet of self-driving taxis into an assassination mechanism: each vehicle waits until a specific individual is spotted crossing the street, then fails to stop at a light. The corporation that owns the taxi then claims a software failure in that one vehicle, takes it out of commission, and moves on with business as usual. A few deaths a year is still less than the death toll of human-operated cars, after all.
An interesting question that I don't know the answer to: how many of these concerns were similarly raised during the rise of the car, or the train before it?