My brother lives in the neighborhood where this public beta testing is occurring. Folks, let me tell you from first-hand experience: these cars are terrible drivers. They are extremely hesitant, and because of this they cause traffic backups and delays in places where safety is compromised. You better hope you are not behind one when it performs a left-hand turn onto a busy street, because you are probably going to be waiting a while. They stop in seemingly random situations where human drivers would never stop, like a bicyclist standing way off the side of the road (talking to someone) with his bike pointing towards traffic. People have told me they've been in the middle of a turn behind a Waymo car, when all of a sudden the car just randomly stops for no apparent reason, leaving them stranded in the middle of the intersection as the light turns red. The default reaction for a Waymo car under any sort of ambiguity seems to be to just stop (literally) until the ambiguity resolves itself. Unfortunately this causes considerable confusion and traffic backups for human drivers.
I understand why Waymo is proceeding cautiously. But you really get the impression that the technology has a long way to go. People in Chandler are pissed because this fleet of cars is a huge nuisance and is in fact less safe for human drivers who are sharing the road. I have no doubt that autonomous vehicles will be safer than human drivers someday, but that day is definitely not today.
This is pretty much spot on with my experience living in Chandler.
I recently drove behind one of these for one mile and it managed to mess up three times. All three times it did exactly what you said:
> The default reaction for a Waymo car under any sort of ambiguity seems to be to just stop (literally) until the ambiguity resolves itself.
In my experience (over one mile), this meant
1: Stopping at the first intersection and not turning right (on a green light) until there were no cars going straight through it.
2: Then, driving down the next street, there was a plastic grocery bag in the road. I thought about moving to the other lane, anticipating the car not handling the obstacle well, but it wasn't slowing down so I decided not to. Then, 10 feet from the bag, it slammed on the brakes and came to a complete stop. Even though the bag didn't move, it fortunately decided to keep driving (not sure why it would think it had to fully stop, but then, without anything changing, decide it was safe).
3: Left-hand turn at a non-busy intersection. Zero cross traffic. Actual zero, not hyperbole zero. Not a single car coming. The Waymo car just sat in the intersection through the entire light and 5+ seconds into the cross traffic's green light before the driver intervened.
I've found them very frustrating to be near in any situation that isn't straight driving. Even driving straight through construction zones is something they can't handle, and something we currently have a lot of.
I dunno, there wasn't really any point to my post, I just hate having to drive with them and wanted to vent.
The default reaction for a Waymo car under any sort of ambiguity seems to be to just stop
They do this every damn trip when they turn right from Central Expressway onto Castro in Mountain View. They wait for an unnaturally long break in traffic in both lanes, not just the near lane, to turn.
It's not even a very unusual edge condition, and they seem to be uninterested in fixing it.
> People have told me they've been in the middle of a turn behind a Waymo car, when all of a sudden the car just randomly stops for no apparent reason, leaving them stranded in the middle of the intersection as the light turns red.
How does that work? My driving experience is limited, and only in Australia, but I can't think of a situation where I'd be allowed to enter an intersection if there is already a car ahead of me that's still in it.
Edit: I'm getting downvoted so maybe I didn't explain it right. Here's a quote from AU law: "It is against the law to enter an intersection if you cannot drive through and into the road you plan to enter. However, when turning right, you can proceed into the intersection and wait near the centre of the intersection for the oncoming traffic to pass (as long as it is safe and the road you are turning into is clear)." So from my POV it's pretty unfair to blame Waymo for getting stuck in a situation when it's only your law breaking that's making it so dangerous.
I hear what you're saying about the letter of the law and all, but the fact of the matter is that at big intersections people don't drive that way in any country that I've been to. Waymo cars have an annoying tendency to just abruptly stop in the middle of intersections when they clearly have the right of way, in scenarios where a human driver would rightfully not stop. So I do blame them for causing totally unnecessary traffic snags.
All you have to consider is the possibility that in some other regions of the world, that law doesn't exist and you're allowed to trail the car in front of you to make the turn you want. This is because for busy intersections, the closer you are to making the turn, the more chance you have of making it successfully. In general, there is greater throughput, and the safety risks are minimal because the yellow light signals for oncoming traffic to stop and gives enough time for everyone waiting to make the turn before it goes red (at least, that's what a yellow light is supposed to be, some people run it, and some people even run red lights).
I take it that in Australia, you drive on the left side of the road? In North America, you drive on the right side of the road, so the logic would apply when you're turning left, not right.
The rule has nothing to do with the handedness of the traffic. The rule comes down to 'do not enter an intersection unless you are 100% sure you will clear it before the lights change'. If you enter an intersection so late that it's about to turn red while another car is waiting to turn, you're at fault, not the car in front of you for not turning sooner. The red light transition timing is set to allow one and only one car to exit the intersection before the other side has a green light to enter.
The rule exists to prevent intersections from getting blocked by idiot drivers who enter too late and get stuck, which if not immediately cleared will cause gridlock to begin forming.
I find it a bit hard to believe that such a rule doesn't exist in America.
In Ontario, the law doesn't say where cars should wait when waiting to turn left at a green light.
Apparently, that silence means a queue is allowed. In an e-mail, Ontario's Ministry of Transportation (MTO) tells Globe Drive that "more than one car is allowed to enter an intersection on a green light to line up to make a left turn, provided that the car's turn signal is on before it enters the intersection."
But, beware of local bylaws. If an intersection has a sign that says "Do not block intersection," you shouldn't be waiting inside it to turn, police say.
When I was driving (I don't really drive anymore), I found it common practice to queue in the intersection while waiting for a left turn.
> The rule comes down to 'do not enter an intersection unless you are 100% sure you will clear it before the lights change'.
Yes, that is how it's supposed to work here. But when you enter because there isn't any traffic coming and you assume the waymo vehicle will do the obvious and turn left, and then it just stops, it's very easy to get stuck out in the intersection behind it - in a situation that wouldn't happen with any human driver.
Might I introduce you to South Florida then. Overly cautious 80+ year old drivers that cause accidents for being just too cautious/slow to react. That’s what the Waymo cars sound like right now.
It's a local option in the US. Large cities are stricter about it. Some large cities paint a big box on intersections and ticket people for blocking the box.
I have wondered how autonomous cars are supposed to handle a scenario where common sense dictates a slight violation of the traffic rules. For example, let's say that you're on a long, narrow road in the mountains where passing is forbidden for many miles. If there's a slow vehicle in front of you, such as a cyclist or a tractor, normally you wait for an opportunity for a safe pass and you take it. An autonomous car however, would just follow it, costing you many minutes of your time.
A good example would be the area around Lancaster Pennsylvania. With Amish horse drawn buggies, people pass them on double yellow lines all the time. I don’t know if that is legal, but where I live in New Mexico we have a similar, if less common occurrence with farm equipment on the road—everyone passes them, though technically it’s a traffic violation. In some of these instances the speed difference is significant enough (10mph in a 50mph zone) that it could easily be more significant than 10 minutes on an hour drive.
It makes me wonder about traffic laws, and whether there will be a push to make these behaviors more explicit in law, because there is a potentially huge issue if Waymo or other companies program law-breaking into their driving code. Add to that that traffic laws in the USA are primarily state laws, but municipalities frequently have additional laws. The laws are all mostly the same, so I suspect autonomous vehicles will be programmed to the most restrictive of the laws.
I picture the future of self-driving cars will free up the human to do other things. If I have a desk set up in the car and am working/relaxing then I won't be too concerned if my 1-hour trip takes 1 hour and 10 minutes.
The big advantage will be longer trips. I live in Australia and we have huge distances to cover when driving between cities. If I can "work from car" for 8 hours while driving interstate and my employer pays me then the trip is essentially free for time.
"The trouble started, the couple said, when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac."
Driving annoyingly cautiously is definitely unwanted. However, running over one child will see the technology shelved for about two generations, at least. So I can see why at this point the algorithms are still made such that the car stops at any sign of potential trouble.
It's a race for profits, not safety. Someone is going to build this technology and it's a race to be the first and to collect the huge bounty. They don't care if they inconvenience you or cause a risk to your safety if it means they get to collect a ton of money.
This directly contradicts how they are proceeding. The problem is that they are being too cautious and not rushing for profit, the way Uber did, which led to a death.
I personally much prefer a couple slow turns to someone dying.
> The default reaction for a Waymo car under any sort of ambiguity seems to be to just stop (literally) until the ambiguity resolves itself. Unfortunately this causes considerable confusion and traffic backups for human drivers.
Honestly, to me this just sounds like the Waymo cars being better drivers than humans and not breaking traffic rules and endangering others. Just like complaining about new drivers or driving school drivers for being "too slow", when in reality they are just the only ones not going over the speed limit and not shoving into lanes hoping others will compensate.
We could go back and forth on this all day, but I don't think you would say these vehicles are good drivers if you have actually driven amongst them. I recommend that you follow one of these Waymo vehicles if you're ever in Chandler or in Mountain View. Being extremely hesitant is not good driving. Slamming on the brakes when a newspaper blows in front of your car is not good driving. Take a left turn behind one or go to a four-way stop behind one. I really hope Waymo improves.
“People are lashing out justifiably," said Douglas Rushkoff, a media theorist at City University of New York and author of the book “Throwing Rocks at the Google Bus.” He likened driverless cars to robotic incarnations of scabs — workers who refuse to join strikes or who take the place of those on strike.
“There's a growing sense that the giant corporations honing driverless technologies do not have our best interests at heart,” Mr. Rushkoff said. “Just think about the humans inside these vehicles, who are essentially training the artificial intelligence that will replace them.”
Why is it that for every act of random vandalism or hooliganism, there's always an academic who will find a way to blame the victim?
Context suggests people are justifiably upset, not that these specific actions were justified. Google is putting their community at risk without any form of compensation. Much like dumping toxic waste into public streams, it's public harm for private gain.
PS: Never trust a quote that’s that short, it’s been stripped of all meaning.
Looking at the actual facts, there appears to be little justification for this violence, especially since it affects the human safety drivers the most, and they are definitely not any kind of aggressor here.
Some recent quotes from the USA Today article covering the Waymo self-driving car hailing service:
“"But mostly it just makes me feel safe. One time, the Waymo (vehicle) paused before turning, and I wondered why. Then a car ran the red light and crashed into the median. It saw that car way before I did."”
“But the fact remains that after millions of miles of city driving, another 7 billion miles of virtual testing and countless more tests undertaken at a private faux-city facility in California, Waymo vehicles have yet to cause a major accident. When fender-benders do happen, often it's because human drivers bump into the robot cars.”
The real issue is that we need more miles driven to really be certain that this safety record will hold. Thwarting this testing will probably endanger more people because the tech will become mainstream whether anyone likes it or not. It would be a lot better if the tech is over-tested than under-tested.
Yes. How many seconds per trip does Waymo's conservative driving cost? Probably not very many. Left turn delays, the main complaint, are a problem only at intersections with heavy traffic but no left turn signal. That can be dealt with partly by routing and partly by bugging municipalities for left turn signals at key points.
UPS drivers almost never turn left.[1] It is both safer and more efficient to plan your route using almost only right turns.
I am sure Waymo knows this but I'd guess they don't want passengers to feel like the car is taking "crazy" routes. People could act in unpredictable ways if they felt the car was out of control and turning away from their destination.
In the long term, when people are more comfortable with the technology, I suspect they'll change this.
We are all entitled to our opinions. My opinion is that these vehicles are a lot safer than most human drivers and I do not see any justification for the vitriol displayed.
Think about it - running a car off the road so you can yell at them to display your displeasure? That’s violence and no one should be subjected to that.
Despite what people like to say, our institutions work and work a lot better than many around the world. If you want change, you have to work for it and convince others of your point. Democracy is not about acting out, it’s about civil discourse. All sides deserve to be heard, but in a civil manner.
> My opinion is that these vehicles are a lot safer than most human drivers
Eventually, that’s likely to be true, but currently these are still in the experimental stage and regularly have issues. The collective benefit is probably a net win long term. But, they are not replacing Google Street View cars or something so as to spread that risk out. Instead they are concentrating that risk into a small subset of residential streets in some poor neighborhoods.
Even if it was already slightly safer than human drivers, the extra traffic really risks these people’s lives. And because they are low income, they don’t expect to be able to afford self-driving cars in the next 10+ years. So, that’s meaningful harm without the expectation of benefit.
Now how a tiny subset of a community responds is predictable but not justifiable.
> Instead they are concentrating that risk into a small subset of residential streets in some poor neighborhoods.
These cars are already safer than human drivers and they are common on the streets around Google. So unless you mean "risk" as "risk of being slightly inconvenienced", neither of your points is accurate.
> But because they are low income they don’t expect to be able to afford self driving cars in the next 10+ years.
It is pretty clear that Waymo primarily intends to sell a taxi service, not cars directly. This taxi service would be cheaper and safer than any currently available so people of all incomes would benefit both directly and indirectly (fewer DUI drivers).
Taxis are expensive luxuries; poor people use buses.
Also, the last study I found had self-driving car tests result in 4x the rate of injuries vs normal drivers. This should be improving over time, and the sample size was 11 injuries over 1.1 million miles, but even still, systems in R&D are inherently risky.
Ride-Hailing Apps May Benefit Poor and Minority Communities The Most, Study Suggests.[1]
Waymo has caused a single crash in 5 million miles of driving and no one was hurt.[2] They have been involved in slightly more incidents than would be predicted for a human driver (30 vs. 21) in those 5 million miles but it was not at fault for the others.
A Waymo spokesperson called the research "deeply flawed" and said it "reflects a poor understanding of the data as it fails to distinguish between a vehicle that drives hundreds of miles per day on public roads, with one that seldom leaves its garage."
In other words they don’t disagree with the number of crashes, even when excluding other areas.
> First injury is more likely in residential areas.
I'm not sure what new argument you are attempting here. Mountain View has lots of heavily residential areas if you are implying otherwise.
The Waymo spokesman is completely right; this is a very stupid way of ranking safety. This article is more recent than the one I cited and includes more than twice the number of testing miles, so the raw number of incidents is unsurprisingly higher.
One must not implicitly attribute such things to context either, when there is not enough text to say so. Are all the reasons for being upset justifiable? Safety reasons probably are. Are Luddite reasons justifiable as well?
Luddite reasons are IMO understandable but not justifiable reasons to be upset.
My point was there was not enough context to say what the quote was about, and you should treat such three-word quotes as semantically meaningless, like “purple happy banana.” Still, even ignoring the quote, safety is a legitimate issue.
It's absolutely not justifiable, because it's people taking the law into their own hands. If you disagree with how your city is being run, vote and participate. This is by definition bypassing the law, and that's not what a civilized society does. If you call this justifiable, then due process is right out of the window.
It could also just be a poor choice of words, for which the charitable interpretation is that he's explaining.
Let's say, hypothetically, that he uses the word "understandably" instead. Would you consider that explaining or rationalizing? If you would consider that explaining, then maybe he just chose his adverb poorly. If you would consider that rationalizing, then I don't think there's a charitable framing of the point that you would accept as just explaining.
People here clearly disagree, which is fine. However, I'm seeing a lot of the unnecessary habit of discourse where people are trying to frame a disagreement as moral high/low ground.
>There's an important difference between blaming the victim and trying to explain why the attackers are doing what they are doing.
This is the quote
>“People are lashing out justifiably," said Douglas Rushkoff, a media theorist at City University of New York and author of the book “Throwing Rocks at the Google Bus.” He likened driverless cars to robotic incarnations of scabs — workers who refuse to join strikes or who take the place of those on strike.
He is quite plainly saying the vandalism is justified.
Also, the “victim” here is, depending on your perspective, either a robot or a multinational corporation with so little respect for privacy and taxation, and so much more money than sense, that sympathy is a non-starter. At most this is a write-off they can treat as a natural tax they can't avoid, unlike all the financial taxes they tap-dance around. I'm not seeing a possible entity to call a victim here, while the residents being used as guinea pigs actually are.
Wait -- Scientist kills innocent bystander with his experiment and the public shouldn't be mad about it? Would you have said the same thing if the professor defended Waymo?
Wait -- human kills innocent people with their distracted driving and the public shouldn't be mad about it?
Where are the people with rocks and knives and guns demanding human-driven cars be forced off the road due to the overwhelming number of people they have killed?
People are angry when distracted drivers kill, and when it happens people are forced off the road and in to jail. If reckless driving leading to manslaughter was not illegal, you might see a few angry mobs doing what you describe.
Yes but we're not talking about the driver/car/company combo that killed someone. That was a different driver, a different car, and a different company making the hardware/software. Forcing a Waymo car off the road and pointing a gun at the driver because an Uber car killed someone is not the same as what you're describing.
Generally speaking, people are not forced off the road and into jail when they kill others while operating motor vehicles, except in the most egregious cases.
It’s a running “joke” in some circles that if you want to kill someone and get away with the crime consequence-free, hitting them with your car and claiming it was an accident is the most reliable way to do it.
This would be like someone threatening you with violence and forcing you off the road because some other human driver killed someone. How is that reasonable?
> Where are the people with rocks and knives and guns demanding human-driven cars be forced off the road due to the overwhelming number of people they have killed?
People in Arizona are using guns and knives to demand all human-driven cars get taken off the road because some of them kill people? I haven't read that news story.
I feel like this rapidly moves into Trolley Problem territory. 10,000+ people die in a year due to drunk driving, what's the acceptable death toll for rolling out driverless cars as fast as possible?
Because of the uncertainty behind something new, you should have strong evidence that driverless cars are substantially safer than cars with drivers.
Additionally, if driverless cars are being tested on public roads, you need laws and practices that are robust enough to prevent the kind of accident that killed someone. They had only one operator in the car, who was responsible for monitoring the car and logging information about its performance. They had specifically disabled safety systems which would have caused it to brake sooner.
Even if its expected that driverless cars will reduce fatalities due to drunk or distracted driving, you need to have sufficient evidence of the improvement, safety systems in place for test systems, and public discussion of the tradeoffs and what situations driverless cars might perform worse in than human drivers, because lots of people have behavioral expectations based on human drivers; things like making eye contact with a driver before crossing the street.
So yes, it is good to be working on and testing driverless cars, but you need to have a serious sense of humility and safety about it, and not push it. Even if you are incredibly safe you can expect some backlash, but if you miss the mark at all and oversell the safety, you'll get a lot more backlash, with justification, much sooner, which could set the whole program back significantly.
> you should have strong evidence that driverless cars are substantially safer than cars with drivers.
No. Considering the other benefits of self-driving cars, such as increased mobility for the elderly, you can at most require that they are as safe as cars with drivers, not necessarily safer.
If the benefits were great enough (reduced pollution indirectly saving lives? saved life-years due to more efficient traffic?) one could even accept that they could be a little less safe as far as direct fatalities are concerned.
I understand your point but I doubt it would work well in reality when it comes to public opinion/backlash.
Humans are disproportionately swayed by proximate causes, so deaths directly attributed to self driving cars are going to be more heavily weighted in the court of public opinion.
Decreased, in fact. A self-driving vehicle can't go anywhere a cab or rideshare can't, while the latter have drivers who can assist with loading/unloading elderly passengers and their mobility aids.
Somebody with poor eyesight and slow reaction speed is an unsafe driver but doesn't need any particular help loading/unloading from a car. A driver who can help unload is a fine thing, but having somebody drive just so they can help unload seems like a ridiculous waste of resources. A car is ready whenever you need it; a car-and-driver might not be. And yes, you could schedule a car-and-driver to be available at a particular time...but you could similarly schedule somebody to be available to load/unload at the destination.
> If the benefits were great enough (reduced pollution indirectly saving lives? saved life-years due to more efficient traffic?) one could even accept that they could be a little less safe as far as direct fatalities are concerned.
I disagree. I'm finding it hard to come up with an example that would make me think it's ok to have more people hit and killed by cars. I'm more than ready for self-driving cars to be a reality, but not if more people will die. "Saved life-years" is not good enough.
What does it mean to avoid a fatality other than saving that person's remaining life-years?
> I'm finding it hard to come up with an example that would make me think it's ok to have more people hit and killed by cars.
Entirely speculative: Heart failure is the leading cause of death in the US. If 100% self-driving traffic could let EMTs arrive faster at the scene that could save more lives than a slight increase from traffic accidents.
Wouldn’t mandatory breathalyzer ignition locks solve this problem? This is a proven technology that shouldn’t kill anyone as it’s being rolled out. Move fast and break things is great for websites and apps, but when human lives are at stake it requires strict ethical considerations.
What about all the existing cars on the road? What about manual cars that can be push-started? Who installs and certifies these devices? They need to be calibrated frequently, and they are extremely simple to bypass for anyone with even basic vehicle knowledge.
If the law starts now, in 5-10 years the vast majority of old cars will be junked. Sure, there will still be a few old cars, but they will be an insignificant minority. It's not unlike how other rule introductions worked. Nobody banned old cars without seatbelts, they just required that all new cars have them.
Interlock breathalyzers have a host of problems that make indiscriminate application problematic. Additionally, judicial remedies can only be applied after someone has already been caught drunk driving. So even if we applied them to all prosecuted drunk drivers, it is not a complete prophylactic.
Self-driving cars have killed at least 3 people this calendar year despite there being several million fewer self-driving vehicles on the road. Scale up those proportionate rates, and they would be killing far more than 10,000 people a year.
So at this point, self-driving cars still have quite a bit of work to go to prove that they are actually safer than a drunk driver, let alone a sober driver.
If you think drunk drivers are bad, sober drivers kill more than twice as many people! The actual number to beat is something like 37,000 annually[0]. And I suspect your "millions" factor has an extra zero or two when you account for (a) miles driven (self-driving cars are probably more highly utilized) and (b) driving conditions (city is more interesting for training data than highway, whereas probably more person-driving is done on highways).
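To make the comparison above concrete, here is a back-of-the-envelope sketch of the per-mile rates being argued about. All figures are rough numbers taken from this thread or assumed for illustration (the autonomous-mile total especially is a guess, which is exactly the point of contention):

```python
# Rough per-mile fatality comparison. All inputs are approximations
# cited in this thread or assumed for illustration, not official stats.

human_fatalities_per_year = 37_000   # approximate annual US road deaths (cited above)
human_miles_per_year = 3.2e12        # approximate annual US vehicle-miles (assumed)

av_fatalities = 3                    # self-driving deaths claimed this calendar year
av_miles = 10e6                      # assumed total autonomous test miles across the industry

# Normalize both to deaths per billion miles so they are comparable.
human_rate = human_fatalities_per_year / human_miles_per_year * 1e9
av_rate = av_fatalities / av_miles * 1e9

print(f"Human drivers: ~{human_rate:.1f} deaths per billion miles")
print(f"Autonomous (assumed miles): ~{av_rate:.1f} deaths per billion miles")
```

The conclusion swings entirely on the assumed denominator: add or remove a zero from the autonomous-mile total and the comparison flips, which is why both sides of this exchange can cite the same incident counts and disagree.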
Uber killed the pedestrian, not Waymo. But that's hardly any comfort to the elected representatives of Arizona. I suppose the execution, management and oversight of the corporation really matter, but at this early stage of the industry it's hard to look at Uber's traffic fatality and interpret it as anything but negative for Waymo and the other driverless companies.
Wait, are you talking about the person that literally stepped in front of a driverless car and was killed? Because I can’t see how that would’ve been any different with a human behind the wheel.
I'm truly not sure if you're insidiously doing this on purpose, but that seems to be an understatement and mischaracterization of what's going on here. These are targeted acts, more akin to civil disobedience than vandalism. The way you phrase it seems like you're trying to plant a seed that distracts from the point.
I try to assume positive intent, but this really seems like a spin job. Could you perhaps elaborate?
Nobody disagrees that it is vandalism. darkerside's comment was clearly referring to the characterization of these incidents as:
> act of random vandalism or hooliganism
darkerside correctly points out that:
> These are targeted acts
That should be obvious to anybody that read the article, which contained several statements from the people involved explaining why they vandalized the cars. Calling it "random" acts of vandalism is an extreme misrepresentation of what happened. The vandalism was done for clearly stated reasons.
I am not a lawyer, but I would think that if the cars are moving in traffic during the criminal act, the criminals would face a higher charge (criminal endangerment?).
> In some of their reports, police officers also said Waymo was often unwilling to provide video of the attacks. In one case, a Waymo employee told the police they would need a warrant to obtain video recorded by the company’s vehicles.
Veering off topic a bit here, but this is a very good decision by the company. Taking a hard line here removes the subjectivity of when to violate your customer's privacy and puts the onus on the police to choose to make that violation.
I'm not sure if they really care about privacy as much as they do arse-coverage. The sentence before that one:
>The emergency drivers in the Waymo vans that were attacked in various cases told the Chandler police that the company preferred not to pursue prosecution of the assailants.
If I didn't know any better, this statement reads as if the emergency driver doesn't exist to Waymo. One could argue that they aren't the target, but nowhere else can I think of someone attacking a vehicle because of the vehicle, and not the one "not driving" it.
It doesn't matter why the emergency drivers were attacked.
It's hubris on the part of Google management that they think they should be the ones who decide whether the victim (the driver) pursues prosecution or not; the company should support the driver in his decision, whatever it is.
It was very sad to see that the forced arbitration case was handled only for sexual offences, Google management has too much power over employees when they are assaulted.
Neither Google nor the driver gets to make that choice. The police decide whether to refer the incident for prosecution and the prosecutor decides whether to pursue it. Google saying they don’t want to pursue it carries only as much weight as the police and prosecutor give it.
If Google refuses to provide video and sensor data, it becomes difficult to impossible to prosecute.
If they do this in some but not all cases, that invites the suspicion that the denied cases could show egregious misconduct on the part of the Waymo software (i.e. avoiding self-incrimination).
Unclear. Is the company actually forbidding or "strongly discouraging" drivers from pressing charges, or is it just declining to press charges itself?
I don't really care about the reason. If their fear of case-by-case liability/PR aligns with my fear of case-by-case government data sharing it's a win for both of us.
Waymo is caught between a rock and a hard place. If they hand the car's video to law enforcement without a warrant, lots of people will worry about the privacy implications and the precedent it sets. It will also strengthen the resolve of the luddites by allowing them to claim persecution. Right now, the problem is limited to a few wackos. If a multi-billion dollar company ruins the lives of some photogenic families (even if it's legally and morally justified), public sentiment could change.
Waymo doesn't want publicity. That could lead to copycats and political polarization. Most people in the US don't even know about these tests, and I'm not sure they'd allow them if they did know. If an interest group (or god forbid, a political party) comes out against self-driving cars, rollout could be delayed by years. That would mean hundreds of thousands of more lives lost due to human drivers. The quieter Waymo can keep this (and the more they avoid looking like a big bad corporation), the sooner we get self-driving cars.
Maybe reporting crime is your civic duty as a person, based on your ideals, but it isn't a duty of a company's technology; otherwise we would have to consider the speed-camera companies bad citizens. Crime has levels, reporting has levels, and treating both as absolutes, sans nuance, is unrealistic.
It is interesting how Waymo chose not to pursue charges in a few cases where they had footage of the assailant. They thought it would further antagonize people in the community by making it an us-vs-them issue, with court cases, lawyers, and publicity ensuing. But if charges are not pursued, it might also encourage more attacks...
There are already road-rage issues between human drivers, as cars are perceived to dehumanize the other side ("me vs. the car"). In this case that effect just gets amplified. A few tricks might help here, for example making the car look less like a "Waymo" car and more like a normal car. It might be hard to hide the camera and sensors completely; in that case, maybe use a generic logo, say "Acme Surveyors, Inc."
Another trick could be to have the test drivers pretend they are driving the car rather than sit and watch. They could keep their hands on the wheel and pretend to steer, to make it seem like it's just a human-driven car.
Perhaps more importantly, data from the car becomes evidence, leading to public scrutiny of how the cars operate. And a court case could raise the question: "Are these cars being operated legally?"
Say there's a community (or demographic, whatever) that feels wronged by what a company is doing. You suggest the company use all kinds of deceit against that community to keep doing what it's doing.
To me that sounds like a pretty good way to get the opposite effect: more support for those opposed to the company.
> You suggest the company use all kinds of deceit against that community to keep doing what it's doing.
Citizens attacking the cars with knives, blocking intersections on purpose, and trying to run the cars off the road is simply unsafe and counterproductive.
I don't see it as much different from road rage. Is it OK to trick people into being less susceptible to road rage somehow? Or how about tricking delivery drivers into driving more safely by putting one of those "How's my driving? Call 1-800-xxx-xxxx" signs on the back of the car?
"It is interesting how Waymo chose not to pursue charges in a few cases where they had footage of the assailant"
...this is like a victim of a home invasion telling the cops that it's OK, nothing's wrong, because if the cops dig deeper they will find the depths of depravity.
Does anyone here know the full extent of the data collection that Waymo uses? Does Google know for sure that there are no backdoors in hardware or code?
Does anyone know what is being hidden from scrutiny?
I'm all for driverless cars, but if you put an algorithm in charge, then the creators of that algorithm need to be legally responsible for the outcome. I'm talking engineers going to prison when one of these cars does inevitably hit someone. Every driver is liable for their actions on the road; for some reason SV thinks being "statistically better than humans" means they aren't responsible for however small a percentage of crashes they do cause. It's like saying I only crashed once in my 50 years of driving, so on average I'm not responsible for killing that kid last Tuesday.
In what universe would it make more sense to make the engineer liable rather than the larger corporate entity? Also, that’s not how human driving works. If you hit someone while following the rule of the road and not doing anything especially negligent, you won’t face criminal charges. You use the word “inevitably” but it doesn’t make sense at all with your world view.
If, in the view of a court of law, it was not the algorithms fault due to negligence or whatever then I agree they are not responsible. I only ask the same rules be applied to the machine as a human. As for who ultimately pays, fine, substitute CEO or whatever. As long as someone actually does time, and it's not just a fine, because you can't put a company in prison. If we were talking about civil engineers designing a bridge that causes a death, I'm sure that chief engineers face at least some personal liability regardless of whatever shield the company offers.
Fortunately, the law is actually quite sane in this area, so there won't be any engineers going to prison for vehicular manslaughter:
> Manslaughter is the unlawful killing of a human being without malice. It is of three kinds:
> (c) Vehicular—
> (1) Except as provided in subdivision (a) of Section 191.5, driving a vehicle in the commission of an unlawful act, not amounting to a felony, and with gross negligence; or driving a vehicle in the commission of a lawful act which might produce death, in an unlawful manner, and with gross negligence.
> (2) Driving a vehicle in the commission of an unlawful act, not amounting to a felony, but without gross negligence; or driving a vehicle in the commission of a lawful act which might produce death, in an unlawful manner, but without gross negligence.
> (3) Driving a vehicle in connection with a violation of paragraph (3) of subdivision (a) of Section 550, where the vehicular collision or vehicular accident was knowingly caused for financial gain and proximately resulted in the death of any person. This paragraph does not prevent prosecution of a defendant for the crime of murder. [0]
People are responsible for their own actions and that doesn't change here. It's not different from any other component failure in automobiles or aerospace. When an autopilot fails, they don't go out and find the people who contributed code to it and charge them all with hundreds of murders. They might jail someone if they made a willfully negligent decision with regards to testing or approvals but that person is just as likely to be in business or product as to be an engineer.
Silicon Valley doesn't think anything different from that and doesn't expect to be treated any differently. Jailing engineers would be the different, weird thing.
If such a system existed, we would basically have no technology. Engineers, like all humans, are fallible. Those mistakes can cost lives. If an engineer made an error in designing a part for a piloted car that caused the car to fail and kill people, should he go to jail too? Mistakes like this happen all the time. Ever had your car recalled?
Engineers can and should go to jail for negligence but that is an entirely different standard than criminalizing mistakes.
The inverse is true as well. If the tech progresses where it's statistically safer then the engineer has saved more lives. Depends on how you frame it. Idk how it should be sorted out legally.
Vandalising cars or making threats of violence should never be condoned, but there is still something very wrong with the way tech companies encroach on the lives of people without their consent.
>“They didn’t ask us if we wanted to be part of their beta test,” added his wife, who helps run the business.
The woman in the article is correct, in my opinion. It ought to be up to the people of Arizona to decide what happens on Arizona's streets, not to a company from California, because it's the people of Arizona who are on the receiving end of all the social, economic, and safety implications of this technology.
We have had this trend of tech companies creating facts on the ground and asking for democratic permission afterwards a few times now in the world of social-media apps, but with autonomous driving, things are getting a little more physical.
Yeah, it's not possible that they considered the issue delicately and decided something that disagrees with this person's opinion.
It was definitely money.
Exactly. Allowing campaign contributions and failing to punish lobbying and post-term kickbacks fundamentally undermines the possibility of a government actually representing its citizens.
I think folks like you are going to be super, super disappointed when we succeed in passing campaign finance reform (which I think we will!) and roughly nothing changes.
My guess is that you'll go back to just blaming everyone else's ignorance for their not agreeing with you.
I don't know AZ very well, but knowing a bit about how government and organizations work, I'm inclined to suspect you have good reason to be suspicious. That said, examples would strengthen your case more than sarcasm.
If they have a problem with it, they should take it up with the state government of Arizona
Passing the buck is a very SV-centric response to an important issue.
Like Google's response from the CCC the other day saying, "Yeah, we track people in bad ways, but everyone else does it too, so it's OK."
"Oh, our self-driving car isn't responsible for killing a pedestrian. It's the governor of Arizona who allowed us to kill the pedestrian who is to blame. Go after him! We're only responsible to shareholders, and the kid wasn't a shareholder."
> Passing the buck is a very SV-centric response to an important issue.
Eschewing responsibility is a very SV-targeted response to an important local issue. I'd rather these decisions be made at the local government level instead of having outliers in the community (i.e. your neighbors) appeal to higher authority usurping democracy. Especially when those squeaky wheels are throwing rocks, claiming the company others asked to have in town are passing the buck, etc.
To be more precise, this was a decision made by the governor's office which essentially allowed these companies to act with very little regulation or oversight.
I think it's fair to say that the governor's office is less representative of (capital P) the People than, say, the state legislature or even a ballot measure. So yes, Arizona's elected government was involved, but that government opted to be as minimally involved as they could possibly be.
Personally, I would feel much happier with some dedicated oversight and some levers to pull if companies appear to be doing anything besides making the safety of Arizonans their number one priority.
Especially when that Governor was regularly offered the use of that company's offices for meetings when he was in California, or their corporate apartments...
I'm not American so I cannot personally vote in Arizona, but I think the problem in reality is quite a bit larger. It's not just the businesses of course but also local authorities who often will sideline the interests of their constituents if large, well financed tech companies promise jobs, innovation or just engage in plain old lobbying.
The same discussion seems to have happened around the Amazon HQ, with politicians essentially groveling in front of Amazon.
How much in the loop are those very people addressed in the article here really? Is their voice being heard? This is a lot more complicated than just pointing me to a voting website. Technology companies and authorities both seem to largely ignore seeking the genuine consent of the people who are at the receiving end of their experiments.
Unfortunate, but I guess with any new tech there will always be luddites.
Many HN folks probably already know the etymology, but for those that aren't familiar, the word "luddite" itself comes from "English textile workers in the 19th century [who] destroyed textile machinery as a form of protest" [1]
Unfortunately, there is a common misconception that the Luddites were against technology. That's incorrect: they were skilled textile workers who used technology, as in any other skilled profession. From the first paragraph of the wiki page:
> The group was protesting the use of machinery in a "fraudulent and deceitful manner" to get around standard labour practices.
There is a lot of similarity with modern tech companies (Uber is the canonical example) working around labor in a way that concentrates wealth. As an early form of labor fighting back against capital abusing the amplifying effect of technology, I agree with the Luddites. Learn from them, or - as described in this article - we will see their violent methods again.
“They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”
It's not a misconception. I know a lot of authors have argued, correctly, that there was some nuance to the Luddites' arguments, but the fact remains that they did in fact behave exactly as described: there was a technology they committed acts of terrorism against because they preferred the old ways, and they attempted to slow that progress. People now throw in these corrections every time someone mentions Luddites, even though nothing said was wrong, simply non-sympathetic. Those aren't the same thing. Not agreeing that they should be as iconically maligned as they are isn't the same thing as a common misconception. None of the core facts are in dispute; you just want to defend them while ignoring the central point of its futility.
Does it qualify as justified when it amounts to essentially stealing from everyone else in increased costs at decreased productivity enforced via violence and the threat of it?
I mean, the South attacked Fort Sumter because they might eventually have had slavery phased out. Just because they were accustomed to something doesn't mean they are entitled to it forever.
You are confusing yourself. Zoom all the way out. Look at automation in its ultimate, final and inevitable form: machinery that can do anything a human can. If you don’t think AGI is possible then the discussion ends here. The only question that needs to be addressed is whether or not humans can live well in the presence of that technology. Economically, what would the effect be? When the internet came, phone books disappeared. This was predicted. It’s economics, not a question of technology. So what are your predictions for when humans are no longer the exclusive source of intelligent signal processing, which we have been for all of history? If you look at this question honestly and carefully then the answer you find will not make you an advocate for ai and automation. Not unless you hate yourself and mankind.
> Yes, that's correct. Why did they "prefer the old ways"?
This is precisely what I was criticizing. The statement isn't wrong; you just want to justify it. You're not correcting anything, you're defending it. You're tacking a "why" question onto it when all anyone said was "what." They were against the new technological improvements characteristic of that time and place. That's not a common misconception simply because they had rational arguments to justify themselves.
Macroeconomic forces are complex. Technology has wiped out entire types of work many times over the years, and the workers usually, though not always, make out all right. The problem is not any particular technological advance IMO, but overloading the economic mechanisms that allow people to find new careers. Too much technological advance too fast plus too many industries shipped overseas where labor is cheaper in an environment without enough new job development equals a permanent underclass, with all of the economic tensions that come along with it.
Meanwhile, UBI has many serious problems and we don't have anywhere near the level of job disruption to make it a workable solution.
Your entire viewpoint is based on the idea that there will always be jobs no matter how advanced ai and automation becomes. This is the idea that needs to be addressed, not the shrubbery surrounding it. It’s meaningless to say that new jobs have always been found and that rapid advancement in technology has not seemed to doom us so far. That’s like saying we haven’t run out of oil yet so why would we ever run out. The number of tasks that a human can do is finite. When machines saturate that set of tasks, something really bad will happen. But even long before, there will be massive problems as we approach it.
I already know you won’t be convinced by what I’m saying. Just respond with your strongest counter-argument to my central point about the finite nature of human jobs, or in other words what will happen when machines can do everything we can. Just give me your strongest counter-argument about that. I will then explain why that’s wrong, and we can continue until you see that you’re wrong. I will be extremely patient about it.
If you don't want to be banned on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.
Truck driver is one of the most common jobs in the US and the initial beneficiaries of self-driving cars and trucks—financially and practically—will be those in the upper income brackets, so it’s likely that there will be a perception of inequality about the technology until there is some kind of correcting force.
Sure, but it's not a misconception to call someone who doesn't like new technology a luddite. The word has been simplified, even if it used to be more complex.
> working around labor in a way that concentrates wealth
Labor used to wash our clothes. Then washing machines came along and now there are technicians of washing machines. The technicians are much fewer than the amount required to wash clothes by hand and they likely get paid a lot more. So washing machines usurp labor and concentrate wealth.
Moving responsibility for accidents away from humans to a faceless corporation, where we can all anticipate no one will be held sufficiently accountable, means we would have no more justice, which may be more important to us than safety. After all, we're humans with emotions, not robots with spreadsheets.
We may not actually want safer. We may actually want more just.
With 37,000 auto deaths last year in the U.S., what if Waymo perfected the self-driving car such that it killed no one, but instead senselessly slaughtered 1,000 random people on the streets? Would that be justified?
Of course not, because even though they would have made it safer, they would have made it less just.
Would that actually qualify as more just? They are deciding that more people must die just because they don't want to give up their illusion of control. After all, you can die just as easily in an accident where the other driver was legally in the right, with nothing you could have done to stop it.
Just because it may be the consensus doesn't make it remotely rational or right. I mean, mobs lynching every suspect would ensure more people are held accountable, but it would mean more innocents dying as well.
You might want to take into account that severe punishment for car accidents is the exception, reserved to the most egregious cases. You are unlikely to get justice when a human driver kills, unless they are drunk or try to flee.
As for the hypothetical, I would definitely take the senseless, unpunished death of one close family member over the properly punished deaths of 37 close family members. But I agree that less extreme numbers could put me in doubt.
It is worth thinking more deeply about how insurance mediates justice.
Having personally been in an accident a while back, I feel a sense of justice from the person who struck my car, at least with respect to the finding of their fault via police investigation. I wonder how that changes if there were no driver responsible.
My sense of justice in this case arises from my belief that the other driver has confronted their mistake to a sufficient degree that it will improve their behavior to the standard I approve.
I believe with a corporation's software having caused damage, I would have little confidence that the damage they had inflicted would confront the corporation in such a way that it would improve its behavior. (Obvious anecdata from an emotionally-biased POV, but still worth it just to practice thinking about it!)
Unfortunate, but I guess with any new tech there will always be luddites.
And in the face of any significant protest against the dystopian future we are being dragged into -- there will always be those making ill-informed comparisons to past historical events.
Without exception, there's always a downside to everything. Familiar pitfalls are easier to cope with than unfamiliar ones.
It's often only well after the fact -- like after you are dead and gone in many cases -- that the full scope of the cost gets recognized.
In the course of her work, Marie Curie was exposed to high levels of radiation. She had no understanding of the danger involved. We have some idea now, but it probably wasn't well understood within her lifetime.
Now, we have standard protections for handling radioactive materials. But those protections were developed at the cost of lives and much suffering.
Progress involves "eating of the fruit of the tree of knowledge of good and evil" and all that. (IIRC) In the Bible, this was the introduction of sin and the cause of the fall from grace. I generally think of the Bible as full of excellent metaphors for general principles of this sort.
(No, I'm not up for debating the Bible, actually.)
Unfortunate, but I guess with any new tech there will always be luddites.
Some programmers are among the worst luddites out there! When compilers came in, there were programmers who insisted that writing the machine language themselves was better. Then there are programmers who are just as bad because they're too much the opposite of luddites.
In the very early history of compilers it was not very hard to beat them by writing assembly by hand. Over time it has gotten harder and harder to beat the compiler, and now handwritten assembly survives only in the tightest inner loops where unique instructions are used.
In what way is this threatening their livelihood? I would call it neo-Luddism, in the broader sense that it's driven by dislike of and fear toward technology, but I wouldn't call it Luddism proper.
The key quote for me: “They didn’t ask us if we wanted to be part of their beta test,” added his wife, who helps run the business.
I was ready to jump on the bandwagon of people hating new technology until I hit this section:
> The trouble started, the couple said, when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.
> “They said they need real-world examples, but I don’t want to be their real-world mistake,” said Mr. O’Polka, who runs his own company providing information technology to small businesses.
I can see the perspective where if you've experienced an issue like this you might see them as more than an idle threat.
Reading that I wondered whether the 10 year old might have been throwing rocks at the car -- the car was just trying to defend itself...
More seriously, cars of any sort are dangerous to children, self-driving or not. Going on a crusade against self-driving cars for nearly hitting (or actually hitting) your boy makes as much sense as crusading against normal cars, which do the same every day.
I would actually be very happy if all cars, human driven and self-driving alike, were outlawed from our city center, and forced to just drive and park around the periphery. Some European cities are starting to do this. It completely changes the nature of the streets, from dangerous and noisy places that make you feel uneasy to walk near, into places that are safe for everyone to enjoy.
The reason is that almost nobody wants to ban cars from the center of almost any American city. It turns out that being able to drive in cities is considered useful by many people.
You can stick around and speak up for your point of view, certainly, but unless you pick one of a very few places like the aforementioned NYC and Seattle, you've got a difficult case to make. You certainly won't achieve any traction in Arizona, where the events described in this article took place.
Thanks, that's helpful. But there are also some cities in the US that are starting to create pedestrian-only streets, so I think I'll stay here and help that along.
I'm all for creating cities where there are no cars, it's a big long term interest of mine to live in such a city.
But I don't like the phrasing 'crusades against cars', because it makes it seem like cars are the bad guys. Cars are an incredibly important underpinning of society today, and we should remember that as we innovate certain parts of society to function better without them.
We definitely want cars, and we definitely like cars. But we also like city centers that don't have cars, because it seems like that part of society functions better without them.
One difference is that if a person driving a car runs over a kid, the person driving the car (ideally) gets punished.
If a self-driving car runs over a kid, what happens? A faceless tech company in California might possibly face a fine, and some bad press that fades away after a few days. The fine is probably less than the amount the company set aside for paying these kinds of fines during its "beta test" on real humans.
In some places (mostly Nordic countries I think) fines for driving offences are set at a proportion of your income. If that were done for driverless cars and treating the whole corporation as the driver it might make them pay a bit more attention.
I'm not going to hold my breath waiting for it though.
They're not going on a crusade against self-driving cars, but against the "beta testing" of self-driving cars in their neighborhood, without their approval. It says so in the article, although it's a bit buried. I can't really fault them for that.
The companies have a permit from the government authority having jurisdiction, do they not?
It's patently unreasonable, IMO, to demand that they secure individual permission, any more than Delta needs my individual permission to take off from the nearby airport.
That's not a fair comparison at all. You seem to be under the impression that the law is always up to date with technology and the times. There is no precedent for self driving cars, and a lot of law has to be developed and precedent established before things start making sense in this context.
I need not state this, but self driving cars pose a danger quite different from airplanes.
Waymo has governmental OK to operate, just as Delta has FAA approval to operate. AZ governor is trying to make Arizona a leader in the space.
If his constituents disagree with their collective civilization’s decision, they should take their words to him, not their rocks and knives to mete out vigilante justice.
I think a closer analogy would be if Delta was developing an experimental plane that was not certified for commercial use and they got permission to operate it over a residential area.
Technically correct but you'd understand if people who lived under the flight path might have some concerns, even more so if they weren't involved and/or asked as a part of the operational endorsement.
I guess I wasn't clear about my stance: I can't fault their grievances, but I can and do fault their methods. I'm pretty sure that obtaining the government permit didn't involve the constituency. There are ways of responding to that, without resorting to crime.
I know you're being facetious but it happens. People do it to make U turns, or because they're lost, or because they live there, or they're visiting somebody. And "zooming" is relative. Going 25mph doesn't seem that fast to the driver, but it's really dangerous when there are kids playing in the street.
I think there are some legitimate fears that are worth exploring. Safety and economic effects are not trivial matters to dismiss.
I also think that those concerned are doing themselves no favors here. Drunken shenanigans, brandishing of firearms and blunt weapons, and vandalism make it very easy to smear them as luddite cranks who should be sidelined as quickly as possible, even if the fears are legitimate.
The semantics of that word are endlessly talked about and debated. It's completely irrelevant to anything and everything besides English-literature people. It's so frustrating to see so many people spinning their wheels on it, wasting energy that could be used for meaningful discussion. What is also frustrating is the use of that word to put people down. Anyone who thinks that automation is bad is a "Luddite." Those "Luddites" actually have an increasingly good point.
> “The behavior is causing the drivers to resume manual mode over the automated mode because of concerns about what the driver of the other vehicle may do,” Officer Johnson wrote.
Interesting that the drivers don't trust how the autonomous vehicles will handle these cases. In the case of someone actively being malicious, it seems more manual intervention will be desired. I can't see how this will work out with certain car companies touting their cars will not be equipped with steering wheels.
Why aren't there similar stories of people throwing rocks at Tesla vehicles?
Those vehicles are readily identifiable and have had safety issues from drivers using its cruise control as if it were autopilot. Moreover it's a goddamned battery on wheels which could put an enormous number of car service shops out of business. There are already laws in various states to keep Tesla out for that reason.
There could be a lot of reasons why someone throws rocks at a Waymo vehicle but also thinks Tesla is cool. Ludditism is not one of those reasons.
Except that they're opting for more passive-aggressive tactics, as I'm sure attacking a Tesla while someone is in it, or while the owner is nearby, would result in an immediate police report and charges being pressed.
Why aren't there similar stories of people throwing rocks at Tesla vehicles?
Tesla vehicles are owned by individuals not a big SV company.
Plus, Tesla isn't currently a technology shift that will put a lot of people out of work, and its self-driving is often described in the media as not being actual self-driving; it seems to have only gotten the occupants of the cars killed so far. If a Tesla takes out a kid on the street, you might see a change.
Someone spray-painted my wife’s Tesla while she was sitting at a stop light and ran away before she could believe it was happening. In a nice part of Silicon Valley. It happens.
The original Luddites didn’t hate all technology, just the tech (powered looms) that was putting them out of work. I suspect that’s behind every attack on technology.
That might have been out of hatred for people affluent enough to afford a Tesla, rather than a dislike of Teslas. Also many if not most vandals are motivated exclusively by the thrill of avoiding being caught.
> Why aren't there similar stories of people throwing rocks at Tesla vehicles?
Public perception, editor choice, etc. If we were to point out every bit of hypocrisy that is determined by what stories NYT runs vs what they don't, we'd have really long posts.
> There could be a lot of reasons why someone throws rocks at a Waymo vehicle but also thinks Tesla is cool. Ludditism is not one of those reasons.
True. But there can also be lots of reasons why people are very upset at Tesla's approach to Autopilot but think Waymo is doing the right thing. Or think they are both wrong. The presence of the articles alone is not enough to determine sentiment, especially at such small numbers, which is why such narrowly framed articles aren't a fair basis for painting a picture much larger than the individual actors.
Perhaps it's because Tesla's lane assist doesn't make unrealistic claims that result in the deaths of uninvolved people. AFAIK the only deaths involving Tesla's Autopilot were of the drivers themselves, on highways.
> Resorting to violence because you’re not getting your way isn’t justified, it’s infantile.
By that logic nobody should ever fight for what they believe in. I'm not saying I agree with these people (or even that they are using the best tactics), but I absolutely believe people should take a stance for what they think is right.
Tesla's Autopilot is just lane assist, and similar features can be found on Hondas, Cadillacs, etc. From the videos I've seen, some people do use them in neighbourhoods and suburbs, but they don't seem as useful there. People who abuse them on freeways have run into barriers and trucks before.
It's nowhere near the level of autonomy the Waymo cars are supposed to have. I can understand why people are upset, considering how conservative the cars have to be and the seemingly weird actions they often take. They don't seem to monitor other cars' turn signals, nor do they really have a way to communicate with other vehicles at intersections the way humans can with hands or facial expressions.
These vehicles will get vandalized even if the public accepts them. Great new graffiti medium owned by “Big Corp” that drives all over metro areas 24/7. Wonder how much companies will need to spend just to keep them from being covered in tags.
As if every aspect of our lives is not shaped by "big corps". It's laughable, actually. They should vandalize their TV, detergent box, and cereal as well.
You don’t get famous vandalizing things in your home. You do get famous vandalizing things that move through metro areas 24/7. Look at NYC subways in the 70s and 80s. They needed to create entire cleaning facilities that operated nonstop to finally get the graffiti off the trains. The insides are still tagged up to a degree.
Spray paint is probably the easiest thing to clean. Wait for people to use etch bath on the windshields and windows of these vehicles. They’ll need to replace all the windows to get them looking right again.
Yes, of course violence and vandalism are never justified, but think of it from a bored high schooler's perspective. It would be so much fun to stop a Waymo in the middle of a street, cover its cameras, and watch the aftermath. Pranks involving technology are gonna be way more fun in the future.
Fear of change and new things. People were opposed to cars when they were first introduced. Ironically self-driving cars will arguably save countless lives. The resistance and fear will pass. But I do love driving...
In the same way that you can still go horseback riding today on dedicated tracks and on farm-owned land, there will still be courses for manually driving cars long after the general introduction of self-driving vehicles.
And while it's not directly comparable, I still see horse-and-buggy carriages on public roads in PA, so that particular obsoleted mode of transport has never been outright banned. If the Amish lobby can keep it legal to drop shit in my streets, I'm sure the collective car enthusiasts of the world will keep their right to drive for a while at least.
It's the 'misuse' of tech. Sure, we can optimize for efficiency if efficiency is defined as minimizing cost or maximizing safety. But in a world of humans, efficiency should be defined as 'maximizing human utility'. In other words, use tech to make people better, more employable, happier. And no, I do not mean the Borg. If we continue down this path, we will ultimately marginalize humanity, because our systems are far more efficient than the evolutionary process that spawned and selected us. In the specific example of self-driving vehicles, an analogy might be to have an AI supervisor issuing directives. Much as our GPS issues turn directions, an advanced AI could point out traffic hazards hundreds of feet away that many people who are distracted, texting, talking, or who just have poor vision would miss. OTOH, we have the resilience of millennia on our resume, whereas machines are scarcely 200 years old. If a cataclysm hits the planet, much of the static infrastructure will fail.
The antitechnology Luddite movement will grow increasingly vocal and possibly resort to violence as these people become enraged over the emergence of new technologies that threaten traditional attitudes regarding the nature of human life (radical life extension, genetic engineering, cybernetics) and the supremacy of mankind (artificial intelligence). Though the Luddites might, at best, succeed in delaying the Singularity, the march of technology is irresistible and they will inevitably fail in keeping the world frozen at a fixed level of development.
On one hand, I am sympathetic to fears of driverless car safety and job losses. I would not be a big fan of having my (hypothetical) children playing on a street that self driving cars are being tested on.
On the other, if you threaten the employee within the car or attempt to drive them off the road, you should be prosecuted with the full force of the law. There are still people within those self driving cars, and attempted homicide is still a crime.
Edit: being opposed to attempted vehicular homicide is an unpopular opinion. Never saw that coming.
Another position to take against driverless cars (or rather, their proliferation and eventual obligation) is the loss of freedom and control. Instead of the fault of a human causing harm to another human, it's the fault of an opaque, massively complex algorithm, and that is rather unsettling to some people.
There's the potential for bad actors to plant dangerous training into these algorithms. Hypothetically, someone could turn an entire fleet of self-driving taxis into an assassination mechanism: every vehicle waits until a specific individual is spotted crossing the street, then fails to stop at a light. The corporation that owns the taxi then says there was a software failure with that one specific vehicle, takes it out of commission, and moves on with business as usual. A few deaths a year is still less than the death toll of human-operated cars, after all.
Interesting question that I don't know the answer to: how many of these concerns were similarly raised with the rise of the car, or the train before it?
OK, please offer an alternative vector for opposition besides litigation, which is expensive, slow, uncertain, and arguably rigged in many situations. For those who distrust or are cut off from institutional processes, what form of opposition do you recommend?
Organize a protest with signs. Petition the government. Like you mentioned, sue the companies. But ultimately if the government says it's legal (which it has and it is) and lawsuits and protests fail, you're going to have to accept it at some point. That's part of living in a civil society, compromises always have to be made. If everyone threatened violence for things they don't like, we wouldn't be able to call ourselves "civilized".
Pointing a gun at a person because you don't like the kind of car they're driving isn't exactly a workable solution.
...hey driver, get outta the car and go take lunch, and here's a number for a tow truck when you're done eating.
I would do the same in my village, as would most others who live here. It's not about the jobs; it's about the loss of privacy due to the incidental "meta as moby dick" mass surveillance that occurs.
I think vandalism is generally bad, but at least it's far far less bad than harming people.
Politically I doubt that vandalism is a good way to stop self driving cars. Waymo can afford to replace damaged parts, and they could also smear their opponents as criminals, even if it's a minority of those concerned. This is the exact same playbook that those opposed to civil rights used, smearing protests as "riots". Playing directly into that hand is not a wise choice, IMHO.
> The emergency drivers in the Waymo vans that were attacked in various cases told the Chandler police that the company preferred not to pursue prosecution of the assailants.
This is part of the problem, IMO. Make it known that you will act lawfully to protect the safety of your drivers and will cooperate fully with law enforcement. This pacifist policy does nothing to discourage future attacks.
As others have pointed out, that's escalating. It forces the other party to back down or escalate. It forces the conflict into the public eye and undoubtedly costs more in bad publicity than damage to cars.
Are there any other examples from relatively recent history of new technology that people would spontaneously attempt to attack/destroy in an uncoordinated fashion like this?
Look at this very story: Erik and Elizabeth O’Polka driving dangerously on purpose, and all the police have to offer is a 'warning'? It's even an off-hand joke in this NYT piece.
You are not hearing me. The theory is clear; the practice is that if you kill someone with your car and you are not DUI (no license is fine), you will very likely not be charged.
Probably not a good idea to damage property in protest. Maybe build something cheap and portable that prevents them from moving. If they move through the obstacle anyway, you've demonstrated how unsafe they can be.
> "People are lashing out justifiably," said Douglas Rushkoff, a media theorist at City University of New York and author of the book "Throwing Rocks at the Google Bus." He likened driverless cars to robotic incarnations of scabs — workers who refuse to join strikes or who take the place of those on strike.
I don't see how violence or vandalism of property is ever justified. Not to mention the fact that pelting self-driving cars with rocks or trying to run them off the road puts others in danger as well.
What I don't understand about these modern-day Luddites is why today's technology is considered okay but any technological advance beyond this point is harmful. Why not attack washing machines and dishwashers? Surely forcing people to get their clothes washed by hand would create hundreds of thousands of jobs.
That doesn't really address the concerns in the article though - your quote is from a book about Google buses, a separate issue (but one mentioned in the article). One couple said their son was nearly hit by a car. Another person said the decision to test in their area felt unilateral.
This is pretty similar to how people reacted when cars showed up about a hundred years ago, and for good reason: drivers now kill half a million people a year, 30,000-40,000 of those in the US, disproportionately affecting people walking and cycling. We have evidence that self-driving cars are safer than human-driven cars, but we _also_ have evidence of "Tesla drivers pretending their half-assed version of self-driving is safe" and "self-driving car kills pedestrian" with no driver to face the consequences. More than a few of the touted designs for cities that self-driving cars enable are an utter hellscape for people who like walking, cycling, or motorcycling, and cars already destroyed our cities once.
Personally, I'd rather live in a place with no cars of any sort. Barring that, I'd actually prefer self-driving cars (they do seem safer) with massive restrictions on speed, noise, etc. But I can understand these people's concerns and just calling them Luddites doesn't really help.
> More than a few of the touted designs for cities that self-driving cars enable are an utter hellscape for people who like walking, cycling, or motorcycling, and cars already destroyed our cities once.
I don't follow. How are cities designed for self-driving cars a hellscape? You admit that self-driving cars would likely lead to objectively safer conditions for others, so what's the downside? They would also likely lead to higher utilization, better fuel efficiency, and less traffic, since much traffic is due to human error. Protocols are also much easier to program and enforce than with human-driven cars (e.g. people parking in bike lanes, double parking, or stopping in unsafe zones to let passengers out).
You're not going to get rid of cars. It's nice to say you prefer walking and biking, but the fact is that that's mostly a luxury of being wealthy and able to live somewhere you can easily reach your workplace and other facilities. If you live in a rural area or a less expensive suburb of a large city, you rely on other forms of transportation, such as a car or a public mode like a bus or subway. There are also people with mobility issues who would be greatly inconvenienced by your insistence on a car-free world.
Cities shouldn't be designed for cars. If anything, it should be the other way around.
Additionally, the idea that living directly in a city is a luxury really destroys the justification for a "city" in the first place, in my opinion. The whole idea of suburbs throughout human history was the luxury of not being in a city, but thanks to that mentality + the Baby Boom, modern cities are now glorified business parks.
> Cities shouldn't be designed for cars. If anything, it should be the other way around.
Sounds like a "perfect is the enemy of the good" issue. Whether or not cities are designed for cars is separate from whether the cars should be self-driving. That sort of hairshirt anti-harm-reduction has a poor track record.
> The whole idea of suburbs throughout human history was the luxury of not being in a city, but thanks to that mentality + the Baby Boom, modern cities are now glorified business parks.
That isn't true, actually. Historically it was the reverse: the outskirts of the city were where the poor went, especially when that meant being outside the protection of the city walls. We are just seeing the trend reverse itself.
Engineers seem to be promoting the idea that cars can use communication to replace the signals drivers get from their environment. https://spectrum.ieee.org/cars-that-think/transportation/sel... mentions bombing through intersections at full speed, for instance. Make that the norm and eventually cycling or walking will be outlawed (or too deadly to try).
I get your point and have even gone and stood on paint to protect bike lanes. But I lack any faith in lawmakers to care about any mode other than driving.
>The trouble started, the couple said, when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.
I would be angry, too. The prospect of self-driving cars excites me, but I can totally understand not wanting your neighborhood to become the testing grounds for this new tech. Uber and Tesla's mishaps in the self-driving space understandably color the public's perception of Waymo's far more responsible efforts to date.
I'm very eager to get to the point where we have self driving cars, even though I'd be categorized as an enthusiastic driver (daily driver is a h-pattern 5-speed, racing experience including championship wins, etc.).
That said, if they were beta-testing in my neighborhood and actually hurt my kid, or likely even a neighbor's, they'd have a hard time keeping their cars running in the area.
"Move fast and break things" is only valid in a few limited contexts, and is particularly invalid in any potentially fatal context. I understand the need for realistic data, but their industry has already killed someone during beta testing (yes, that was Uber, not Google, but...).
> I don't see how violence or vandalism of property is ever justified.
Under any circumstances, ever? That seems rather extreme.
Anyway, people aren't arguing against technology here, but against an unwanted externality (extra traffic) with no person around to provide accountability or to offset the negatives with positives (like stopping for lunch, which keeps money circulating in the local economy). Centralized automation siphons money away in small quantities from many places and leads to a glut of it elsewhere, driving up prices and rents there.
Please quit with the Luddite arguments, which are a misleading diversion from this well-defined economic issue.
People who are scared or angry do not dwell on justification.
Political and corporate brands know this very well — we are not at all rational beings, and any approach based only on "making a logical case for..." is insufficient. We believe we're rational (the premise of the Enlightenment) but are not.
Also, please consider how many people are employed in transportation and see themselves broke and homeless before you compare self driving cars to washing machines. Housewives were never paid exclusively to wash clothes. Completely different scenario.
These are people who, like luddites, see the end of their lives as they know it.
From a comment below, on Luddites:
> The group was protesting the use of machinery in a "fraudulent and deceitful manner" to get around standard labour practices.
> Also, please consider how many people are employed in transportation and see themselves broke and homeless ...
> These are people who, like luddites, see the end of their lives as they know it.
I agree it's a different scenario, but how large a fraction is that? I doubt it's very large. Literally millions of new jobs in this field have been created thanks to companies like Uber; I doubt most of those people were in the transportation business beforehand, and once driverless cars are common most won't be in transportation afterwards either. Most people can find new jobs, and most people in the industry are probably there as a consequence of "I need a new job" rather than being born and bred for the one thing a machine will now take over.
Considering I don't subscribe to such a simplistic philosophy, it can't be the driving point of my questioning, so your response makes no sense. Furthermore, such a philosophy does not originate in Star Trek; SV is more individualistic and self-centered than most of the world (and I don't even live there, nor have any desire to); and life having value does not imply that all lives have the same value, or that you can't perform addition on the values.
Needs are relative. I suspect society as a whole will be better off when no one loses a child to drunk drivers, asleep drivers, or elderly drivers who have impossibly slow reflexes due to age related factors. Self driving cars make those things a reality.
I don't think Uber means what you mean by the word employee, just like the case recently all over reddit where TomTom stopped "lifetime" updates to a two year old GPS unit.
And when Henry Ford marketed vehicles to the middle class, did all who sold horses and horse carriages go jobless forever? Nope. They did what most people do: retrained and got new jobs. More technology often creates higher-paying (skilled) jobs, which people can be retrained to do.
The washing machine works for you. The self-driving car, even assuming they're for sale and you can afford one, fundamentally operates at the whim of its remote overlords. And you're not going to get run over by someone else's washing machine.
> I don't see how violence or vandalism of property is ever justified
I welcome your commitment to unilateral disarmament, end of gun ownership, disarmament of the police &c. /s
What if my upstairs neighbour's washing machine overflows and drips water all over my home? My son nearly had his foot taken off by a dryer that was bouncing around in my apartment's common area. We should ban dryers.
I don't want any of my neighbours to have washing machines. I didn't agree to this. They should ask permission.
> Why not attack washing machines and dishwashers? Surely forcing people to get their clothes washed by hand would create hundreds of thousands of jobs.
Because "cars are taking jobs" isn't what is being protested here.
To be fair, we should separate what the protesters are protesting and what the reporter says is part of a general debate. NYT does not get to say why people are protesting, though it helps them drive a narrative if people reading the article can't tell the difference.
They deliberately conflate vandalism with general debate, and unfortunately it is then up to us to be smart enough not to lump them together like they did. The formula is usually to take a shocking story/reaction, humanize it at a micro level, talk to a so-called expert who may have written a book, have the expert give general reasoning at a macro level, and then watch readers confuse micro and macro.
> The trouble started, the couple said, when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.
> “They said they need real-world examples, but I don’t want to be their real-world mistake,” said Mr. O’Polka, who runs his own company providing information technology to small businesses.
People protest for all sorts of reasons. bko's point is not relevant to every protester.
"why not attack washing machines and dishwashers?"
...that's going to happen with smart appliances eventually, and the movement to de-smart appliances is growing. It will keep electrical engineers in business developing smart-actuation workarounds for appliances.
Sorry, it's not a sweet thing to hear, but the default coupling of intense data and physical surveillance with smart products is a pitfall, and people are backlashing against it.
I am one of those people who circumvent smart appliances back to sane mode, and I make a killing financially from it.
I think this might be shadow-banned, but FWIW, I use PIC and ALU to bypass smart controls at the hardware level, so old men and women who don't want smart appliances get a hardware version that is locally contained.
Physically, I fabricate a new control panel or console, manufacture the "digital over analogue" dials and keypads, and replace the smart hardware.
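For anyone curious what a "digital over analogue" dial replacement boils down to, here's a minimal sketch of the control logic. Everything here is hypothetical (cycle names, ADC range, band mapping are invented for illustration, not the commenter's actual design): read a physical potentiometer's ADC value and map it to a fixed appliance cycle, with no network stack anywhere in the loop.

```python
# Hypothetical sketch: map a raw dial (potentiometer) ADC reading
# to a discrete appliance cycle. The cycle names and 10-bit ADC
# range are invented for illustration.

CYCLES = ["off", "rinse", "wash", "heavy", "spin"]

def cycle_for_adc(adc_value, adc_max=1023):
    """Map a raw ADC reading (0..adc_max) to one of the fixed cycles."""
    if not 0 <= adc_value <= adc_max:
        raise ValueError("ADC reading out of range")
    # Divide the dial's travel into equal bands, one per cycle.
    band = adc_max // len(CYCLES) + 1
    return CYCLES[adc_value // band]
```

On a real retrofit this function would run in the microcontroller's main loop, driving relays instead of the removed smart board; the point is just that the whole "brain" fits in a few lines of locally contained logic.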
You're not hellbanned but sometimes you have to click on the timestamp of the comment to reply to it. Sounds like a great small business gig; there's probably also a market for people who had smart devices that became crippled when the parent company shut down.
Yup, there is; it's a good side gig. I'm a softie though, so I take it easy on markup and billing. I'm barter-friendly, as out here things and materials are nearly as good as cash.
> Officer William Johnson of the Chandler Police Department described in a June report how the driver of a Chrysler PT Cruiser wove between lanes of traffic while taunting a Waymo van.
How do we know that it just wasn't the PT Cruiser driver's normal driving behavior? They did buy a PT Cruiser, after all.
I wonder what the algorithm says to do when a protestor steps in front of a speeding car that would have to brake hard enough to injure the occupants in order to avoid hitting the protestor. Under a comparative fault regime, Google and the protestor would both be partially at fault for the occupants' injuries. The protestor is broke, so Google will bear the whole burden of being jointly and severally liable. Or should it calculate how badly it's going to injure the occupant versus the protestor and minimize the damage to the company, thus possibly choosing to injure the protestor over the occupants to preserve brand value and future revenue?
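To make the worry concrete, the "minimize damage to the company" calculus the comment imagines reduces to a few lines of expected-cost arithmetic. This is a caricature, not anyone's actual algorithm; every name, probability, and dollar figure below is invented for illustration:

```python
# Caricature of the liability calculus the comment imagines.
# All numbers are made up; the point is only that "minimize
# expected damage to the company" is trivially easy to encode.

def pick_action(actions):
    """Choose the action with the lowest expected cost to the operator.

    `actions` maps an action name to a list of (probability, cost) outcomes.
    """
    def expected_cost(outcomes):
        return sum(p * cost for p, cost in outcomes)
    return min(actions, key=lambda a: expected_cost(actions[a]))

# Invented scenario: hard braking risks an occupant injury claim,
# not braking risks a protestor injury plus PR fallout.
choice = pick_action({
    "brake_hard":  [(0.3, 200_000)],    # hypothetical occupant claim
    "keep_going":  [(0.1, 5_000_000)],  # hypothetical protestor claim + brand damage
})
```

The unsettling part isn't the code, which is trivial; it's that whoever sets those cost figures decides who gets hurt, and nothing in the math itself distinguishes insurance payouts from human injury.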