Autonomous Cars

Breaking news, useful data and technical highlights for vehicles that are not meant to race. You can post commercial vehicle news and developments here.
Please post topics on racing variants in "other racing categories".
User avatar
Big Tea
99
Joined: 24 Dec 2017, 20:57

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 02:17
subcritical71 wrote:
09 Nov 2018, 00:57
Phil wrote:
09 Nov 2018, 00:21
And this is an argument for AVs how exactly?

If anything, the fact that the “test driver” felt he needed to intervene is quite telling about how mature the tech is and how much faith he put in his car's “A.I.”.

Also, one wonders how well the stats of the pro-AV PR machine hold up. “No accidents in how many million miles of driving”? Sure. Are there also statistics on how many times the test-driver needed to intervene, like in the above case? I’m guessing not.
For over-regulation California takes first place, but some of these regulations are good. In the AV space, manufacturers are required to report just the data you mention. This applies to manufacturers who test their vehicles on California roads.
I think you misread what I was asking for. I did not ask for statistics on when an accident/collision took place or who was at fault.

I was specifically asking who keeps track of how many times a test-driver intervened to prevent a collision from happening.

Obviously, that these “interventions” exist is demonstrated by the article above, where there was an actual collision because of one. Obviously, if there is a collision, you can't hide it, and there is an investigation of sorts, as the links you supplied nicely demonstrate. What about the cases where nothing happened? How are these logged? I would think that data is just as relevant in determining how safe these AVs would be without supervision, wouldn't you agree?
This could surely be somewhat of a double-edged sword. Who is to say that the intervention was really necessary, and that it did not itself add to the danger of the situation?

Had the driver not intervened, would it have led to an accident? And beyond that, did the driver's intervention increase the chances of an accident other than the one he perceived?

The automatics may have acted, or not acted, with far more information and spatial awareness than the driver had at that instant, and the driver may well have chosen the less safe option of doing something as a reaction rather than allowing the system to react, or not react, in a premeditated manner.
When arguing with a fool, be sure the other person is not doing the same thing.

User avatar
subcritical71
90
Joined: 17 Jul 2018, 20:04
Location: USA-Florida

Re: Autonomous Cars

Post

strad wrote:
09 Nov 2018, 01:50
That one looks to be the result of shitty maintenance practices more than an automation problem
Nope... Boeing has warned pilots and there are lots of other cases... it's just that they didn't kill over a hundred people.
As for the aviation comparison, just a few posts back I was told how planes had triple redundancy and were safer than A/Vs... :lol:
What about the computer controlled rudder problems?
You can't just brush aside stuff that doesn't fit with your argument. :wink:
Some of you take this thread too seriously. :wink:
I just read the FAA directive (http://rgl.faa.gov/Regulatory_and_Guida ... rgency.pdf); a few takeaways from it. I wouldn’t want to be one of the previous flight’s crew who did not report the erratic AOA indicator! That simple act could have prevented this accident.
Next, it seems the pilots may not have understood that the flight computers maintain pitch trim authority even while in manual mode (and therefore the failure of the AOA sensor will command pitch trim in 10 second increments, in this case pitching down). Even in cloud under instrument conditions the pilots quite frankly should have recognized this; but if they were not properly trained, how would they know? And with alarms going off, I’m sure there was a bit of confusion. I think that is the real reason for the ED: to get that training out there on that 737 model, as the previous generations did not behave that way. So it wasn’t an AV failure so much as a failure to understand the actions that needed to be taken.

It may seem off topic, but it’s actually relevant to this discussion. I believe it was Phil who mentioned that programmers can’t take everything into account. While I believe that is true, I also believe they can add code which may actually confuse a driver (pilot), even a proficient one, as is probably the case in the 737 crash. What are the chances of a driver wanting to understand all the different modes of AV operation, how to detect failures, and more importantly how to respond, before an accident occurs?
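Roughly, as I read the directive, the failure mode amounts to something like the sketch below (Python, purely illustrative; the names and thresholds are mine, not Boeing's):

Code: Select all

import time

STALL_AOA_DEG = 15.0    # assumed stall threshold, not the real figure
TRIM_INTERVAL_S = 10    # per the directive: trim commanded in 10 s increments

def trim_loop(read_aoa_deg, command_nose_down_trim):
    # Trusts a single AOA sensor and keeps pitch-trim authority
    # even while the pilots believe they are flying manually.
    while True:
        aoa = read_aoa_deg()          # a failed sensor can read falsely high...
        if aoa > STALL_AOA_DEG:
            command_nose_down_trim()  # ...so the computer keeps trimming nose-down
        time.sleep(TRIM_INTERVAL_S)   # every 10 seconds, over and over

A loop like that is perfectly 'correct' code doing what it was told; the confusion comes from the crew not knowing it still has authority.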

User avatar
henry
324
Joined: 23 Feb 2004, 20:49
Location: England

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 00:21
And this is an argument for AVs how exactly?

If anything, the fact that the “test driver” felt he needed to intervene is quite telling about how mature the tech is and how much faith he put in his car's “A.I.”.

Also, one wonders how well the stats of the pro-AV PR machine hold up. “No accidents in how many million miles of driving”? Sure. Are there also statistics on how many times the test-driver needed to intervene, like in the above case? I’m guessing not.
I think you are misinterpreting the reports. The driver had a less complete set of inputs than the car and chose to make a manoeuvre that put a vehicle behind in danger. The “car” knew of the presence of the motorcycle and would not have made the lane change. It demonstrates the opposite of your supposition. Autonomous cars have the opportunity to have more and better sensors than the average human, and more experience than that same average performer. This incident demonstrates a crossover point.

On reported incidents, they are indeed published. The last set I saw, about a year ago, showed Waymo way ahead at about one intervention per 500 miles. Others were much worse, the worst at around one intervention per mile. I can’t recall which doc I consulted for this.
Fortune favours the prepared; she has no favourites and takes no sides.
Truth is confirmed by inspection and delay; falsehood by haste and uncertainty : Tacitus

User avatar
Phil
66
Joined: 25 Sep 2012, 16:22
Contact:

Re: Autonomous Cars

Post

Now, you're just speculating. You are assuming the car "knew" what the best course of action was. You are also assuming the 'test-driver' had less information available, despite sitting in a car with more than a few screens showing him all the information the car actually "saw". Yet the fact remains: an unpredictable situation unfolded and at some point he felt he needed to intervene.

Why did he feel he needed to intervene? The fact that he did speaks volumes.

henry wrote:The last set I saw, about a year ago, showed Waymo way ahead at about one intervention per 500 miles.
Is this supposed to be some impressive stat? One intervention per 500 miles? Because it isn't.

From the devil's mouth (thanks subcritical71):

"Our test drivers routinely transition into and out of autonomous mode many times throughout the day, and the self-driving vehicle’s computer hands over control to the driver in many situations that do not involve a failure of the autonomous technology and do not require an immediate takeover of control by the driver."

-- so the cars are not exactly autonomous the entire day.

"This report covers disengagements following the California DMV definition, which means “a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” Section 227.46 of Article 3.7 (Autonomous Vehicles) of Title 13, Division 1, Chapter 1, California Code of Regulations."

The numbers:

Dec 2016: 11
Jan 2017: 7
Feb 2017: 4
Mar 2017: 4
Apr 2017: 10
May 2017: 5
Jun 2017: 6
Jul 2017: 3
Aug 2017: 3
Sep 2017: 6
Oct 2017: 3
Nov 2017: 1

Total: 63 times across 352,544 miles. That is roughly one instance every 5,600 miles where 'autonomous mode' was (or needed to be?) turned off.
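The arithmetic, for anyone who wants to check it (a trivial Python snippet using the monthly counts listed above):

Code: Select all

# Monthly disengagement counts, Dec 2016 - Nov 2017, as listed above
monthly = [11, 7, 4, 4, 10, 5, 6, 3, 3, 6, 3, 1]
total = sum(monthly)                 # 63 disengagements
miles = 352_544                      # total autonomous miles reported
print(total, round(miles / total))   # -> 63, ~5596 miles per disengagement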

It all sounds very impressive when one reads reports that Waymo has completed millions of miles without a single incident/accident and is supposedly safer than humans on average, but when you come to think of it, these cars are still driven on somewhat predictable roads, and intervention by the employed test-driver happens far more frequently than one would have thought.

I'm truthfully baffled to read you think we are at some cross-over point.
Not for nothing, Rosberg's Championship is the only thing that lends credibility to Hamilton's recent success. Otherwise, he'd just be the guy who's had the best car. — bhall II
#Team44 supporter

AJI
27
Joined: 22 Dec 2015, 09:08

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 11:31
...
Why did he feel he needed to intervene? The fact that he did speaks volumes.
...
Because a singular human is much smarter than anything or anyone in the universe? Help me out here, I'm struggling.

User avatar
Andres125sx
166
Joined: 13 Aug 2013, 10:15
Location: Madrid, Spain

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 11:31
Now, you're just speculating. You are assuming the car "knew" what the best course of action was. You are also assuming the 'test-driver' had less information available, despite sitting in a car with more than a few screens showing him all the information the car actually "saw". Yet the fact remains: an unpredictable situation unfolded and at some point he felt he needed to intervene.
He mistakenly felt he needed to intervene, Phil :wink:

Facts are:
a) An AV was driving by itself without problems
b) A human made a stupid move entering the lane where this AV was travelling
c) The human inside the AV panicked, took control and caused an accident because he was not aware of the motorcycle passing him

So basically two human errors caused an accident, but you still think this proves AVs are dangerous? :wtf:

User avatar
Andres125sx
166
Joined: 13 Aug 2013, 10:15
Location: Madrid, Spain

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 00:21
If anything, the fact that the “test driver” felt he needed to intervene to be quite telling about how mature the tech is
Well, if he was a "test driver" the tech obviously can't be mature; they're testing it :roll: :lol:

And that was the cause of the accident: since it was a test model the driver didn't rely on it, and by taking control he caused an accident.

We can't know if the AV would have dodged both the car and the bike, but we do know the human didn't, and caused an accident... which, btw, was the more dangerous outcome for the humans involved (a hint for that debate about whether AVs will make the better decisions for humans or not).

User avatar
Andres125sx
166
Joined: 13 Aug 2013, 10:15
Location: Madrid, Spain

Re: Autonomous Cars

Post

strad wrote:
09 Nov 2018, 00:36
Boeing 737 crashes because the anti stall sensor sends false information and the computerized rudder control has caused crashes and you think A/Vs will be safe?
Safe? Is there anything 100% safe, Strad?

I don't think they will be 100% safe, but I'm sure they will be safer than humans. Look at this video and tell me which of these accidents would have happened with an AV... I don't see any of them; most accidents are caused by morons who don't look at their surroundings before invading a lane. That surely will never happen with an AV, as they can monitor their surroundings 360º constantly, without distractions, so because of this alone they will be several orders of magnitude safer.



I will insist, Strad: you're trying to evaluate AVs vs humans from YOUR point of view, but you should do it from an average human driver's point of view. The morons in the video are as human as you are, like it or not. :x

User avatar
henry
324
Joined: 23 Feb 2004, 20:49
Location: England

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 11:31
Now, you're just speculating. You are assuming the car "knew" what the best course of action was. You are also assuming the 'test-driver' had less information available, despite sitting in a car with more than a few screens showing him all the information the car actually "saw". Yet the fact remains: an unpredictable situation unfolded and at some point he felt he needed to intervene.

Why did he feel he needed to intervene? The fact that he did speaks volumes.

henry wrote:The last set I saw, about a year ago, showed Waymo way ahead at about one intervention per 500 miles.
Is this supposed to be some impressive stat? One intervention per 500 miles? Because it isn't.

...

Total: 63 times across 352,544 miles. That is roughly one instance every 5,600 miles where 'autonomous mode' was (or needed to be?) turned off.

...these cars are still driven on somewhat predictable roads, and intervention by the employed test-driver happens far more frequently than one would have thought.

I'm truthfully baffled to read you think we are at some cross-over point.
I’m basing my judgement on what Waymo reported. The car had more awareness of its environment and would not have made the move the driver made. I think you’re also speculating that the drivers have screens full of images produced by the vehicle sensors. Maybe they do, maybe they don’t.

I misremembered the mileage as 500 rather than 5,000. So probably about the same as me. About twice a year, I guess, another road user takes action to help me out. Those are the incidents I notice, of course. However, I am L5.

I don’t think this is a crossover point from human to AV in general. It’s more that this is the first reported instance of a situation where the human error can be judged against the likely behaviour of the car.

Your point on the restrictive nature of the environment the Waymo cars drive in is exactly the difference between L4 and L5. The Waymo cars have expensive sensor arrays and expensive processing. Waymo spend millions on mapping the road infrastructure and building the object data sets to train and test the software. Their cars operate on roads they know, in weather conditions they can cope with. It’s a very expensive business. Google have deep pockets and are doing this because they want to make Waymo rides as ubiquitous as Google searches. Others likewise have large resources, Baidu in China for instance. Others are less well off and are taking shortcuts; we hear quite a lot about them. Others are driven by ambition: Elon Musk’s desire to describe what Tesla are doing as “fully autonomous” is a risk to the whole industry.

I’m not a protagonist of autonomous vehicles. I can see their potential social benefits, but I am concerned about the wider social impact of them and of other AI enterprises. The capital juggernaut is not known to, nor legally constrained to, look after the best interests of humanity in general. So I keep a watching eye.
Fortune favours the prepared; she has no favourites and takes no sides.
Truth is confirmed by inspection and delay; falsehood by haste and uncertainty : Tacitus

User avatar
Andres125sx
166
Joined: 13 Aug 2013, 10:15
Location: Madrid, Spain

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 02:17
What about the cases where nothing happened? How are these logged? I would think that data is just as relevant in determining how safe these AVs would be without supervision, wouldn't you agree?
Yes, I agree with this. But since we're comparing AVs with humans, we would need that data for humans too, to compare...

Obviously there are no logs, but I'm sure there must be millions of situations yearly where a human got distracted, invaded a lane or jumped a traffic light, but luckily nothing happened.

User avatar
henry
324
Joined: 23 Feb 2004, 20:49
Location: England

Re: Autonomous Cars

Post

This article in Jalopnik brings forward a technology I thought would be essential to the future of safe vehicles. https://jalopnik.com/the-future-of-safe ... 1830317629

The inability to see in the dark is a contributor to road accidents, as shown by the spike when the clocks change in autumn (fall).

So thermal imaging looks like a good thing. However, for humans it needs to be projected onto their field of view. Some expensive cars have IR as an option, with IR lights, cameras and head-up displays.

For AVs the challenge, I guess, is expense, together with the collection, processing and classification of data to identify and tag objects of interest. For L4 ride-for-hire vehicles this might be a worthwhile investment. It might widen their operating envelope for weather and time of day, and so increase their earning potential.

For driver aids it might be a useful blind spot tool, able to see cyclists alongside, and maybe even interlock with the door mechanism to stop people opening their doors into the paths of other road users.
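As a sketch of how simple that interlock could be in principle (hypothetical Python; the sensor interface and the threshold are invented, not from any real car):

Code: Select all

APPROACH_WINDOW_S = 3.0   # assumed: hold the door if something arrives within ~3 s

def door_may_open(warm_objects):
    # warm_objects: list of (distance_m, closing_speed_ms) tracks from a
    # side-facing thermal camera, warm objects only.
    for distance_m, closing_speed_ms in warm_objects:
        if closing_speed_ms > 0 and distance_m / closing_speed_ms < APPROACH_WINDOW_S:
            return False          # e.g. a cyclist about to pass: keep the door latched
    return True                   # nothing closing in: free to open

print(door_may_open([(6.0, 5.0)]))   # cyclist 1.2 s away -> False, door stays latched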
Fortune favours the prepared; she has no favourites and takes no sides.
Truth is confirmed by inspection and delay; falsehood by haste and uncertainty : Tacitus

User avatar
Phil
66
Joined: 25 Sep 2012, 16:22
Contact:

Re: Autonomous Cars

Post

henry wrote:
09 Nov 2018, 12:36
I think you’re also speculating that the drivers have screens full of images produced by the vehicle sensors. Maybe they do maybe they don’t.
Perhaps you should watch some of the videos on the Waymo YouTube channel. They give a pretty good insight into what the cars are able to illustrate on their screens: at the very least, a 3D representation of the car and surrounding road, visualising exactly what the car 'sees' and 'identifies'. Considering this technology is a work in progress, it's reasonable to assume that the test-drivers are not only there for the ride and to take over when they must, but are also tasked with monitoring and observing the decision making of the car, and with keeping a tight protocol on how well the car's perceived 'reality' stacks up to the real one.

So yes, I am going to assume the test-driver has a lot of information available to him.


AJI wrote:
09 Nov 2018, 11:38
Phil wrote:
09 Nov 2018, 11:31
...
Why did he feel he needed to intervene? The fact that he did speaks volumes.
...
Because a singular human is much smarter than anything or anyone in the universe? Help me out here, I'm struggling
No, because these test-drivers have a much better understanding of what these cars are capable of, where their (current) limits are and how they react/drive in the real world. They'd better; after all, they are 'experts' and have done how many millions of miles across how many years? If anyone knows how well these cars react to the unexpected, they do.

If you think there is any kind of "intelligence" driving these cars, you are sorely mistaken. It's just a piece of software coupled to some sensors, doing what it was programmed to do: encounter situation x, do y, etc. Of course, having done millions of miles on the road, the complexity of the "data" has grown to the point where many daily situations are handled. But at the end of the day it's still a piece of software, and the software can only be as good as the sensors that feed it information, and as its (programmed) ability to accurately read and interpret what is happening around it. Mistaking a cyclist for a motorbike could be dangerous. As would be mistaking a child for an adult. I have yet to see this technology distinguish objects into more than a few categories like human, cyclist, motorbike, small-vehicle, large-vehicle etc. (not that this isn't already very impressive -- the nerd in me speaking).
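To make the "situation x, do y" point concrete, a toy sketch (Python; the categories mirror the ones above, everything else is invented for illustration):

Code: Select all

from enum import Enum, auto

class Track(Enum):
    HUMAN = auto(); CYCLIST = auto(); MOTORBIKE = auto()
    SMALL_VEHICLE = auto(); LARGE_VEHICLE = auto(); UNKNOWN = auto()

def plan(obj, distance_m):
    # Behaviour can only be as fine-grained as the label set:
    # a child misclassified as a generic HUMAN gets the generic HUMAN rule.
    if obj is Track.UNKNOWN:
        return "slow down"            # fallback when the sensors can't classify
    if obj is Track.HUMAN and distance_m < 20:
        return "brake"
    return "maintain course"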

So yes, these test-drivers would have a very intimate knowledge and understanding of what these cars are actually capable of in reality. The fact that these 'interventions' take place quite frequently, and that in the above incident the test-driver 'took control', does speak volumes, regardless of whether his actions made the incident worse or not.


henry wrote:
09 Nov 2018, 12:36
I’m basing my judgement on what Waymo reported.
henry wrote:
09 Nov 2018, 12:36
The Waymo cars have expensive sensor arrays and expensive processing. Waymo spend millions on mapping the road infrastructure and building the object data sets to train and test the software. Their cars operate on roads they know, in weather conditions they can cope with. It’s a very expensive business. Google have deep pockets and are doing this because they want to make Waymo rides as ubiquitous as Google searches. Others likewise have large resources, Baidu in China for instance. Others are less well off and are taking shortcuts; we hear quite a lot about them. Others are driven by ambition: Elon Musk’s desire to describe what Tesla are doing as “fully autonomous” is a risk to the whole industry.
You are putting a lot of faith in what Waymo (& Co.) report being absolutely transparent and 100.00% correct, considering the 'expensive processing', the 'spending of millions on mapping road infrastructure and object data', and the countless man-hours (I added this one). Yes, it really is undoubtedly an expensive business; one might even call it a multi-billion venture. Now, I'm not going to go as far as to suggest that the data we are receiving is manipulated. I fully trust the DMV to have very strict guidelines and laws as to what gets (publicly) monitored and logged - at least the part that can be monitored, does trickle down as far as the DMV, and then ends up in the public domain. I do however question how far that PR machine is willing to go to make it seem that their cars are better than they are in reality.

In other words, if my self-built AV could successfully drive up and down my residential street 49893018 times, I could easily put out a headline claiming more than 39313 miles of accident-free driving. It would be a correct statement. But it would also be a bit misleading, considering those 39313 miles were effectively on an empty road. Now again, I'm not saying that Waymo aren't driving on demanding roads; they are. But how impressive those faultless drives really are very much depends on how many of those millions of miles were covered in demanding and unpredictable circumstances. We don't know that, you don't know that, there's no way to know that. The question is how much you are willing to trust the PR machine.

It's a very, very expensive venture these companies are setting out on, and quite frankly, bad publicity is the last thing they can afford. It would be a major setback, and public perception might just shift. At the same time, these companies are competing against each other to come out as the first with the most mature technology. Better to focus on what these cars do well than on what they don't. It therefore doesn't hurt to be critical of what the headlines suggest.
Not for nothing, Rosberg's Championship is the only thing that lends credibility to Hamilton's recent success. Otherwise, he'd just be the guy who's had the best car. — bhall II
#Team44 supporter

User avatar
loner
16
Joined: 26 Feb 2016, 18:34

Re: Autonomous Cars

Post

Well... a Chinese treble attack :mrgreen:

Baidu's self-driving bus to hit the roads in China soon
https://news.cgtn.com/news/3d3d514d3463 ... are_p.html

China Shows Off Self-Steering Boat that Fires Missiles
https://www.defenseone.com/technology/2 ... es/152650/

China launches first AI news anchor on state media that mimics a real human
https://www.teslarati.com/china-artific ... or-xinhua/

The machines will kill us all soon :mrgreen:
para bellum.

User avatar
henry
324
Joined: 23 Feb 2004, 20:49
Location: England

Re: Autonomous Cars

Post

Phil wrote:
09 Nov 2018, 13:48

It's a very, very expensive venture these companies are setting out on, and quite frankly, bad publicity is the last thing they can afford. It would be a major setback, and public perception might just shift. At the same time, these companies are competing against each other to come out as the first with the most mature technology. Better to focus on what these cars do well than on what they don't. It therefore doesn't hurt to be critical of what the headlines suggest.
I can’t really disagree with the majority of what you say. However, PR is just that: you decide how you want to relate to the public, then tailor what you disclose, or expect to be known, to emphasise that position. Waymo appear to want to depict themselves as responsible, making careful steps towards their goal. Do they reveal everything? No.

Other organisations appear much less transparent; I wouldn’t trust Uber to tell me the time. As for an organisation that labels a driving assistance mode “Mad Max”, oversell seems baked in.

I agree we should look at what they do well and criticise where warranted. For instance, Tesla’s Navigate on Autopilot looks to have some useful features, but it’s a very long way from autonomous driving, and Tesla’s approach to ensuring the driver pays attention (steering wheel sensing) seems less capable than, say, Cadillac’s approach with a driver-facing camera.
Fortune favours the prepared; she has no favourites and takes no sides.
Truth is confirmed by inspection and delay; falsehood by haste and uncertainty : Tacitus

User avatar
Big Tea
99
Joined: 24 Dec 2017, 20:57

Re: Autonomous Cars

Post

Andres125sx wrote:
09 Nov 2018, 12:49
Phil wrote:
09 Nov 2018, 02:17
What about the cases where nothing happened? How are these logged? I would think that data is just as relevant in determining how safe these AVs would be without supervision, wouldn't you agree?
Yes, I agree with this. But since we're comparing AVs with humans, we would need that data for humans too, to compare...

Obviously there are no logs, but I'm sure there must be millions of situations yearly where a human got distracted, invaded a lane or jumped a traffic light, but luckily nothing happened.

My car has a 'collision avoidance' or panic-stop front sensor. I have done only 1500 miles (six weeks) and had it activate once. A car pulled in front of me, and it was a tenth of a second quicker than me, as my foot was already on the pedal.
(It has beeped an alert three times, but no action was needed as the car in front was accelerating away.)

Had I been half a second slower responding, it would indeed have avoided a collision. So to say the computer 'overrides me' once in 1500 miles gives perspective.
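For a sense of how such a system decides, a hypothetical time-to-collision sketch (Python; the thresholds and names are invented, not from any actual car):

Code: Select all

TTC_BRAKE_S = 0.9   # assumed: brake autonomously if impact is under ~0.9 s away
TTC_WARN_S = 2.0    # assumed: beep first, as described above

def assess(gap_m, closing_speed_ms):
    if closing_speed_ms <= 0:
        return "no action"        # car in front holding speed or pulling away
    ttc = gap_m / closing_speed_ms
    if ttc < TTC_BRAKE_S:
        return "brake"            # fires fractionally before a human can react
    if ttc < TTC_WARN_S:
        return "warn"             # the beeps, no intervention needed
    return "no action"

print(assess(4.0, 5.0))   # 0.8 s to impact -> 'brake'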
When arguing with a fool, be sure the other person is not doing the same thing.

Post Reply