I never claimed anything else. My position has always been that AVs still have a very long way to go before they reach the level of safety that human drivers provide. It is from this position that I have expressed doubts about the feasibility and viability of the technology ever maturing enough.
What I think many in here fail to understand is that AVs aren't being developed because someone decided humans needed to be saved from the 3,000 deaths per day. They are being developed because the technology represents a lucrative business, and with any business it ultimately comes down to money and profits. In other words, don't expect the high-end version of anything to make it into consumer products; in the end it will be a compromise of sorts. At what cost to safety?

I quite frankly find the thought scary that we might have thousands of 'bots' roaming the roads. If the software has a specific flaw, it will be present in every single car. Example: the software has a bug that fails to identify certain objects, so it simply ignores them. Imagine the chaos if enough of these things are driving around. And before you bring up rigorous safety tests and standards... at what cost? Do you honestly think every bug can be found in software that has to be that complex? Just look at the 737 MAX incident. One wonders how something like that could happen, and be allowed to fly, in an industry with perhaps the most rigorous safety standards there are.
And even if we assume AVs do come, do you really think they will replace manually driven cars in the hundred-plus poorer countries where the majority of those 3,000 deaths per day happen? Who will pay for that? Who will come up with the money for the required infrastructure there? Do you really think AVs are the best and most efficient way to save those lives?
No car, autonomous or not, has instant braking power. Simply 'reacting' to unpredictable circumstances is not always enough, which is why we are taught to be proactive on the road, to drive with foresight, and to keep the recommended safety margins. How much foresight software could ever bring to a developing situation depends on its ability to properly identify and assess what is happening, and some of that requires years of experience. Again: autonomous software has zero intelligence; it only knows what it has been programmed to know.
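The braking-power point can be made concrete with the standard stopping-distance calculation: total stopping distance is the distance covered during the reaction time plus the physical braking distance. A minimal sketch follows; the 1.5 s reaction time and 7 m/s² deceleration are illustrative assumptions (roughly dry-road, average-driver figures), not measured values.

```python
def stopping_distance(speed_kmh, reaction_time_s=1.5, decel_ms2=7.0):
    """Total stopping distance in metres.

    reaction_time_s and decel_ms2 are illustrative assumptions;
    real values vary with road conditions, tyres, and the driver/system.
    """
    v = speed_kmh / 3.6                      # convert km/h to m/s
    reaction_dist = v * reaction_time_s      # travelled before brakes engage
    braking_dist = v * v / (2 * decel_ms2)   # from v^2 = 2*a*d
    return reaction_dist + braking_dist

# Even with an instant, flawless reaction (reaction_time_s=0), the car
# still needs the full braking distance -- no software can beat physics.
for speed in (50, 100):
    print(speed, "km/h:",
          round(stopping_distance(speed), 1), "m with reaction,",
          round(stopping_distance(speed, reaction_time_s=0.0), 1), "m without")
```

Even setting the reaction time to zero, a car at 100 km/h still needs on the order of 55 m of pure braking distance under these assumptions, which is why foresight and safety margins matter more than raw reaction speed.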