Autonomous Autos – Still no cigar

Edge cases highlight the ongoing problem.

  • While the world is driving itself into a frenzy over the latest iteration of AI, the older version (over which everyone went crazy in 2017) continues to languish, as it turns out that one needs to understand the road in order to drive on it safely.
  • The latest publicised problems come from Cruise, which was accused of stalling in the middle of the road and blocking emergency service access to an incident, and from Waymo, one of whose vehicles ran over and killed a dog a few weeks ago.
  • These are all examples of what the industry refers to as edge cases: situations that the vehicle has not been explicitly trained to deal with and in which, as a result, it is unable to work out what to do.
  • The problem is that these edge cases are not particularly rare; they are common enough that even the best-trained machines are unable to match the performance of humans, and until they can, they will not be let loose on the roads in any meaningful way.
  • The problem boils down to machine vision, which is the vehicle’s ability to accurately determine what is happening in its immediate vicinity so that it can take the appropriate action.
  • All machine vision systems in development today use a form of deep learning in order to analyse the data streams coming from cameras, lidars and radars and stitch together an accurate picture of their surroundings.
  • For any use case where the environment is both stable and finite, this is relatively easy and it is possible to teach a car to safely navigate a car park or a test track in a very short period of time.
  • Deep learning systems are statistical pattern recognition systems, and as such, when they are presented with a situation that they have not been explicitly taught, they fail, often catastrophically.
  • By contrast, humans are taught how to drive vehicles but can then take what they have learned from the lessons and apply it to situations that they did not encounter while learning.
  • This is because humans have a causal understanding of driving vehicles, while the machines do not.
  • Operators of the robotaxis being tested today often try to get around this problem by operating in rigidly geofenced and/or very simple locations to limit the occurrence of these so-called edge cases, but reality declines to cooperate often enough that the vehicles remain fundamentally unsafe for transport.
  • This is the same underlying problem that causes large language models to hallucinate or make things up, and it stems from the methods used to train them.
  • The upshot of this is that one must either find ways to reduce the error rate to the point at which machines become safer than humans, or find a new method of creating the algorithms in the first place.
  • Both of these options are under development, but progress is really slow and I do not expect to see any real results for some considerable amount of time.
  • Hence, I don’t think that autonomous driving is going anywhere beyond motorway driving with an alert driver for some time and I am sticking to my long-held target of 2028 before this becomes a reality.
  • Given the lack of progress, even this may start to look optimistic.
  • Consequently, falling valuations, shutdowns and dilutive consolidation look like they will continue for some time to come.
  • I have no real desire to get involved here beyond the lidar companies, many of which have plenty of use cases outside of autonomous driving.
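The edge-case failure mode described above can be made concrete with a deliberately toy sketch. This is an invented illustration, not any vendor's actual perception stack: a nearest-centroid classifier stands in for a deep learning system, and the class names and feature vectors are made up. The point it demonstrates is the one in the bullets: a statistical pattern recogniser must map every input onto a class it already knows, so an input far from anything it was trained on still produces a confident-looking answer with no signal that it is guessing.

```python
# Toy illustration only (not any vendor's actual system): a nearest-centroid
# "perception" model trained on a narrow distribution of scenes.
# All feature vectors and class names are invented for this sketch.

def centroid(points):
    # Mean of a cluster of equal-length feature vectors.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data: two tight clusters the model has been "explicitly taught".
training = {
    "clear_road": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)],
    "pedestrian": [(0.9, 0.9), (0.85, 0.95), (0.95, 0.85)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(x):
    # The model MUST pick one of its known classes - it has no
    # "I have never seen this" option, which is the core problem.
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# An in-distribution scene is handled fine:
print(classify((0.12, 0.18)))     # -> clear_road

# An edge case far from anything seen in training is still forced into a
# known class, even though it resembles neither:
print(classify((-2.0, -2.0)))     # -> clear_road
```

A human driver faced with a genuinely novel scene can fall back on causal reasoning; this model, like the deep learning systems discussed above, can only report whichever trained pattern is least dissimilar.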

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.