Artificial Intelligence – Here comes Skynet pt. II.

The machines are not coming.

  • The idea that the world might not be far from “potentially scary” AI is as absurd now as it was when it was first uttered 50 years ago, and has more to do with the creator of the flavour of the month talking up his creation.
  • Sam Altman, co-founder and CEO of OpenAI, made statements on Twitter (see here) effectively saying that OpenAI is showing its wares to the world while they are still not working properly, in order to give society time to adjust to and deal with the potential that they represent.
  • He goes on to say that “potentially scary AI” may be just around the corner, which is where he crosses from the reasonable to the hyperbolic.
  • This immediately raises the idea that the machines are about to take over and that the human race will either be annihilated or enslaved by robot overlords.
  • Fortunately, all of the demonstrable evidence suggests that these machines are nowhere near as capable as their proponents would have us believe, and that there is no threat from the machines now or even in the very distant future.
  • ChatGPT, Sydney and so on are illusionists of the first order, capable of making users feel that they are sentient when in fact they are not.
  • Furthermore, they are unable to distinguish fact from fiction or right from wrong.
  • This is because at the heart of these algorithms is a statistical pattern recognition system that has no causal understanding of the task it has been asked to solve (a minimal sketch of what this means appears after this list).
  • This has been demonstrated many times, both in the academic literature and in the real world.
  • My own example of ChatGPT defining a prime number, proving that 509 is prime and then going on to state that 509 is not a prime number (see here) is one among many; a quick check after this list confirms that 509 is indeed prime.
  • The other problem with these systems is that they are black boxes: their operators have no idea how they work, as all they can see are the inputs and the outputs.
  • Consequently, when ChatGPT, Sydney or LaMDA goes haywire, their operators have no idea why it went haywire, meaning that they have no real idea how to fix it.
  • This is a problem that also plagues the safety of autonomous driving, as one needs to be certain of how a system works in order to ensure that it is safe.
  • This problem is even more extreme when one considers the massive neural networks that have been created to power these chatbots, which are supposed to be capable of discussing any topic.
  • Consequently, these systems are incapable of dealing with anything that they have not been explicitly taught, meaning that they effectively express the mean position of the Internet on any topic.
  • It also means that they cannot deal with randomness and chaos, and the battlefield is probably one of the most random and chaotic environments imaginable.
  • Hence, I suspect that human soldiers would not have much difficulty in overcoming an army of robots, but the reality is that we are decades if not centuries away from this.
  • The dawn of the industrial revolution was in 1698, when the first commercially viable steam engine became available.
  • It took another 60 years for steam to begin changing the world and 140 years for steam-powered devices to transform the way people were living and working.
  • These generative AIs are not even close to being commercially viable, and so even if they end up changing the world, we have a very long wait ahead of us.
  • However, as always, the imagination of the world gets caught up in the idea that it is here and now, and every man and his dog now wants to create one of these generative chat systems.
  • Sometime in the next 12 months, people will begin to realise that while these systems have some uses, such as drafting outlines to aid content creation by humans, they have little real utility beyond that.
  • The best use case is probably entertainment, and this is a big market in its own right, but it is not what everyone is trying to create these machines to do.
  • When these machines fail to deliver what is expected of them (as Alexa, Siri and Google did before them), enthusiasm will wane and a period of disillusionment will set in, followed by falling investment and falling valuations.
  • In short, this is a bubble that heralds the 4th AI winter, and I am not sleeping with a taser under my pillow to short-circuit an invading army of robots.
  • The machines are not coming.
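To illustrate what “statistical pattern recognition with no causal understanding” means, here is a deliberately toy sketch in Python (my own illustration, nothing resembling OpenAI’s actual models): a bigram model that predicts each word purely from how often it followed the previous word in its tiny training text. Scale the same idea up by many orders of magnitude with a neural network and you have roughly the intuition behind a large language model: fluent continuation of patterns, with nothing in the machinery that knows whether the output is true.

```python
# Toy bigram "language model": predicts the next word purely from the
# statistics of which word followed which in the training text.
# Nothing in it knows whether a generated sentence is true or false.
import random
from collections import defaultdict, Counter

corpus = (
    "the machines are not coming . "
    "the machines are not sentient . "
    "the hype is not the reality ."
).split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word from observed co-occurrence frequencies only."""
    candidates = bigram_counts[prev]
    words = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a statistically fluent sentence; fluency is all it can offer.
word = "the"
sentence = [word]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```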
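For completeness, the arithmetic that ChatGPT stumbled over is trivial to verify. The quick trial-division check below (a helper of my own, not part of the original exchange) confirms that 509 is indeed prime: no integer from 2 up to √509 ≈ 22.6 divides it.

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division: n is prime if no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, isqrt(n) + 1))

print(is_prime(509))  # True: 509 has no divisor between 2 and 22, so it is prime
```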

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.