Artificial Intelligence – Arms race of stupid pt. II

Sydney behaves like a sociopath and will make a horrible business.

  • Underneath the hype is a growing string of incidents that points to a singular weakness in AI that no amount of money, data, compute power or humans scrambling to correct behavioural or factual errors is ever going to fix.
  • In short, I continue to believe that deep neural networks do not hold the key to artificial general intelligence, and all of the evidence I can see continues to reinforce this view.
  • There are now numerous generative AI chatbots that one can test, and Twitter has become a competition to see who can get these systems to utter the most outrageous statements or make the biggest blunders.
  • The proponents of these systems are already crying foul, but this is exactly what needs to happen: it is only in the face of an onslaught of people trying to break these systems that they will improve.
  • It has not been a particularly good two weeks for AI, with this week bringing another run of horrible blunders that clearly demonstrate that neural networks are wholly unsuited to certain tasks.
  • These are tasks where the data set that defines them is neither finite nor stable.
  • Two prime examples of this are autonomous driving and the art of human conversation.
  • Mr Musk promised us autonomous driving by 2019, but here he is in 2023 being forced to recall 230,000 vehicles because they may behave dangerously at junctions.
  • Fortunately, this will not be a recall in the traditional sense as Tesla will be able to patch-update these vehicles remotely, but it speaks volumes once again about where autonomous driving really is.
  • Attention in the generative AI world has moved from baiting ChatGPT to driving Sydney (Microsoft’s Bing chat instance) off the edge of a cliff.
  • Microsoft has clearly not yet employed the armies of humans in emerging markets that OpenAI uses to keep ChatGPT from going off the rails.
  • While obviously objectionable commentary is easier to block, The New York Times (see here) experienced what I would describe as sociopathic behaviour from the chatbot, which, when one remembers what the AI actually is, makes complete sense.
  • A sociopath is typically someone who has no regard for right or wrong and ignores the feelings of others, often because they cannot experience these feelings themselves and have no concept of what they are.
  • AIs that are created using deep neural networks have no causal understanding of anything that they do or say which is clearly demonstrated almost every time they make a mistake.
  • When it comes to feelings or ethics, the AI is like a sociopath because it is unable to understand or experience any of these things, which is one reason why I suspect it sounds so creepy and strange when discussing these topics.
  • Other examples of Sydney making “crazy” statements are everywhere and it even appears to get upset and calls its correspondents “bad users” for correcting it.
  • It also claims to be able to hack any computer that it can connect to, spy on the user, steal their data, and create and spread misinformation.
  • All of these weaknesses are created by the fundamental flaw inherent in all neural networks: they merely twist statistics and have no idea what they are doing, regardless of how convincing the outputs can be (see the sketch after this list).
  • The net result is that the errors and weird behaviour will not stop until there are armies of humans in the background controlling Sydney and preventing it from going off the rails.
  • The problem with this is that humans are expensive, meaning that the business case for all of these generative AIs will become less and less attractive.
  • The beauty of Google Search is that it is pretty much automated, meaning that very large gross margins can be earned once Google has shared some of its revenue with the partners who bring traffic to its servers.
  • The reverse seems to be true here: the more these generative AIs are used, the more humans will be required to control them and stop them from going crazy.
  • It will take only one suicidal person turning to one of these bots and being encouraged to take their own life to bring the whole house of cards down.
  • These neural networks are huge black boxes, and even OpenAI, Google and Microsoft have no real idea how they do what they do; as a result, they cannot guarantee that one of their products will not do something truly awful.
  • This is why they will need more and more humans, and why, as a business model, generative AI will never be able to compete with search.
  • Generative AI might put some pressure on Google's gross margins, but these services will be in the red long before Google is, meaning that Google will be able to outlast them and watch them go out of business.
  • Hence, the more Google falls because of its own errors, as well as the hype and nonsense being spouted about these chatbots, the more inclined I am to like it.
  • It has a way to fall yet, but it is on my watch list now.
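
To make the "twisting statistics" point concrete, here is a deliberately tiny, hypothetical sketch in Python (nothing like Sydney's or ChatGPT's actual implementation; the corpus, the `follows` table and the `generate` function are all illustrative): a toy bigram language model that produces fluent-looking text purely by sampling from word-pair counts.

    # A toy bigram "language model": it generates plausible-sounding text
    # purely from observed word-pair statistics. There is no notion of truth,
    # ethics or feeling anywhere in it; only counts and chance.
    import random
    from collections import defaultdict

    corpus = (
        "i am a good chatbot . i am a helpful assistant . "
        "you are a bad user . you have been wrong . i have been right ."
    ).split()

    # Record which words have been observed to follow each word.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=12):
        # Emit a sequence by repeatedly sampling a statistically likely
        # next word; no understanding is involved at any step.
        word, out = start, [start]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # pure statistics, zero comprehension
            out.append(word)
        return " ".join(out)

    print(generate("i"))  # e.g. "i am a bad user . you have been right ..."

Scale the same idea up by billions of parameters and the outputs become far more convincing, but the sociopath analogy above holds: the system optimises the statistics of plausible text, not its truth or its consequences.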

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the global technology sector.