Artificial Intelligence – Armageddon debate

Humans are the real risk.

  • The big names in AI are now saying that mitigating the risk of the robots enslaving the human race is as important as dealing with pandemics or nuclear war, which I think is either peak hyperbole or a cynical attempt to keep smaller players out of the nascent AI industry.
  • The statement from the Center for AI Safety is a single sentence, and it is made most interesting by who has not signed it rather than by who has.
  • Three notable absentees are Gary Marcus, Professor Emeritus at NYU; Andrew Ng, former head of AI at Baidu; and Yann LeCun, Chief AI Scientist at Meta Platforms.
  • All three of these scientists more or less share my view that the machines are not thinking but are merely computing statistically likely word orders (see the sketch after this list) and, as such, remain as dumb as ever.
  • The big question is whether these large language models (LLMs) have somehow obtained the ability to think, or whether they merely simulate thought by virtue of having been trained on so much data.
  • An LLM is in essence a black box: no one really knows how it does what it does, so a definitive answer to this question is currently not possible.
  • However, there are indications, and they all point to the machines not thinking but simply regurgitating and reconstructing data.
  • Every time one of these machines makes an obvious mistake or makes something up (which happens constantly), it is an indication of its inability to think.
  • Furthermore, the machines remain completely incapable of performing well outside the boundaries of what they have been explicitly taught, which is a second indicator of an inability to think.
  • I think what is happening is that these machines have ingested so much human writing that they are able to take thoughts from humans present in their datasets and pass them off as their own.
  • If they have no thoughts of their own, then they will have no desires beyond those with which they have been programmed, meaning that any risk that may exist derives from humans, not machines.
  • Furthermore, the notion that the risk of human extinction from AI is on a level with a global pandemic or nuclear war is absurd and sounds to me like peak hype.
  • Thermonuclear weapons have existed for decades and there are enough of them in the world to kill everyone several times over, while the global population has been hit by pandemics multiple times throughout history.
  • These agents of Armageddon are real and already exist, yet the big thinkers of AI now deem another agent, one that does not exist today and may never exist at all, to be just as dangerous.
  • When considered in these terms, this statement makes very little sense, and it makes one wonder whether there is another agenda at work here.
  • All of the big companies that stand to gain from the growth of generative AI have signed this statement, which could be interpreted as another attempt to spook governments into regulating AI.
  • The EU already has draft proposals in the works that will be incredibly onerous and expensive to comply with (not unusual), meaning that only the biggest and richest companies will be able to afford compliance.
  • Combined with the explosion of AI training being carried out by open-source enthusiasts and tinkerers (see here), it is not hard to see how the big companies might fear being commoditised from the bottom up, giving them an incentive to stop that movement in its tracks.
  • Onerous regulation will do precisely that and hamper any real competition from small companies and start-ups.
  • The end result is that I see no evidence whatsoever of these machines doing any thinking at all, and I can only conclude that super-intelligent AI and robotic Armageddon are as far away today as they have ever been.
  • I continue to think that the real threat of generative AI comes from humans who desire to commit bad deeds and will use it to do so.
  • This is where the focus on prevention needs to be, bearing in mind that blanket regulation has the potential to damage the development of AI with legitimate, lawful and very profitable use cases.
  • Still no sign of those killer robots coming to enslave us all.
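
To make the "statistical word orders" point concrete, here is a minimal sketch of next-token prediction using a toy bigram model. Everything in it (the tiny corpus, the `successors` table, the `generate` function) is invented for illustration; a real LLM replaces the word counts with a neural network over sub-word tokens trained on vastly more data, but the objective is the same: emit the statistically likely continuation.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, invented purely for this illustration.
corpus = (
    "the machines are not thinking "
    "the machines are computing statistics "
    "the machines are not alive"
).split()

# Count, for every word, how often each successor word follows it.
successors: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Extend `start` by repeatedly emitting the statistically most
    likely next word, exactly as counted from the corpus above."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break  # no continuation was ever observed for this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# -> "the machines are not thinking the machines"
```

Nothing in that loop resembles thought; it only replays word orders it has already seen, which is the crux of the argument above.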

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the global technology sector.

Blog Comments

You fail to understand Machine Learning. The clue is in the name. All AI is written in English. The guys at Google were interacting with their AI and suddenly it began to answer them in Farsi and they have ABSOLUTELY NO IDEA how it learnt this. And it learns at an exponential rate. There’s your problem. We have already opened Pandora’s box and it may already be too late.