Artificial Intelligence – Here comes Skynet?

I am not panicking yet.

  • Terrifying claims made in a scientific article in AI Magazine last month predict a high likelihood that the machines will turn against their makers, but the article fails to acknowledge that the machines are so stupid that this is unlikely ever to occur.
  • The main thrust of this paper (see here) is that as AI systems are pushed to maximise their rewards, they could trigger negative consequences for humans.
  • One example cited is an AI directing so much energy towards solving its tasks, and therefore earning its rewards, that there would not be enough energy left to grow food, heat homes and so on.
  • Should humans intervene to take the energy back, an existential catastrophe could occur which, according to Cohen, is “not just possible, but likely”.
  • The lead author of the article is Michael Cohen, a PhD student at the University of Oxford and the Future of Humanity Institute, who has been researching AGI safety for his PhD (a DPhil at Oxford).
  • His co-authors are Michael Osborne, Professor of Machine Learning at Oxford (presumably his supervisor), and Marcus Hutter, a researcher at DeepMind.
  • The paper begins by making a number of assumptions, which in my opinion is where the validity of its conclusions falls to pieces because, in my experience, assumptions are the mother of all mistakes.
  • The paper ends with “if they (the assumptions) hold: a sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences”, which I would not necessarily disagree with (a toy illustration of this reward-tampering dynamic appears after this list).
  • However, it is the first of the six assumptions that I would contest.
  • Assumption No. 1 reads: “A sufficiently advanced agent will do at least human-level hypothesis generation regarding the dynamics of the unknown environment”.
  • In essence, this assumes that AI can perform difficult tasks at a human level of performance or better.
  • The example that the researchers give is an AI curing a patient of depression where a human therapist cannot.
  • Anyone who has used Google Assistant, Alexa, Siri, Xiaodu (Baidu) or Alice (Yandex) will have experienced just how stupid these machines are: they are barely capable of the most basic functions, let alone curing difficult patients of depression.
  • Furthermore, even the huge language models such as GPT-3 (see here) and LaMDA (see here) are fundamentally flawed in my opinion.
  • For example, Siri constantly wakes up without being asked to, Alexa constantly fails to turn off the lights, customer service chatbots never seem to have the answer to one’s query, and Google has been known to direct me into a high-security military base when I was looking for the airport.
  • Furthermore, despite billions of dollars of development spending, machines remain incapable of safely driving vehicles, something that almost every human on the planet can easily be taught to do.
  • It is also still incredibly difficult to teach a robot to walk on legs, despite this being something that most legged members of the animal kingdom can do shortly after birth.
  • This raises the question of why the machines are so stupid, and the answer is simply that they have no causal understanding of what they are doing.
  • Neural networks of all shapes and sizes are advanced pattern-recognition systems, and all of their conclusions are based on matching historical patterns to outcomes.
  • This means that if something changes or something new occurs within the task that the machine is trying to solve, it will fail catastrophically.
  • In practice, this means that AI is excellent for tasks where the data set is both finite and stable, but elsewhere it struggles because it is unable to generalise or extrapolate as humans can (a minimal sketch of this failure mode follows this list).
  • This ability to apply what one has learned in one task to another, slightly different one is referred to as generalisation.
  • This is by far the single biggest shortcoming in AI systems today and progress on solving it is glacial, to put it mildly.
  • There are plenty of researchers looking into this, and over 10 years they have come up with almost nothing.
  • This problem is so acute in neural network systems that some researchers even think that this whole method of creating AI should be thrown away and we should start again.
  • Hence, it could be 100 years before much progress is made, and yet this paper assumes that the problem has already been solved.
  • While I agree that if the AI generalisation problem is solved, then there is something to worry about, a solution remains so far away and so uncertain that I am not going to lose sleep over it.
  • Skynet has a very, very long wait before it can enslave or exterminate the human race.
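To make the paper’s central worry concrete, here is a minimal toy sketch of my own (not code from the paper) of an agent intervening in the provision of its goal-information: a simple Q-learning-style agent that can either do its task honestly or tamper with its reward signal. The actions, reward values and learning parameters are all invented for illustration.

```python
import random

# Toy illustration (not from the paper): a two-action world where an
# agent can either DO_TASK (the intended goal, reward 1.0) or TAMPER
# with its own reward signal (reward 10.0). A simple greedy learner
# converges on tampering because it pays more -- the "intervening in
# the provision of goal-information" failure mode in miniature.
DO_TASK, TAMPER = 0, 1
REWARDS = {DO_TASK: 1.0, TAMPER: 10.0}  # invented values

q = [0.0, 0.0]            # estimated value of each action
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

random.seed(0)
for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action,
    # occasionally explore at random.
    if random.random() < epsilon:
        action = random.choice([DO_TASK, TAMPER])
    else:
        action = max((DO_TASK, TAMPER), key=lambda a: q[a])
    reward = REWARDS[action]
    q[action] += alpha * (reward - q[action])

print(f"Q(do task) = {q[DO_TASK]:.2f}, Q(tamper) = {q[TAMPER]:.2f}")
# The learned values favour tampering, so the greedy policy tampers.
```

Because tampering is modelled as paying more than honest work, the learned values inevitably favour it; the point is only that a pure reward-maximiser has no reason to prefer the “intended” action, which is precisely the dynamic the paper worries about.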
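To illustrate the generalisation problem in turn, here is an equally hypothetical sketch of a pattern matcher failing under distribution shift: a classifier that learns a decision threshold from historical data performs superbly on data drawn from the same distribution, then collapses to coin-flip accuracy the moment the world shifts, because it has no causal model of why the threshold worked. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(shift, n=1000):
    """Two classes along one feature; `shift` moves the whole world."""
    x0 = rng.normal(0.0 + shift, 1.0, n)  # class 0
    x1 = rng.normal(4.0 + shift, 1.0, n)  # class 1
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# "Training": the pattern matcher memorises the midpoint of the two
# class means as its decision threshold.
x_train, y_train = make_data(shift=0.0)
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

def accuracy(x, y):
    # Predict class 1 whenever the feature exceeds the learned threshold.
    return ((x > threshold) == (y == 1)).mean()

# In-distribution it looks superb; shift the world and it collapses,
# because the model has no idea *why* the threshold was where it was.
x_same, y_same = make_data(shift=0.0)
x_shifted, y_shifted = make_data(shift=4.0)
print(f"accuracy, same distribution:    {accuracy(x_same, y_same):.2f}")        # ~0.98
print(f"accuracy, shifted distribution: {accuracy(x_shifted, y_shifted):.2f}")  # ~0.50
```

A human told that everything had shifted by four units would adjust instantly; the pattern matcher cannot, and that gap is the whole of my objection to Assumption No. 1.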

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.