Artificial Intelligence – Regulatory debate pt. II.

An impossible problem.

  • China is pressing ahead with AI regulation that is so problematic it could end up destroying development in this key technology, highlighting the conflict between the CCP’s need to control public discourse and its desire to excel at AI.
  • The proposed European regulation has exactly the same problems, but the greater freedom that exists in Europe means that the issues will focus more on accuracy than on content.
  • A few common threads are emerging between the proposals being made by the Chinese and European regulators.
    • First, veracity and accuracy, which will be almost impossible to achieve.
    • This is because almost all AI systems use neural networks, which are by their nature black boxes (see the first sketch at the end of this post).
    • This means that the user can see the data going in and the answer coming out but has very little idea how the system arrived at the answer that it did.
    • Consequently, if the system’s creators can’t tell how it is performing its function, they will be unable to guarantee either its veracity or its accuracy.
    • Under a rigid regulatory scheme like China’s, the penalties for failing could be so extreme that it quickly becomes untenable to develop generative AI in China, thereby killing what could have been a leadership position.
    • China is proposing that all participants obtain a license before they can make their systems available, but it appears that the criteria will be based more on content than anything else.
    • In practice, I suspect that this means that as long as the chatbots do not say things that subvert state power or socialist values, they will be allowed.
    • Second, hallucination, which refers to chatbots’ well-documented habit of making things up in response to requests.
    • I suspect that this will be more of a focus in Europe where there is greater freedom of speech and where factual accuracy will be crucial in any commercial use case.
    • Instead of saying “I don’t know”, generative AI has a tendency to make up a plausible answer, because these systems have no real understanding of the tasks that they are being asked to perform.
    • Furthermore, because these systems are black boxes, it is very difficult to instruct the machine to say “I don’t know” because the machine can’t tell the difference between knowing the correct answer and just making one up.
    • The test systems available today, such as Bard, Bing and ChatGPT, get around this problem by refusing to answer questions on topics where they are known to be particularly unreliable, but this is not a viable solution.
    • This is because limiting their function in this way greatly diminishes their usefulness, performance and commerciality.
    • Third, objectivity, which will also be very difficult to achieve and is already a well-known problem in AI.
    • Regulators are demanding models that are completely objective, which will be extremely difficult to fulfil.
    • This is because all models take on the biases of their trainers as part of being taught which answers are good and should be uprated and which should not.
    • Furthermore, different people have different views of what is objective as a result of their own biases.
    • Consequently, the only way to guarantee objectivity is to prevent these systems from offering any answers on topics where there may be different opinions of what is objective and what is not.
    • This again would greatly limit the performance of generative AI and thereby its commercial viability.
  • The net result is that, unlike with many other technologies, it is going to be extremely difficult to achieve the aims of regulation without greatly undermining the technology’s performance, usefulness and commerciality.
  • Furthermore, because it will be so difficult, compliance will also be more expensive than usual to achieve, which will greatly hurt competition in the industry.
  • This is because only the big companies will be able to afford to comply, meaning that small start-ups will be precluded from competing in this market.
  • This is why I continue to think that regulation should target the humans and not the machines (see here), as the real risk is from malevolent humans ordering machines to do bad things, not the other way around.
  • Hence, I suspect that the best regulatory environment will be a low-touch system that is cheap and simple to comply with and targets restricting access of bad actors rather than the technology itself.
  • I continue to think that the machines are as dumb as ever, but their size and complexity have greatly enhanced their linguistic skills, even though they are simply calculating the probability of words occurring next to each other (see the second sketch at the end of this post).
  • This creates a convincing illusion of sentience that leads people to anthropomorphise these systems, and it is this, in my view, that makes them so easy for bad actors to misuse.
  • Hence humans remain in far more danger from other humans than they are from the machines, and it is this that I think any regulation should target.
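
To make the black-box point concrete, below is a minimal sketch (my own illustration in Python, not code from any real system) of a tiny feed-forward network. The data going in and the answer coming out are plainly visible, but the answer is produced by thousands of multiply-and-add operations over weights that have no individual meaning, which is why even a system’s creators struggle to explain any particular output; commercial models do exactly the same thing with hundreds of billions of weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "model": two dense layers with 1,795 parameters in total.
# (Commercial LLMs work on the same principle with hundreds of billions of weights.)
W1 = rng.normal(size=(10, 128))   # layer 1 weights
b1 = rng.normal(size=128)
W2 = rng.normal(size=(128, 3))    # layer 2 weights
b2 = rng.normal(size=3)

def predict(x):
    """Visible data in, visible answer out -- everything in between is opaque."""
    h = np.tanh(x @ W1 + b1)                  # hidden activations: no human-readable meaning
    scores = h @ W2 + b2
    probs = np.exp(scores - scores.max())     # softmax over 3 possible answers
    return probs / probs.sum()

x = rng.normal(size=10)                       # the data going in (visible)
print("input:", np.round(x, 2))
print("answer probabilities:", np.round(predict(x), 3))   # the answer coming out (visible)

# The only "explanation" of that answer is the weights themselves -- inspecting
# them tells you nothing about why one answer was preferred over another.
print("a few of the 1,795 weights:", np.round(W1[0, :5], 3))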
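
And to illustrate the “probability of words occurring next to each other” point, and why “I don’t know” does not come naturally, here is a second, deliberately crude sketch: a word-pair (bigram) model built from a handful of made-up sentences. Real chatbots use vastly larger models and longer contexts, but the principle is the same: the model always produces a ranked list of plausible next words, and nowhere in that calculation is there a signal that separates a known fact from a fluent guess.

```python
import random
from collections import defaultdict

# A handful of made-up training sentences (real systems ingest trillions of words).
corpus = [
    "the company reported strong results",
    "the company reported weak results",
    "the regulator announced new rules",
    "the company announced new products",
]

# Count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def next_word(word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    candidates = follows[word]
    if not candidates:          # a word never seen at this position in training
        return None             # nothing in the probabilities says "admit ignorance"
    choices, counts = zip(*candidates.items())
    return random.choices(choices, weights=counts)[0]

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

random.seed(1)
print(generate("the"))   # fluent and plausible -- whether or not it is true
```

The output reads like a perfectly sensible sentence whether or not anything of the sort ever happened, which is the essence of hallucination: the system is optimised to continue plausibly, not to report what it knows.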

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.