Artificial Intelligence – Regulatory debate pt. III.

Regulation will be better if targeted at people, not machines.

  • Governments and regulators are moving to impose regulations on Artificial Intelligence, but a lack of understanding on their part may mean that regulation ends up destroying most of the value that large language models bring.
  • Furthermore, their requirements for certainty could easily lead to certain functionalities simply being lobotomised, resulting in services that fall far short of their potential.
  • A few common threads are emerging across the proposals being made by different regulators, most of which will be very difficult to satisfy without meaningfully affecting functionality.
    • First, veracity and accuracy, which will be almost impossible to guarantee.
    • This is because almost all AI systems use neural network architectures, which are by their nature black boxes (see the first sketch after this list).
    • This means that the user can see the data going in and the answer coming out but has very little idea how the system arrived at that answer.
    • Consequently, if the system’s creators cannot tell how it is performing its function, they will be unable to guarantee either its veracity or its accuracy.
    • Under a rigid regulatory scheme like China’s, the penalties for failing could be so extreme that it quickly becomes untenable to develop generative AI in China thereby killing what could have been a leadership position.
    • Several companies have obtained licences to release their models to users, and by all accounts the results so far are not very exciting.
    • In practice, I suspect that this is because the suppliers of those models need to be so risk-averse that they have had to shut down capabilities that could produce interesting or useful results.
    • Second, hallucination, which refers to chatbots’ well-documented habit of making things up in response to requests.
    • I suspect that this will be more of a focus in the West where there is greater freedom of speech and where factual accuracy will be crucial in any commercial use case.
    • Instead of saying “I don’t know”, generative AI has a tendency to make up a plausible answer, a habit caused by the fact that these systems have no real understanding of the tasks that they are being asked to perform.
    • Furthermore, because these systems are black boxes, it is very difficult to instruct the machine to say “I don’t know”, as the machine cannot tell the difference between knowing the correct answer and just making one up (see the second sketch after this list).
    • The test systems available today, such as Bard, Bing and ChatGPT, get around this problem by refusing to answer any questions on topics where they are known to be particularly unreliable, but this is not a viable solution.
    • This is because shutting down whole areas of functionality greatly diminishes the systems’ usefulness, performance and commerciality.
    • Third, objectivity, which will also be very difficult to achieve and is already a well-known problem in AI.
    • Regulators are demanding models that are completely objective, a requirement that will be extremely difficult to fulfil.
    • This is because all models take on the biases of their trainers as part of being taught which answers are good and should be uprated and which should be downrated (see the third sketch after this list).
    • Furthermore, different people have different views of what is objective as a result of their own biases.
    • Consequently, the only way to guarantee objectivity is to prevent these systems from offering any answers on topics where opinions differ as to what is objective and what is not.
    • This again would greatly limit the performance of generative AI and thereby its commercial viability.
  • The net result is that, unlike with many other technologies, it is going to be extremely difficult to achieve the aims of regulation without greatly undermining the technology’s performance, usefulness and commerciality.
  • Regulation is also likely to have a big impact on competition, which is why I suspect the likes of Meta, Microsoft and Google are relatively open to having rules imposed upon them.
  • Regulation will make life harder for smaller companies, thereby reducing the pressure on the larger companies, which have greater resources with which to ensure compliance.
  • Hence, I suspect that the best regulatory environment will be a low-touch system that is cheap and simple to comply with and that targets restricting the access of bad actors rather than the technology itself.
  • I continue to think that the machines are as dumb as ever, but their size and complexity have greatly enhanced their linguistic skills, even though they are simply calculating the probability of words occurring next to each other (see the fourth sketch after this list).
  • This creates a convincing illusion of sentience, which leads people to anthropomorphise these systems and which, in turn, is what I think makes them such potent tools for bad actors.
  • Hence humans remain in far more danger from other humans than they are from the machines, and it is this that I think any regulation should target.
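
Below is a minimal sketch of the black-box point above, assuming nothing beyond a toy, untrained feedforward network; the random weight matrices are invented stand-ins for weights learned during training. The input and the output are fully visible, but the only “explanation” of the answer is grids of uninterpretable numbers.

```python
# A toy feedforward network illustrating the black-box problem. The
# weights are random stand-ins for trained ones (hypothetical values):
# the system's behaviour lives entirely in these numeric matrices,
# which carry no human-readable account of how an answer was produced.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer  -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer

def forward(x):
    """Visible data in, visible answer out; the middle is opaque."""
    hidden = np.tanh(x @ W1)    # eight numbers with no labelled meaning
    return np.tanh(hidden @ W2)

x = np.array([0.2, -0.7, 1.3, 0.5])   # the data going in (visible)
print("answer:", forward(x))          # the answer coming out (visible)
print("'why':", W1)                   # ...just a grid of numbers
```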
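
Second, a minimal sketch of why “I don’t know” is hard to engineer in. A language model’s final step is a softmax that always produces a probability distribution over candidate tokens, whether or not the answer is well grounded; the candidate words and logit values below are invented for illustration.

```python
# Softmax turns raw scores (logits) into a probability distribution.
# Crucially, it does so for every prompt: a well-grounded answer and a
# fabricated one look structurally identical at the output, so there is
# no built-in "knowledge" signal to threshold on. Logits are made up.
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

candidates = ["Paris", "London", "Berlin"]
grounded   = softmax(np.array([6.0, 1.0, 0.5]))  # question it was trained on
ungrounded = softmax(np.array([5.5, 1.2, 0.8]))  # question it cannot know

for name, dist in [("grounded", grounded), ("ungrounded", ungrounded)]:
    best = int(np.argmax(dist))
    print(f"{name:10s} -> {candidates[best]} (p={dist[best]:.2f})")
# Both answers come out looking equally confident, which is why vendors
# resort to blocking whole topics rather than teaching models to abstain.
```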
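
Third, a minimal sketch of how trainer bias creeps in. The “reward” here is just a tally of thumbs-up and thumbs-down ratings, an invented stand-in for the human-feedback step used to teach models which answers to uprate; the raters and answers are hypothetical.

```python
# Two raters with opposite preferences "teach" the same system. The
# resulting scores (and hence the preferred answer) depend entirely on
# who did the rating -- the system inherits its trainers' biases.
answers = ["answer A", "answer B"]

rater_1 = {"answer A": +1, "answer B": -1}   # uprates A, downrates B
rater_2 = {"answer A": -1, "answer B": +1}   # uprates B, downrates A

def tune(raters):
    """Score each answer by the feedback it received."""
    return {a: sum(r[a] for r in raters) for a in answers}

print("taught by rater 1:", tune([rater_1]))  # prefers answer A
print("taught by rater 2:", tune([rater_2]))  # prefers answer B
```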
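
Finally, a minimal sketch of “calculating the probability of words occurring next to each other”: a bigram model built by counting word pairs in a toy, invented corpus. Real large language models use neural networks over far longer contexts, but the end product is still a probability distribution over the next word, with no understanding attached.

```python
# Count how often each word follows each other word, then turn the
# counts into next-word probabilities -- pure statistics, no meaning.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the dog sat on the rug .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))  # {'on': 1.0}
```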

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.