Artificial Intelligence – Regulatory debate

Regulation should target humans, not machines.

  • The recent Congressional hearings greatly increase the prospect of some form of regulation of the AI industry, but this will have to be very carefully crafted to ensure that unintended consequences are minimised and that the USA does not hamstring itself in the technological arms race with China.
  • OpenAI CEO Sam Altman, NYU Professor Gary Marcus and Christina Montgomery from IBM jointly testified before Congress yesterday in what was generally a constructive and non-combative session.
  • The first topic under discussion was regulation and here, OpenAI and Microsoft are clearly open to working with the US Government and even seem to welcome the prospect of some form of regulation.
  • What seems to be on the cards is a regulatory agency that issues licenses to operators over a certain level of scale, but even this is fraught with problems.
  • I continue to think that the real risk to humans is not from malevolent machines but from malevolent humans ordering the machines to do bad things and so some form of licensing may help.
  • At the same time, Mr Altman and Prof Marcus called out the risk of a technocracy where AI is concentrated in the hands of a few large players who would then have unimaginable power to control and shape society.
  • This is one of the biggest dangers I see resulting from regulation, because a regulatory environment increases the cost of doing business and creates an (often large) bias towards the larger companies, as the smaller players cannot afford to comply.
  • This would see smaller players forced out of the market and consolidation towards a few larger players, which is exactly one of the outcomes regulation seems to be seeking to avoid.
  • Limiting the development of AI is also a non-starter for two main reasons:
    • First, the genie is already out of the bottle. Large language models and the technologies and know-how of how to create them are already widely available in the open-source community.
    • So large is this community that there is speculation that the performance of open-source models may soon rival that of the large companies.
    • Placing restrictions on development will only serve to drive development underground (bad scenario) or drive it overseas (even worse scenario).
    • Consequently, this technology is going to be developed regardless of the regulatory environment and so the scheme that embraces it is far more likely to succeed than one that slows or holds it back.
    • Second, technology rivalry: the USA (and increasingly the West) is locked in an ideological struggle with China.
    • This battle is currently being fought in the technology sector and semiconductors in particular, but it is now also starting to move into AI.
    • Unlike semiconductors, the USA and the West have a much weaker ability to restrict China’s development in this space, as limiting access to the semiconductors used for AI training will only slightly slow China’s development.
    • Hence, if the USA intentionally hobbles its own development, then this will hand an advantage to China, which is the one thing that all parties in the US Government agree is a bad idea.
  • Hence, I suspect that the best regulatory environment will be a low-touch system that is cheap and simple to comply with and targets restricting access of bad actors rather than the technology itself.
  • Other areas discussed included the management of copyrights for content owners whose content is used for training and then becomes the genesis of a novel creation.
  • This is not a new issue, as a similar problem exists with DJs who sample music or extracts of content to create new tracks, and so I suspect that it will be solved over time.
  • Employment was also discussed, with both Mr Altman and Prof Marcus of the opinion that the job market is in no immediate danger although there is likely to be some change.
  • This is broadly in line with my own view which is discussed in more detail here.
  • This is the first time I have seen an industry asking to be regulated, which gives a much better chance of producing regulation that is productive rather than riddled with the unintended consequences that so regularly occur when rules are unilaterally imposed.
  • I continue to think that the machines are as dumb as ever, but their size and complexity have greatly enhanced their linguistic skills even if they are simply calculating the probability of words occurring next to each other (a toy sketch of this next-word probability idea follows this list).
  • This creates a convincing illusion of sentience which leads people to anthropomorphise these systems which in turn is what I think makes them much more capable of being used by bad actors.
  • Hence humans are in far more danger from other humans than they are from the machines, and it is this that I think any regulation should target.
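To make the "probability of words occurring next to each other" point concrete, here is a minimal, purely illustrative Python sketch of next-word prediction using bigram counts over a tiny made-up corpus. The corpus and function names are assumptions for the example only; real large language models use neural networks over tokens rather than raw word counts, but in both cases the output is a probability distribution over what comes next.

```python
# Toy illustration (not a real LLM): estimate which word is likely to
# follow another by counting adjacent word pairs in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "the machine predicts the next word "
    "the machine predicts the probability of the next word"
).split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next word | word) estimated from the bigram counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

if __name__ == "__main__":
    print(next_word_distribution("the"))
    # e.g. {'machine': 0.4, 'next': 0.4, 'probability': 0.2}
```

The point of the sketch is simply that the machine is not reasoning about meaning; it is ranking candidate next words by estimated probability, which at sufficient scale produces the convincing illusion of sentience described above.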

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.