Artificial Intelligence – Reality baseline

2 good use cases for LLMs

  • Bloomberg’s implementation of a chatbot trained on its own data, together with my own assessment of Bard, ChatGPT, Bing and so on, leads me to conclude that there are effectively two good use cases for large language models (LLMs) that could be very valuable within a reasonable time frame.
  • Every man and his dog is racing to develop a chatbot based on LLMs, regardless of whether or not this is a good idea, and absolutely nobody is paying attention to how much these services will cost to train and run.
  • The rational approach is to ask what benefit a chatbot might offer to one’s business (and there are some) and then weigh that benefit against the cost of training the algorithm and running the service that the algorithm enables.
  • Instead, what we are observing is a headlong rush into artificial intelligence services enabled by LLMs, with everyone desperate to be seen addressing the opportunity without stopping to ask what the opportunity really is or whether it makes any sense for them to address it.
  • I have spent quite a lot of time with these chatbots (they are all quite different), and I conclude that there are two really good use cases that offer real utility today.
    • First, data catalogue: ChatGPT, Bing and Bard are all really good at ingesting massive amounts of data and then retrieving it in an easy-to-use and accurate way.
    • Their tendency to hallucinate and just make stuff up is an annoying bug, but it can mostly be worked around.
    • For example, I have used these chatbots to find material that I know is on the internet somewhere but would take me hours to locate on my own.
    • Using these chatbots saves a lot of time, and when they are asked to provide their sources, they duly produce them, meaning that fact-checking is usually possible and hallucinations can be eliminated in many cases (a sketch of this retrieval-with-sources pattern follows after this list).
    • On this front, I find Bard to be better than both ChatGPT and Bing, both of which are frozen in time and which I find to have a higher incidence of hallucination or data fabrication.
    • The Bloomberg use case is particularly relevant here, as the 50bn-parameter model has been trained on all of the financial data that the company has gathered over the years and has the potential to greatly improve on the awful search function that the company currently provides.
    • This offers companies an excellent way to make their corporate data much more accessible within their business, but it does mean that they will have to train and run their own models in-house.
    • Second, vehicle man-machine interface. RFM research has long argued that digital experiences in the vehicle are not good enough, especially when the vehicle occupant has to pilot the vehicle (see here and here).
    • RFM’s research from 2017 also looked at the state of voice and found it badly lacking when it came to providing a decent user experience (see here).
    • Looking back at the requirements that RFM research laid out at the time and comparing them against the latest generation of chatbots reveals that the new bots represent a substantial step forward.
    • This means that there is now potential for voice to offer a good user experience in the vehicle, assuming that chatbots can perform as well with voice as they do with text and that these models can be economically deployed within the vehicle.
    • The key here is economics: in electric vehicles, power is everything because it determines the range of the vehicle, which is still a key differentiator.
    • A massive LLM is likely to consume a lot of power, so this needs to be carefully considered and properly implemented in the vehicle to make any sense.
    • I think that the LLM would need to be trained specifically for the vehicle use case and deployed in the vehicle itself, so that it will still work when network coverage is intermittent (a sketch of this fallback pattern also follows the list below).
  • These two use cases offer the possibility of delivering a return on investment, and so I find them to be realistic applications of this new technology.
  • However, in the current frenzy everyone seems to think that these models can be used for almost anything, which I am certain is not the case.
  • It is this belief that is driving the unreality bubble currently plaguing the sector, and it will lead to a period of disillusionment and despair once the limitations of these models begin to be properly taken into consideration.
  • Hence, data catalogue and vehicle MMI are where I would be putting my investments, as this is where I think we will see the real returns within a reasonable period of time.
  • I would leave the rest to the fast money, which will tire of this theme as soon as the practical realities emerge into the general public consciousness or a new fad comes along to chase.
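
To make the data-catalogue idea concrete, below is a minimal, purely illustrative Python sketch of the retrieval-with-sources pattern described above: answers are drawn from an indexed corpus, and every answer comes back with the document it was taken from so that a human can fact-check it. The corpus, the document names and the keyword-overlap scoring are all hypothetical stand-ins; a real deployment would pair retrieval like this with an in-house model (as in the Bloomberg example), not simple word matching.

```python
# Illustrative only: a toy "data catalogue" lookup that always returns its source,
# so every answer can be fact-checked. The documents below are hypothetical.

from dataclasses import dataclass

@dataclass
class Document:
    source: str   # where the passage came from, kept so answers can be verified
    text: str

# Hypothetical corporate corpus standing in for years of accumulated data.
CORPUS = [
    Document("filings/2022-annual-report.txt",
             "Group revenue grew 4% year on year driven by services."),
    Document("research/2017-voice-assistants.txt",
             "Voice assistants fail to complete multi-step requests reliably."),
    Document("notes/competitor-pricing.txt",
             "Competitor pricing was cut by 10% in the March quarter."),
]

def retrieve(query: str, corpus: list[Document]) -> tuple[Document, float]:
    """Return the best-matching document and its keyword-overlap score."""
    query_terms = set(query.lower().split())

    def score(doc: Document) -> float:
        doc_terms = set(doc.text.lower().split())
        return len(query_terms & doc_terms) / max(len(query_terms), 1)

    best = max(corpus, key=score)
    return best, score(best)

if __name__ == "__main__":
    doc, confidence = retrieve("how did revenue grow year on year?", CORPUS)
    # The source is returned alongside the answer so it can be fact-checked,
    # which is the property that makes the data-catalogue use case workable.
    print(f"Answer basis: {doc.text}")
    print(f"Source: {doc.source} (match score {confidence:.2f})")
```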
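
On the vehicle side, the point about intermittent network coverage can be illustrated with a similarly hypothetical sketch of a cloud-first, on-device-fallback pattern. The function names and stubbed models are assumptions rather than any real API; the point is simply that a smaller, power-efficient model deployed in the vehicle answers whenever the network is unavailable, which is why the model has to live, and be power-budgeted, inside the car.

```python
# Illustrative only: cloud-first voice assistant with an on-device fallback,
# so the vehicle MMI keeps working when network coverage drops out.
# All model calls are stubs; no real service or API is referenced.

def cloud_model(prompt: str) -> str:
    """Stand-in for a large hosted LLM; assume it fails when offline."""
    raise ConnectionError("no network coverage")  # simulate a dead zone

def onboard_model(prompt: str) -> str:
    """Stand-in for a small, power-efficient model running in the vehicle."""
    return f"[on-device answer to: {prompt!r}]"

def answer(prompt: str) -> str:
    """Prefer the cloud model, but fall back to the in-vehicle model on failure."""
    try:
        return cloud_model(prompt)
    except (ConnectionError, TimeoutError):
        # Intermittent coverage is expected in a car, so the fallback path
        # is a normal path, not an error path.
        return onboard_model(prompt)

if __name__ == "__main__":
    print(answer("navigate to the nearest charging station"))
```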

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.