Artificial Intelligence – Neurosymbolic Debate

Regular software has a role to play in advanced AI.

  • The leak of Claude Code's source demonstrates that getting commercial-grade performance from an AI requires rules-based software, which supports the argument that the best results come from combining the two rather than simply making the models bigger and bigger.
  • The latest version of Claude Code is a big deal because it offers a step-wise improvement in the quality of the code it produces, which the scaling-hype crowd reads as a sign that super-intelligent machines based on LLMs are closer.
  • However, when Anthropic shipped Claude Code 2.1.88, it accidentally included a JavaScript file containing what is known as a source map, which in turn allowed people to download and reconstruct the entire source code for Claude Code.
  • There is no hint of a hack or security breach, but simply an honest mistake by Anthropic, which led to the leak of source code.
  • Anthropic’s loss is everyone else’s gain, as competitors will be able to see exactly how Claude Code works and make copies.  
  • Most importantly of all was the discovery by LLM-realist Gary Marcus that Claude Code is a hybrid of software and neural networks (see here), which is precisely what both he and RFM Research (see here) have long argued is how the performance of AI can be greatly enhanced.
  • The situation is simple in that LLMs are great at learning but terrible at reasoning, while rules-based software is terrible at learning but great at reasoning.
  • Hence, it makes sense to put the two together as each is in a position to mitigate the weaknesses of the other, but with all the fuss and hype around LLMs, this branch of AI has become fairly neglected.
  • Deep inside the source code is a 3,167-line kernel called print.ts, made up mostly of IF-THEN conditional code with 486 branch points, which is as regular as software gets.
  • This strongly implies that when Anthropic could not get the model to an acceptable level of performance on its own, it had to rely on software to keep the model from doing silly things.
  • This is not the first time this has happened, as AlphaFold, AlphaGeometry and AlphaProof from Google DeepMind also contain software that greatly improves the model’s performance.
  • This adds great weight to RFM’s long-held view that a structured merger of models and software, rather than a massive, amorphous model, is the way to get the best results.
  • This merger of the two is known as neurosymbolic AI, and there is little doubt that Gary Marcus is its leading proponent, having taken up the cause more than 25 years ago.
  • RFM Research came across this branch of AI 6 years ago and quickly became convinced that it is the future of AI, as the empirical evidence suggests that one gets better results, better transparency into why the model does what it does, and somewhat lower inference costs.
  • These findings were published in 2020 and can be found here.
  • Furthermore, I have long believed that the direction of the autonomous driving industry towards large end-to-end models is not the right way to go, and this source code leak supports that.
  • Consequently, I continue to think that the best AI agents, robots, and autonomous vehicles will be created using neurosymbolic AI.
  • The problem is that this approach argues against the scaling mantra that has kept the investment cycle running at a crazy pace, and so those invested in superintelligent machines being created by massive models are incentivised to play down this approach.
  • However, it looks to me like it is finally starting to make some headway, and as such it brings better AI, robotics and really useful AI agents one step closer.
  • If acceptance of neurosymbolic AI continues to grow, it will make me more optimistic, not less, as in my view (and Gary’s), the industry will finally be on the right path.
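To make the leak mechanism above concrete: a source map is just a JSON file that a bundler emits alongside minified JavaScript, and when it is built with `sourcesContent` embedded (a common default), it carries the full original source text verbatim. The sketch below shows why shipping one is equivalent to shipping the source; the file names and contents are illustrative, not Anthropic's actual build.

```typescript
// A source map (v3) with sourcesContent embedded contains the original
// source files verbatim, so recovering them is trivial JSON parsing.
interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // the original source text itself, if embedded
  mappings: string;           // VLQ-encoded position data (not needed here)
}

function recoverSources(mapJson: string): Map<string, string> {
  const map: SourceMap = JSON.parse(mapJson);
  const recovered = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content != null) recovered.set(map.sources[i], content);
  });
  return recovered;
}

// Hypothetical example of an accidentally shipped .js.map file.
const leaked = JSON.stringify({
  version: 3,
  sources: ["src/print.ts"],
  sourcesContent: ["export function print(x: string) { /* ... */ }"],
  mappings: "AAAA",
});

const files = recoverSources(leaked);
console.log(files.get("src/print.ts"));
```

This is why the incident required no hack: anyone who downloaded the published bundle could read the original TypeScript straight out of the map file.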
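The division of labour described above (the model proposes, rules-based code validates) can be sketched in a few lines. This is a hypothetical illustration of the neurosymbolic pattern, not code from Claude Code; all names here are invented, and the "model" is a stand-in for what would really be an LLM call.

```typescript
// Neurosymbolic sketch: a neural model proposes an action, and a
// symbolic layer of plain IF-THEN rules validates it before execution.
type ToolCall = { tool: string; args: Record<string, string> };

const ALLOWED_TOOLS = new Set(["read_file", "list_dir"]);

// Rules-based layer: ordinary conditional logic, fully inspectable,
// which is what makes the system's behaviour transparent and reliable.
function validate(call: ToolCall): { ok: boolean; reason?: string } {
  if (!ALLOWED_TOOLS.has(call.tool)) return { ok: false, reason: "unknown tool" };
  if (call.tool === "read_file" && !call.args.path) return { ok: false, reason: "missing path" };
  if (call.args.path?.includes("..")) return { ok: false, reason: "path traversal" };
  return { ok: true };
}

// Stand-in for the neural side; in a real system this would be an LLM
// call, which can learn broadly but cannot guarantee safe output.
function modelPropose(_prompt: string): ToolCall {
  return { tool: "read_file", args: { path: "../etc/passwd" } };
}

const proposal = modelPropose("show me the config");
const verdict = validate(proposal);
console.log(verdict);
```

The point of the pattern is exactly the complementarity the article describes: the learned component supplies flexibility, while the symbolic component guarantees that certain silly or unsafe things simply cannot happen.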

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.

Blog Comments

Hi,

While I’m mostly in line with your AI vision, I do think that, unfortunately, the article from Gary Marcus does not represent the reality of what Claude Code is, and it is far less smart than what the article describes. The truth is unfortunately not super rosy; check this more complete (and factual) analysis: https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude
