Artificial Intelligence – Zero to hero.

DeepMind AlphaZero is smarter with 1000x less effort.

  • DeepMind has taken another step forward in the quest for machine intelligence with the demonstration of the rapid training of a single algorithm to play Chess, Go and Shogi.
  • While this is without doubt another step forward, I do not consider that the second major challenge in AI is close to being solved.
  • RFM has identified three main challenges that need to be overcome for AI to really come of age (see here).
  • These problems are:
    • First: the ability to train AIs using much less data than today,
    • Second: the creation of an AI that can take what it has learned from one task and apply it to another and
    • Third: the creation of AI that can build its own models rather than relying on humans to do it.
  • DeepMind’s previous publication took a shot at problem one (see here) and while it represented an advance, I did not consider it to have really solved the problem.
  • Its current publication (see here) takes a shot at problem two, but again has made an advance, but in my opinion, has not really cracked the problem.
  • DeepMind describes a new algorithm called AlphaZero, which is a generic version of AlphaGo Zero, its Go algorithm (see here).
  • It uses a deep neural network instead of the specific policy and value neural networks that were designed to play Go in AlphaGo Zero.
  • AlphaZero is then given the rules of Chess, Shogi (Japanese version of Chess) and Go and asked to play itself and to use reinforcement learning to improve.
  • In each case, AlphaZero was quickly able to reach a level of play that allowed it to beat the best algorithm available for that game, including the original AlphaGo Zero.
  • It is also highly relevant that AlphaZero did far less “thinking” than its opponents.
  • Each machine was given 1 minute of thinking time, during which AlphaZero searched 80,000 positions per second in Chess and 40,000 per second in Shogi, while Stockfish (Chess) searched 70 million positions per second and Elmo (Shogi) 35 million per second.
  • In effect, AlphaZero expended roughly 1,000x fewer resources to arrive at a better solution than its opponents thanks to its deep neural network telling it where to search (the arithmetic is worked through after this list).
  • The ramifications for this are substantial as it implies that once trained, algorithms could be easily and efficiently executed on mobile devices where resources remain extremely constrained.
  • However, it is critical to recognise that for each game, DeepMind trained a different instance of AlphaZero.
  • DeepMind started with three instances of AlphaZero which were all identical other than that each had the rules for a different game.
  • However, through playing themselves and reinforcement learning they all diverged from one another as they gained expertise in the specific game they had been asked to play.
  • The end result is that despite a common starting point, the three algorithms became very different by the time they were capable of playing these games at a very high level.
  • Consequently, to me this does not represent the solution to problem two because one cannot take the Chess version of AlphaZero and have it win at Shogi.
  • However, what it does do is represent a major step forward in the training of algorithms as the AlphaZeros all trained themselves and they all came from a common starting point.
  • This should make training of algorithms in the future easier, quicker and cheaper than they are today which is why this is yet another very significant advance that has been made by DeepMind.
  • Seeing that DeepMind is owned by Google, it is Google Ecosystem devices and services that are likely to benefit from these advances long before anyone else's.
  • This will allow Google to differentiate its services more effectively and make them more appealing to users.
  • We have already seen signs of this: Google is able to do portrait mode with one camera where everyone else requires two.
  • This reconfirms my position that it is Google that leads the world in AI developments for digital ecosystems, with Baidu and Yandex in 2nd and 3rd place respectively.
  • Given Alphabet’s exceptional stock performance this year, Baidu now offers the most interesting and cost-effective entry point for anyone looking to gain exposure to AI.
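
The "~1000x" claim can be pinned down with the figures quoted above. A minimal sketch (Python, using only the numbers cited in this note; the exact ratio will vary with hardware and time controls, and both ratios come out at roughly 875x, i.e. close to three orders of magnitude):

```python
# Rough arithmetic behind the "~1000x" claim, using only the figures quoted above.
# Illustrative only; the exact ratio depends on hardware and time controls.

THINKING_TIME_S = 60  # each engine was given one minute per move

engines = {
    "AlphaZero (chess)": 80_000,        # positions searched per second
    "Stockfish (chess)": 70_000_000,
    "AlphaZero (shogi)": 40_000,
    "Elmo (shogi)":      35_000_000,
}

for name, pps in engines.items():
    print(f"{name:>18}: {pps * THINKING_TIME_S:>13,} positions per move")

# Ratio of positions examined, conventional engine vs AlphaZero:
print("Chess ratio:", 70_000_000 / 80_000)   # 875x
print("Shogi ratio:", 35_000_000 / 40_000)   # 875x
```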

Google – Brain game pt. II.

Google remains out front in AI but Baidu most interesting. 

  • The first results from Google’s AutoML project are beginning to surface and are implying once again (see here) that machines may end up being better coders than humans.
  • AutoML was announced at Google I/O in May 2017 and failed to attract much attention, mainly, I suspect, because most commentators did not grasp the significance of the concept.
  • AutoML is a neural network that is capable of selecting the best candidate from a large group of neural networks that are all being trained for a specific task (a simplified sketch of the idea follows the list below).
  • This is potentially a hugely important development as it marks a step forward in the quest to enable the machines to build their own AI models (challenge no. 3 (see below)).
  • Building models today is still a massively time and processor intensive task which is mostly done manually and is very expensive.
  • If machines can build and train their own models, a whole new range of possibilities is opened up in terms of speed of development as well as the scope of tasks that AI can be asked to perform.
  • RFM has highlighted automated model building as one of the major challenges (see here) of AI and if Google is starting to make progress here, it represents a further distancing of Google from its competitors when it comes to AI.
  • In the months since launch, AutoML has been used to build and manage a computer vision algorithm called NASNet.
  • AutoML has implemented reinforcement learning on NASNet to improve its ability to recognise objects in video streams in real time.
  • When this was tested against industry standards to compare it with other systems, NASNet outperformed every other system available, albeit only marginally ahead of the best of the rest.
  • I think that this is significant because it is another example of how, when humans are absent from the training process, the algorithm demonstrates better performance than those trained by humans.
  • The previous example is AlphaGo Zero (see here).
  • I see this as a step forward in addressing RFM’s three big challenges of AI (see here) but there remains a very long way to go.
  • These problems are:
    • First: the ability to train AIs using much less data than today,
    • Second: the creation of an AI that can take what it has learned from one task and apply it to another and
    • Third: the creation of AI that can build its own models rather than relying on humans to do it.
  • When I look at the progress that has been made over the last year in AI, I think that Google has continued to distance itself from its competition.
  • Facebook has made some improvements around computer vision, but its overall AI remains so weak that it is being forced to hire 10,000 more humans because its machines are not up to the task (see here).
  • Consequently, I continue to see Google out front followed by Baidu and Yandex with Microsoft, Apple and Amazon making up the middle ground.
  • Facebook remains at the back of the pack and its financial performance next year is going to be hit by its inability to harness machine power.
  • For those looking to invest in AI excellence, Baidu is the place to look as its search business and valuation have been hard hit by Chinese regulation but are now starting to recover.
  • Baidu represents one of the cheapest ways to invest in AI available.
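
Google has not published AutoML's internals in the sources referenced here, so the sketch below is only a generic illustration of the idea described above: propose candidate network architectures, train and score each one briefly, and keep the best. The search space, the scoring function and the use of plain random proposals (rather than the reinforcement-learned controller Google describes) are all simplifying assumptions.

```python
import random

# Hypothetical search space: each candidate architecture is just a few choices.
SEARCH_SPACE = {
    "num_layers":  [4, 8, 12],
    "filter_size": [3, 5, 7],
    "width":       [32, 64, 128],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Stand-in for "train the candidate briefly and measure validation accuracy".
    # In a real system this is the expensive step; here it is a made-up score.
    return random.random() - 0.01 * arch["num_layers"]

best_arch, best_score = None, float("-inf")
for step in range(50):                       # the search loop
    arch = sample_architecture()             # propose a candidate
    score = evaluate(arch)                   # train / score it
    if score > best_score:                   # keep the best one found so far
        best_arch, best_score = arch, score

print("Best architecture found:", best_arch, "score:", round(best_score, 3))
```

The real system replaces the random proposals above with a controller network that is itself trained, via reinforcement learning, to propose better candidates over time.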

Digital sensors – Heart of the matter.

Apple is creeping up on the medical devices industry. 

  • Apple has taken wearables a step closer to replacing medical devices, but the user experience is still so limited that the immediate-term outlook for the medical devices industry still looks secure.
  • KardiaBand (AliveCor) is a strap for the Apple Watch that incorporates a sensor capable of producing a full electrocardiogram (ECG).
  • Critically this accessory has been approved by the FDA meaning that it is good enough to be a medical device producing medical data that can be relied on by a doctor.
  • The Apple Watch app that comes with KardiaBand can use the heart rate sensor on the Apple Watch to detect abnormalities and recommend that the user record an ECG.
  • Atrial fibrillation is a leading cause of stroke and it is thought that 66% of strokes could be prevented with early detection.
  • It is the signs of this that the KardiaBand app is looking for via the Apple Watch sensor, which can then be confirmed by recording an ECG (a toy illustration of this kind of check follows the list below).
  • This does not come cheap at $199 for the band and $99 per year for the monitoring service, but if it works as advertised, I think it is a tiny price to pay for avoiding a stroke.
  • However, the use case is not ideal, requiring a large metal plate to be present in the device’s strap, and it does not offer always-on monitoring.
  • This combined with its price means that it will only really appeal to users who are known to be at risk from stroke and it does not enable the replacement of an existing medical device.
  • I see the combination of Apple Watch and KardiaBand as a halfway house as it does not really offer real time monitoring to a medical grade, but it is a step in the right direction.
  • Sensors are becoming the eyes and ears of AI (see here), but almost all sensors are not nearly good enough to produce data that can be used in critical applications.
  • Nowhere is this more true than in eHealth where inaccurate data is useless at best and deadly at worst.
  • This is why there is still a big market for extremely expensive medical monitoring equipment, but I see signs everywhere that this will eventually come to an end.
  • This also explains the problems that the likes of Fitbit, Xiaomi, Garmin and others are having as the data they generate is of such low quality that it can really only be used for recreational fitness.
  • eHealth is where the quest for accurate data begins but I see this quickly spreading to other industry verticals.
  • Accurate sensors are one way to attack this problem but the other is to use better software to clean up and improve less-than-perfect data sources.
  • Google is a good example of this as it can use software to produce better imaging effects in portrait mode with one camera than Apple can with two (see here).
  • Given the substantial rewards that are on offer, I think that investment in improving the quality and accuracy of sensors will only continue to increase in the coming years.
  • This is an area where I would want to be involved.
  • The issue, of course, is to separate the solutions that have real prospects from those that are merely riding the wave of hype and easy investment.

Amazon – Size 12s

Amazon is stomping on Microsoft’s patch.  

  • With the launch of Alexa for Business, Amazon is stomping with its size 12s all over the territory of its supposed new best friend, Microsoft and its digital assistant, Cortana.
  • Alexa for Business is expected to be launched next week at the AWS re:Invent conference and will allow businesses to build their own skills for the digital assistant that can be used in a work context.
  • It will also offer all of the normal functionality such as enquiries and smart-office control, and is expected to feature partners like Concur and WeWork at launch.
  • This has the scope both to generate more skills and applications for the Alexa digital assistant and to build increasing loyalty to AWS.
  • Some of these skills are likely to include integration with Office functionality such as calendar management, meeting room scheduling and so on (a generic sketch of what such a skill handler looks like follows the list below).
  • If this takes off, there is no reason why it should not spread to the desktop and deeper into Microsoft’s core asset, Office.
  • The issue here is that Microsoft already has a digital assistant called Cortana, and with Microsoft’s increasingly dominant position in the enterprise, this would seem to be an obvious opportunity for Cortana.
  • However, Cortana is struggling because it was originally designed to run on Windows Phone, meaning that many of the skills it has been taught are not relevant when the assistant is sitting on the desktop.
  • Furthermore, Amazon and Microsoft recently announced a partnership whereby users will be able to ask Alexa to ask Cortana to do something and vice versa.
  • Given Microsoft’s focus on the enterprise, I have been under the impression that the future for Cortana would be in the enterprise where it can be deeply integrated into Microsoft’s market leading apps.
  • At the same time, I assumed that the partnership would offer Amazon a way to use Alexa on the PC and in the enterprise.
  • However, it seems that Amazon is short-cutting its partner by going for the enterprise completely independently of its partnership with Microsoft.
  • The one area where Microsoft has the more relevant product is AI, where RFM estimates that it is ahead of Amazon.
  • Consequently, I can see an eventual collaboration where Microsoft’s AI is used to drive Alexa’s services in the enterprise.
  • The only problem here is that this could result in crossover between Microsoft and Amazon Web Services, which are fierce competitors in the cloud.
  • Hence, a deepening of this collaboration looks increasingly unlikely as this move puts Amazon against Microsoft in a new area in addition to the cloud.
  • Although Amazon appears to be getting the better of Microsoft, I still cannot stomach the valuation leaving me with a strong preference for Microsoft’s shares.
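
Amazon had not published Alexa for Business developer documentation at the time of writing, so the fragment below is only a generic sketch of what a custom enterprise skill handler might look like: a function that receives an intent request and returns a spoken response in the standard Alexa custom-skill JSON shape. The intent name and the meeting-room logic are hypothetical.

```python
def lambda_handler(event, context):
    """Toy handler for a hypothetical 'BookMeetingRoomIntent'.
    The request/response shapes follow the standard Alexa custom-skill format,
    but the business logic is entirely invented."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
       request.get("intent", {}).get("name") == "BookMeetingRoomIntent":
        speech = "I have booked meeting room four for three o'clock."
    else:
        speech = "Sorry, I can only book meeting rooms in this sketch."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```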

Google vs. Facebook – AI dividend.

Google’s AI already paying dividends

  • Both Google and Facebook have a fake news problem, but Google’s leadership in AI means that it is likely to have a better solution, and one that does not materially impact the company’s financial performance.
  • Over the last 2 years, Google, Facebook, Twitter and so on have become far more important when it comes to delivering current events to users.
  • This is particularly relevant when certain events occur that result in regular citizens present at these events uploading videos and commentary long before the more established media outlets can arrive on the scene.
  • As a result, important information often appears on Google, Facebook and Twitter first, meaning that the accuracy and veracity of this information is of paramount importance.
  • Unfortunately, during these sorts of events, there is often a scarcity of information available making it the easiest time to successfully propagate fake news.
  • This is the problem with which both Facebook and Google are wrestling, but from looking at how both are dealing with it I think there is a huge gap between these two players.
    • Facebook: To combat this problem, Facebook has announced that the total number of employees working on safety and security will be doubled from 10,000 to 20,000.
    • Given that the total number of employees at the end of June 2017 was 20,658, this implies that 50% – 60% of all Facebook employees will be working in non-revenue producing positions.
    • This will mean that costs will meaningfully outstrip revenues leading to a “significant” decline in profitability.
    • These humans are being shipped in because Facebook’s AI is not even close to being good enough to deal with the problem.
    • Furthermore, I think that this is a problem that humans cannot really solve given the velocity that is required.
    • Google: to be fair to Facebook, Google’s data tends to be somewhat more structured than Facebook’s making it easier to analyse but this does not come close to explaining the difference in AI ability.
    • Although Google remains reluctant to discuss the methods it is using to combat this problem, this is something that it has been dealing with for many years and there has been no sudden increase in current or forecasted headcount.
    • There has also been no sudden decline in gross margins (current or forecasted) which would indicate that Google had taken on contractors to help fix the problem.
    • While Google does use fact checking services to ascertain the veracity of some of the content that appears in its searches, I think that almost all of its efforts are going into closing the loopholes in its algorithms that allow fake news to surface.
    • This is why there is no financial impact on Google from this problem compared to Facebook.
  • Furthermore, I think that using humans to combat fake news will end in failure.
  • This is because it takes the human system around 2-3 days to reliably label an article or item as fake, by which time it has trended and already been seen by millions of users (a rough illustration of this timing problem follows the list below).
  • Consequently, I do not think that having tens of thousands of humans scouring Facebook for fake news will actually solve the problem.
  • Hence, I think that this will result in $1bn+ of shareholders’ money being wasted every year that humans are used.
  • This highlights the gravity of the AI problem that Facebook is trying to deal with, and I think it is one that Google is much closer to solving.
  • Hence, I see Google being able to far more effectively manage this problem and at a fraction of the cost.
  • From a shareholder value perspective, perhaps it is time to consider switching from Facebook to Google.
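
A very rough model makes the velocity point concrete. Assume, purely for illustration, that a fake story's hourly views grow quickly, peak within the first day and a half and then fade; the numbers below are invented, but they show why a label applied after 2-3 days arrives once almost all of the views have already been served, whereas a filter acting within hours does not.

```python
# Toy model of a fake story's spread: rapid growth, a peak, then decay.
# All numbers are invented; only the timing argument matters.

hourly_views = [1_000 * 1.15 ** h if h < 36 else
                1_000 * 1.15 ** 36 * 0.85 ** (h - 36)
                for h in range(120)]                  # five days, hour by hour

total = sum(hourly_views)
for hours, label in [(6, "algorithmic filter (~6 hours)"),
                     (60, "human fact-check (~2.5 days)")]:
    already_seen = sum(hourly_views[:hours]) / total
    print(f"{label}: {already_seen:.1%} of eventual views already served")
```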

Facebook Q3 17 – Organic brains.

Humans are not the answer.

  • Facebook reported strong results but gave disappointing guidance as its shortcomings in AI mean that non-productive headcount is going to skyrocket, materially hurting profitability.
  • Q3 17 revenues / EPS were $10.3bn / $1.59 comfortably above forecasts of $9.8bn / $1.28.
  • Once again this was driven by growing engagement on mobile and users spending more time watching videos that have originated on Facebook.
  • Active users have now reached 2.07bn with 1.37bn using the service every day.
  • This sets Facebook up nicely to become by far the biggest ecosystem outside of China, but it still has a long way to go.
  • Currently Facebook is made up of a few discrete services which need to be migrated into an integrated suite of services where users can spend the majority of their digital lives.
  • This transition is underway but still a work in progress, and the commentary for this quarter reveals just how serious the AI problem is.
  • Facebook has a serious fake news problem and it also believes that some countries are using its platform to interfere in the political process of other nations.
  • To combat this, it has announced that the total number of employees working on safety and security will be doubled from 10,000 today to 20,000.
  • Given that the total number of employees at the end of June 2017 was 20,658, this implies that 50% – 60% of all Facebook employees will be working in non-revenue producing positions.
  • This will mean that costs will meaningfully outstrip revenues leading to a “significant” decline in profitability.
  • I strongly believe that there will be no need for a corresponding increase of headcount at Google, Baidu, Yandex, Microsoft, Amazon or Apple to deal with these problems as these companies are much better positioned to create a solution using AI.
  • In contrast, it seems that whenever Facebook attempts to automate anything, it inevitably goes awry, resulting in the need for more humans to fix the problem (see here, here and here).
  • Furthermore, humans are ill-suited to solving these kinds of problems as it takes far too long to find and remove the relevant content, meaning that it will already have trended and been seen long before it is removed.
  • I think that this process has to be automated to be effective and as a result, Facebook’s costs are going to rise and the problem is unlikely to be solved.
  • Consequently, I think that the money would be far better invested in AI rather than in bodies as AI is how the problem will eventually be solved.
  • Until then, the financial performance of the company is likely to suffer, weakening the case for the shares to trade at such a high multiple.
  • Consequently, I think that this is a good time to start thinking about taking some profits and reinvesting the proceeds elsewhere.
  • Long-term there is still upside, but this requires a much deeper Digital Life offering and world class AI both of which are still far from being achieved.
  • There is likely to be a better time and place to build a position in Facebook for the long-term.

Facebook – Lead bullet.

A problem that humans can’t solve.

  • There is a silver bullet to deal with the fake news issue, but the problem is that Facebook is not even close to being able to produce one and is having to rely on old, ineffective bullets instead.
  • This problem has been around for a while but really came to light in the summer of 2016, following a move to automate the selection of trending stories on Facebook.
  • Simply put, Facebook’s AI is incapable of working out which stories are fake and which are true, which led to false stories being highlighted by Facebook as trending.
  • Facebook’s reaction has been to throw humans at the problem, and a Bloomberg investigation has indicated that this is not working well at all.
  • Facebook has outsourced fact checking to PolitiFact, Snopes, ABC News, factcheck.org and the Associated Press for a period of 12 months, but this has been problematic.
  • In order to be flagged as disputed on Facebook, two of the contracted organisations have to mark the story as false, at which point the number of users seeing the story is cut by around 80% (a hypothetical sketch of this workflow follows the list below).
  • This manual process takes about three days to complete and, in many cases, much longer.
  • On Facebook this is effectively useless as many stories will have trended, been seen by millions of users and disappeared again long before the humans can mark the story as false.
  • Consequently, the only way to solve this problem is to have AI that scans stories as they begin trending and can accurately weed out the fake ones.
  • This is where Facebook comes unstuck as RFM research has found that when it comes to AI, Facebook’s position is very weak (see here).
  • This is not because Facebook does not have good employees in this area but merely because it has not been working on it for long enough.
  • I believe that currently, excellence in AI has very little to do with how many big brains one has on the bench and much more to do with how long one has been crunching the data.
  • This is where Facebook really suffers as it has only been working on AI for a couple of years whereas Google, Baidu and Yandex have all been crunching data for over 20 years.
  • To be fair, Facebook has shown some progress on image and video recognition (see here) but on the recognition and elimination of fake news, I have seen none whatsoever.
  • As a result, I think that Facebook’s contention that there is no silver bullet to deal with the fake news problem is incorrect.
  • There is a silver bullet but the real problem that Facebook has is that it has no idea how to make it.
  • Until it figures this out, it looks to me like the fake news problem is here to stay.
  • This weakness in AI is not limited to fake news but shows up everywhere across Facebook’s services making it the biggest challenge that Facebook is facing.
  • This problem has to be solved properly for Facebook to achieve its long-term potential as a fully-fledged ecosystem offering deep and intuitive services to 2bn+ users.
  • It is on this basis that I can make a case for liking Facebook long-term, meaning that this has to be fixed at all costs.
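
The workflow described above is simple to express in code. The sketch below is a hypothetical rendering of the reported mechanics (two independent "false" verdicts flag a story as disputed, which cuts its distribution by roughly 80%); it is not Facebook's actual implementation, and the class, names and numbers are illustrative only.

```python
from dataclasses import dataclass, field

DISPUTED_THRESHOLD = 2       # two fact-checkers must mark the story false
DISTRIBUTION_CUT = 0.80      # reported ~80% reduction in reach once flagged

@dataclass
class Story:
    title: str
    base_reach: float                      # users who would normally see it
    false_verdicts: set = field(default_factory=set)

    def record_verdict(self, checker: str, is_false: bool):
        if is_false:
            self.false_verdicts.add(checker)

    @property
    def disputed(self) -> bool:
        return len(self.false_verdicts) >= DISPUTED_THRESHOLD

    @property
    def effective_reach(self) -> float:
        return self.base_reach * (1 - DISTRIBUTION_CUT if self.disputed else 1)

story = Story("Hypothetical hoax", base_reach=1_000_000)
story.record_verdict("Snopes", True)
print(story.disputed, story.effective_reach)      # False 1000000
story.record_verdict("PolitiFact", True)
print(story.disputed, story.effective_reach)      # True 200000.0
```

The sketch also makes the weakness obvious: nothing here runs until human verdicts arrive, which is exactly the 2-3 day delay the bullets above describe.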

Tokyo Motor Show – Not invented here.

Existential challenge ahead.

  • The star of the Tokyo Motor Show is an electric sports vehicle designed by Toyota, which has at its heart an assistant that I think neither this automaker nor any other has any chance of ever creating itself.
  • The best hope is for automakers to licence or buy an assistant from elsewhere meaning that it is unlikely to provide them with an exclusive, differentiating product.
  • The problem is that digital assistants require a high level of AI in order to function properly, which is a skill that none of the automakers, not even Tesla, possesses.
  • RFM has defined three stages of speech recognition (see here):
    • First: High word accuracy.
    • This has largely been achieved by most speech recognition systems, but it is one thing to know what the user said and quite another to know what he meant.
    • Second: understanding what it is the user is asking for regardless of word order or manner of speech.
    • Third: the ability to understand context and circumstance.
    • I think it is quite clear that only when machine understanding reaches this third stage will voice have any hope of providing a user interface that obviates the use of a screen and is really useful in a vehicle (a toy illustration of the three stages follows this list).
    • The two leaders in this space (Google Assistant and Amazon Alexa) both rely heavily on screens and both have products with screens either in the market or in development.
  • This is particularly relevant in the automotive industry where for the foreseeable future, the driver will have to have both his hands and his eyes occupied elsewhere.
  • Furthermore, it is clear that none of the user interfaces designed by the car makers, Apple or Google is appropriate for use in the vehicle.
  • This is a main reason why I think that users still predominantly use their smartphones for digital services in the vehicle meaning that the best infotainment unit is still the one in the driver’s pocket.
  • In my opinion this represents a very serious risk for the car makers long-term.
  • This is because all of the value-added services that they are hoping to provide are likely to be delivered via embedded systems in the infotainment unit.
  • Consequently, unless the vehicle’s embedded infotainment unit can compete effectively with the smartphone, there is a real risk that all of their digital aspirations will come to naught.
  • In this case, the net result is likely to be the big ecosystems taking over the digital experience in the automobile causing the automakers to become little more than handsets on wheels.
  • This is a bleak outlook because I think that the automakers badly need revenue from digital services to help offset the weakness in their traditional business likely to be caused by the migration to electric vehicles.
  • Failure is not an option for those that wish to survive.
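
A toy example may make the three stages above more concrete. In the snippet below (entirely hypothetical wording and rules), stage one is represented by a correctly transcribed phrase, stage two by mapping that phrase to an intent regardless of phrasing, and stage three by interpreting the same intent differently depending on the vehicle's circumstances.

```python
# Illustrative only: the "stages" are RFM's framing; the rules here are made up.

transcript = "i'm cold"                 # stage 1: the words were heard correctly

def stage_two_intent(text):
    # Stage 2: map the words to a request, regardless of phrasing.
    if "cold" in text:
        return "increase_cabin_temperature"
    return "unknown"

def stage_three_action(intent, context):
    # Stage 3: interpret the request against circumstance.
    if intent == "increase_cabin_temperature":
        if context.get("windows_open"):
            return "close windows first, then raise temperature"
        return "raise temperature by 2 degrees"
    return "ask the driver to clarify"

intent = stage_two_intent(transcript)
print(intent)
print(stage_three_action(intent, {"windows_open": True}))
```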

Google & Amazon – Battle for the home pt. VI

Opportunities to break in are fast disappearing

  • Google seems to be closing in on launching a Google Home-based product with a touchscreen, which indicates that Google’s understanding of the smart home user experience is improving quickly.
  • This is bad news for others like Essential (see here) that are looking to compete in this space as both Google and Amazon are starting to make progress on addressing the areas where they have been weak in the smart home.
  • If Google can now improve its position with the developers of smart home products, it will be in a good position to really take the fight to Amazon which still dominates with over 70% share.
  • Earlier this year I identified two major problems with using voice-based digital assistants in the home.
  • These were:
    • First, voice control: RFM research (see here) has found that voice communication with machines is very far from being good enough to work effectively without a screen for output.
    • The issue is that even the best machines are not yet intelligent enough to provide a useful experience using voice-only and often have to fall back to a screen.
    • In Google Assistant’s and Alexa’s case, this has meant using the screen of a smartphone, which is not an optimal experience, especially as most voice usage occurs when the hands are busy doing something else.
    • At launch Essential Products had taken this into consideration as its small device (Essential Home) has an attractive looking screen on the top.
    • This looks much better than the hideous Amazon Echo Show, which seems to have been designed to be a jack-of-all-trades (master of none).
    • I think that Essential hit the nail on the head, and its integrated screen should fix the single biggest current problem with human-machine voice interaction.
    • However, should Google come up with an attractive take on Google Home but with a screen, I think this will lessen the appeal of Essential Home materially.
    • Second, fragmentation: Despite Amazon Alexa being able to talk to almost everything, the experience has been horribly fragmented.
    • Google Home has been no better and has also suffered from there being fewer compatible devices.
    • The real use case for the smart home is where all elements in the home are aware of each other and can be controlled together.
    • For example, the user should be able to say “I am going to bed”, resulting in the doors being locked, the blinds drawn, the heating turned down and so on (a generic sketch of such a routine follows the list below).
    • Instead each separate device has had to be manually operated and adjusted.
    • With each launching a service called “routines” (see here and here) both Google and Amazon have moved to start addressing this issue.
    • How well these “routines” work remains to be seen but critically, both companies have recognised the biggest problems with their services and are moving to address them.
  • The net result is that the opportunity for small, differentiated services to break into this space by doing something better is closing fast.
  • This, combined with the fact that developers will be making their devices work with Amazon first (and maybe Google), will make it even more difficult for smaller players to break in.
  • Market penetration remains very low which means there is still a chance, but new entrants need to act fast as the big players are moving much more quickly than their size would indicate.
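
The "routines" idea referred to above is easy to visualise. The sketch below is a generic illustration of one trigger phrase fanning out to several device actions; it is not the Google or Amazon implementation, and every device name and action is invented.

```python
# A routine maps one spoken trigger to a list of device commands.
ROUTINES = {
    "i am going to bed": [
        ("front_door", "lock"),
        ("blinds",     "close"),
        ("thermostat", "set", 17),     # degrees C, arbitrary
        ("lights",     "off"),
    ],
}

def run_routine(phrase, send_command):
    for action in ROUTINES.get(phrase.lower(), []):
        device, command, *args = action
        send_command(device, command, *args)

# A stand-in for whatever actually talks to the devices.
run_routine("I am going to bed",
            lambda device, command, *args: print(device, command, *args))
```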

Artificial Intelligence – Go-getter.

A breakthrough that Facebook badly needs.

  • Google DeepMind has reported substantial progress on one of the big three challenges of AI which is exactly what Facebook desperately needs but is unlikely to achieve anytime soon.
  • DeepMind has been able to build a new Go algorithm (AlphaGo Zero) that relies solely on self-play to improve and within 36 hours was able to defeat AlphaGo Lee (the one that beat Lee Sedol) 100 games to 0 (a minimal sketch of the self-play idea follows this list).
  • RFM has identified three main challenges that need to be overcome for AI to really come of age (see here).
  • These problems are:
    • First: the ability to train AIs using much less data than today,
    • Second: the creation of an AI that can take what it has learned from one task and apply it to another and
    • Third: the creation of AI that can build its own models rather than relying on humans to do it.
  • In my opinion DeepMind’s achievement represents a huge step forward in addressing the first challenge as AlphaGo Zero used no data at all.
  • I do not think that this represents a step forward against the third challenge as the system of board assessment and move prediction (but not the experience) used in AlphaGo Lee was also built into AlphaGo Zero.
  • Hence, I do not think that this system was building its own models but was instead using a framework that had already been developed to play and applying reinforcement learning to improve.
  • What will really have the likes of Elon Musk quaking in their boots is the fact that AlphaGo Zero was able to obtain a level of expertise at Go that has never been achieved by a human mind (see here, figure 3).
  • It is almost as if the use of human data limited the machine’s ability to maximise its potential.
  • That being said, it is one thing to become superhuman at Go and quite another to enslave the human race, and so I continue to think that dystopia will be thwarted by Dr. Moore (see here).
  • There have been many other attempts to address the data quantity problem but this is the first one that I have seen that has shown real progress.
  • Many of the other digital ecosystems have been trying to use computer generated images to train image and video recognition algorithms but there has been no real success to date.
  • I suspect that taking what DeepMind has achieved and applying it to real world AI problems like image and video recognition will be very difficult.
  • This is because the Go problem is based on highly structured data in a clearly defined environment whereas images, video, text, speech and so on are completely unstructured.
  • Hence, we are not about to see a sudden improvement in Google’s ability to recognise and categorize images and video (which is already world-leading) but the seeds are clearly being sown that will keep Google a long way ahead of everyone else.
  • This is exactly the kind of advance that Facebook really needs to make.
  • This is because I have long been of the opinion that while Facebook sits on a massive treasure trove of data, it has very little idea of what any of it is or what it means.
  • This makes it very hard to spot fake news or offensive content which has been the source of many of Facebook’s most recent problems.
  • It also makes it much more difficult to understand what its users do and do not like and therefore much more challenging to tailor its service accordingly.
  • Finally, it will also make it much more difficult for Facebook to keep up with competition in terms of deep and rich services meaning that its users may begin to spend time elsewhere.
  • This is a breakthrough that Facebook badly needs but unfortunately it is Google that owns the IP meaning that it will be Google services that improve.
  • I continue to think that Google comfortably leads the world in AI but recent stock performance and the resulting high valuation keeps me indifferent to the shares.
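
DeepMind's system pairs a deep neural network with Monte Carlo tree search; the sketch below is a drastically simplified stand-in intended only to illustrate the "no human data" point made above: a tabular agent that learns a trivial take-away game purely from the outcomes of games it plays against itself. The game, the learning rate and the episode count are arbitrary choices, not DeepMind's method.

```python
import random

# Trivial stand-in game: players alternately remove 1 or 2 stones from a pile;
# whoever takes the last stone wins. The agent's only training signal is the
# outcome of games it plays against itself (no human examples at all).

PILE, ACTIONS, EPISODES = 10, (1, 2), 20_000
q = {}                                        # (stones_left, action) -> value

def choose(stones, eps=0.1):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)           # explore
    return max(legal, key=lambda a: q.get((stones, a), 0.0))  # exploit

for _ in range(EPISODES):
    stones, history = PILE, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0                              # the player who moved last won
    for state, action in reversed(history):
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + 0.1 * (reward - old)
        reward = -reward                      # alternate perspective each move back

# The greedy policy usually converges on the winning rule for this game:
# leave the opponent a multiple of 3 whenever possible.
for stones in range(1, PILE + 1):
    print(stones, "->", choose(stones, eps=0.0))
```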