Facebook – Dial M pt. II

Very little AI in the assistant that has launched.

  • Now that Facebook’s digital assistant is available in the wild, one can see how simplistic it is, indicating just how far Facebook still has to go to get a real grip on making its services more intelligent.
  • Facebook M has been in beta for over 18 months as a combination of automated responses and human interactions, with the vast majority of tasks carried out by humans.
  • The problem with using humans for Digital Life services is that it is very expensive to scale the service to 2bn users, especially when the service will be funded by advertising.
  • This is why Facebook is working as quickly as it can to develop its in-house expertise, and while it remains a laggard in AI, it has shown some progress.
  • For example, at its developer conference (see here), it showed some good progress on machine vision enabling its apps to recognise the world they can see through the smartphone camera.
  • It also made Facebook M available to US users and, most recently, in Spanish to users in Mexico and the US.
  • However, what has gone live is only a small part of the grand plans that were announced in 2015 which had an always on, all knowing bot with which the user could do almost anything.
  • Instead, Facebook M is limited to suggestions referred to as “M suggestions” which are contextually sensitive pop ups that appear in messenger when the user types messages.
  • For example, hello (or hola) results in the suggestion of emoticons that are waving or “tomorrow” can result in the suggestion of a link to the calendar to create an appointment.
  • The available functions are very limited, leading me to believe that each function has been manually programmed using statistical analysis, meaning that there is virtually no AI in the service that has launched.
  • Although the service is extremely limited at present, Facebook has created a placeholder ready to be upgraded when the technology is ready, as well as the possibility of generating some data that should help improve what is already there.
  • Most of the AI that I can see in Facebook is in machine vision where Facebook demonstrated some progress at F8 (see here).
  • However, outside of mixed reality, the immediate applications for this in Facebook’s ecosystem remain quite limited.
  • This reinforces my opinion that Facebook is way behind when it comes to AI and that the biggest challenge it faces is to bring its AI into line with that of its main rivals.
  • The problem is that its rivals are starting to use AI to improve the depth, richness and utility of their services potentially leaving Facebook behind.
  • To keep up, Facebook currently throws humans at its AI related problems (eg fake news and objectionable content) which is clearly not scalable.
  • Unless the AI problem is fixed, Facebook will have to employ more and more humans leaving its EBIT margins, valuation and competitiveness at risk.
  • Facebook has some time to address this problem as its newer Digital Life services of Gaming and Media Consumption have scope to keep revenue growth going in the medium term (see here).
  • This is why I like Facebook’s investment potential but I am waiting for the short-term fall in revenue growth (see here) to be priced in before pulling the trigger.
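The keyword-triggered behaviour described above needs no machine learning at all, which is the point of the bullet about manual programming. A minimal sketch in Python; the trigger patterns and suggestion strings are my own invention for illustration, not Facebook's actual rules:

```python
import re

# Hand-written trigger rules: each pairs a keyword pattern with a canned
# suggestion. Rule contents are invented for illustration.
RULES = [
    (re.compile(r"\b(hello|hola)\b", re.IGNORECASE), "suggest: waving emoticon"),
    (re.compile(r"\btomorrow\b", re.IGNORECASE), "suggest: create calendar appointment"),
]

def m_suggestions(message: str) -> list[str]:
    """Return every canned suggestion whose trigger appears in the message."""
    return [suggestion for pattern, suggestion in RULES if pattern.search(message)]
```

Adding a new suggestion means hand-writing another rule, which is exactly why this approach cannot scale into an open-ended assistant.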

Alphabet – Goodbye blue sky pt. II.

Homeless robots find permanent shelter.

  • Alphabet has reached a deal to sell both Boston Dynamics and Schaft to SoftBank, leaving it more focused on its core business of collecting and monetising Internet data.
  • Boston Dynamics is a robotics company that specialises in robots that can autonomously navigate and adjust to their immediate environment.
  • SoftBank is also acquiring Schaft, a humanoid robotics company that was spun out of the University of Tokyo, from Google.
  • These robots can move around with relative ease, but how they would be able to generate value for Alphabet shareholders was always unclear.
  • At the end of the day Alphabet is a data and analytics company whose objective is to categorise and understand every piece of digital information that users generate and to sell those insights to marketers.
  • Every other piece of hardware that Alphabet makes, from Google Home to Pixel and Internet balloons, has the capacity to collect huge amounts of data and can thereby generate value for the core business.
  • Autonomous robots that can carry out physical tasks do not generate data about users because they are designed to replace them making them a bad fit inside Alphabet.
  • Furthermore, the robotics effort at Google was the brainchild of Andy Rubin and his departure, combined with the much greater focus on fiscal discipline, meant that the robots became homeless inside Alphabet.
  • I have long believed that Boston Dynamics’ robots would be much more at home inside a company that can make use of them.
  • Good examples of this are Amazon and Alibaba for logistics, or someone like DHL or UPS.
  • SoftBank is another good example, and it also has the benefit of a very long-term mindset when it comes to its strategy.
  • SoftBank already produces the Pepper robot which is supposed to be able to read human emotions and help shoppers when they enter a shop or place of business.
  • I met Pepper when wandering the halls of Mobile World Congress and CES and have to admit I was not that impressed by what it was capable of.
  • Consequently, it looks like SoftBank needs to really beef up its robotics expertise if it wants to be a player in this space which is what these two acquisitions should start to accomplish.
  • Hence it looks like this acquisition will not be part of the $93bn Vision Fund but will instead sit inside SoftBank itself.
  • Boston Dynamics, Schaft and, I suspect, SoftBank’s own robotics division have been struggling to find ways to generate revenue, necessitating a home with a very long-term view.
  • That home used to be Alphabet, now it is SoftBank.
  • The sale of these two businesses will further boost Alphabet’s short term financial performance but I continue to think that all of the recent fundamental improvement in Alphabet is more than discounted in the share price.
  • Hence, I continue to prefer Tencent, Baidu and Microsoft.

Samsung Bixby – Failure to launch

Bixby is not fit for purpose.

  • Samsung has once again delayed the roll-out of the voice component of its digital assistant, Bixby, further reinforcing my opinion that Samsung can really only compete in hardware.
  • This, combined with the poor performance already offered by Bixby services on the Galaxy S8, leaves me unsurprised that a method to rewire the Bixby hard key to Google Assistant has already been published.
  • Bixby was launched with much fanfare at the unveiling of the Galaxy S8 and promised the following:
    • First, completeness: This promises to give users complete control of enabled apps rather than the few tasks offered by other assistants.
    • Second, contextual awareness: Samsung is promising that Bixby will be aware of the context within which it has been triggered, making it more relevant and useful.
    • Third, natural language recognition: Bixby should be able to understand complex, multi-part questions as well as prompt the user to clarify the pieces that it does not understand.
  • I have been testing Bixby extensively and so far, the experience bears no resemblance whatsoever to these promises.
  • Instead Bixby offers a series of suggestions of videos to watch and articles to read that bear little relevance to any of my interests or my history.
  • The one thing that Bixby can get right is to highlight which apps I use most but the functionality of suggesting which app I am likely to want to use next based on the time of day or my circumstance is nowhere to be seen.
  • These features are very similar to those promised by Viv, the artificial intelligence company that Samsung purchased in October 2016 and which is clearly the source of this product.
  • However, it appears that Bixby as it exists today has nothing to do with Viv, which partly explains the poor functionality but also makes me wonder why Samsung acquired it in the first place.
  • This is a sure indicator of just how far behind Samsung is compared to everyone else when it comes to developing intelligent services.
  • RFM research (see here) has identified three stages of voice recognition, of which the first and by far the simplest is the accurate conversion of voice to text.
  • Almost everyone, even Facebook, has pretty much cleared this hurdle but it appears that Bixby still has not.
  • Furthermore, Bixby vision is also way behind the curve as it is unable to properly identify objects.
  • Instead what it does is search Pinterest for other pictures with similar pixel patterns.
  • It does not identify objects nor offer any real functionality beyond finding similar pictures rendering it useless.
  • Even Facebook, which I have long identified as being behind in AI, is demonstrating reasonably good machine vision which leads me to put Samsung far behind even Facebook.
  • This leaves Samsung exactly where I left it: a manufacturer of excellent but commoditised hardware that outsells its nearest competitor by more than 2 to 1.
  • As long as it can maintain that edge, I have no fear for its handset margins but Huawei is trying very hard to close the gap.
  • However, Huawei’s disappointing handset performance in 2016 has led it to focus more on profitability this year, meaning that it will not be turning the screws on Samsung with quite the same vigour.
  • Hence, I think that Samsung is set up to have a good 2017 but the rally in the share price has more than taken this into account.
  • Hence I continue to prefer Microsoft, Tencent and Baidu.
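The criticism of Bixby Vision above, that it matches pixel patterns rather than identifying objects, can be made concrete with a toy perceptual hash. This is my own illustration on tiny greyscale grids, not Samsung's or Pinterest's actual algorithm:

```python
# Match images by raw pixel pattern: fingerprint each image and compare
# fingerprints. Note that this says nothing about what the image depicts.

def average_hash(pixels: list[list[int]]) -> int:
    """One bit per pixel: 1 if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > mean)
    return bits

def similarity(a: int, b: int, nbits: int) -> float:
    """Fraction of matching bits between two fingerprints."""
    return 1 - bin(a ^ b).count("1") / nbits

# Two images with the same bright/dark layout look "identical"...
diagonal = average_hash([[200, 10], [10, 200]])
rescaled = average_hash([[250, 0], [0, 250]])
# ...while the mirrored layout scores zero, even though all three are the
# same kind of picture.
mirrored = average_hash([[10, 200], [200, 10]])
```

On this metric, two checkerboards that are mirror images of each other score zero similarity, which is why finding similar pixel patterns is not the same thing as recognising objects.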

Essential Products – Not essential.

Andy Rubin still works for Google. 

  • Essential Products Inc has launched a series of products aimed at creating an ecosystem, but none of them does or enables anything that is desperately new.
  • Consequently, the real beneficiary from another nice looking, easy to use phone will be Google whose ecosystem will be front and centre of the flagship device.
  • Essential Products Inc. was founded by Android creator Andy Rubin and has launched two devices and two accessories in a bid to stitch together the fragmented smart home space.

Phone

  • The Phone is similar to the Galaxy S8, although its screen has a lower resolution, it is not waterproof and its battery has a slightly lower capacity.
  • Its one major area of differentiation is that the chassis is made from injected titanium and has a ceramic back, potentially making it much more resistant to drops and scratches.
  • When it comes to screen protection, both are using Gorilla Glass 5 meaning that resistance to screen smashing should be about the same.
  • It also has two pins on the back (much like the Moto Mods concept) to which accessories can be attached.
  • The API for the accessory pins will be made available to developers to create their own devices to attach to the phone.
  • However, it has the price to match at $699, compared to $750 for the Galaxy S8, which is where I think the trouble will begin.
  • The Phone is nice-looking, but I can’t see how it does anything that is not already available and, outside of chassis resistance, Samsung gives more hardware bang for the buck.

Home

  • Essential Products has also launched a voice-activated home controller that aims to bring the smart home together in one place.
  • This is something that the smart home badly needs as the Alexa user experience is dire and hardly any products and services work with Google Home.
  • This product is different for two main reasons:
    • First: it is not designed to play music, unlike other offerings, although it does have a small speaker like the Echo Dot.
    • Instead, it is aimed at bringing all of the home’s devices together into a single place to manage them in an easy and fun to use way.
    • This device is also able to integrate these products such that smart devices can work together in new, fun and potentially very useful ways.
    • For example, when the timer goes off, the room’s lights can be flashed on and off rather than the generic alarm bell sound that everyone else uses.
    • Second: Home has a small screen on the top that is designed to enhance communication and interaction with the user.
    • RFM research (see here) has found that voice communication with machines is very far from being good enough to work effectively without a screen for output.
    • Consequently, this configuration makes a lot of sense.
  • The device runs its own OS called Ambient OS but Essential intends to open this up completely such that anyone can write functionality for the product.
  • This device takes a massive risk because 70% of the usage of devices in this category is as a Bluetooth speaker.
  • Consequently, there is a sizeable risk that this device will not appeal to the majority of users looking to buy something in this category.
  • Another big issue is the source of the AI that will be running Home as this will be the heart and soul of this product and the AI in Ambient OS currently looks as dubious as Bixby (see here).
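The timer-and-lights example above is a plain event-to-action rule, which a short sketch can make concrete. Device and event names are invented for illustration; this is not Ambient OS code:

```python
# Minimal event-to-action integration: when one device fires an event,
# other devices react according to user-defined rules.

class SmartHome:
    def __init__(self):
        self.handlers = {}   # event name -> list of actions
        self.log = []        # record of actions taken, for demonstration

    def on(self, event: str, action):
        """Register an action to run when an event fires."""
        self.handlers.setdefault(event, []).append(action)

    def fire(self, event: str):
        """Run every action registered for this event."""
        for action in self.handlers.get(event, []):
            self.log.append(action())

home = SmartHome()
# Replace the generic alarm bell: the kitchen timer flashes the lights.
home.on("timer_done", lambda: "living_room_lights: flash on/off")
home.fire("timer_done")
```

The value of a hub like Home is exactly this registration table: devices that know nothing about each other can be wired together by the user.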

Accessories.

  • Essential Products has launched a charging plate for the Phone that connects through the two pins, as well as a 360-degree camera.
  • I think that the charging plate is pretty useless as wireless charging is starting to come of age and inclusion of one of the standards in the device would have enabled a good user experience with products already present in the market and in users’ hands.
  • For example, because the Galaxy S8 supports Qi charging it will work with any compatible pad.

Take Home Message.

  • When I originally wrote on Essential Products (see here), my view was that it needed to produce must have devices and in that regard, I think it has failed.
  • The Phone is a Google Ecosystem device with a few nice features but fewer bells and whistles than the Samsung Galaxy S8 for almost the same price.
  • The Home has the most potential but it is taking an awful risk in that it is not addressing by far the biggest use case and has dubious AI.
  • It will also be dependent on third party developers meaning that it will need volume but even in its best case it is not going to out-ship Google Assistant or Amazon Alexa.
  • Consequently, I remain unconvinced with regards to what is special and different about Essential Products and suspect that many consumer electronics buyers will feel the same way.
  • Differentiation in hardware is extremely difficult meaning that Andy Rubin needs to have some software tricks up his sleeve that he is yet to show.
  • Failing that, it seems that this company will end up enriching Google more than itself.

Google DeepMind – Pebble hanging.

AlphaGo’s retirement shows that humans are still needed. 

  • AlphaGo is hanging up its pebbles after emphatically demonstrating that from here on, machines will be better Go players than humans.
  • This move also indicates that, despite being one of the most advanced AIs yet developed, it would still consume a huge amount of human resources to keep it running.
  • Last week AlphaGo crushed the world’s best player Ke Jie 3 to 0 in a convincing display that has left little doubt that human rule of this game is now over.
  • DeepMind, the Google-owned developer of AlphaGo, has decided to retire the algorithm and focus on more useful areas such as health, materials science or clean energy.
  • This makes complete sense, as DeepMind has proved its point with regard to its AI prowess, and since it published its methodology for AlphaGo, it has already been copied.
  • For example, Tencent is very keen to show that it has a strong presence in AI, and recently its AI Go player, called Jueyi, was able to play to a very high standard.
  • However, on inspection it appears that Jueyi is little more than a carbon copy of AlphaGo, leading me to completely discount Jueyi as an example of Tencent’s prowess in AI (see here).
  • This is possible because AI is a co-operative field and DeepMind has published most of its methodology and results for the creation of AlphaGo in the scientific journal Nature.
  • Most importantly, I think that the retirement of AlphaGo indicates that to keep it going would still require a lot of human time and effort.
  • AIs need to be constantly evolved to keep up with changes in the task for which they were created.
  • Although AlphaGo was touted as an AI that could do a lot of learning by itself, the reality was that much of its crucial learning was human-supervised, thereby consuming resources.
  • One of RFM’s three goals of AI (see here) is the creation of AIs that can build their own models, and while there is plenty of evidence that researchers are working hard on this problem, results have been pretty scant to date.
  • If AlphaGo could be left to its own devices, there would have been little reason to retire it, but seeing as it will consume resources that can be productively deployed elsewhere, it makes no sense to keep it going.
  • This is yet another sign of how nascent AI really is as I think that many of the capabilities which the big ecosystem companies would have us believe are just around the corner, are actually years away.
  • I think translators, executive assistants, personal trainers and so on have plenty of time to find other lines of business.

Google i/o 2017 – Brain game

Superior brains being used to make its services the best.

  • Google held the first day of its annual developer conference and in its keynote, it highlighted the features and improvements that it is making to its ecosystem to keep users engaged while gathering and categorising as much data as it can.
  • Artificial Intelligence headlined the event with Google’s leading expertise now being implemented in everything that it does.
  • These included:
    • First, Google Lens. This is machine vision similar to what many others have also announced, but in Google’s case I suspect it will work properly.
    • Lens can be used to identify an item and, combined with search, bring up relevant information about it.
    • This stretches from the history and background of a place to the ratings users have given to restaurants and shops.
    • Others fall short both in identifying items and in digging up relevant information about them.
    • This is because the AI they are using to power the service is not nearly as advanced as Google’s.
    • This functionality is being rolled out across all of Google’s properties to enhance everything Google does, such as the Photos app, Maps, Daydream and so on.
    • Second, AutoML. This is a research project within the Google.ai initiative.
    • It is a neural network that is capable of selecting the best from a large group of neural networks that are all being trained for a specific task.
    • While few details were disclosed, Google said that the results achieved to date were encouraging.
    • This is a hugely important development as it marks a step forward in the quest to enable the machines to build their own AI models.
    • Building models today is still a massively time and processor intensive task which is mostly done manually and is very expensive.
    • If machines can build and train their own models, a whole new range of possibilities opens up in terms of speed of development as well as the scope of tasks that AI can be asked to perform.
    • RFM has highlighted automated model building as one of the major challenges (see here) of AI and if Google is starting to make progress here, it represents a further distancing of Google from its competitors when it comes to AI.
  • Google also gave updates on all the current products and services including the next version of Android: Android O.
  • Most relevant updates included:
    • First, Android. There are now over 2bn active Android devices in the market but I suspect that there is meaningful multiple device ownership.
    • For example, in Brazil there are more mobile phone connections than there are people, highlighting that a large number of people own multiple devices.
    • This is a trend that is mirrored in many other emerging markets.
    • Every Google Android device has a Google sign-in, but for the other Google services the figures are closer to 1bn, which also includes users on iOS devices.
    • Hence, in terms of real unique users rather than devices, I think the numbers are much lower.
    • This is important because it is unique users that generate the revenue for Google and hence they are a better measure of the real penetration of Android across the globe.
    • Second, Android Go. This is the relaunch of the failed Android One project, which aimed to put smartphones in the hands of more users and which obviously requires much lower-cost devices.
    • Android Go is like a mini-mode of Android O that runs in an optimised way on devices with as little as 512MB of RAM.
    • Google’s apps have also been optimised to run in this highly constrained environment.
    • Importantly, functionality has been added that focuses on saving data usage as well as offering complete control of data usage from the device.
    • For the lower income users, data has become almost like a currency and this gives them much better control of their “spending”.
    • This looks like a much better proposition than Android One which was highly restrictive to the handset makers.
    • However, if they start tinkering with Android Go (as they always do), there is a good chance that all of these good improvements will vanish into thin air.
  • While this is not the most exciting i/o event in terms of new announcements, it is what is going on with AI that has the most implications for Google’s outlook.
  • AI is now embedded in everything and because Google is clearly the global leader it has the scope to make its services richer and more intuitive than anyone else’s.
  • This is critical because this is how Google will win over more users to its services, generate more traffic and therefore more revenue.
  • However, I think that much of this is already embedded in the share price and I continue to prefer Baidu, Tencent and Microsoft.
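The AutoML idea described above, a controller that evaluates a population of candidate models and keeps the best one, can be sketched in miniature. Here a "model" is just a candidate slope for fitting y = 3x and the controller is a grid search; this is a stand-in for the concept, not Google's actual method:

```python
# Toy task: learn y = 3x from examples. A "model" is a single slope.
data = [(x, 3 * x) for x in range(10)]

def validation_error(slope: float) -> float:
    """Mean absolute error of a candidate model on the task."""
    return sum(abs(y - slope * x) for x, y in data) / len(data)

# The controller proposes many candidate models and selects the one
# that scores best on validation -- the selection loop that AutoML
# automates for real neural networks.
candidates = [i * 0.1 for i in range(61)]        # slopes 0.0 .. 6.0
best = min(candidates, key=validation_error)
```

In the real system both the candidates and the search are neural networks and the cost is enormous, which is why automating this loop matters.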

Autonomous Autos – The back foot.

Uber is now even more on the back foot.

  • The partnership between Waymo and Lyft puts Lyft streets ahead of Uber when it comes to developing autonomous cars but is likely to cost it heavily in the coinage of data.
  • Uber has described autonomous autos as “existential” to its long-term future and in that regard this partnership represents a huge threat.
  • This is because when it comes to autonomous driving, Uber is by far the worst.
  • It is worse even than the dull old OEMs that everyone derides as being hopelessly unprepared for the changes coming in their industry.
  • Data from the California DMV analysed by RFM (see here) showed that Waymo is 5000x better at autonomous driving than Uber is.
  • Furthermore, Uber was also comfortably beaten by BMW, Nissan, Tesla and Mercedes.
  • Uber, Lyft, Didi and the other ride-hailing companies operate marketplaces where drivers and riders are matched, making their economics exactly like those of classifieds.
  • This means that to make money, a player needs to have 60% market share or be double the size of its nearest competitor.
  • This is why I am of the opinion that it is time Uber started trying to make money in the US (see here) and that Didi should be trying in China, where it is now unopposed (see here).
  • Against that backdrop, Lyft looks doomed, except that by signing a partnership with Waymo it is now in pole position to have by far the best autonomous solution and to be there first.
  • From this partnership, Google gets a route to market and a source of data whereas Lyft gets access to technology that it is unlikely to be able to develop on its own.
  • The problem that all the ride hailing companies face is that if all cars become autonomous, then their current businesses become obsolete as, while there will be riders, there will be no drivers.
  • This is why they must be present in this space as it will give them the ability to migrate from human to robot drivers as the technology comes to market.
  • I have long been of the opinion that this is going to take much longer than expected.
  • This is not because the technology is not ready but because the market is unprepared to receive it (see here).
  • This gives Uber time to catch up but the example of Waymo indicates that developing this technology is more difficult than many think and it requires a vast amount of practice (miles driven).
  • I still think that autonomous vehicles will not become a market reality much before 2030, meaning that the field is wide open but this partnership puts Uber even more on the back foot than it already was.
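Comparisons like the one drawn from the California DMV data above usually come down to miles driven per disengagement (human takeover). The figures below are invented to illustrate the arithmetic; they are not the actual DMV numbers:

```python
# DMV-style disengagement reports: autonomous miles driven and the number
# of times a human safety driver had to take over. Figures are invented.
reports = {
    "leader":  {"miles": 635_000, "disengagements": 124},
    "laggard": {"miles": 20_000,  "disengagements": 20_000},
}

def miles_per_disengagement(report: dict) -> float:
    """Average autonomous miles driven between human takeovers."""
    return report["miles"] / report["disengagements"]

# How many times better the leader is on this metric.
ratio = (miles_per_disengagement(reports["leader"])
         / miles_per_disengagement(reports["laggard"]))
```

A gap of three to four orders of magnitude on this metric is what a "5000x better" claim amounts to, and it is closed only by accumulating vast numbers of driven miles.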

Microsoft BUILD – The right choices.

Enterprise remains the focus.

  • At Microsoft’s developer conference, it continued to emphasise its move away from being a platform for the consumption of content to one that is primarily for the creation of content.
  • At the same time, it cemented its move away from mobile with the migration of its strategy from “cloud first, mobile first” to “intelligent cloud, intelligent edge”.
  • Effectively, Microsoft is signalling two main changes:
    • First, device agnostic: Microsoft no longer cares what device the user has, but instead is aiming to ensure that its services work seamlessly across everything that is available.
    • This was embedded in every presentation during the first two days of BUILD where cross device was emphasised time and again.
    • Cortana, Office 365, team collaboration and communication will be increasingly integrated across all the devices that the user has.
    • This was made very clear with the announcement of the cloud powered clipboard where text and pictures copied to the clipboard on the PC can be pasted into non-Microsoft apps on iOS or Android devices.
    • Microsoft employees no longer have to hide their iPhones and Galaxies or take off their Apple Watches when entering hallowed ground in Redmond.
    • I have long argued that cross device is a good way to differentiate an ecosystem that is vying for engagement with the two giants Apple and Google.
    • Microsoft has led in this space for a long time and, as long as this works as billed, it will take Microsoft further into the lead.
    • Second, processing at the edge: Microsoft discussed a future where all the processing does not happen in the cloud but part of it is redistributed to the edge for faster response times and greater efficiency.
    • Microsoft demonstrated how running diagnostics locally could cut the emergency shutdown time for a piece of industrial equipment from 2,000 milliseconds to just 100.
    • However, this is a problem that is supposed to be solved by 5G, which was not mentioned once, further cementing Microsoft’s move away from mobile as a standalone technology.
    • This goes directly against what Intel (and others) are aiming for, as Intel’s most profitable and highest-market-share products are the processors that power the cloud, meaning that it wants as much as possible to run there.
    • I see a number of schools of thought with regard to how intelligence and processing should be distributed throughout the network with each proponent obviously going for the option that benefits their business the most.
    • I think that the reality will be that different use cases require different scenarios.
    • For example, simple monitoring that requires a rapid response makes sense at the edge, but object recognition and tracking, and relating that to policies, is a very intensive task that is best carried out on big servers in the cloud.
  • Microsoft also announced the Fall Creators Update for Windows 10 to support all the cross-device capability, as well as badly needed improvements to the Windows Store that are required to give Windows 10 S a chance (see here).
  • Hololens was also upgraded with the addition of a controller to bring it into line with the other offerings but this remains very much a tool for the enterprise.
  • This was clear in the demos and examples, which were focused on productivity, with the idea of a virtual shoot-out in the living room thankfully not being repeated.
  • With every presentation that passes, Microsoft distances itself further and further away from content consumption and the consumer.
  • Consequently, while there is a strong rationale to keep Bing (data generation), I cannot say the same for Xbox, Minecraft and a number of other assets.
  • Hence, I would not be surprised to see them sold off should a good opportunity present itself.
  • The net result is that Microsoft is doing exactly what it should in playing to its strengths and differentiating where it has a chance, rather than wasting money trying to make a difference where it has no chance.
  • This sets it up for steady growth from its dominant position in the enterprise, which still supports the valuation even though the shares have been strong.
  • I still like Microsoft alongside Baidu and Tencent.
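The edge-versus-cloud trade-off discussed above is, at its simplest, a question of whether a task's response deadline can absorb a round trip to the server. A sketch with illustrative latency figures; the numbers are assumptions, not Microsoft's:

```python
# Assumed latencies: local processing responds in ~100ms (as in the
# shutdown demo above), a round trip to big cloud servers takes ~2,000ms.
EDGE_LATENCY_MS = 100
CLOUD_LATENCY_MS = 2000

def place(task: str, deadline_ms: int) -> str:
    """Run in the cloud when the deadline allows it; otherwise use the edge."""
    if deadline_ms >= CLOUD_LATENCY_MS:
        return f"{task} -> cloud"
    return f"{task} -> edge"

assignments = [
    place("emergency shutdown", deadline_ms=150),              # too urgent for the cloud
    place("object recognition over video", deadline_ms=5000),  # heavy but not urgent
]
```

Each proponent argues for the placement that favours its own business, but a deadline-driven split like this is roughly what "different use cases require different scenarios" means in practice.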

Amazon – Show and tell.

Screens help to alleviate digital assistants’ stupidity.

  • I think that the Echo Show is more about addressing the shortcomings of voice interaction with machines than it is about launching a series of new and exciting Digital Life services.
  • Amazon has launched an ugly-looking device called Echo Show, which is effectively Alexa with a 7-inch screen attached to the front.
  • The form factor is disappointing, as even Baidu, with no hardware experience, managed to come up with a far more appealing-looking product (see here).
  • Amazon has also upgraded the speakers to give a louder and richer sound profile but I see this being about giving Alexa another medium with which to communicate with the user given the limitations of voice.
  • The problem is simply that Alexa (and all the others) is far too stupid to be able to hold a meaningful conversation with a user.
  • Google Assistant is currently the best but remains woefully short of what one would consider to be a useful assistant.
  • Digital assistants were designed to replace the human variety but because their intelligence is so limited, they are unable to hold a coherent conversation with the user.
  • Human assistants do not need to use screens to understand requests, relay information and carry out tasks, meaning that the perfect digital assistant should not need one either.
  • Hence, I think that the Echo Show has been created to make up for the huge shortfall in Alexa’s cognitive ability.
  • This type of interaction is what RFM refers to as one-way voice where the user asks a question and the results are displayed on a screen.
  • RFM research has found (see here) that the vast majority of all man-to-machine interactions are one-way voice, and with this device, Amazon makes these interactions easier.
  • Furthermore, for those that depend on advertising, having a screen also helps to maintain the business model of lacing a Digital Life service such as Search or Social Networking with advertising.
  • Consequently, I think that Google is likely to follow up with a similar product which will take advantage of the fact that the necessary communication apps that the device will use are already installed and ready to use on all new GMS Android compliant devices.
  • In Alexa’s case, it looks like the user will have to install another app on his phone in order to communicate with the Echo Show.
  • The Echo Show will come with all 12,000 of Alexa’s skills, but these skills have been designed for a device with no screen, so I do not see the screen improving the already very poor user experience that they currently offer.
  • At $230 or two for $350, the Echo Show is priced to sell but I think that volumes will be small given that the vast majority of Echo’s shipments are made up by the cheapest member of the family, the $50 Echo Dot.
  • Hence, I do not see a sudden rush by developers to upgrade their existing skills or develop new ones to make use of the screen.
  • This is where Google Assistant has a huge advantage: it has already been designed to run with a screen (on smartphones), meaning that adapting it to a screen-equipped Google Home product should be much easier and produce a much better result.
  • I still think that Google Home has the advantage here as it has a much better assistant than Alexa, but its lack of developer support for the smart home is starting to be a real problem.
  • Google really needs to pull its finger out and show developers some love, especially as Microsoft looks set to launch something similar to the Echo Show but using Cortana.
  • I continue to struggle with Amazon’s share price whose valuation I think demands that investors pay for profits that never seem to materialise.

Digital Assistants – Bursting bandwagon.

Digital assistant bandwagon bursting at the seams.  

  • Building a digital assistant is all the rage these days, but just like app stores, I suspect that the weaker players will soon drop out once they begin to realise how difficult and how expensive it is to make a good one that users actually want to interface with.
  • The latest companies to jump on the already-full-to-bursting digital assistant bandwagon are Orange and Deutsche Telekom, who together are creating a digital assistant called Djingo that can live in a speaker, a remote control or a smartphone app.
  • Its functionality looks to be very similar to Amazon Alexa with both companies pouring their combined knowledge and experience in artificial intelligence (AI) into the product.
  • Other recent additions to the bandwagon include LINE with Clova, Huawei, and Samsung with Bixby.
  • However, I suspect that all of these players are going to quickly discover that digital assistants are really difficult to get right.
  • For example, Alexa, which is considered to be a leader, can accurately transcribe the words the user speaks but really struggles to make any real sense of them.
  • The net result is that the user has to give commands to Alexa in a specific way if the desired result is to be achieved.
  • RFM research (see here) has found that digital assistants also suffer from a chicken and egg problem where they need usage to improve because it is with usage data that they can evolve.
  • The problem is that no one will use them if they are not already very good, meaning they will be unable to gather the data they need to get to the level of quality where users will engage with them.
  • Alexa and Siri, with 10m and 1bn+ deployed devices respectively, have scope to generate data, but I think that both of them are struggling as usage remains low.
  • For example, by far the most used feature of the Amazon Echo device (Alexa’s flagship home) is the Bluetooth speaker which completely obviates any usage of the Alexa digital assistant.
  • This leaves Google and Baidu leading the field both of whom are global leaders in both AI and data generation which are the two most important raw materials for the creation of a good digital assistant.
  • Despite my negative view on the newcomers, it is worth noting that mobile operators are the providers of the data packets that deliver Digital Life services and consequently have huge repositories of data.
  • Operators are restricted in terms of what they can do with this data, but I see no reason why this data should not be used to train algorithms.
  • These algorithms could then be used to ensure that the services that they offer are better than those of their competitors or they could be licensed to third parties.
  • What operators lack is the artificial intelligence expertise to make anything of this data and as a result, I suspect that the vast majority of this data will end up gathering dust.
  • Whether Orange and Deutsche Telekom have realised this potential remains to be seen but given their history, I suspect they are just jumping on the bandwagon in a last attempt to avoid being left behind.