WWDC 2017 Highlights: Machine Learning, Siri and ARKit

Every year, Apple uses WWDC (the Worldwide Developers Conference) as an opportunity to update developers, businesses and consumers on upcoming software enhancements across macOS, watchOS, tvOS and iOS.

The event itself tends to be light on new hardware announcements, but WWDC 2017 proved to be different. Apple showcased plans for new Macs and iPads as well as the new HomePod, which promises to be a significant competitor to the Amazon Echo and Google Home. We recently covered the Top 10 VR development companies and the Top 10 AR development companies in the world; today we’re exploring the highlights of WWDC 2017 and what they mean for developers and brands.

One thing is clear from WWDC 2017: Apple is set to go big on Machine Learning (ML), voice-activated search (Siri) and Augmented Reality (AR). ML and AR in particular appear to be huge areas of emphasis for Apple moving forward.

Here are some of the key aspects of WWDC 2017:

Machine Learning

A number of buzzwords were consistent throughout the various presentations, and Machine Learning was a phrase that cropped up repeatedly, with ‘trained models’ being the most prominent.

If you’re unfamiliar with the term, ‘trained models’ relates to the development of Machine Learning based applications. Training an ML model involves giving a learning algorithm training data to learn from; the expression ‘ML model’ refers to the model artifact created as a result of the training process.

So, in the context of WWDC and what this means for businesses, Apple indicated that its focus would be on running trained models directly on users’ existing iOS devices, rather than running them on a server.

If you’re a company considering using Machine Learning, this means you’ll need to work out how to train specific models on a server before deploying them to Apple devices such as iPhones and iPads. This does, however, still open up huge commercial opportunities for businesses.

All in all, this is a massive step forward for Apple and for software companies as well. This form of remote training (combined with local execution) can enable you to bring real-time machine-learning functionality into existing mobile apps.
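To make this concrete, here is a minimal sketch of what local execution looks like with Core ML, the framework Apple introduced at WWDC 2017 for exactly this purpose. The model name (“MyClassifier”) and the feature names (“text”, “label”) are placeholders for whatever model your team has trained and converted server-side, not a real Apple model.

```swift
import CoreML
import Foundation

// Minimal sketch of on-device inference with Core ML (iOS 11+). "MyClassifier" and the
// feature names "text" and "label" are placeholders for whatever model your team has
// trained and converted server-side - not a real Apple model.
func classify(_ input: String) throws -> String? {
    // Load the compiled model shipped inside the app bundle.
    guard let modelURL = Bundle.main.url(forResource: "MyClassifier",
                                         withExtension: "mlmodelc") else {
        return nil
    }
    let model = try MLModel(contentsOf: modelURL)

    // Wrap the input in a feature provider keyed by the model's input name.
    let features = try MLDictionaryFeatureProvider(dictionary: ["text": input])

    // The prediction runs locally on the device - no round trip to a server.
    let prediction = try model.prediction(from: features)
    return prediction.featureValue(for: "label")?.stringValue
}
```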

From a consumer-facing standpoint, this is likely to have a considerable impact on how mobile apps are conceived and developed. One example would be an app that uses sentiment analysis to assess the emotional tone of an email message: users could be notified that an email contains an overly emotional or angry tone before they have even read it.

Similarly, if a user is in the process of composing an email and some of the language is perceived to be unfriendly and liable to provoke the wrong reaction, this type of technology could be used to flag a warning that prevents a damaging email from being sent.
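As a purely hypothetical sketch of that pre-send check: assume you have trained a tone classifier yourself and added the resulting “EmailToneClassifier.mlmodel” to Xcode, which generates a Swift class of the same name. The class, its “text” input and its “tone” output are all assumptions made for illustration, not a real Apple model.

```swift
import CoreML

// Hypothetical: "EmailToneClassifier" stands in for the class Xcode generates from a
// sentiment model you have trained yourself; its "text" input and "tone" output label
// are assumptions made for illustration.
func shouldWarnBeforeSending(draft: String) -> Bool {
    guard let output = try? EmailToneClassifier().prediction(text: draft) else {
        return false
    }
    // Flag the draft before it is sent if the on-device model judges the tone hostile.
    return output.tone == "angry"
}
```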

Other uses of this technology would include creating more complex and capable AI players within a game or, from a commercial perspective, using trained models to segment and profile existing customers into distinct groups.

If you’re interested in learning more about trained models, or in incorporating them into an existing or new app project, we would recommend starting with the data your organisation currently owns and thinking about which trained models could be used for machine learning at mobile-device level. You can then consider how to collect more data to train each model.

Siri

Siri has always had a bit of a special place in Apple’s heart, and WWDC continued that trend, with an emphasis on how much smarter Siri has become over the years.

Mary Meeker has suggested that the current accuracy of voice-recognition technology is roughly 95%. She has also highlighted that there is a massive difference between 95% and 99% when it comes to voice-activated search: as the technology improves and accuracy rises from 95% to 99%, consumers will go from hardly using it at all to using it all the time.

With the Amazon Echo and Google Assistant products, mobile app developers can create their own interaction intents and domains, which essentially enables these devices and applications to grow at the speed of the Internet. One of the major problems Apple faces is that it’s not as easy for app developers to access core elements of the Siri technology, which in turn is slowing down the evolution of the platform.

It seems that Apple is either unable or unwilling, at this point, to give developers more flexibility when it comes to incorporating voice-activated search into future development plans and projects. The Apple HomePod has enormous potential, but without a rich and immersive ecosystem of apps it’s unlikely to gain the traction in the marketplace that Amazon Alexa and Google Assistant have successfully garnered.
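For context on what that limited access looks like in practice: SiriKit (introduced with iOS 10) confines third-party developers to a fixed set of domains such as messaging, payments and ride booking, and within a supported domain an app supplies an intent handler along these lines. This is only a rough sketch; the class name is ours, and a real implementation would live in an Intents app extension.

```swift
import Intents

// Rough sketch of a SiriKit handler for the messaging domain - one of the small set of
// domains Apple currently opens up to third-party apps. The class name is illustrative;
// in a real project this would live inside an Intents app extension.
final class SendMessageHandler: NSObject, INSendMessageIntentHandling {
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Pass the recipients and message content to the app's own messaging layer
        // (not shown), then report success back to Siri.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```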

Apple did mention that it has already enhanced the on-device natural language processing that’s currently available to app developers. This means that apps built with these capabilities can gather audio from the device microphone, transcribe it and derive actionable insight from what each user says to the app.
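Broadly speaking, the building blocks here are the Speech framework for transcription and NSLinguisticTagger, the on-device natural language API that gained new per-word tagging at WWDC 2017. Below is a rough sketch of the transcribe-then-analyse flow, assuming speech-recognition permission has already been granted and the audio is a pre-recorded file rather than a live microphone stream.

```swift
import Foundation
import Speech

// Rough sketch: transcribe a recording and tag each word's part of speech on device.
// Assumes speech-recognition authorization has already been granted; live microphone
// capture would use SFSpeechAudioBufferRecognitionRequest instead of a file URL.
func analyse(audioURL: URL) {
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    let request = SFSpeechURLRecognitionRequest(url: audioURL)

    // In a real app, keep a reference to the returned task so it can be cancelled.
    _ = recognizer?.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        let transcript = result.bestTranscription.formattedString

        // Derive a simple "insight": the lexical class (noun, verb, ...) of each word,
        // using the unit-based NSLinguisticTagger API introduced alongside iOS 11.
        let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
        tagger.string = transcript
        let range = NSRange(location: 0, length: transcript.utf16.count)
        tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass,
                             options: [.omitWhitespace, .omitPunctuation]) { tag, tokenRange, _ in
            if let tag = tag {
                let word = (transcript as NSString).substring(with: tokenRange)
                print("\(word): \(tag.rawValue)")
            }
        }
    }
}
```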

If you’re a business or brand wondering how voice-activated search will impact you, the time to start planning is now. If your business involves search engine optimisation, it’s likely that voice-activated search will change everything as users shift from typing search queries to asking the search engine questions in a conversational style.


That’s everything for this week. ARKit is such a big announcement, with such far-reaching ramifications, that we’ve done a whole separate post on it!

Check back next week to read all about it.