Apple Dives Into Unsupervised AI

Apple is the biggest name missing from the emerging artificial intelligence (AI) and machine learning (ML) technology ecosystem, but that seems to be changing. A few days ago, researchers from Apple’s ML group published a paper that describes a new simulated and unsupervised learning technique for improving the quality of synthetic training images.

The paper is one of Apple’s most visible recent releases in the AI-ML space, and it indicates that the company expects to be competitive in a field that has been dominated by companies such as Microsoft, IBM, Amazon, Facebook, and Google. The subject of the paper is also very pragmatic from Apple’s technology standpoint.

Vision intelligence is one of the capabilities essential to modern iOS apps. Synthetic images are a popular way to train vision ML models because they are less costly and more flexible to produce than real images. The main challenge with synthetic images is that their quality is sometimes inferior to that of real images, which can hurt the effectiveness of training vision intelligence models. Apple is trying to address this limitation by leveraging a technique called Generative Adversarial Networks (GANs).

In a nutshell, GANs are based on adversarial, competitive dynamics between different neural networks. In the case of vision intelligence, the GAN pipeline uses a simulator that generates synthetic images; these are sent to a refiner, which improves them and routes them to a discriminator whose task is to distinguish real images from synthetic ones.
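
To make that data flow concrete, here is a minimal PyTorch sketch of the simulator → refiner → discriminator pipeline. The layer sizes, the `simulate` stand-in, and the residual refiner design are my own illustrative assumptions, not the architectures from Apple’s paper:

```python
import torch
import torch.nn as nn

def simulate(batch_size: int, size: int = 64) -> torch.Tensor:
    # Stand-in for a real simulator, which would be a graphics engine
    # emitting labeled synthetic images (e.g., rendered eye images).
    return torch.rand(batch_size, 1, size, size)

class Refiner(nn.Module):
    """Learns small corrections that make a synthetic image look more real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        # Residual connection: the refiner only nudges the input image,
        # which keeps the refined output close to the synthetic original.
        return x + self.net(x)

class Discriminator(nn.Module):
    """Outputs one logit per image: real (positive) vs. refined (negative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

# One pass through the pipeline: simulator -> refiner -> discriminator.
refiner, discriminator = Refiner(), Discriminator()
synthetic = simulate(batch_size=8)
refined = refiner(synthetic)
realism_logits = discriminator(refined)
```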

Apple’s paper proposes an approach that improves on traditional GAN algorithms: the refiner tries to make its output indistinguishable from real images while also keeping each refined image close to the synthetic original, so that the annotations produced by the simulator remain valid. The “adversarial” nature of the algorithm comes from the fact that the networks are actively competing, each trying to minimize the maximum possible loss the other can inflict on any given iteration.
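
Continuing the sketch above, the training objectives might look roughly like the following: an adversarial term that rewards the refiner for fooling the discriminator, plus a self-regularization term that penalizes drifting away from the synthetic input. The `lam` weight and the exact loss forms are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def refiner_loss(d_logits_refined, refined, synthetic, lam=0.1):
    # Adversarial term: push the discriminator to label refined images "real".
    adv = F.binary_cross_entropy_with_logits(
        d_logits_refined, torch.ones_like(d_logits_refined))
    # Self-regularization term: keep each refined image close to its
    # synthetic original so the simulator's annotations stay valid.
    reg = F.l1_loss(refined, synthetic)
    return adv + lam * reg

def discriminator_loss(d_logits_real, d_logits_refined):
    # The opposing side of the minimax game: label real images 1
    # and refined images 0.
    real = F.binary_cross_entropy_with_logits(
        d_logits_real, torch.ones_like(d_logits_real))
    fake = F.binary_cross_entropy_with_logits(
        d_logits_refined, torch.zeros_like(d_logits_refined))
    return real + fake
```

The balance between the two refiner terms is what preserves the value of the synthetic data: lean too hard on the adversarial term and the refiner may hallucinate content that invalidates the labels; lean too hard on self-regularization and the images stay unrealistic.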

It seems obvious that Apple’s interest in GANs for improving vision intelligence could be relevant to the iPhone and iPad ecosystems. Beyond the obvious improvements to the cognitive capabilities of some Apple applications, it will be interesting to see how Apple exposes new AI-ML capabilities to developers. Personally, I would like to see cognitive capabilities become a first-class citizen of the iOS stack. Here are a few ideas of what we might see from Apple in the AI-ML space in the short term:

— Cognitive Services: Apple might soon add cognitive services in areas such as vision, natural language, speech, and knowledge to its cloud services portfolio.

— iOS Cognitive Toolkit: Following the approach of frameworks such as HealthKit and HomeKit, I would like to see a Cognitive Toolkit that makes it easier for developers to incorporate capabilities such as vision intelligence, natural language processing, speech processing, and other intelligent features into iOS applications.

— Siri Skills: Most likely this won’t happen, but wouldn’t it be cool if you could add new cognitive skills to Siri, just as we can extend services such as Alexa, Cortana, or Allo?

Written by the CEO of IntoTheBlock, Chief Scientist at Invector Labs, guest lecturer at Columbia University, angel investor, author, and speaker.
