TensorFlow Lite and the Rise of Mobile Deep Learning Runtimes
Mobile applications are one of the most important sources of sensory information for deep learning applications. However, until now, mobile devices have not been a viable runtime for deep learning programs, which are typically destined for GPU-optimized environments in cloud platforms. In a typical scenario, a mobile application collects data points in the form of text, images or contextual information and then invokes a deep learning model via an API interface, which processes the data and outputs the results. That picture is changing rapidly as deep learning stacks become available in mobile runtimes.
A few days ago at its I/O conference, Google announced TensorFlow Lite, a version of the popular deep learning framework that can run on Android devices. TensorFlow Lite allows developers to build leaner deep learning models optimized for smartphone runtimes. The announcement was complemented by the release of the new version of the Tensor Processing Unit (TPU) chips, which are increasingly powering a new generation of connected devices, including smartphones.
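To make the "leaner models" idea concrete, here is a minimal sketch of shrinking a trained model into the TensorFlow Lite format for on-device inference. It uses today's TensorFlow 2 converter API (not the 2017-era tooling the announcement shipped with), and the tiny Keras model is a hypothetical stand-in for a real one:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this would be a trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to the compact TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables size optimizations such as quantization
tflite_model = converter.convert()

# The resulting bytes can be bundled into a mobile app and run
# with the TensorFlow Lite interpreter on the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The key trade-off is that the converted flatbuffer drops training machinery and can quantize weights, trading a little accuracy for a model small and fast enough for a phone.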
TensorFlow Lite is not the only effort to bring deep learning to smartphone runtimes. Last year, Facebook released Caffe2Go, a version of Caffe that enables the execution of deep learning models on different mobile OSs. In the near future, we are likely to see more deep learning frameworks such as Theano, Torch, Keras, MXNet, PaddlePaddle and others extend their capabilities to mobile runtimes.
Watson in Your Pocket: Mobile Cognitive Services Become Real
As smartphones become a viable runtime for deep learning models, mobile OS providers will start including cognitive AI services as first-class components of their runtimes, as common as geo-location, push notifications, accelerometers and others. A group of classic cognitive techniques such as sentiment analysis, knowledge extraction, image recognition, speech-to-text conversion, entity-intent analysis and several others are viable candidates to execute on mobile runtimes. From that perspective, mobile developers will start leveraging deep learning capabilities as a native component of mobile apps.
Multi-Runtime Deep Learning Models
More and more, we are transitioning towards a multi-runtime AI world in which the same deep learning models can execute on different runtimes such as on-premise servers, cloud platforms, containers, IoT devices, smartphones, car consoles and several others. On each runtime, deep learning models will have to adapt in order to take advantage of the specific capabilities of that environment more efficiently.
TensorFlow Lite is the type of capability that can give Google an edge over Apple in the mobile OS wars. In the last two years, Apple has made a series of acquisitions of machine learning and AI talent. From that standpoint, it is likely that Apple could soon release a suite of mobile and cloud cognitive services with the next version of iOS. Additionally, Apple could leverage its monster balance sheet to make a game-changing acquisition in the AI space. NVIDIA, anybody?
Nothing like a crazy thought to end the post ;)