Technology Fridays: Accelerating Artificial Intelligence with WinML and ONNX


Welcome to Technology Fridays! Today we are going to discuss a technology that was announced just a few days ago but that can have profound implications in the world of artificial intelligence (AI). Earlier this week, Microsoft announced the release of Windows Machine Learning (WinML), a new stack that enables developers to build Windows applications that leverage hardware acceleration for AI.

From a technical standpoint, WinML enables the execution of pre-trained machine learning models on Windows 10 devices. This capability can make machine learning more accessible in Internet of Things (IoT) topologies based on Windows 10 architectures. By leveraging WinML, Windows 10 devices can perform computations locally, without dependencies on centralized cloud services, which should help streamline the implementation of intelligent edge computing applications.
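WinML itself is consumed through the Windows.AI.MachineLearning WinRT APIs from C# or C++. As a language-neutral illustration of the same idea of loading a pre-trained ONNX model and evaluating it entirely on the local device, here is a minimal Python sketch using the onnxruntime package; the model file name and input shape are placeholder assumptions, not part of the WinML stack.

```python
# Minimal sketch: local, offline evaluation of a pre-trained ONNX model.
# WinML exposes this flow through the Windows.AI.MachineLearning WinRT APIs;
# onnxruntime is used here only to illustrate the same load/evaluate idea.
# "squeezenet.onnx" and the 1x3x224x224 input shape are placeholder assumptions.
import numpy as np
import onnxruntime as ort

# Load the pre-trained model from local storage -- no cloud service involved.
session = ort.InferenceSession("squeezenet.onnx")

# Build a dummy input matching the model's expected shape.
input_name = session.get_inputs()[0].name
dummy_image = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference entirely on the local device.
outputs = session.run(None, {input_name: dummy_image})
print("Top class index:", int(np.argmax(outputs[0])))
```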

In terms of capabilities, there are a few things worth highlighting in the initial release of WinML:

· Hardware Acceleration: On DirectX 12 capable devices, Windows ML accelerates the evaluation of deep learning models using the GPU. CPU optimizations additionally enable high-performance evaluation of both classical ML and deep learning algorithms.

· Local Evaluation: Windows ML evaluates models on local hardware, removing concerns about connectivity, bandwidth, and data privacy. Local evaluation also enables low latency and high performance for quick evaluation results.

· Image Processing: For computer vision scenarios, Windows ML simplifies and optimizes the use of image, video, and camera data by handling frame pre-processing and providing camera pipeline setup for model input.
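To make the image-processing point concrete, the sketch below shows roughly what that pre-processing entails: decoding an image, resizing it to the model's input resolution, normalizing it, and laying it out as an NCHW tensor. WinML does this for you from camera and video frames; the 224x224 resolution and the ImageNet-style normalization constants here are assumptions for a typical classification model.

```python
# Rough illustration of the frame pre-processing WinML handles for you:
# decode an image, resize it to the model's input resolution, normalize it,
# and lay it out as an NCHW float tensor. The 224x224 size and the mean/std
# values are assumptions for a typical ImageNet-style classification model.
import numpy as np
from PIL import Image

def preprocess(path: str) -> np.ndarray:
    image = Image.open(path).convert("RGB").resize((224, 224))
    pixels = np.asarray(image, dtype=np.float32) / 255.0        # HWC, [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    pixels = (pixels - mean) / std                               # normalize
    return pixels.transpose(2, 0, 1)[np.newaxis, :]              # NCHW batch

frame = preprocess("camera_frame.jpg")   # ready to bind as model input
```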

ONNX-Based

The magic of WinML is based on another super cool AI initiative that Microsoft has been working on for a few months. The Open Neural Network Exchange (ONNX) is a project created by Microsoft and Facebook to define a computational graph model that can be used across different deep learning frameworks. Technologies such as TensorFlow, Keras, the Microsoft Cognitive Toolkit, and Caffe2 have been developing support for ONNX. If you are interested in learning more about ONNX, go back to the article I wrote shortly after its initial release analyzing some of the market implications of the technology stack.
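To make the "computational graph" idea concrete, here is a small sketch using the onnx Python package that builds a one-node graph by hand and saves it to disk; all the names and shapes are arbitrary examples.

```python
# Sketch of ONNX's computational-graph model: a graph is just typed inputs,
# typed outputs, and a list of operator nodes. Here we build a single
# Relu node by hand; the names and shapes are arbitrary.
import onnx
from onnx import helper, TensorProto

X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])

relu_node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph([relu_node], "tiny_graph", [X], [Y])
model = helper.make_model(graph)

onnx.checker.check_model(model)          # validate the graph definition
onnx.save(model, "tiny_graph.onnx")      # a file WinML (or any ONNX backend) can load
```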

How is ONNX related to WinML? Very simply, WinML relies on ONNX as the main encoding format for machine learning models. The current release of WinML includes a series of tools that facilitate the conversion of models from other formats into ONNX. Specifically, the WinML tools support conversion from the following toolkits:

· Apple CoreML

· scikit-learn (subset of models convertible to ONNX)

· LibSVM

· XGBoost

Developers can start using WinML by downloading the latest Windows SDK and converting their trained models. Once converted, WinML models can be used across any Windows 10 device without taking dependencies on specific hardware.
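As an example of the conversion workflow, here is a minimal sketch of the Apple Core ML path, assuming the winmltools Python package (which wraps the ONNX converters) and coremltools are installed; the model and file names are placeholders, and the exact converter signatures should be checked against the current WinML tools documentation.

```python
# Sketch of converting an Apple Core ML model to ONNX for use with WinML.
# Assumes the winmltools and coremltools packages are installed;
# 'example.mlmodel' and the model name are placeholder assumptions.
from coremltools.models.utils import load_spec
from winmltools import convert_coreml
from winmltools.utils import save_model

coreml_model = load_spec("example.mlmodel")                  # load the Core ML model
onnx_model = convert_coreml(coreml_model, name="ExampleModel")
save_model(onnx_model, "example.onnx")                       # ready for WinML to consume
```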

Competitors?

WinML is pretty much in a league of its own. There are existing AI hardware acceleration packages, such as the NVIDIA CUDA stack, but those require optimization for specific hardware models. The release of WinML can help streamline the optimization of AI models across the large portfolio of Windows 10 devices on the market.
