Deep learning is one of the most exciting disciplines in the artificial intelligence (AI) landscape. While we have recently seen an explosion in the number of deep learning frameworks, such as TensorFlow and the Microsoft Cognitive Toolkit, the battle for dominance in the deep learning market has also extended to the hardware space.
Recently, IBM and Nvidia announced a collaboration focused on deep learning technologies. As part of that effort, IBM announced a new deep learning toolkit called PowerAI that is optimized to run on the IBM Power S822LC server, which features Nvidia NVLink technology.
IBM is not the only deep learning software vendor getting closer to the hardware space. Intel and Google are also collaborating on some interesting initiatives. Microsoft is making inroads in the space as well, having recently announced that the OpenAI foundation selected Azure as its primary cloud provider. Azure offers Nvidia N-Series GPU instances, which are optimized for deep learning applications, and OpenAI is currently using thousands of those machines as part of its deep learning pilots.
Hardware is poised to play a pivotal role in this initial phase of the deep learning technology ecosystem. While the specifics are yet to be determined, there are some interesting ideas about the role of deep learning hardware in the near future.
Some Thoughts About the Immediate Future of Deep Learning Hardware
Nvidia and Intel Battle for Deep Learning Chip Supremacy
Vendors such as Nvidia and Intel have been taking very active steps to dominate the deep learning hardware space. Nvidia is closely collaborating with IBM and Microsoft, while Intel has centered its efforts on Google. As the deep learning hardware space evolves, we should expect to see more partnerships between the chip vendors and AI powerhouses such as Microsoft, Google, Amazon, and IBM.
PaaS Platforms Offer Specialized Deep Learning Infrastructure
GPU-optimized infrastructure will become a more common element of PaaS stacks. Following Azure's steps, Google Cloud, AWS, and IBM are likely to start offering compute instances powered by deep learning chips in order to execute deep learning programs efficiently.
Deep Learning Frameworks are Optimized for Specific Hardware
As deep learning hardware evolves, it is likely that frameworks such as TensorFlow, Caffe, Torch, and others will create versions optimized for specific chips. Those optimizations are likely to become relevant because deep learning hardware vendors can indirectly drive the adoption of deep learning stacks.
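To make the idea concrete, here is a toy Python sketch of the dispatch pattern behind such optimizations: the framework keeps a registry of kernels per hardware backend and routes each operation to the most specialized implementation available. The backend names and kernel functions are illustrative assumptions, not the actual internals of TensorFlow, Caffe, or Torch.

```python
# Toy sketch of hardware-specific kernel dispatch in a deep learning framework.
# Backend names ("cpu", "nvidia_gpu") and kernels are hypothetical.

def matmul_generic(a, b):
    """Portable fallback: naive matrix multiply on nested lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

def matmul_gpu_stub(a, b):
    """Stand-in for a vendor-optimized kernel (e.g., a cuBLAS call)."""
    # A real framework would hand this off to the GPU driver;
    # the stub just reuses the generic implementation.
    return matmul_generic(a, b)

# Registry: each supported backend maps to its preferred kernel.
KERNELS = {
    "cpu": matmul_generic,
    "nvidia_gpu": matmul_gpu_stub,
}

def matmul(a, b, backend="cpu"):
    # Unknown backends fall back to the portable kernel.
    kernel = KERNELS.get(backend, matmul_generic)
    return kernel(a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]], backend="nvidia_gpu"))
# → [[19, 22], [43, 50]]
```

The design choice worth noting is that the registry decouples the operation's interface from its implementation, which is what lets a chip vendor ship a faster kernel without changing user-facing framework code.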
Deep Learning Hardware is Optimized for Specific Deep Learning Frameworks
Complementing the previous point, some deep learning hardware toolkits are likely to optimize their architectures for the execution of programs built on specific deep learning frameworks. In addition, some specialized deep learning hardware toolkits might embed specific deep learning programs as part of their default configurations.
Deep Learning Hardware-Software Startups Emerge
The relationship between deep learning hardware and software is creating an opening for new startups that can combine both architectures to power the next generation of deep learning devices and applications.