The Fragmentation Problem in the Deep Learning Space

Jesus Rodriguez
3 min read · May 8, 2017

Deep learning technologies have been evolving at an incredibly rapid pace over the last couple of years. Part of that evolution has produced a large number of deep learning frameworks, each with its own merits and a decent level of adoption. Despite the many benefits that those technologies are bringing to the deep learning space, they are also causing a level of fragmentation that can be harmful to the market in the long term.

By the standards of the software industry, deep learning is a very young discipline. And yet, the number of frameworks in the space seems to have exploded out of control. If we just focus on open source deep learning frameworks, we can count a very large number of relevant technologies such as TensorFlow, Torch, Theano, Caffe, Microsoft Cognitive Toolkit, DeepLearning4J, Chainer, MXNet, PaddlePaddle, Keras and many others. Each one of those frameworks has achieved enough credibility and market traction to make it a relevant option for customers. However, if you are a developer or company embarking on a deep learning initiative, the task of selecting a technology stack can be nothing short of a nightmare. Ah yes, I purposely omitted the cloud deep learning platforms provided by incumbents such as Microsoft, Google or Amazon because I think that those will eventually converge to become runtimes for the open source deep learning frameworks. That’s a discussion for another post ;)

What is the Real Problem?

In principle, there is nothing wrong with the proliferation of so many open source deep learning frameworks. Quite the opposite, the innovation created by those stacks is helping to push the deep learning space forward. While the benefits are certainly unquestionable, the increasing fragmentation of the deep learning technology landscape is also concerning.

For starters, deep learning frameworks are emerging and evolving many times faster than their corresponding runtime environments. As a result, the vast majority of deep learning stacks today lack key capabilities such as training, management and optimization tools, as well as robust runtime environments to execute deep learning models. Secondly, I feel that more time is devoted to creating new deep learning frameworks that look just like the existing ones than to rapidly advancing the capabilities of a smaller number of leading deep learning stacks.

A Necessary Step to Live in a Fragmented Deep Learning Ecosystem

We are still in the very early stages of the deep learning revolution and the industry needs to find its own path forward. However, in all the permutations of the future of deep learning technologies I can think of, there are a few common denominators.

We need to rapidly advance the capabilities of runtime stacks and tools that can interoperate with several of the leading deep learning frameworks in the market. Specifically, I am referring to runtime environments that can execute and scale programs written in frameworks such as TensorFlow, Theano, Torch and others. Similarly, the ecosystem desperately needs better tools for training, optimizing and monitoring deep learning models authored on those frameworks. Better runtimes and tooling that work across different deep learning stacks are a key step for the evolution of the space. If we use an analogy from another highly fragmented space (programming languages), we need to build the “Atlassian of the Deep Learning market”.
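To make the interoperability point a bit more concrete, one small existing example is Keras’s pluggable backend: the same model definition can execute on TensorFlow or Theano simply by switching a configuration setting before the library is imported. The sketch below is illustrative only; the layer sizes and the synthetic data are placeholders, not a recommendation for any particular workload.

```python
# A minimal sketch of cross-framework interoperability as it exists today:
# Keras can run the same model definition on TensorFlow or Theano by
# switching the backend, e.g. via the KERAS_BACKEND environment variable.
import os
os.environ.setdefault("KERAS_BACKEND", "tensorflow")  # or "theano"

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Define a small binary classifier once; the chosen backend executes it.
model = Sequential([
    Dense(32, activation="relu", input_shape=(16,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train briefly on synthetic data purely to illustrate the workflow.
X = np.random.rand(128, 16)
y = np.random.randint(0, 2, size=(128, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

Keras, of course, only abstracts a couple of backends; the broader need is for runtimes and tooling that offer this kind of portability across the whole landscape of leading frameworks.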
