OpenAI and DeepMind have made available a set of training tools that can, to some extent, be used by other artificial intelligence (AI) solutions. The releases attempt to address the increasingly complex challenges that AI developers face when building comprehensive training experiences for AI solutions.
OpenAI Universe allows developers to train AI applications using human-centric interfaces such as websites, games and other applications. The thought process is that as AI systems learn more like humans, they can better resemble human intelligence. At a high level, OpenAI Universe is an example of a platform that is trying to build generally intelligent systems that can master different types of knowledge.
General AI Training vs. Specific Intelligence
One of the biggest challenges of AI training is that training processes are built for specific AI applications and focused on the application’s specific domain. Obviously, that approach constrains training models to very narrow scenarios and prevents any kind of knowledge reuse. From an anthropological standpoint, those models of training are the antithesis of human learning and intelligence development processes.
Platforms such as OpenAI Universe are trying to build generic AI training infrastructures that can be adapted to different types of AI knowledge. That model is a stepping stone towards building more “generally intelligent” systems.
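To make the idea of a generic training infrastructure concrete, the sketch below shows the kind of environment-agnostic interface popularized by OpenAI Gym, which Universe builds on: any task that exposes a reset/step contract can be trained or evaluated by the same loop. The environment, policy and class names here are illustrative assumptions, not the actual Universe API.

```python
import random


class GuessTheNumber:
    """Toy task exposing a Gym-style reset/step interface (illustrative, not Universe's API).
    The agent must guess a hidden integer; observations hint 'higher' or 'lower'."""

    def __init__(self, low=0, high=9, max_steps=20, seed=None):
        self.low, self.high, self.max_steps = low, high, max_steps
        self.rng = random.Random(seed)

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.target = self.rng.randint(self.low, self.high)
        self.steps = 0
        return "start"

    def step(self, action):
        """Apply an action; return (observation, reward, done)."""
        self.steps += 1
        if action == self.target:
            return "correct", 1.0, True
        done = self.steps >= self.max_steps
        hint = "higher" if action < self.target else "lower"
        return hint, -1.0, done


def run_episode(env, policy):
    """Generic loop: works with ANY environment exposing reset/step,
    which is what makes the training infrastructure reusable."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


def random_policy(obs, _rng=random.Random(0)):
    """A trivial baseline; a real agent would learn from (obs, reward) pairs."""
    return _rng.randint(0, 9)


if __name__ == "__main__":
    print(run_episode(GuessTheNumber(seed=42), random_policy))
```

Because `run_episode` depends only on the reset/step contract, the same loop (and the same agent code) can be pointed at a website, a game or any other task wrapped behind that interface, which is the reuse that application-specific training pipelines forgo.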
Challenges of AI Training
If we assume that AI is going to be the foundational component of the next generation of software applications, and that companies are going to be building many AI solutions in the near future, then building individual training processes for each application is a very inefficient strategy that is unlikely to scale well. From that perspective, there are several challenges that we can identify in the current approach to AI training. Let’s explore a few:
— Knowledge Reusability: The current generation of AI systems uses representations of knowledge that are built almost at the application code level. As a result, there is minimal or no reuse of knowledge between AI systems, even when they operate in the same domain.
— Cost: Training domain-specific AI systems can be incredibly expensive. In today’s AI ecosystem, companies such as IBM and Google are spending fortunes recruiting industry experts who can efficiently train AI solutions. That approach is prohibitively expensive for most companies that don’t enjoy IBM or Google’s healthy balance sheets.
— Tooling: The AI training tooling ecosystem is still in its infancy and is growing slowly compared to AI frameworks and platforms. As a result, teams building AI solutions spend considerable time and effort rebuilding basic training tools over and over again. Most of those AI training tools are not reusable across systems, which results in a very limited training experience for the ecosystem in general.
— Knowledge Monitoring: Training an AI system is a continuous process. However, most AI solutions struggle to monitor the quality and efficiency of their knowledge and the improvements produced by training. As a result, most systems evolve without quantifiable methods to monitor and evaluate the efficiency of their knowledge.
— Lack of Standards: There is a strong consensus within the AI community about the models that can be used to represent knowledge in an AI system, but that consensus hasn’t translated into standards that can be used across different AI platforms. Consequently, every AI system represents knowledge using its own proprietary models, which are hardly reusable.
These are just some of the challenges of training AI systems. In a future post, we will discuss some possible solutions and technologies that are already tackling these challenges.