Facebook ParlAI and the Evolution of Artificial Intelligence Testing

Jesus Rodriguez
2 min read · May 22, 2017

Testing and training are two of the most important aspects of artificial intelligence (AI) solutions. While a lot of effort has gone into launching new AI and deep learning frameworks, testing platforms and frameworks have not kept pace.

Last week, Facebook took a significant step toward improving the ecosystem of AI testing technologies with the open source release of ParlAI, a framework for testing, training and researching conversational models. ParlAI joins similar solutions in the AI industry, such as OpenAI’s Gym or DeepMind Lab, that aim to build an ecosystem that streamlines the testing of AI applications.

ParlAI is the brainchild of the Facebook AI Research lab (FAIR). Given Facebook’s investment in bots and conversational technologies, it should not be a surprise that ParlAI’s initial focus has been on dialog and natural language processing techniques. In the future, we should expect ParlAI to expand into other AI areas.

ParlAI brings together several capabilities that simplify the training and testing of AI models. The platform gives AI researchers quick access to highly curated datasets such as WebQuestions, the bAbI tasks, SQuAD and several others. The framework also makes it relatively simple for AI researchers to incorporate new algorithms that can be tested using the platform. ParlAI also provides integration with Amazon Mechanical Turk, which enables the use of human tasks to curate the data sources consumed by AI models. A minimal example of the basic workflow is sketched below.
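As a rough illustration, the following sketch follows the usage pattern from ParlAI’s early documentation: a teacher serving one of the bundled datasets is paired with a simple agent, and the world’s parley loop steps through the dialog. The specific names used here (RepeatLabelAgent, the babi:task1k:1 task string) come from ParlAI’s examples at the time of release and may differ in later versions of the framework.

```python
# Minimal ParlAI loop, following the project's early documented example.
# Run, for instance, as: python display_babi.py -t babi:task1k:1
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent

# Parse standard ParlAI options (task, datatype, etc.) from the command line.
parser = ParlaiParser()
opt = parser.parse_args()

# A trivial agent that simply repeats the label; useful for inspecting a dataset.
agent = RepeatLabelAgent(opt)

# Pair the agent with the teacher for the requested task (e.g. a bAbI task).
world = create_task(opt, agent)

# Step through a few dialog exchanges and print them.
for _ in range(10):
    world.parley()
    print(world.display())
    if world.epoch_done():
        break
```

The same loop structure is used whether the agent is a trivial baseline or a trained dialog model, which is what makes it easy to swap new algorithms into the platform.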

One area where ParlAI excels is bridging natural language processing research and practical availability. While many new conversational AI techniques are pioneered by academic research, their scope is often too narrow for them to be included in NLP stacks. ParlAI enables the testing and benchmarking of algorithms as well as their integration with other models to produce comprehensive conversational solutions.

Unlike solutions such as OpenAI’s Gym or DeepMind Lab, which focus on general reinforcement learning models, ParlAI emphasizes supervised conversational AI algorithms. This level of focus is likely to pay dividends for ParlAI in the short term, as conversational applications are becoming the fastest growing discipline within the deep learning space.

5 Key Capabilities of AI Test Frameworks

As AI continues evolving, we are likely to see new testing frameworks similar to ParlAI. What are the key features to look for in this type of framework? Here is an initial selection:

1 — Curated Datasets: AI testing frameworks should provide a portfolio of curated datasets that can be easily used in AI models.

2 — Interoperability with AI Application Development Frameworks: AI testing frameworks should seamlessly interoperate with AI application development frameworks such as Bonsai, TensorFlow, Theano and others.

3 — Curated Algorithms: AI testing frameworks should enable the addition of new algorithms that can be tested and trained.

4 — Algorithm Performance Monitoring: AI testing frameworks should provide mechanisms for monitoring and benchmarking the performance of AI models against specific datasets (a rough sketch of such a benchmarking loop follows this list).

5 — Collaboration: AI testing frameworks should facilitate the collaboration between AI researchers in order to optimize models and datasets.
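To make the benchmarking point concrete, here is a hypothetical sketch of what such a harness could look like. The Example and Model types, the evaluate function and the accuracy metric are illustrative names introduced here for the sake of the example; they are not part of ParlAI or any specific framework.

```python
# Hypothetical benchmarking harness; all names here are illustrative.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Example:
    text: str   # input utterance or question
    label: str  # expected response or answer


class Model(Protocol):
    def predict(self, text: str) -> str:
        ...


def evaluate(model: Model, dataset: Iterable[Example]) -> dict:
    """Run the model over a curated dataset and report a simple accuracy score."""
    total = correct = 0
    for example in dataset:
        total += 1
        if model.predict(example.text) == example.label:
            correct += 1
    return {"examples": total, "accuracy": correct / max(total, 1)}
```

The value of a testing framework is precisely that researchers do not have to rewrite this kind of loop for every new model or dataset; the framework standardizes it and reports comparable metrics.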
