Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up for it as well. Please do; our team works really hard on it:

From the Editor: Do Neural Networks Hallucinate?

Overfitting is one of the best-known challenges in artificial intelligence (AI) applications. Conceptually, overfitting describes the scenario in which neural networks infer patterns from training datasets that don’t exist in the real world. Not surprisingly, overfitting is often compared to hallucination and has been a well-accepted concept in machine learning systems. Recently, researchers from the Massachusetts Institute of Technology (MIT) published a study that challenges the notion that neural networks hallucinate.
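To make the idea concrete, here is a minimal NumPy sketch of overfitting (my own illustration, not from the MIT study): a high-degree polynomial fitted to a handful of noisy points nearly memorizes the training data while generalizing poorly to held-out points drawn from the same curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from the same underlying function, split into train and test
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 20)
x_test = np.linspace(0.025, 0.975, 20)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, 20)

# A degree-15 polynomial has almost enough capacity to memorize 20 points,
# so it fits the noise in the training set rather than the true pattern
coeffs = np.polyfit(x_train, y_train, deg=15)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# Training error is tiny; test error is dominated by patterns the model
# "hallucinated" from noise, so it is substantially larger
print(f"train MSE: {train_mse:.4f}, test MSE: {test_mse:.4f}")
```

Lowering the degree (say, to 3) shrinks the gap between the two errors, which is exactly the intuition behind regularization and capacity control.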

In their groundbreaking study, the researchers showed empirical evidence that many image-classifier outputs labeled as hallucinations only look like hallucinations to people. In reality, the AI systems are identifying tiny details that are imperceptible to the human eye. The explanation is admittedly hard to qualify, since we use subjective criteria to identify objects. When the hallucinated results were analyzed in detail, it turned out that they were correct based on the specific aspects of the images that were analyzed by the algorithms. From that perspective, many of the hallucinations in neural networks are a result of incorrect training datasets rather than incorrect reasoning in the algorithms. Certainly fascinating….

Now let’s take a look at the core developments in AI research and technology this week:

AI Research

Microsoft Researchers published a paper proposing a Bayesian method for addressing the balance between exploration and exploitation in deep learning systems.

>Read more in this blog post from Microsoft Research
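As background on the exploration–exploitation balance, a classic Bayesian approach is Thompson sampling (this is a generic sketch of mine, not the method from the Microsoft paper): maintain a posterior over each action’s reward, sample from the posteriors, and act greedily with respect to the samples.

```python
import random

def thompson_sampling(true_probs, steps=5000, seed=42):
    """Bernoulli bandit: Beta posteriors per arm, pull the arm whose
    posterior sample is highest, then update that arm's counts."""
    rng = random.Random(seed)
    n = len(true_probs)
    wins = [1] * n    # Beta(1, 1) uniform priors
    losses = [1] * n
    for _ in range(steps):
        # Sample a plausible success rate for each arm from its posterior
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(n)]
        arm = samples.index(max(samples))
        # Simulate a Bernoulli reward and update the chosen arm's posterior
        if rng.random() < true_probs[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    # Number of times each arm was pulled (subtract the prior pseudo-counts)
    return [wins[i] + losses[i] - 2 for i in range(n)]

pulls = thompson_sampling([0.2, 0.5, 0.8])
print(pulls)
```

Early on the posteriors are wide, so all arms get tried (exploration); as evidence accumulates, the best arm’s samples dominate and it gets pulled most often (exploitation).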

AI researchers from the Massachusetts Institute of Technology developed a new technique to improve the robustness of neural networks for real world scenarios.

>Read more in this article from MIT News

Uber published a research paper outlining their implementation of the fascinating Lottery Ticket Hypothesis to improve the training of neural networks.

>Read more in this blog post from the Uber Engineering team

Cool Tech Releases

Microsoft open sourced InterpretML, a library for improving the interpretability of machine learning models.

>Read more in this blog post from Microsoft Research

An AutoML solution created by Google successfully competed in a Kaggle tournament for structured data analysis.

>Read more in this blog post from Google Research

AI in the Real World

Researchers from MIT published one of the most groundbreaking studies in AI in recent years proposing a smarter system for training neural networks.

>Read more in this coverage from MIT News

Google and the US Army published a study challenging the idea that neural networks hallucinate.

>Read more in this coverage from Wired Magazine

A recent report forecasts that the AI chip market will grow to over $80 billion by 2027.

>Read more in this coverage from Yahoo Finance

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
