Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below. Please sign up for it; our team worked really hard on this:
From the Editor: Do Neural Networks Hallucinate?
Overfitting is one of the best-known challenges of artificial intelligence (AI) applications. Conceptually, overfitting describes the scenario in which a neural network infers patterns from its training dataset that don’t exist in the real world. Not surprisingly, overfitting is often compared to hallucination, and it has become a well-accepted concept in machine learning systems. Recently, researchers from the Massachusetts Institute of Technology (MIT) published a study that challenges the notion that neural networks hallucinate.
In their groundbreaking study, the MIT researchers showed empirical evidence that many of the results from image classifiers that are labeled as hallucinations only look like hallucinations to people. In reality, the AI systems are identifying tiny details that are imperceptible to the human eye. The explanation is obviously hard to qualify, as we use subjective criteria to identify objects. When the hallucinated results were analyzed in detail, it turned out that they were correct based on the specific aspects of the images that were analyzed by the algorithms. From that perspective, many of the hallucinations in neural networks are more a result of incorrect training datasets than of incorrect reasoning in the algorithms. Certainly fascinating….
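To make the overfitting idea concrete, here is a minimal, self-contained Python sketch (the dataset, noise rate, and both models are purely illustrative, not from the MIT study): a model that memorizes its noisy training set scores perfectly on that set but degrades on fresh data, because it has learned the noise along with the real pattern.

```python
import random

random.seed(0)

# Toy data: the true rule is "label is 1 when x > 0.5",
# but 20% of the labels are flipped by noise.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.2:   # label noise
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(50), make_data(1000)

# Overfit model: memorizes every training point exactly
# (1-nearest-neighbour), so it also memorizes the noise.
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Simple model: the true underlying rule, ignoring the noise.
def threshold(x):
    return 1 if x > 0.5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print("memorizer on train:", accuracy(memorizer, train))  # perfect
print("memorizer on test: ", accuracy(memorizer, test))   # degrades
print("threshold on test: ", accuracy(threshold, test))
```

The memorizer is analogous to an overfit network: it reproduces the training set flawlessly, including the patterns that don’t exist in the real world, and pays for it on unseen data.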
Now let’s take a look at the core developments in AI research and technology this week:
Microsoft researchers published a paper proposing a Bayesian method for addressing the balance between exploration and exploitation in deep learning systems.
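The Microsoft paper’s specific method isn’t reproduced here; as a hedged illustration of the general Bayesian approach to the exploration/exploitation trade-off, the sketch below uses classic Thompson sampling on a two-armed Bernoulli bandit (the arm probabilities and round count are made up for the example).

```python
import random

random.seed(1)

# Two arms with unknown payout rates; these values are illustrative.
true_probs = [0.3, 0.7]

# Beta(1, 1) priors over each arm's payout probability,
# tracked as success/failure counts.
successes = [1, 1]
failures = [1, 1]
pulls = [0, 0]

for _ in range(2000):
    # Sample a plausible payout rate from each arm's posterior...
    samples = [random.betavariate(successes[i], failures[i]) for i in range(2)]
    # ...and play the arm whose sample is highest. Uncertain arms
    # occasionally win the draw (exploration); well-known good arms
    # usually win it (exploitation).
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_probs[arm] else 0
    successes[arm] += reward
    failures[arm] += 1 - reward
    pulls[arm] += 1

print("pulls per arm:", pulls)  # the better arm should dominate
```

The appeal of the Bayesian framing is that exploration falls out of the posterior uncertainty itself, with no separate exploration schedule to tune.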
AI researchers from the Massachusetts Institute of Technology developed a new technique to improve the robustness of neural networks in real-world scenarios.
Uber published a research paper outlining their implementation of the fascinating Lottery Ticket Hypothesis to improve the training of neural networks.
Cool Tech Releases
Microsoft open sourced InterpretML, a library for improving the interpretability of machine learning models.
An AutoML solution created by Google successfully competed in a Kaggle tournament for structured data analysis.
AI in the Real World
Researchers from MIT published one of the most groundbreaking AI studies in recent years, proposing a smarter system for training neural networks.
Google and the US Army published a study challenging the idea that neural networks hallucinate.
A recent report forecasts that the AI chip market will grow to over $80 billion by 2027.