
What Borges Can Teach Us About Overfitting and Underfitting in Deep Learning Models

Selective forgetting and memory prioritization are key elements of learning.

Jesus Rodriguez
8 min read · Jul 3, 2020

This is an aggregation of several posts I published last year about overfitting and underfitting.

Overfitting and underfitting are two of the biggest challenges in modern deep learning solutions. I often like to compare deep learning overfitting to human hallucinations, as the former occurs when algorithms start inferring non-existent patterns in datasets. Despite its importance, there is no easy solution to overfitting, and deep learning applications often need to use techniques very specific to individual algorithms in order to avoid overfitting behaviors. This problem gets even scarier if you consider that humans are also incredibly prone to overfitting. Just think about how many stereotypes you used in the last week. Yeah, I know….
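To make the point about overfitting-mitigation techniques concrete, here is a minimal sketch of two widely used ones, dropout and early stopping, in Keras. The toy dataset, layer sizes, and hyperparameters are illustrative assumptions, not examples from this article.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in dataset (hypothetical shapes): 1,000 samples, 20 features.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    # Dropout randomly zeroes activations during training,
    # discouraging the network from memorizing the training set.
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving,
# the point at which further fitting tends to become memorization.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```

Neither technique is universal: the dropout rate and stopping patience above are guesses that would be tuned per model, which is exactly the algorithm-specific effort described above.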

Unquestionably, our hallucinations, or illusions of validity, are present somewhere in the datasets used to train deep learning algorithms, which creates an even more chaotic picture. Intuitively, we think about data when working on deep learning algorithms, but there is another equally important and often forgotten element of deep learning models: knowledge. In the context of deep…
