
The Sequence Scope: Improving Language Models by Learning from the Human Brain

A weekly newsletter with over 120,000 subscribers, covering impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.

Jesus Rodriguez
4 min read · May 1, 2022

📝 Editorial: Improving Language Models by Learning from the Human Brain

For the last few years, language models have been the hottest area in the deep learning space. Models like OpenAI’s GPT-3, NVIDIA’s MT-NLG, and Google’s Switch Transformer have achieved milestones in natural language understanding (NLU) that were unimaginable just a few years ago. However, that generation of models remains, at its core, a set of sophisticated machines for predicting the next word given a specific text. The next generation of NLU models is expected to come closer to resembling human cognitive abilities. Getting there, however, will require a deep understanding of how the human brain processes language, which in turn demands strong collaboration between leading researchers in ML and neuroscience.
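To make the next-word-prediction point concrete, here is a minimal sketch of what these models do under the hood. It uses the open-source Hugging Face transformers library and GPT-2 as an illustrative stand-in; the larger models mentioned above work the same way, just at a far greater scale.

```python
# A minimal sketch of "predicting the next word given a specific text",
# using GPT-2 as a small, freely available stand-in for larger models.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The human brain processes language"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's entire output is a score for every token in the vocabulary;
# the "prediction" is simply the highest-scoring next token.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```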

Meta AI Research (FAIR) has been one of the top AI research labs embarking on initiatives to understand the human brain and improve NLU models. FAIR announced a long-term collaboration with…

Written by Jesus Rodriguez

CEO of IntoTheBlock, President of Faktory, President of NeuralFabric, and founder of The Sequence, Lecturer at Columbia University and Wharton, Angel Investor...
