The Sequence Scope: Improving Language Models by Learning from the Human Brain
Weekly newsletter with over 120,000 subscribers that discusses impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.
📝 Editorial: Improving Language Models by Learning from the Human Brain
For the last few years, language models have been the hottest area in the deep learning space. Models like OpenAI’s GPT-3, NVIDIA’s MT-NLG, and Google’s Switch Transformer have achieved milestones in natural language understanding (NLU) that were unimaginable just a few years ago. However, that generation of models remains a collection of sophisticated machines for predicting the next word given a specific text. The next generation of NLU models is expected to come closer to resembling human cognitive abilities. Getting there, however, will require a deep understanding of how the human brain processes language, which in turn demands strong collaboration between leading researchers in ML and neuroscience.
Meta AI Research (FAIR) has been one of the top AI research labs embarking on initiatives to understand the human brain and improve NLU models. FAIR announced a long-term collaboration with…