The Sequence Scope: AGI and Human Alignment
A weekly newsletter, read by over 120,000 subscribers, covering impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.
📝 Editorial: AGI and Human Alignment
The quest to achieve artificial general intelligence (AGI) is one of the most fascinating endeavors in the technology industry, and one that is making rapid progress. Many experts believe we might be just two or three technological breakthroughs away from the first forms of AGI. Models like GPT-3, AlphaFold, and DALL-E already exhibit early signs of capabilities that resemble human cognition. Despite this progress, the quest for AGI still faces existential challenges, chief among them alignment with human values and intent. If we can't guarantee that AGI systems are aligned with human values and do what humans actually want, we risk creating systems that pose fundamental dangers to humanity.