Published in Towards AI · Inside FrontierMath: An Unprecedented Benchmark for Assessing Advanced Mathematical Reasoning in AI. The benchmark introduces evaluations that take AI mathematical reasoning to a new level. 1d ago
Published in Towards AI · Meet Magentic-One: Microsoft’s New Multi-Agent Framework for Solving Complex Tasks. Magentic-One is built on top of the AutoGen framework. Nov 12
How Did Google Build NotebookLM’s Cool Podcast Generation Features? The technique combines several models into a comprehensive audio generation approach. Nov 7
Published in Towards AI · Anthropic’s New Research Shows that AI Models Can Sabotage Human Evaluations. The research proposes a framework for assessing a model’s ability to subvert human evaluations. Oct 28
Inside Meta AI’s New Method to Build LLMs that Think Before They Speak. Thought Preference Optimization could be the new foundation for “Thinking LLMs”. Oct 22
Published in Towards AI · Inside OpenAI’s MLE-Bench: A New Benchmark for Evaluating Machine Learning Engineering Capabilities… The benchmark evaluates AI agents in areas such as pretraining and evaluation, among others. Oct 15
Published in Towards AI · Learn About Movie Gen: Meta AI’s Upcoming Video Generation Model. The new model represents an important milestone in video and audio generation. Oct 7
Published in Towards AI · Inside AlphaProteo, Google DeepMind’s New Model for Next-Generation Protein Design. The new model focuses on the design of protein binders, which could have major implications for modeling protein interactions. Oct 1
Published in Towards AI · Inside EUREKA: Microsoft Research’s New Framework for Evaluating Foundation Models. The framework provides an evaluation pipeline as well as a collection of benchmarks for assessing language and vision capabilities. Sep 23
Published in Towards AI · Inside DataGemma: Google DeepMind’s Initiative to Ground LLMs in Factual Knowledge. The model is grounded in Data Commons, a repository of factual data. Sep 16