
Last Week in AI

Jesus Rodriguez
3 min read · Feb 9, 2020


Every week, Invector Labs publishes a newsletter that covers the most recent developments in AI research and technology. You can find this week’s issue below, where you can also sign up. Please do so; our team works really hard on it:

From the Editor: AI that Fools AI

Adversarial attacks are a common mechanism for evaluating the robustness of neural networks. Conceptually, an adversarial attack crafts data samples, often with the help of another neural network, that cause a trained model to make incorrect predictions. For almost every machine learning model we know of, there are numerous adversarial models that can craft attacks that disrupt its performance. Recently, researchers from MIT created TextFooler, a framework that uses adversarial attacks to trick natural language models such as the ones used in Siri and Alexa.
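To make the idea concrete, here is a minimal sketch of one of the simplest adversarial attacks, the fast gradient sign method (FGSM), written in PyTorch. The model, inputs, and epsilon value are illustrative placeholders; TextFooler itself uses a more elaborate word-substitution strategy for text rather than this gradient-based perturbation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method.

    Perturbs the input in the direction that most increases the loss,
    which often flips the model's prediction while the change remains
    small to a human observer.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient and keep values in a valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```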

While adversarial attacks can be a vulnerability for AI models, they are also an effective mechanism for evaluating their robustness. Companies such as IBM have released frameworks that use adversarial attacks to evaluate and increase the robustness of machine learning models. These techniques have become a mandatory best practice in modern AI solutions, as the alternative could lead to catastrophic results. It will be interesting to see how research in adversarial neural networks unlocks a new frontier in AI security and robustness.
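As a sketch of how an attack doubles as a robustness check, the hypothetical helper below reuses the fgsm_attack function from the earlier example to compare a model’s accuracy on clean inputs against its accuracy on perturbed ones; a large gap signals a brittle model. Toolkits such as IBM’s Adversarial Robustness Toolbox package this kind of evaluation, but the code here is only an assumed, simplified illustration, not that library’s actual API.

```python
def robustness_gap(model, loader, epsilon=0.03):
    """Compare clean vs. adversarial accuracy over a data loader."""
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        # Accuracy on unmodified inputs.
        clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        # Accuracy on adversarially perturbed versions of the same inputs.
        x_adv = fgsm_attack(model, x, y, epsilon)
        adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, adv_correct / total
```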


Written by Jesus Rodriguez

CEO of IntoTheBlock, President of Faktory, President of NeuralFabric and founder of The Sequence, Lecturer at Columbia University, Wharton, Angel Investor...
