Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up for it there as well. Please do; the team works hard on it:

From the Editor

Can AI models be hacked or manipulated by bad actors? Generative adversarial networks (GANs) are a prominent AI technique in which two neural networks compete against each other, each improving as a result. While GANs have many uses in mainstream deep learning scenarios, similar adversarial techniques can also be used to trick neural networks into producing specific outputs. For instance, imperceptible pixel-level manipulations of the images in a dataset can completely change the output of an image analysis model.
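To make the pixel-level idea concrete, here is a minimal sketch of an FGSM-style (fast gradient sign method) perturbation against a toy linear classifier. The model, weights, and inputs are all hypothetical illustrations, not the method from the IBM papers; for a linear score the gradient sign is simply the sign of the weights.

```python
import numpy as np

# Hypothetical toy linear "classifier": score = w . x; positive score -> class 1.
w = np.array([0.5, -0.3, 0.8])
x = np.array([0.1, 0.2, -0.05])  # clean input; its score is -0.05

def predict(x):
    return int(w @ x > 0)

# FGSM-style perturbation: nudge each input dimension by a small epsilon
# in the direction that increases the score. For a linear model that
# direction is just sign(w).
epsilon = 0.1
x_adv = x + epsilon * np.sign(w)

print(predict(x))      # 0: the clean input is classified negative
print(predict(x_adv))  # 1: the tiny perturbation flips the prediction
```

The perturbation changes each coordinate by at most 0.1, yet the predicted class flips, which is the essence of an adversarial example.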

Testing robustness against adversarial attacks is a new area that is gaining relevance within the deep learning community. Last year, IBM AI researchers open sourced the Adversarial Robustness Toolbox as a resource to help data scientists evaluate different adversarial attacks and defenses for a given neural network. This week, IBM published two new papers in the area of adversarial attacks in deep learning scenarios. Like any new technology trend, deep learning is opening the door to new forms of security attacks that we haven’t seen before. Evaluating the robustness of deep learning models to adversarial attacks will become increasingly relevant in data science solutions.
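In practice, evaluating robustness usually means measuring how quickly accuracy degrades as the attacker's budget (the allowed perturbation size) grows. The sketch below illustrates that idea on synthetic data with a toy linear model; all names and the setup are assumptions for illustration, not the Adversarial Robustness Toolbox API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data labeled by a linear rule. The "model" uses the
# same rule, so clean accuracy is 100% and any drop comes from the attack.
n, d = 200, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w > 0).astype(int)

def accuracy(X_in):
    return float(np.mean(((X_in @ w > 0).astype(int)) == y))

def attack(X_in, eps):
    # FGSM-style: push each point's score toward the wrong class.
    direction = np.where(y[:, None] == 1, -1.0, 1.0) * np.sign(w)[None, :]
    return X_in + eps * direction

# Robustness curve: accuracy as a function of the perturbation budget.
for eps in (0.0, 0.1, 0.3, 0.5):
    print(f"eps={eps}: accuracy={accuracy(attack(X, eps)):.2f}")
```

Because every point's margin shrinks linearly with the budget, the resulting accuracy curve is non-increasing in epsilon; plotting curves like this is a common way to compare defenses.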

Now let’s take a look at the core developments in AI research and technology this week:


IBM Research published two different papers about adversarial attacks in neural networks.

>Read more in these blog posts from IBM Research about measuring adversarial robustness and about attacks against convolutional neural networks.

Google AI researchers published a paper introducing Transformer-XL, a new architecture that enables natural language understanding beyond a fixed-length context.

>Read more in this blog post from the Google AI team

Facebook AI Research (FAIR) unveiled ZeroSpeech 2019, a challenge for AI models that learn speech like children do.

>Read more in this blog post from the FAIR team

Cool Tech Releases

Uber open sourced AresDB, a new real-time analytics engine that leverages GPUs for scalability and parallelization.

>Read more in this blog post from the Uber engineering team

IBM Research released a new dataset for increasing fairness in machine learning models.

>Read more in this blog post from IBM Research

Microsoft announced a series of minor releases focused on helping customers streamline the adoption of AI solutions.

>Read more in this blog post from Microsoft Research

AI in the Real World

The U.N. World Intellectual Property Organization published a study indicating that the United States and China are leading the race for AI.

>Read more about the study in this coverage from Reuters

Researchers from Columbia University have developed an AI model that can translate brain activity into words.

>Read more in this coverage from Fortune

Researchers from the Massachusetts Institute of Technology (MIT) developed a model that can identify when autonomous AI systems are prone to causing dangerous errors in the real world.

>Read more in this article from MIT News

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
