TheSequence Scope: Can Machine Learning Write Better Machine Learning?

This is a summary of the most important published research papers, released technology and startup news in the AI ecosystem in the last week. This compendium is part of TheSequence newsletter. Give it a try by subscribing below:

📝 Editorial

Writing machine learning programs remains a relatively subjective process. Given a problem, we trust data scientists and machine learning engineers to select the best models and architectures, but how do we know those choices are correct? Data scientists have their own preferences and biases, which can influence the machine learning models they apply to a specific problem. The emerging thinking in the space is that we can use machine learning itself to build better machine learning models.

Writing machine learning with machine learning is not a problem with a single solution. Methods such as neural architecture search (NAS, covered in Edge#4) try to select the best model for a given problem. Meta-learning focuses on creating models that can “learn to learn,” while program synthesis tackles the difficult challenge of finding the best machine learning programs for a specific dataset. Just this week, Google published research outlining TF-Coder, a new framework that can generate TensorFlow tensor transformations from input-output examples. One thing is certain: in the future, machine learning will help us write better machine learning.
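The program synthesis idea mentioned above can be illustrated with a toy example: enumerate a small space of candidate operations and return the one that reproduces a user-supplied input-output example. This is only a minimal sketch of the general enumerative-search approach behind tools like TF-Coder; the function names and the candidate list here are hypothetical and do not reflect TF-Coder's actual API.

```python
import numpy as np

def synthesize(input_tensor, output_tensor, candidates):
    """Return the name of the first candidate op reproducing the example."""
    for name, op in candidates:
        try:
            if np.array_equal(op(input_tensor), output_tensor):
                return name
        except Exception:
            continue  # candidate not applicable to this input shape
    return None

# Input-output example: the user wants a transpose but doesn't know the op.
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([[1, 4], [2, 5], [3, 6]])

ops = [
    ("reverse_rows", lambda t: t[::-1]),
    ("transpose", lambda t: t.T),
    ("flatten", lambda t: t.ravel()),
]

print(synthesize(x, y, ops))  # -> "transpose"
```

Real systems search vastly larger operation spaces and compose multiple operations, using pruning and value-based heuristics to keep the search tractable.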

🗓 Next week in TheSequence Edge

Aug 4, Edge#9: the concept of parallel training; the famous OpenAI research paper that proposes a metric to measure training scalability; deep dive into Horovod, the parallel training framework created by Uber.

Aug 6, Edge#10: the concept of feature extraction; feature visualization method known as Activation Atlases; review of the HopsWorks feature store platform.

To stay up to date and receive TheSequence Edge every Tuesday and Thursday, please consider joining our community. Until August 15 you can subscribe with a permanent 20% discount. The Sunday edition of TheSequence Scope is always free.

Now, let’s review the most important developments in the AI industry this week.

🔎 ML Research

Understanding the Success of Deep Learning

Researchers from MIT published a paper providing theoretical insights about what makes deep learning models successful ->read more in this analysis from MIT News

Learning Poker from Scratch

Facebook AI Research published a paper introducing Recursive Belief-based Learning (ReBeL), a reinforcement learning algorithm that achieved human-level performance in poker with minimal domain knowledge ->read more in the research paper

Generating TensorFlow Programs with TF-Coder

Google researchers published a paper introducing TF-Coder, a framework that generates TensorFlow programs from input-output examples, in some cases outperforming human programmers ->read more in the research paper

🤖 Cool AI Tech Releases

Model Card Toolkit for model transparency

Google AI introduced the Model Card Toolkit, which provides a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation. It gives a detailed overview of a model’s suggested uses and limitations ->read more on Google AI blog

Introducing ScaNN for more efficient search

Google open-sourced ScaNN, a vector similarity search library that outperformed eleven other libraries in benchmarks, handling roughly twice as many queries per second as the next-fastest library at a given accuracy ->read more on Google AI blog
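To make the item above concrete, here is a minimal brute-force sketch of the problem a vector similarity search library solves: finding the database vectors with the highest inner-product score against a query. This exact-search baseline is only for illustration; ScaNN's actual speedups come from techniques like anisotropic quantization and partition pruning, and its real API differs from this sketch.

```python
import numpy as np

def top_k_inner_product(database, query, k=2):
    """Return indices of the k database rows with the largest dot product."""
    scores = database @ query          # one score per database vector
    return np.argsort(-scores)[:k]     # highest-scoring indices first

db = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.7, 0.7],
])
q = np.array([1.0, 0.1])

print(top_k_inner_product(db, q))  # -> [0 2]
```

Brute-force search like this scales linearly with database size, which is why approximate methods such as ScaNN matter once collections reach millions of embeddings.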

PyTorch for Windows

Microsoft is taking ownership of the development and maintenance of the PyTorch build for Windows ->read more in this blog post from the PyTorch team

💸 Money in AI

  • Accounting startup Candis raised almost $14 million in its latest funding round. Its platform for automated accounting and payment processes uses algorithms to import files, extract data, approve invoices, and handle exporting. It sounds mundane, but it saves a lot of time and improves accuracy.
  • Two data-driven construction reporting companies closed rounds this week. $15.9 million for OpenSpace and $16 million for Buildots. Both companies use 360-degree cameras, which are strapped to builders’ and managers’ hats to document the evolution of a site.
  • AI Foundation, a dual commercial and nonprofit enterprise, closed a $17 million funding round. It develops AI avatars that can replicate one’s personality, be trained to complete tasks, or play the role of an assistant or advisor. An avatar can be a replica of you or of a famous person.
  • Computer vision startup Advertima raised a ~$17.5 million Series A. Its machine learning platform and in-store sensors help physical retail stores ‘upgrade’ the shopping experience via real-time shopper behavior analytics.
  • AI-powered vision startup Instrumental closed a $20 million Series B. It helps detect manufacturing anomalies, using a combination of cameras and code.
  • A conversation intelligence startup raised a $45 million round. Its AI and proprietary natural language processing (NLP) algorithms analyze calls and extract insights to help with sales.
  • A serious $51 million round was raised by another AI-powered vision startup this week. Density uses infrared sensors to count people and analyze crowd behavior, helping companies understand which parts of their offices and other spaces are used more and which are used less.
  • Home fitness startup Tempo just closed a $60 million round. Its AI-powered gym uses 3D infrared sensors that scan users’ movements 30 times per second for performance tracking and better feedback.

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
