An overview of the role of AutoML and representation learning in predictive models for crypto-assets.

Source: https://trading-education.com/bitcoin-price-prediction

In a recent webinar, I explored some of the most ambitious ideas we are pursuing in our journey of creating predictive models for crypto-assets at IntoTheBlock. In general, we regularly evaluate modern deep learning ideas pioneered by large tech companies such as Microsoft, Google, Uber, Amazon and many others, and apply them to create models that forecast the price of different crypto-assets. We believe these types of deep learning models have the best opportunity to discover alpha in crypto-assets over the long run.

Obviously, most of the things we try fail, but we continue making incremental improvements and creating more efficient predictive models. Today, I would like to share two new ideas that we have adopted with decent success. …


Google’s Switch Transformer model could be the next breakthrough in this area of deep learning.

Source: https://arxiv.org/pdf/2101.03961.pdf

I recently started an AI-focused educational newsletter that already has over 65,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:

OpenAI’s GPT-3 is, arguably, the most famous deep learning model created in the last few years. One of the things that impresses most about GPT-3 is its size. In some sense, GPT-3 is nothing but GPT-2 with many more parameters. …
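The core "switch" idea from the paper — route each token through a single expert network chosen by a learned gate — is compact enough to sketch in plain numpy. The shapes and the toy random gate below are purely illustrative, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def switch_route(tokens, gate_w, expert_ws):
    """Route each token to exactly one expert via top-1 gating (the 'switch' idea)."""
    logits = tokens @ gate_w                           # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)          # softmax over experts
    choice = probs.argmax(axis=1)                      # one expert per token
    out = np.empty_like(tokens)
    for e, w in enumerate(expert_ws):
        mask = choice == e
        # Scale each expert output by its gate probability, so the
        # router receives gradients even though routing is hard (top-1)
        out[mask] = (tokens[mask] @ w) * probs[mask, e:e + 1]
    return out, choice

n_tokens, d_model, n_experts = 8, 4, 3
tokens = rng.standard_normal((n_tokens, d_model))
gate_w = rng.standard_normal((d_model, n_experts))
expert_ws = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
out, choice = switch_route(tokens, gate_w, expert_ws)
```

Because each token activates only one expert, adding experts grows parameter count without growing per-token compute — that is how these models reach such enormous sizes.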


The method expands the concept of a Nash equilibrium by decomposing an asymmetric game into multiple symmetric games.

Source: https://www.looper.com/238554/the-ending-of-a-beautiful-mind-explained/


Game theory is an increasingly important aspect of multi-agent machine learning systems. In recent years, machine learning labs have made many interesting contributions to the field of game theory. Today, I would like to bring your attention to a new method, recently published by Alphabet’s subsidiary DeepMind, that provides a unique way to tackle asymmetric game problems. DeepMind’s breakthrough can have profound implications for modern multi-agent AI systems, which are often modeled as asymmetric games. …
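As background, the equilibrium concept being extended here is easy to illustrate: a pure-strategy Nash equilibrium of an asymmetric (bimatrix) game is a cell where neither player can profit by deviating alone. A short sketch, using a standard textbook game rather than anything from the DeepMind paper:

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """Find all pure-strategy Nash equilibria of a bimatrix game.
    A[i, j] is the row player's payoff, B[i, j] the column player's."""
    A, B = np.asarray(A), np.asarray(B)
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()   # no profitable row deviation
            col_best = B[i, j] >= B[i, :].max()   # no profitable column deviation
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# An asymmetric game (Battle of the Sexes): the players rank the
# two coordination outcomes differently, so A != B.
A = [[3, 0], [0, 2]]   # row player's payoffs
B = [[2, 0], [0, 3]]   # column player's payoffs
result = pure_nash_equilibria(A, B)
print(result)  # -> [(0, 0), (1, 1)]
```

Both coordination cells are equilibria even though the players value them differently — exactly the kind of asymmetric structure that the decomposition into symmetric games is meant to simplify.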


Weekly newsletter that discusses impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.


The Sequence Scope is a summary of the most important published research papers, released technology and startup news in the AI ecosystem in the last week. This compendium is part of TheSequence newsletter. Data scientists, scholars, and developers from Microsoft Research, Intel Corporation, Linux Foundation AI, Google, Lockheed Martin, Cardiff University, Mellon College of Science, Warsaw University of Technology, Universitat Politècnica de València and other companies and universities are already subscribed to TheSequence.

📝 Editorial: Is There a Bubble in the Feature Store Market?

Feature stores have steadily become one of the key building blocks in modern machine learning architectures. Arguably, the concept of a feature store can be traced back to Uber’s original Michelangelo architecture, and it has gone all the way from being an imperceptible capability to a booming standalone market in the machine learning space. These days, a growing number of venture-backed startups are building feature store capabilities for machine learning solutions. Just this week, machine learning startup Molecula raised a solid $17.6 …
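For readers new to the concept, the essence of a feature store can be sketched as a versioned, point-in-time key-value interface. This is a toy in-memory sketch of the general idea — not any vendor's actual API:

```python
from datetime import datetime

class ToyFeatureStore:
    """A toy, in-memory sketch of the core feature-store idea: one place
    to write versioned feature values and read them back consistently
    for both training and serving."""
    def __init__(self):
        # (entity_id, feature_name) -> list of (timestamp, value)
        self._rows = {}

    def write(self, entity_id, feature, value, ts):
        self._rows.setdefault((entity_id, feature), []).append((ts, value))

    def read_latest(self, entity_id, feature, as_of):
        """Point-in-time lookup: the newest value at or before `as_of`.
        This is what prevents training/serving skew and label leakage."""
        history = self._rows.get((entity_id, feature), [])
        valid = [(ts, v) for ts, v in history if ts <= as_of]
        return max(valid)[1] if valid else None

store = ToyFeatureStore()
store.write("user_1", "7d_tx_count", 3, datetime(2021, 3, 1))
store.write("user_1", "7d_tx_count", 5, datetime(2021, 3, 8))
march_2 = store.read_latest("user_1", "7d_tx_count", datetime(2021, 3, 2))  # -> 3
march_9 = store.read_latest("user_1", "7d_tx_count", datetime(2021, 3, 9))  # -> 5
```

Production systems layer much more on top — offline/online stores, feature registries, monitoring — which is precisely the surface area the startups in this market are competing on.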


Three new open source projects that show a lot of promise in reducing the dependency on labeled datasets for speech recognition systems.

Source: https://medium.com/@venkat34.k/role-of-artificial-intelligence-and-machine-learning-in-speech-recognition-e5349873e03


Speech is the cornerstone of human communication and one of the clearest expressions of our superior intelligence compared to other species. Not surprisingly, speech has been one of the core areas of focus in the recent wave of artificial intelligence (AI) systems. Automatic speech recognition (ASR) has been one of the most active areas of AI research and development, producing remarkable breakthroughs like digital assistants and video analysis solutions. However, building ASR systems remains an incredibly challenging effort, mostly due to the large volumes of annotated speech data required to train those models. Any basic ASR system today requires hours of manually annotated audio transcriptions, which are often difficult to obtain or simply not available for many languages. To address those challenges, the ASR community has been steadily trending towards models that are less dependent on large labeled datasets. This trend is known as self-supervised learning, and it is one of the most exciting developments in modern deep learning systems, particularly in speech recognition. …
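The self-supervised trend described above often rests on a contrastive objective: the model must pick out the true (masked) audio representation among distractors from the same batch, with no transcription labels involved. A minimal numpy sketch of that InfoNCE-style loss — toy random vectors stand in here for learned speech representations:

```python
import numpy as np

def info_nce_loss(context, targets, temperature=0.1):
    """Contrastive loss: each context vector should identify its own
    (masked) target among the other targets in the batch. This is the
    objective family used by self-supervised speech models such as
    wav2vec 2.0 (simplified here)."""
    # Cosine similarity between every context and every candidate target
    c = context / np.linalg.norm(context, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    sim = (c @ t.T) / temperature                       # (n, n) logits
    # Cross-entropy with the diagonal (the true pairs) as the labels
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
targets = rng.standard_normal((16, 8))
# Contexts near their own targets should score a much lower loss
# than contexts that carry no information about the targets.
aligned_loss = info_nce_loss(targets + 0.01 * rng.standard_normal((16, 8)), targets)
random_loss = info_nce_loss(rng.standard_normal((16, 8)), targets)
```

Because the labels come from the audio itself, the model can pretrain on unlimited unannotated speech and only needs a small labeled set for fine-tuning.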


Unified VLP can understand concepts in scenic images by using pretrained models.



Understanding the world around us via visual representations is one of the magical cognitive skills of the human brain. In some sense, the brain can be considered a giant engine that constantly processes visual signals, extracts the relevant knowledge and triggers the corresponding actions. Although we don’t quite yet understand how our brain forms fragments of knowledge from visual representations, the processes are embedded in our education methodologies. When we show a picture of a flower to a baby and tell them it’s a rose, that’s probably enough for the baby to start recognizing roses in the real world, whether they are in a vase or in a garden, one or many, red or white. At the same time, the baby is able to answer all sorts of questions related to roses. Our brain has a magical ability to generalize information from visual images and articulate those concepts via language. Recreating that level of scene understanding in artificial intelligence (AI) models has been an active area of research in the deep learning space for the last few years. …


Iterated Amplification uses weak teachers to improve the learning process on very complex tasks.

Source: https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-they-understand-20191017/


Most of the relevant tasks that humans perform over long time horizons are hard to specify as objectives. Consider processes such as achieving a scientific breakthrough, designing an economic policy or assembling an NBA-championship-winning team. Those processes are hard to model as discrete goals, and their performance evaluation is the result of constant experimentation and adjustment. …


Weekly newsletter that discusses impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.



📝 Editorial: Transformers Continue Setting New Records

Transformers can be considered one of the most important milestones of the last decade of machine learning. Since Google’s famous paper “Attention Is All You Need”, the machine learning community has been in a frantic race to adopt this new type of architecture. Models like Google’s BERT, OpenAI’s GPT-3, Facebook’s RoBERTa and Microsoft’s Turing-NLG have been breaking records. Most of the progress in transformers has been centered on the natural language understanding (NLU) space, but that’s rapidly changing. …
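The mechanism at the heart of all these record-setting models is compact enough to write out directly. This is the scaled dot-product attention from “Attention Is All You Need” in plain numpy — single head, no masking, toy random inputs:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — the core transformer operation.
    Each query position produces a weighted mixture of the values,
    with weights given by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))   # 6 key/value positions
V = rng.standard_normal((6, 8))
out, attn = scaled_dot_product_attention(Q, K, V)
```

Everything else in a transformer — multiple heads, feed-forward layers, positional encodings — is scaffolding around this one operation, which is part of why the architecture transfers so readily beyond NLU.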


CLIP and DALL·E draw inspiration from GPT-3 to master complex computer vision tasks.

Source: https://www.rev.com/blog/what-is-gpt-3-the-new-openai-language-model


Transformers have been widely considered one of the biggest breakthroughs in the last decade of machine learning, and OpenAI has been at the center of it. OpenAI’s GPT-3 is, arguably, one of the most famous and controversial machine learning models ever produced. Trained with billions of parameters, GPT-3 is actively used by hundreds of companies to automate different language tasks such as question answering, text generation and others. With that level of success, it’s only natural that OpenAI continues exploring different flavors of GPT-3 and transformer models. …
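CLIP’s signature zero-shot trick reduces to a similarity search in a shared image-text embedding space: embed the image, embed a prompt per candidate label, and pick the most similar prompt. A toy sketch of that idea — random vectors stand in for CLIP’s real encoders, so the embeddings here are purely illustrative:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """CLIP-style zero-shot classification: pick the label whose text
    embedding has the highest cosine similarity to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                  # one cosine similarity per label
    return int(np.argmax(sims)), sims

rng = np.random.default_rng(0)
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text_embs = rng.standard_normal((3, 32))
# Pretend the image encoder placed this image near the "cat" prompt
image_emb = text_embs[1] + 0.05 * rng.standard_normal(32)
best, sims = zero_shot_classify(image_emb, text_embs)
print(labels[best])  # prints "a photo of a cat"
```

No classifier head is trained for the label set — swapping in new labels is just writing new prompts, which is what makes the GPT-3-style flexibility carry over to vision.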

About

Jesus Rodriguez

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
