OpenAI Believes That the Path to Safe AI Requires Social Sciences

Source: https://www.accenture.com/nl-en/blogs/insights/responsible-ai-with-opportunity-comes-responsibility

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (no hype, no news) AI-focused newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

Ensuring fairness and safety in artificial intelligence (AI) applications is considered by many to be the biggest challenge in the space. As AI systems match or surpass human intelligence in many areas, it is essential that we establish guidelines to align this new form of intelligence with human values. The challenge is that, as humans, we understand very little about how our values are represented in the brain, and we can't even formulate precise rules to describe a specific value. While AI operates in a data universe, human values are a byproduct of our evolution as social beings. We don't describe human values like fairness or justice in neuroscientific terms, but with arguments from social sciences such as psychology, ethics, or sociology. …


My presentation at the third edition of the BlockDown Conference.


Earlier this morning, I presented a session at the famous BlockDown Conference, which brought together an impressive lineup of thought leaders in the crypto space. For this edition, I focused on our emerging work in DeFi analytics and illustrated some shocking metrics about the space.

The presentation covers three main points:

  1. Challenges and opportunities of DeFi analytics
  2. Examples of cool DeFi analytics
  3. A glimpse into the future of intelligence and DeFi

You can see the complete slide deck below:


Causal Bayesian Networks are used to model the influence of fairness attributes in a dataset.

Source: http://sitn.hms.harvard.edu/uncategorized/2020/fairness-machine-learning/


One of the arguments regularly made in favor of machine learning systems is that they can arrive at decisions without being vulnerable to human subjectivity. However, that argument is only partially true. While machine learning systems don't make decisions based on feelings or emotions, they do inherit many human biases via their training datasets. Bias is relevant because it leads to unfairness. In the last few years, there has been a lot of progress in developing techniques that mitigate the impact of bias and improve the fairness of machine learning systems. …
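To make the link between dataset bias and unfairness concrete, here is a minimal sketch (not from the article; the function name, toy data, and loan-approval framing are illustrative assumptions) of one common fairness metric, the demographic parity gap: the difference in positive-prediction rates between two groups.

```python
# Hypothetical illustration: a model trained on biased data can approve one
# demographic group far more often than another. Demographic parity measures
# that gap directly from the model's predictions.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy loan-approval predictions (1 = approved) for two demographic groups
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5: group A is approved 75% of the time, group B only 25%
```

A gap of zero would mean both groups receive positive predictions at the same rate; mitigation techniques like the ones mentioned above aim to shrink this gap without destroying model accuracy.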

About

Jesus Rodriguez

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
