Some Thoughts About Google’s AI Collaboration Experiment

Alphabet’s subsidiary DeepMind has been conducting a series of experiments to explore the behavior of AI agents in situations suited for collaboration or competition. The experiments attempt to shed some light on a future of AI in which agents will need to collaborate in order to accomplish specific tasks.

DeepMind designed the experiments around well-known “social dilemmas”. These are a variation of game theory scenarios in which individual participants can benefit from being selfish, but all participants lose if everyone behaves selfishly. The most notorious scenario of this type is the “prisoner’s dilemma”. The famous problem states that, after being caught in a robbery, two people are each offered release if they testify against the other person. Neither participant is aware of the other person’s decision. A robber who stays silent while the other testifies faces up to 10 years in prison, whereas if both testify against each other, each faces up to 5 years. Problems similar to the prisoner’s dilemma play an important role in economic theory.
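To make the payoff structure concrete, here is a minimal sketch in Python of a prisoner’s-dilemma payoff table. The 10- and 5-year figures follow the description above; the assumptions that a lone defector goes free and that mutual silence means a 1-year sentence are illustrative, not part of the original problem statement.

```python
# Illustrative prisoner's dilemma payoffs: years in prison for (A, B).
# The 10- and 5-year figures follow the description above; the 0-year
# (lone defector goes free) and 1-year (mutual silence) values are assumptions.
PAYOFFS = {
    ("silent", "silent"):   (1, 1),
    ("silent", "testify"):  (10, 0),
    ("testify", "silent"):  (0, 10),
    ("testify", "testify"): (5, 5),
}

for (a, b), (years_a, years_b) in PAYOFFS.items():
    print(f"A {a}, B {b} -> A serves {years_a} year(s), B serves {years_b}")
```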

Some of the initial results of DeepMind’s experiment highlighted that agents were willing to cooperate with each other when plenty of resources were available but quickly turned against each other when resources became scarce. Another important observation was that “agents with the capacity to implement more complex strategies tried to tag the other agent more frequently i.e. behave less cooperatively”.

Even though the observations were based on preliminary experiments, the results are one of the first indicators of how multi-agent AI environments may operate in the future. Reflecting on those initial results, I’ve summarized some ideas that may be relevant when thinking about the behavior of AI agents in collaborative or competitive environments.

1 — Group Equilibrium Goals

In order to become more cooperative, AI agents should be trained to achieve an “equilibrium” state, one in which no participant can benefit from unilaterally changing strategy. The work of US mathematician John Nash (portrayed in A Beautiful Mind) proved that every finite game has at least one such equilibrium.
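As a rough illustration, the sketch below checks which strategy pairs in the prisoner’s-dilemma table from earlier satisfy that condition, i.e. where neither player can reduce their own sentence by switching while the other stays put. The payoff numbers are the same illustrative ones used above.

```python
# Check which strategy pairs are Nash equilibria: neither player can reduce
# their own sentence (lower is better) by unilaterally switching strategies.
PAYOFFS = {
    ("silent", "silent"):   (1, 1),
    ("silent", "testify"):  (10, 0),
    ("testify", "silent"):  (0, 10),
    ("testify", "testify"): (5, 5),
}
STRATEGIES = ("silent", "testify")

def is_nash_equilibrium(a, b):
    years_a, years_b = PAYOFFS[(a, b)]
    a_can_improve = any(PAYOFFS[(alt, b)][0] < years_a for alt in STRATEGIES)
    b_can_improve = any(PAYOFFS[(a, alt)][1] < years_b for alt in STRATEGIES)
    return not (a_can_improve or b_can_improve)

for a in STRATEGIES:
    for b in STRATEGIES:
        if is_nash_equilibrium(a, b):
            # Prints only ("testify", "testify"): mutual defection is the sole
            # equilibrium, even though mutual silence leaves both better off.
            print(f"equilibrium: A {a}, B {b}")
```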

2 — Ethics

As AI agents evolve, ethics should become an important part of the training. Ethics can guide the behavior of AI agents in competitive environments.

3 — Evolution

Evolutionism is one of the main AI schools of thought. AI systems designed under the evolutionist theory assume that a subset of the population of AI agents in a competitive environment will survive and evolve to become more efficient.
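A minimal sketch of that idea might look like the following, where a deliberately simplified, hypothetical fitness function stands in for an agent’s performance in a competitive environment and only the fittest half of each generation survives and reproduces with mutation.

```python
import random

# Minimal evolutionary loop: each agent's "genome" is a single numeric trait,
# the population is scored, the fittest half survives, and survivors produce
# mutated offspring. The fitness function is a hypothetical stand-in for
# performance in a competitive environment.
def fitness(genome: float) -> float:
    return -(genome - 0.7) ** 2  # peak fitness at an arbitrary target trait

def evolve(population, generations=50, mutation_scale=0.05):
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]           # only a subset survives
        offspring = [g + random.gauss(0, mutation_scale) for g in survivors]
        population = survivors + offspring               # next generation
    return population

population = [random.random() for _ in range(20)]
evolved = evolve(population)
print(f"best trait after evolution: {max(evolved, key=fitness):.3f}")
```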

4 — Goals and Utility vs. Judgment

AI agents are designed to maximize utility in a specific environment. From that perspective, AI agents are expected to do anything they can to increase utility with each decision. However, humans don’t make decisions factoring in only utility gains; judgment is an important part of human decision-making processes. As AI agents are trained using data based on historical human decisions, they could show early forms of judgment when making decisions in competitive or collaborative environments.
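As a toy illustration of the difference, the sketch below contrasts a purely utility-maximizing choice with one that also penalizes harm to other agents. The actions, utility values and “harm” penalty are hypothetical and not part of DeepMind’s experiment.

```python
# Contrast a pure utility-maximizing decision rule with one that applies a
# judgment-style penalty for harm caused to other agents. All numbers are
# hypothetical, for illustration only.
ACTIONS = {
    # action: (utility_gain, harm_to_others)
    "cooperate": (5.0, 0.0),
    "defect":    (8.0, 6.0),
}

def pure_utility_choice(actions):
    return max(actions, key=lambda a: actions[a][0])

def judgment_choice(actions, harm_weight=1.0):
    # Trade raw utility off against the harm the action causes.
    return max(actions, key=lambda a: actions[a][0] - harm_weight * actions[a][1])

print(pure_utility_choice(ACTIONS))   # "defect": highest raw utility
print(judgment_choice(ACTIONS))       # "cooperate": wins once harm is counted
```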

5 — Emotions

An interesting aspect to consider about experiments such as DeepMind’s is the role that emotions such as fear, anger, happiness or others will play in how AI systems make decisions in competitive settings.
