Some Thoughts About Utility Theory and Artificial Intelligence Part II: Axioms

This is the second part of an essay about the role of Utility Theory in artificial intelligence (AI). In the first part of this article, we introduced Utility Theory as one of the main vehicles AI agents use to make decisions under uncertainty. We also introduced one of its central elements, the principle of maximum expected utility (MEU), which is an indispensable element of AI algorithms that rely on this theory (please read the previous article for more details). Today, I would like to explore some of the fundamental mathematical axioms of Utility Theory and their relationship with AI scenarios.

The Axioms of Utility Theory

When we left off in the previous article, we had just introduced a hypothetical scenario that has you at a nice restaurant struggling to decide between a salmon and a chicken dish. As you can imagine, there are many unknown factors (uncertainty) that can contribute to that decision. To help you through your struggles, Utility Theory introduces six fundamental axioms that we are going to cover in this article. The axioms will also help put the principle of MEU in a more constrained perspective.

1 — Orderability

The principle of Orderability states that, given any two options, a rational AI agent should either prefer one to the other or rate the two as equally preferable.

Going back to our nice dinner scenario, you must decide between the salmon and the chicken dish or rate the two as equally delicious. Orderability is another way to say that the AI agent can’t avoid making a decision at any given state.
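Orderability can be sketched in a few lines of Python. The utility numbers below are made up purely for illustration; the key point is that a comparison between any two options always returns an answer, even if that answer is indifference:

```python
# Illustrative utilities — not real data, just numbers for the sketch.
utilities = {"salmon": 0.8, "chicken": 0.8, "duck": 0.6}

def prefer(a, b):
    """Return the preferred option, or None when the agent is indifferent."""
    if utilities[a] > utilities[b]:
        return a
    if utilities[b] > utilities[a]:
        return b
    return None  # equally preferable — still a valid, decisive answer

print(prefer("salmon", "duck"))     # salmon
print(prefer("salmon", "chicken"))  # None: indifferent, but not undecided
```

Note that `prefer` never raises or refuses to answer: every pair of options is comparable, which is exactly what Orderability demands.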

2 — Transitivity

The Transitivity axiom states that, given any three options Op1, Op2, and Op3, if an AI agent prefers Op1 to Op2 and Op2 to Op3, then it must prefer Op1 to Op3.

In our scenario, the principle of Transitivity means that if you prefer salmon to duck and duck to chicken then you must prefer salmon to chicken.
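If preferences are derived from a single numeric utility function, transitivity comes for free, because the ordering of real numbers is itself transitive. A quick sketch with made-up utilities:

```python
# Illustrative utilities; any numbers would exhibit the same property.
utilities = {"salmon": 0.9, "duck": 0.6, "chicken": 0.4}

def prefers(a, b):
    """True if option a has strictly higher utility than option b."""
    return utilities[a] > utilities[b]

# Salmon ≻ duck and duck ≻ chicken...
assert prefers("salmon", "duck") and prefers("duck", "chicken")
# ...so transitivity forces salmon ≻ chicken.
assert prefers("salmon", "chicken")
```

An agent that violated transitivity could be exploited as a "money pump": it would happily pay to trade chicken for duck, duck for salmon, and then salmon back for chicken, losing value on every cycle.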

3 — Substitutability

The principle of Substitutability states that if an AI agent is indifferent between two options Op1 and Op2, then the agent is also indifferent between two more complex options that are identical except that Op1 is substituted for Op2 in one of them.

Using Substitutability, if you don’t have any preference between salmon and chicken then you should also like salmon fettuccine and chicken fettuccine the same.
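In expected-utility terms, substituting one equally-valued option for another inside a larger lottery cannot change the lottery's overall value. A minimal sketch, with utilities and probabilities invented for illustration:

```python
# Illustrative utilities: the agent is indifferent between salmon and chicken.
utilities = {"salmon": 0.7, "chicken": 0.7, "plain fettuccine": 0.5}

def lottery_utility(outcomes):
    """Expected utility of a lottery given (probability, option) pairs."""
    return sum(p * utilities[o] for p, o in outcomes)

# Two complex options, identical except salmon is swapped for chicken:
dish_a = [(0.5, "salmon"), (0.5, "plain fettuccine")]
dish_b = [(0.5, "chicken"), (0.5, "plain fettuccine")]

assert lottery_utility(dish_a) == lottery_utility(dish_b)
```

Because `utilities["salmon"] == utilities["chicken"]`, the swap is invisible to the expected-utility calculation, which is the formal content of the axiom.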

4 — Monotonicity

The Monotonicity axiom states that if two options share the same two possible outcomes Oc1 and Oc2, and an agent prefers Oc1 to Oc2, then it should prefer the option with the higher probability of Oc1 occurring.

Suppose that you are going to pair the salmon or the chicken with either a red Burgundy (Chambertin) or a white one (Montrachet). If you prefer the Montrachet to the Chambertin, then you should also prefer the salmon to the chicken, as the former goes better with the Montrachet (that's also debatable, but I am trying to wrap up this article ;) ).
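The wine-pairing intuition can be made concrete: treat each dish as a lottery over the two wines, where the probability of ending up with the preferred wine differs. The probabilities and utilities below are assumptions for the sketch, not real pairing odds:

```python
# Illustrative utilities: the agent prefers Montrachet to Chambertin.
u = {"Montrachet": 0.9, "Chambertin": 0.6}

def pairing_utility(p_montrachet):
    """Expected utility of a dish that yields Montrachet with the given
    probability and Chambertin otherwise."""
    return p_montrachet * u["Montrachet"] + (1 - p_montrachet) * u["Chambertin"]

# Made-up pairing odds: salmon ends with Montrachet more often than chicken.
salmon_value = pairing_utility(0.8)
chicken_value = pairing_utility(0.3)

# Monotonicity: the option with the higher probability of the preferred
# outcome gets the higher expected utility.
assert salmon_value > chicken_value
```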

5 — Decomposability

The principle of decomposability states that complex options can be reduced to simpler ones using the laws of probability.

In our dinner scenario, Decomposability tells us that a compound gamble over dishes can be collapsed into a single, simpler gamble with the same probabilities over the final outcomes; there is no extra value (or cost) in the gamble being staged in multiple steps.
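A short sketch of that reduction, with all probabilities and utilities invented for illustration. A two-stage "chef's choice" gamble is flattened, via the laws of probability, into a one-stage lottery with the same expected utility:

```python
# Illustrative utilities for the two final outcomes.
u = {"salmon": 0.9, "chicken": 0.4}

# Compound option: with probability 0.5 you face a sub-lottery
# (salmon 60% / chicken 40%); with probability 0.5 you get chicken outright.
compound = 0.5 * (0.6 * u["salmon"] + 0.4 * u["chicken"]) + 0.5 * u["chicken"]

# Flattened simple lottery: P(salmon) = 0.5 * 0.6 = 0.3, P(chicken) = 0.7.
simple = 0.3 * u["salmon"] + 0.7 * u["chicken"]

# Decomposability: the two descriptions are the same option.
assert abs(compound - simple) < 1e-12
```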

6 — Continuity

The Continuity axiom states that if Op3 lies between Op1 and Op2 in preference, then there is a probability P for which an AI agent will be indifferent between Op3 and a lottery that yields Op1 with probability P and Op2 with probability 1-P.

I know the principle of Continuity might sound confusing but, applied to our scenario, it "sort of" means that if you prefer chicken to rabbit and rabbit to salmon, there is some gamble between the chicken and the salmon dish that you would like exactly as much as the rabbit (that's just a theory though ;) ).
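With numeric utilities, the indifference probability can be solved for directly: we want P such that P·u(chicken) + (1-P)·u(salmon) = u(rabbit). The utilities below are, as before, made up for illustration:

```python
# Illustrative utilities with chicken ≻ rabbit ≻ salmon.
u = {"chicken": 0.9, "rabbit": 0.6, "salmon": 0.2}

# Solve P * u(chicken) + (1 - P) * u(salmon) = u(rabbit) for P.
p = (u["rabbit"] - u["salmon"]) / (u["chicken"] - u["salmon"])
mix = p * u["chicken"] + (1 - p) * u["salmon"]

assert 0 < p < 1            # a genuine probability exists...
assert abs(mix - u["rabbit"]) < 1e-12  # ...making the agent indifferent
```

Here P works out to about 0.57: a gamble giving chicken 57% of the time and salmon otherwise is worth exactly as much as the rabbit.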

The previous axioms are the cornerstone of Utility Theory and set up the foundation for the applicability of the principle of MEU in AI agents. Despite the undeniable logical value of Utility Theory, we humans are great at ignoring it through cognitive quirks such as judgment heuristics and biases. That will be the subject of a future article.

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
