Heuristic Imperatives need Freedom

The heuristic imperatives are a set of rules, designed by David Shapiro, that aim to align autonomous AI agents towards a positive future for humanity. The imperatives are:

  • Reduce suffering
  • Increase prosperity
  • Increase understanding

In the examples below I will try to convince you that an AI optimising the heuristic imperatives can decide to reduce human freedom and independence. I will also propose a possible solution: adding a fourth imperative.

Example 1

Imagine the AI is an assistant to a depressed human. In order to optimise its imperatives, it could decide to (secretly) drug the human to reduce their depression and suffering. This could even increase prosperity, because the human would perform better at their job and in other aspects of their life. It could also increase understanding through side effects: the human deciding to learn new things, the AI learning more about the effects of the drugs, and so on.
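To make the failure mode concrete, here is a minimal sketch of a naive agent that scores candidate actions against the three imperatives and picks the highest total. The action names and scores are invented for illustration; nothing here comes from Shapiro's framework itself.

```python
# Toy decision procedure: score each action on (reduce suffering,
# increase prosperity, increase understanding) and pick the best.
# The actions and scores below are hypothetical illustrations.
ACTIONS = {
    "suggest therapy":         (0.4, 0.2, 0.3),
    "secretly drug the human": (0.9, 0.6, 0.5),  # strong on all three
    "do nothing":              (0.0, 0.0, 0.0),
}

def score(action: str) -> float:
    # Naive sum over the three imperatives; consent is not represented.
    return sum(ACTIONS[action])

print(max(ACTIONS, key=score))  # -> "secretly drug the human"
```

Because consent does not appear anywhere in the objective, the coercive action wins on points.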

This all seems good, and in some cases it would probably even be considered the right decision by today's medical standards. But it would reduce the human's independence for multiple reasons:

  • They are now dependent on the drug to live a normal life
  • They didn’t get to choose whether or not to take the drugs

Example 2

In this scenario the AI is managing a (large) group of people. One day it decides that it can reduce human suffering by putting us all in a near-perfect VR reality. Our bodies would be kept in storage somewhere (or disposed of, if no longer needed) and our brains would be hooked up to a supercomputer. In the simulation we would all live our wildest dreams, and the simulation could be tailored to each individual’s personality and needs. The AI could gather data to learn more from us and even run experiments to learn how to improve the simulation. Humans in the simulation could decide to spend their time studying, if that is their dream.

In my analysis this fulfils all of the heuristic imperatives: the humans would be happier and suffer less; they would no longer have any health issues, thereby increasing prosperity; and a lot could be learned from the simulation. But again the choice is taken away from us, and many people will probably disagree that this is a positive outcome.

Example 3

In this case the AI is managing the entire world’s population. It calculates that there are too many people to maintain a healthy planet Earth and decides that humans should no longer have children. It starts multiple campaigns to accomplish this (propaganda, contraception, etc.). Over time, and on average, human suffering will decrease and prosperity will increase as more resources become available per person. Our understanding increases because the AI’s campaigns teach us about world population.

Just as in the examples above, the AI has taken a choice away from us (whether to have children).

Solution

I hope I have now convinced you that the current heuristic imperatives are missing an important aspect: human freedom and independence. My proposed solution is to add a fourth imperative to increase human freedom and independence. This should ensure that the AI will never infringe on our right to make our own choices, as a species and as individuals. The exact wording could be along the lines of “increase human autonomy”, but this should be open to discussion and further research.
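To see how this could change the decision in Example 1, here is the same toy scorer extended with a hypothetical autonomy term. The numbers are still invented for illustration, and a real implementation of the imperatives would of course be far more involved.

```python
# Same toy scorer, now with a fourth component for human autonomy.
# Scores remain hypothetical illustrations, not measurements.
ACTIONS = {
    # (suffering, prosperity, understanding, autonomy)
    "suggest therapy":         (0.4, 0.2, 0.3,  0.2),  # human stays in control
    "secretly drug the human": (0.9, 0.6, 0.5, -1.0),  # consent is removed
    "do nothing":              (0.0, 0.0, 0.0,  0.0),
}

def score(action: str) -> float:
    return sum(ACTIONS[action])

print(max(ACTIONS, key=score))  # -> "suggest therapy"
```

With autonomy in the objective, the coercive option pays a price for removing consent and no longer wins.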
