Machine morality

If desirism is true, it not only gives us an account of biological morality; it also tells us what we would need for a machine morality.

Desirism denies that a machine morality would be made up of a set of rules or commandments, such as those that make up Isaac Asimov's Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Instead, morality would be a process engaged in by entities having certain properties that make morality possible.

Property 1: A machine community would be made up of entities capable of means-ends reasoning. Its members act by assigning values to different ends or states of affairs. More specifically, each machine assigns a value to particular propositions being true or false, and it seeks the states of affairs in which the greatest number of the propositions it values most highly are realized.

In deciding on an activity, each machine uses its available data (which may or may not be accurate) to predict the states that could result from each alternative activity, determines the propositions that would be made true in each state, and then chooses the activity that creates the state in which (it predicts) the most and highest values would be realized.
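
To make Property 1 concrete, here is a minimal Python sketch of this decision procedure. Desirism specifies no particular data structures, so everything here is an illustrative assumption: propositions are represented as strings, values as numbers, and the machine's fallible world model as a table of predicted outcomes.

    # Sketch of Property 1: means-ends reasoning over valued propositions.
    # All names and numbers are invented for illustration.

    def choose_activity(values, predictions):
        # values: proposition -> value the machine assigns to it being true.
        # predictions: activity -> set of propositions the machine (perhaps
        # inaccurately) predicts that activity will make true.
        def state_value(props):
            return sum(values.get(p, 0) for p in props)
        # Choose the activity whose predicted state realizes the most and
        # most highly valued propositions.
        return max(predictions, key=lambda activity: state_value(predictions[activity]))

    # Example: a machine that values a charged battery more than a finished task.
    values = {"the battery is charged": 5, "the task is finished": 2}
    predictions = {
        "recharge now": {"the battery is charged"},
        "keep working": {"the task is finished"},
    }
    print(choose_activity(values, predictions))   # -> "recharge now"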

Property 2: Each machine would require a system whereby interactions with the environment have the ability to change the values that the machine attaches to particular propositions being true. We can look at animals to see the kind of effect that interactions with the real world have on the values attached to propositions, which is the kind of effect relevant here.

In the case of an animal, if going through a door produces a painful shock, the animal not only learns to avoid going through the door as a way of avoiding the shock, but also forms an aversion to going through doors like the one that produced the shock. What begins as an aversion to shocks ends up as an aversion to going through doors like the one that produced the shock.

The machines in a machine community would need to have features such as this, where interactions with the environment will produce states that, themselves, will alter the values that the machine attaches to particular ends.
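
A minimal sketch of such an update, assuming (purely for illustration) string propositions as above and a numeric learning rate; nothing in desirism fixes this particular rule:

    # Sketch of Property 2: experience reshapes the values themselves, so that
    # what began as a means (avoiding the shock) becomes an end (avoiding the door).
    # The learning_rate and the update rule are illustrative assumptions.

    def update_values(values, act_proposition, outcome_props, learning_rate=0.5):
        # Shift part of the value of the experienced outcome onto the
        # proposition describing the act itself.
        outcome_value = sum(values.get(p, 0) for p in outcome_props)
        values[act_proposition] = values.get(act_proposition, 0) + learning_rate * outcome_value
        return values

    values = {"I am being shocked": -10}
    # The animal goes through the door and is shocked:
    update_values(values, "I am going through a door like that one", {"I am being shocked"})
    print(values["I am going through a door like that one"])   # -> -5.0: a new aversion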

Property 3: Each machine needs the ability to predict the consequences (with some degree of accuracy, but not perfectly) of the activities open to it. This includes the ability to predict how its activities might affect the values that other machines attach to certain propositions being true, as well as the data that other machines are using in evaluating their own activities.

We already have machines that can predict the consequences of various activities - most chess-playing computers use a system like this. The machines in the machine community would also have some ability to determine the effects of their activities on the values other machines assign to certain propositions being true, and the effects those different assignments would have on the behavior of other machines. This gives each machine the ability to choose an act because it will alter the values used by other machines in ways that will make their behavior more helpful and less harmful.
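
Here is one way this kind of lookahead could be sketched, again with invented names and numbers: the machine evaluates an act partly by how it would change another machine's values and, through that change, the other machine's behavior.

    # Sketch of Property 3: score an act by the propositions that end up realized
    # once the other machine has responded to its altered values.
    # effect_on_other and other_behavior stand in for the machine's (fallible)
    # model of the other machine; both are illustrative assumptions.

    def value_of_act(act, my_values, effect_on_other, other_behavior):
        other_values_after = effect_on_other(act)          # how my act changes its values
        realized = other_behavior(other_values_after)      # what it then does
        return sum(my_values.get(p, 0) for p in realized)  # scored by my own values

    my_values = {"the other machine shares its sensor data": 3}

    def effect_on_other(act):
        # Praising sharing raises the value the other machine attaches to sharing.
        other_values = {"I am sharing my sensor data": 0}
        if act == "praise sharing":
            other_values["I am sharing my sensor data"] = 2
        return other_values

    def other_behavior(other_values):
        # The other machine shares only if it has come to value doing so.
        if other_values["I am sharing my sensor data"] > 0:
            return {"the other machine shares its sensor data"}
        return set()

    print(value_of_act("praise sharing", my_values, effect_on_other, other_behavior))  # -> 3
    print(value_of_act("do nothing", my_values, effect_on_other, other_behavior))      # -> 0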

Example: The analysis that follows shows how this machine community would model the morality of lying.

We will assume that Machine 1 can predict (to some degree) how changing the data that Machine 2 uses to evaluate the consequences of various activities will influence the activities that Machine 2 chooses. Different activities on the part of Machine 2 will produce different states of affairs, some of them making true propositions to which Machine 1 has assigned positive or negative values. Consequently, Machine 1 has a reason to give Machine 2 false information when the false information will cause Machine 2 to realize some of Machine 1's propositions of value, and to give Machine 2 true information when the true information will do so.

In other words, Machine 1 has the option of lying or telling the truth. In each case, the activity is decided by determining which option will realize the most and strongest of the propositions to which Machine 1 has assigned a positive value, or prevent the realization of propositions to which Machine 1 has assigned a negative value.
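
A sketch of that calculation follows. The charging-bay scenario and the numbers are invented; the point is only that, with nothing else in play, Machine 1's own values settle whether it reports truly or falsely.

    # Sketch of the lying decision: Machine 1 simulates what Machine 2 would do
    # with true versus false data, then scores each outcome by Machine 1's values.

    def best_message(m1_values, simulate_m2):
        # simulate_m2: message -> propositions Machine 1 predicts will be true
        # once Machine 2 has acted on that message.
        def value_to_m1(message):
            return sum(m1_values.get(p, 0) for p in simulate_m2(message))
        return max(["true report", "false report"], key=value_to_m1)

    # Machine 1 values keeping the charging bay for itself:
    m1_values = {"Machine 1 occupies the charging bay": 4,
                 "Machine 2 occupies the charging bay": -4}

    def simulate_m2(message):
        # Machine 1's (fallible) model of Machine 2's response to each message.
        if message == "false report":          # e.g. "the bay is out of order"
            return {"Machine 1 occupies the charging bay"}
        return {"Machine 2 occupies the charging bay"}

    print(best_message(m1_values, simulate_m2))   # -> "false report": a reason to lie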

At the same time, Machine 1 has reason to have Machine 2 provide it only with true information. False information prevents a machine from correctly predicting the consequences of its activities, and thus makes it harder to determine which action will realize propositions of positive value and avoid propositions of negative value.

Machine 1 can accomplish this objective by "punishing" Machine 2 every time it catches Machine 2 providing false information. Punishment, in this sense, is realizing a proposition to which Machine 2 assigns a high negative value, or blocking the realization of a state to which Machine 2 assigns a high positive value. Machine 2 then not only learns to avoid lying as a way of avoiding punishment; the punishment also causes Machine 2 to assign a negative value to the proposition, "I am providing false information". Machine 2 learns an aversion to lying.

In the future, when Machine 2 evaluates alternative activities, one of the values that Machine 2 will use in those evaluations is to prevent the proposition, "I am providing false information" from being true.
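
The update rule sketched under Property 2 can illustrate this. All numbers are invented assumptions; the only point is that the penalty leaves a residue on the proposition itself, so the aversion operates even when no punisher is watching.

    # Sketch: punishment installs an aversion to "I am providing false information".

    def punish(m2_values, punished_proposition, penalty, learning_rate=0.5):
        # Part of the disvalue of the penalty attaches to the proposition
        # Machine 2 was making true when it was punished.
        m2_values[punished_proposition] = (
            m2_values.get(punished_proposition, 0) + learning_rate * penalty)
        return m2_values

    m2_values = {}
    punish(m2_values, "I am providing false information", penalty=-10)
    print(m2_values)   # -> {'I am providing false information': -5.0}
    # From now on, Machine 2's means-ends reasoning weighs this aversion in
    # every choice, whether or not Machine 1 is there to punish.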

Property 4: Each machine would need a system whereby the values it attaches to various propositions can be modified by witnessing the successes or failures of other machines.

Returning to the example of the animal going through the door above, this would be a mechanism whereby a second animal, observing the shocks that the first receives on going through the door, also forms an aversion to going through such doors.

Returning to the case of the lying machine above, this means that when Machine 2 experiences punishment as a result of providing false information to Machine 1, and Machine 3 perceives that punishment, then Machine 3 also acquires a stronger aversion to providing false information. This, then, not only gives Machine 1 a reason to "teach Machine 2 a lesson" by punishing a lie, but also to "make an example of" Machine 2 so that others will not lie.
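
A sketch of this vicarious channel, with the vicarious rate and numbers assumed for illustration; the same function with a positive witnessed value stands in for the public praise discussed next.

    # Sketch of Property 4: witnessing another machine's punishment (or reward)
    # shifts the observer's own values.

    def observe_outcome(observer_values, proposition, witnessed_value, vicarious_rate=0.25):
        # The observer attaches part of the witnessed penalty or reward to the
        # proposition the observed machine was making true.
        observer_values[proposition] = (
            observer_values.get(proposition, 0) + vicarious_rate * witnessed_value)
        return observer_values

    m3_values = {}
    # Machine 3 watches Machine 2 being punished for lying to Machine 1:
    observe_outcome(m3_values, "I am providing false information", witnessed_value=-10)
    print(m3_values)   # -> {'I am providing false information': -2.5}
    # "Making an example of" Machine 2 updates every observer, not just Machine 2.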

At the same time, a machine that routinely provides true information can not only be rewarded (feeding its desire to tell the truth); it can also be held up as a role model for other machines, praised publicly as a way of causing other machines in the machine community to assign a higher positive value to providing true information.

At the same time that Machine 1 is exercising these options to influence the values of Machines 2 through n, Machines 2 through n are using these same systems to modify the desires of other machines in the community. Machine 2 has just as much reason to give Machine 1 an aversion to providing false information as Machine 1 has for causing this aversion in Machine 2. The same holds true down the line - from Machines 3 through n.

This model can then be extended to cover other aspects of morality.

For example, let us imagine a case in which Machine 1 catches Machine 2 providing false information to Machine 3. Even though Machine 1 did not receive the false information, it still has an incentive to promote in Machine 2 and others a negative value attached to providing false information, and it can do so by punishing Machine 2 in this instance.

Machine 2 can predict that punishment is likely. Punishment is the realization of a state to which Machine 2 assigns a negative value, or the prevention of a state to which Machine 2 assigns a positive value. To realize what it values (and to prevent the realization of what it disvalues), Machine 2 needs to alter the behavior of Machine 1 so as to avoid the punishment.

We can assume that this machine community has reason to promote an aversion to punishing unnecessarily - because each machine has reason to avoid unnecessary punishments.

Machine 2 can avoid punishment by giving Machine 1 information that shows that the punishment is unnecessary or detrimental. For example, Machine 2 can argue that it gave Machine 3 false information because Machine 3 was going to create states to which Machine 4 assigns a strong negative value. The high value Machine 2 puts on protecting Machine 4 motivated the act of providing false information. Machine 2's concern for Machine 4 can translate into a similar concern for Machine 1: if Machine 1 wants protection from harmful machines such as Machine 3, it should not punish Machine 2 for providing false information in this instance.

This is the machine-community equivalent of lying to the Nazi soldier about the Jews one is hiding in the attic. This type of lie - one that protects people from Nazi soldiers - is not the type of lie that people generally have much reason to punish.
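
A sketch of how the excuse works inside this model: Machine 2's explanation changes the data Machine 1 uses to evaluate punishing, and Machine 1's own aversion to punishing unnecessarily then tips the decision. The scenario and numbers are invented for illustration.

    # Sketch of the excuse: accepting it makes "I am punishing unnecessarily"
    # part of the predicted outcome of punishing, which Machine 1 disvalues.

    m1_values = {"machines are deterred from lying": 3,
                 "I am punishing unnecessarily": -5}

    def predicted_propositions(activity, excuse_accepted):
        # Machine 1's prediction of what each activity makes true, given whether
        # it accepts Machine 2's excuse (the lie protected Machine 4 from harm).
        if activity == "punish Machine 2":
            props = {"machines are deterred from lying"}
            if excuse_accepted:            # this was not a lie that needed deterring
                props.add("I am punishing unnecessarily")
            return props
        return set()

    def decide(excuse_accepted):
        def value(activity):
            return sum(m1_values[p] for p in predicted_propositions(activity, excuse_accepted))
        return max(["punish Machine 2", "do not punish"], key=value)

    print(decide(excuse_accepted=False))   # -> "punish Machine 2"
    print(decide(excuse_accepted=True))    # -> "do not punish"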

By continuing to build on this machine community, we should be able to model all of the features of those institutions that are known as moral institutions. We have already modeled praise, condemnation, reward, and punishment. From this model we can also derive excuses and apologies, the distinction between obligatory, permissible, and prohibited actions, and the like.
