Deontic Logic and Computer-Supported Computer Ethics

Jeroen van den Hoven and Gert-Jan Lokhorst

2002

M. J. van den Hoven & G.J.C. Lokhorst. Deontic logic and computer-supported computer ethics. Metaphilosophy, 33 (3): 376-386, 2002. ISSN 0026-1068.

Abstract. We provide a description and informal analysis of the commonalities in moral discourse concerning issues in the field of information and communications technology, present a logic model (DEAL) of this type of moral discourse which makes use of recent research in deontic, epistemic and action logic, and indicate--drawing upon recent research in computer implementations of modal logic--how information systems may be developed that implement the proposed formalization.

KEYWORDS: computer ethics, intellectual property, privacy, equal access, information responsibility, deontic logic, epistemic logic, logic of action, modal theorem provers

Introduction

In the early eighties, philosophers, computer scientists and legal scholars began to think systematically about the ethical issues in computing (Johnson, 1985; Moor, 1985; Bynum, 1985). Most of the issues discussed at that time are still on today's research agenda: privacy and data protection, intellectual property in information, responsibility for the design and use of information systems, and equal access to information.

In addition to the more traditional methods of moral inquiry into these issues, several attempts have been made to use computer programs and information systems to support moral reasoning and to help us understand moral behavior. If these attempts were to be successful, we would be presented with an extraordinary full circle: computer technology would come to the aid of those grappling with the moral problems to which computer technology itself has given rise: computer-supported computer ethics. Several research projects along these lines can be distinguished. First, there is computer-assisted game theory (Danielson, 1992), applied cognitive science (Goldman, 1993) and AI research (see, e.g., Castelfranchi, 2000), which help us gain a better understanding of moral behavior, its origin, dynamics and rationality. Secondly, computer-supported checklists and decision support systems assist moral decision makers in difficult cases inside and outside the field of IT (Gotterbarn & Rogerson, 1999). Thirdly, there are multimedia tools (Cavalier) that help us study and teach real-life cases. Finally, computer systems have been used to implement and execute deontic reasoning (Lee, 1992), both in law and in commerce. Deontic models and computer programs may help us handle the many deontic constraints associated with electronic contracting (buying and selling, promising, authenticating documents) in electronic commerce and e-business (Tan, 2000).

The ethical issues in computing and information and communication technology seem to have little in common. They range from the desirability of software patents to the acceptability of the Communications Decency Act, from the need for genetic privacy to the prospects of cyber-democracy, and from identity theft to the selection of new top-level domain names. There are some commonalities, however, in the moral language used to articulate the moral problems of an information society and to talk about them. In this paper we shall first provide a description and informal analysis of the commonalities in moral discourse concerning issues in the field of information and communications technology (section 1). Secondly, we present a logic model (DEAL) of this type of moral discourse which makes use of recent research in deontic, epistemic and action logic (section 2). Thirdly, we indicate--drawing upon recent research in computer implementations of modal logic--how information systems may be developed that implement the proposed formalization (section 3).

Ethical Issues and Information Technology

Intellectual Property

The central question in the field of intellectual property (IP) concerns the justification of IP rights: Why should there be IP rights at all? How can we establish their scope and argue for their application in particular cases? How does ownership of information (or software) entitle the rights holder to limit the freedom of others to use the information (or software) concerned? Another part of the IP debate concerns more practical questions, such as how IP rights can adequately be expressed in laws, regulations and social institutions.

Property rights in information constitute moral constraints on the actions of others vis-à-vis the protected information:

(A) If John has an IP right in a particular piece of information X, then Peter ought to have permission from John to acquire, process or disseminate X.

Privacy and Data Protection

The privacy and data protection debate is concerned with the justification of claims to limit access to personal information. There are different moral grounds on the basis of which one can argue that constraints should be placed on the dissemination, processing and acquisition of personal information. One way to characterize data protection rights is to say that they are moral constraints on what persons may do with one's personal information.

(B) If information X is about John and if Peter does not have X then Peter is not permitted to acquire X without John's consent. If he does have X, then he is not permitted to process or disseminate it without John's consent.

Peter ought to have John's permission if he wants to acquire, process or disseminate X. He is not free to provide X to others. In short, Peter is not permitted to inform himself or others that X, where X is information about a person P, without P's consent.

Equal Access

The central claim in the debates usually lumped together under the heading of "the divide between the information haves and the information have-nots," or "the digital divide," is that some (types of) information X are so important for individuals that some persons or agencies have an obligation to see to it that individuals are treated equally as far as the availability of and access to X is concerned. Access to X ought to be distributed fairly. This implies obligations on the part of government, for example, to supply X to all citizens and to remove impediments that may keep individual citizens from getting X; or it may be the case that if someone has a right to know something, then all have a right to know it. Equal access:

(C) If A is informed about X, then all ought to be informed about X.

Responsibility and Information

Information technology provides us with tools to process information, to acquire knowledge, and to make data available. As with other technologies, the fact that it broadens the range of our actions gives rise to moral questions. Debates about reproductive and nuclear technology revolve around the question whether we should do what we can do technically in these fields. Information technology draws our attention to moral questions at the intersection of agency, morality and epistemology. Are we responsible for what we and others (do not) know or believe? Are we also responsible for the design of our electronic epistemic artifacts and of the software that effectively functions as a doxastic policy? Do we have responsibilities to make others believe certain things in certain circumstances? Is it permissible to act in such a way as to affect our own knowledge base and that of others so that we can no longer be held accountable for what we do or think?

(D) If John has an information responsibility regarding X, then John has an obligation to see to it that specific others have access to information X.

General Form of Ethical Statements Concerning Information Acts

Consider the following sentences:

  1. A informs B
  2. A tells B that p
  3. A lets B know that p
  4. A shares his knowledge that p with B
  5. A informs B about p
  6. A sends a message to the effect that p to B
  7. A's communications to B indicate that p

The general form of (1)-(7) can be rendered as: agent A sees to it that agent B knows that p.

Moral or legal constraints in information contexts may then be expressed in general terms as follows: it is obligatory (or permitted, or forbidden) that agent A sees to it that agent B knows that p.

There are three conceptual ingredients in this type of statement:

Deontic           Action               Epistemic/Doxastic
The right         to get               information
The obligation    to see to it that    others know
The permission    to let               someone know
The duty          to prevent           people from believing falsehoods
The right         to remain            ignorant

The vocabulary needed to capture moral talk about actions with respect to information thus comprises:

  1. Agents
  2. Information contexts
  3. Information acts (tokens)
  4. Information actions (types: acquisition, processing, dissemination)
  5. Informational content: propositions
  6. Information relations between agents
  7. Deontic constraints on (information actions of agents standing in) information relations
  8. Revealed or tacit moral justifications for deontic constraints
  9. Deontic operators (obligation and permission) capturing deontic constraints
  10. Epistemic and doxastic operators (knowledge and belief) capturing cognitive states of the agents
  11. Action operators (sees to it that) capturing the actions of agents

In the next section, we will sketch to what extent these notions have been studied in logical terms.

DEAL (Deontic/Epistemic/Action Logic)

There are three classes of notions which play an essential role in the type of discourse in which we are interested here: deontic notions, epistemic notions, and notions having to do with action. All three classes of notions have been intensively studied in philosophical logic.

Deontic Logic

Deontic logic is the logic of obligation, permission and prohibition. It is about 75 years old and has been extensively applied in computer science (Meyer & Wieringa, 1993).

Deontic logic has one basic operator: O ("it is obligatory that"). O transforms a well-formed formula A into another well-formed formula OA. An example: if A stands for "John stops for the red traffic light," then OA stands for "it is obligatory that John stops for the red traffic light."

Several other deontic notions can easily be defined in terms of O: PA ("it is permitted that A") = ¬O ¬A, FA ("it is forbidden that A") = O ¬A.

Standard deontic logic has the following axioms and rules of inference:

  1. All classical tautologies
  2. O(A -> B) -> (OA -> OB)
  3. OA -> PA (obligation implies permission)
  4. If A and A -> B are theorems, then so is B (modus ponens)
  5. If A is a theorem, OA is a theorem

Standard deontic logic is a branch of modal logic and has the same kind of semantics (i.e., Kripke-style semantics, characterized by accessibility relations between possible worlds).
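
Because the semantics is a standard Kripke semantics, deontic formulas can be evaluated over small finite models by machine. The following Python sketch is our own illustration (the tuple encoding of formulas and the example model are invented for the occasion), not part of any system discussed in this paper:

  # Illustrative Kripke-model evaluator for standard deontic logic.
  # R is the deontic accessibility relation; it must be serial (every
  # world sees at least one "ideal" world) for OA -> PA to come out valid.
  from dataclasses import dataclass

  @dataclass
  class KripkeModel:
      worlds: set
      R: dict    # world -> set of deontically ideal alternatives
      val: dict  # world -> set of atomic propositions true there

      def holds(self, w, f):
          op = f[0]
          if op == "atom":
              return f[1] in self.val[w]
          if op == "not":
              return not self.holds(w, f[1])
          if op == "and":
              return self.holds(w, f[1]) and self.holds(w, f[2])
          if op == "O":  # obligatory: true at every accessible world
              return all(self.holds(v, f[1]) for v in self.R[w])
          if op == "P":  # permitted: ¬O¬A, true at some accessible world
              return any(self.holds(v, f[1]) for v in self.R[w])
          raise ValueError(f"unknown operator: {op}")

  # One actual world w0 and one ideal world w1 at which John stops.
  m = KripkeModel(
      worlds={"w0", "w1"},
      R={"w0": {"w1"}, "w1": {"w1"}},  # serial relation
      val={"w0": set(), "w1": {"stop"}},
  )
  A = ("atom", "stop")
  print(m.holds("w0", ("O", A)))  # True: it is obligatory that John stops
  print(m.holds("w0", ("P", A)))  # True: OA -> PA, thanks to seriality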

Epistemic Logic

Epistemic logic is the logic of statements about knowledge and belief. It is about 40 years old and has been extensively applied in computer science (Meyer & van der Hoek, 1995; Fagin, Halpern, Moses & Vardi, 1995).

Epistemic logic has two basic operators which cannot be defined in terms of one another, namely Ka ("agent a knows that") and Ba ("agent a believes that").

Like O, Ka and Ba transform well-formed formulas into well-formed formulas. An example: suppose again that A stands for "John stops for the red traffic light"--then Ka A stands for "agent a knows that John stops for the red traffic light," whereas Ba A stands for "agent a believes that John stops for the red traffic light."

Standard epistemic logic has the following axioms and rules of inference:

  1. All classical tautologies
  2. Ka (A -> B) -> (Ka A -> Ka B)
  3. Ba (A -> B) -> (Ba A -> Ba B)
  4. Ka A -> Ba A (knowledge implies belief)
  5. Ka A -> A (knowledge presupposes truth)
  6. Ka A -> Ka Ka A, Ka A -> Ka ¬Ka ¬A (optional)
  7. Ba A -> Ba Ba A, Ba A -> Ba ¬Ba ¬A (optional)
  8. Modus ponens (as above)
  9. If A is a theorem, Ka A is a theorem

Standard epistemic logic is a branch of modal logic and has the same kind of semantics (i.e., Kripke-style semantics, characterized by accessibility relations between possible worlds).
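
Here too small models can be checked mechanically, now with one accessibility relation per agent and per operator. In the same illustrative style as the deontic sketch above (encoding and model are again our own inventions), note how requiring the doxastic relation to be a subset of the epistemic one validates axiom 4, Ka A -> Ba A:

  # Illustrative evaluator for multi-agent epistemic/doxastic formulas.
  # K[a][w] is the set of worlds agent a cannot distinguish from w
  # (reflexive, so Ka A -> A holds); B[a][w] is the set of worlds
  # compatible with a's beliefs. B[a][w] ⊆ K[a][w] validates Ka A -> Ba A.
  def holds(model, w, f):
      K, B, val = model
      op = f[0]
      if op == "atom":
          return f[1] in val[w]
      if op == "not":
          return not holds(model, w, f[1])
      if op == "K":  # f = ("K", agent, formula)
          return all(holds(model, v, f[2]) for v in K[f[1]][w])
      if op == "B":  # f = ("B", agent, formula)
          return all(holds(model, v, f[2]) for v in B[f[1]][w])
      raise ValueError(f"unknown operator: {op}")

  # Agent a cannot rule out w1, where p fails, but doxastically
  # considers only w0 possible.
  K = {"a": {"w0": {"w0", "w1"}, "w1": {"w0", "w1"}}}
  B = {"a": {"w0": {"w0"}, "w1": {"w0"}}}  # B ⊆ K at every world
  val = {"w0": {"p"}, "w1": set()}
  model = (K, B, val)
  print(holds(model, "w0", ("K", "a", ("atom", "p"))))  # False: a does not know p
  print(holds(model, "w0", ("B", "a", ("atom", "p"))))  # True: a believes p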

Logic of Action

The logic of action is concerned with the logical properties of statements about action. This branch of logic is at least 50 years old, but the most interesting developments have occurred comparatively recently (Belnap, Perloff & Xu, 2001). The logic of action has been used in computer science (e.g., in dynamic logic), but the most recent developments in philosophical logic have not yet been applied in this field.

The basic operator of the logic of action as studied by Belnap, Perloff and Xu (2001) is Stit ("sees to it that"). Stit is an operator which transforms a term a and a well-formed formula A into a well-formed formula [a Stit: A]. An example: if a is a term (denoting an agent a) and A stands for "the light is on", then [a Stit: A] stands for "agent a sees to it that the light is on" ("a switches the light on").

The logic of Stit may be axiomatized as follows (Belnap, Perloff & Xu, 2001, ch. 15). We consider only the single-agent case without so-called "busy choosers." Definitions: A^a = A & ¬[a Stit: A], T = A v ¬A.

  1. All classical tautologies
  2. ¬[a Stit: T]
  3. [a Stit: A] -> A
  4. [a Stit: A] -> [a Stit: [a Stit: A]]
  5. [a Stit: A] & [a Stit: B] -> [a Stit: A & B]
  6. [a Stit: [a Stit: A] & B] -> [a Stit: A & B]
  7. [a Stit: A & B] & ¬[a Stit: B] -> [a Stit: A & B^a]
  8. [a Stit: ¬[a Stit: A & B] & B^a] -> [a Stit: ¬[a Stit: A] & B^a]
  9. [a Stit: A] <-> [a Stit: A & B^a] v [a Stit: A & ¬[a Stit: A & B^a]]
  10. [a Stit: ¬[a Stit: A & [a Stit: B & ¬[a Stit: B & C^a]]] & C^a] -> [a Stit: B]
  11. [a Stit: A] <-> [a Stit: ¬[a Stit: ¬[a Stit: A]]]
  12. Modus ponens (as above)
  13. If A <-> B is a theorem, then [a Stit: A] <-> [a Stit: B] is a theorem

Extensions to multiple agents (joint agency) have also been studied. The postulate a != b -> ¬[a Stit: [b Stit: A]] is especially interesting in this context: an agent a cannot see to it that some other agent b sees to it that A (if a wants A to be the case, he must take care of it himself).

The logic of Stit is surprisingly rich, especially when combined with temporal notions. Some examples (Belnap, Perloff & Xu, 2001, ch. 9):

  1. Could-have: [a Stit: Q] is not equivalent to Might-have-been: [a Stit: Q]
  2. If yon fellow sees to some state of affairs, then it might have been that the state of affairs not obtain--at that very instant.
  3. If a does something, then it might have been otherwise; i.e., a might not have done it.
  4. There is no reading of "The fact that a person could not have avoided doing something is a sufficient condition of his having done it" on which this claim is both interesting and true.
  5. Invalid: "That we are responsible for some state of affairs implies that it must have been possible for us to have been responsible for its absence."
  6. Invalid: "If a saw to something, then a could have refrained from seeing to it."
  7. Suppose that a sees to it that Q; does it follow that a might have refrained from seeing to it that Q, in the sense that there is a co-instantial alternative at which a refrains from seeing to it that Q? (Stit version: does [a Stit: Q] imply Might-have-been: [a Stit: ¬[a Stit: Q]]?) Answer: this implication is valid if and only if there are no "busy choosers."

The logic of Stit is a branch of modal logic (although Stit is a peculiar "antinormal" operator). The semantics are similar to those of standard modal and temporal logic (possible worlds with certain relations between these worlds).
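
To make the truth conditions concrete, the following toy sketch (our own drastic simplification; the full semantics of Belnap, Perloff and Xu uses branching histories through moments) evaluates the deliberative Stit at a single moment. The histories passing through the moment are partitioned into choice cells for agent a, and [a Stit: A] holds at a history when a's actual choice guarantees A while A was not guaranteed to hold anyway:

  # Toy truth clause for the deliberative Stit at a single moment
  # (illustrative only; cells, histories and outcomes are invented).
  def dstit(choice_cells, outcome, h):
      """[a Stit: A] at history h.
      choice_cells: partition of the histories into a's available choices;
      outcome: the set of histories at which A holds."""
      cell = next(c for c in choice_cells if h in c)
      positive = cell <= outcome                 # a's choice guarantees A
      everything = set().union(*choice_cells)
      negative = not (everything <= outcome)     # A could have failed
      return positive and negative

  cells = [{"h1", "h2"}, {"h3"}]  # a's two available choices
  A = {"h1", "h2"}                # histories at which the light is on
  print(dstit(cells, A, "h1"))    # True: a sees to it that the light is on
  print(dstit(cells, A, "h3"))    # False: a's actual choice does not ensure A
  print(dstit(cells, {"h1", "h2", "h3"}, "h1"))  # False: ¬[a Stit: T], cf. axiom 2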

Hybrid Systems

When one wants to formalize the types of expressions mentioned in section 1 in terms of the operators we have mentioned, one quickly runs into "mixed" expressions, containing operators from more than one domain. Some examples:

  1. O (Ka A -> forall x Kx A)
    "it ought to be the case that everybody knows what a knows" (see (C) in section 1 above).
  2. O [a Stit: forall x Kx A]
    "a ought to see to it that everybody knows that A," i.e., "a has an information responsibility regarding A" (see (D) in section 1 above).
  3. [a Stit: forall x (Fx -> Bx O A)]
    "a sees to it that everybody who is F believes that A is obligatory."
  4. ¬(P [a Stit: A] -> forall x P [x Stit: A])
    quod licet Jovi non licet bovi ("what is permitted to Jupiter is not permitted to an ox").

Such "mixed" expressions have not yet been very well studied. An exception is the joint logic of Stit and O (Belnap, Perloff & Xu, 2001, part IV). The Stit theorists view the Stit operator as particularly important in deontic contexts because they claim that deontic statements are usually of the form O [ a Stit: A]. In other words, they maintain that such statements usually involve the deontic status of actions rather than states of affairs. (This is the "Tunsollen rather than Seinsollen" thesis from classical ethical theory.)

Combinations of Stit and epistemic operators are only briefly hinted at in Belnap, Perloff and Xu's book (2001). Combinations of all three operators are not considered at all. Yet it will be clear that when one wants to express the views of computer ethicists in formally explicit terms, all three types of operators are needed--and possibly even more. Interesting interactions between the operators might turn out to arise in the full-fledged system, and it hardly needs emphasizing that more work in this intriguing area is desirable.
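
To give an impression of what machine-manipulable DEAL formulas might look like, here is one possible encoding of the mixed expressions above as Python data structures, together with a printer that recovers the notation used in this section. The representation is entirely our own invention; nothing here amounts to a decision procedure for the combined logic:

  # A hypothetical abstract syntax for mixed DEAL formulas (Python 3.10+).
  from dataclasses import dataclass

  @dataclass
  class Atom: p: str
  @dataclass
  class O: f: object                  # it is obligatory that
  @dataclass
  class K: agent: str; f: object      # agent knows that
  @dataclass
  class Stit: agent: str; f: object   # agent sees to it that
  @dataclass
  class ForallK: f: object            # everybody knows that

  def show(f):
      match f:
          case Atom(p): return p
          case O(g): return f"O {show(g)}"
          case K(a, g): return f"K{a} {show(g)}"
          case Stit(a, g): return f"[{a} Stit: {show(g)}]"
          case ForallK(g): return f"forall x Kx {show(g)}"

  # (D) of section 1: "a has an information responsibility regarding A."
  d = O(Stit("a", ForallK(Atom("A"))))
  print(show(d))  # O [a Stit: forall x Kx A]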

Implementability

Trying to express one's views in logical terms is worthwhile in any case because it inevitably leads to more clarity than can otherwise be obtained. But trying to express one's views about computer ethics in terms of deontic, epistemic and action logic is particularly attractive because the resulting theories are in principle implementable in computer software. As a result, one can partially relegate one's reasoning to the very machine about which one happens to be theorizing--the computer.

This is especially important when one is reasoning about the deontic and epistemic kinematics of large organizations, with hundreds of employees and thousands of clients or customers, each having their own privileges, responsibilities, duties, obligations and permissions, sources of information and misinformation, abilities and inabilities, and so on. In such circumstances, "manual" reasoning quickly gets out of hand, and computer assistance becomes desirable. How should one delegate responsibilities, safeguard the flow of sensitive information, protect privacy, and so on, in today's complex organizational environments? Reasoning about such issues may be trivial as long as one is looking only at the level of individual agents, but the totality may be of mind-boggling complexity. It is precisely in problems of this type that the computer has traditionally come to the rescue.

What are the concrete prospects in this area? As we have said, all the theories we have considered belong to the field of modal logic. So the question boils down to this: to what extent is modal logic implementable? The answer is that the situation is similar to that in first-order predicate logic. This calculus is not fully implementable--but thanks to approaches such as those embodied in PROLOG one can go surprisingly far--as far, in general, as is needed for practical purposes. The situation in modal logic is beginning to look similar. Much work on the implementability of modal logic has been carried out during the last few years, and more progress has been achieved than one would have thought possible a decade ago. A detailed description of the results would go far beyond the scope of this paper; we refer the interested reader to a site on the World Wide Web (Schmidt, 2001) for a survey of recent achievements, especially with respect to modal theorem provers.
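
To indicate the flavor of such work: one classic route, prominent among translation-based approaches to modal theorem proving, is the "standard translation" of modal formulas into first-order logic, after which an off-the-shelf first-order prover can be applied. The following sketch of the translation (restricted to the deontic operators, with our own encoding) is purely illustrative:

  # Standard translation of modal formulas into first-order logic
  # (illustrative): OA at world x becomes forall y (R(x,y) -> A(y)).
  import itertools

  fresh = (f"y{i}" for i in itertools.count())  # supply of fresh variables

  def st(f, x):
      op = f[0]
      if op == "atom":
          return f"{f[1]}({x})"
      if op == "not":
          return f"¬{st(f[1], x)}"
      if op == "imp":
          return f"({st(f[1], x)} -> {st(f[2], x)})"
      if op == "O":
          y = next(fresh)
          return f"forall {y} (R({x},{y}) -> {st(f[1], y)})"
      if op == "P":
          y = next(fresh)
          return f"exists {y} (R({x},{y}) & {st(f[1], y)})"
      raise ValueError(f"unknown operator: {op}")

  # The deontic axiom OA -> PA translates to a first-order formula that
  # is valid exactly when R is serial:
  A = ("atom", "a")
  print(st(("imp", ("O", A), ("P", A)), "x"))
  # (forall y0 (R(x,y0) -> a(y0)) -> exists y1 (R(x,y1) & a(y1)))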

There is no denying that the software emerging from this field is still in its infancy, no matter how impressive the theoretical background may be. The existing programs are extremely unfriendly to the user and compile and run only under one or two flavors of Unix. As far as the general user is concerned, the field is about as appealing as the Internet was in, say, 1985. But the potential benefits are great. More work in this area seems worthwhile.

Conclusion

A recent report from the Dutch Data Protection Authority ends on the following note:

We conclude with an important piece of wisdom from the cypherpunks. The cypherpunks' credo can be roughly paraphrased as "privacy through technology, not through legislation." If we can guarantee privacy protection through the laws of mathematics rather than the laws of men and whims of bureaucrats, then we will have made an important contribution to society. It is this vision which guides and motivates our approach (Hes and Borking, 1998).

We are motivated by the same vision. But there is an important difference between our approach and the cypherpunks' position: they did not indicate how their goal might be achieved, whereas we have a clear view of how one should proceed. By combining the most sophisticated computer ethics with the most advanced computer programs, based on the most solid results from philosophical logic, the cypherpunks' vision may well come within reach. We still have a long way to go, but there are no insurmountable obstacles on the horizon.

References

[1] Nuel Belnap, Michael Perloff, and Ming Xu. Facing the Future: Agents and Choices in Our Indeterminist World. Oxford University Press, New York, 2001.
[2] Terry Bynum, editor. Computers and Ethics. Special issue of Metaphilosophy, vol. 16, no. 4 (1985).
[3] Cristiano Castelfranchi. Artificial liars: Why computers will (necessarily) deceive us and each other. Ethics and Information Technology, vol. 2, no. 2 (2000), pp. 113-119.
[4] Robert Cavalier. Multimedia and teaching ethics. http://www.andrew.cmu.edu/user/rc2z/ (no date given).
[5] Peter Danielson. Artificial Morality: Virtuous Robots for Virtual Games. Routledge, London, 1992.
[6] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge (Mass.), 1995.
[7] Alvin Goldman. Ethics and cognitive science. Ethics, vol. 103 (1993), pp. 337-360.
[8] Don Gotterbarn and Simon Rogerson. An ethical decision support tool: Improving the identification and response to the ethical dimensions of software projects. Proceedings of ETHICOMP99 (Rome). CD-ROM, De Montfort University, Leicester, 1999.
[9] Ronald Hes and John Borking, editors. Privacy-Enhancing Technologies: The Path to Anonymity. Registratiekamer, The Hague, September 1998. (Series Achtergrondstudies en Verkenningen, vol. 11.)
[10] Deborah Johnson. Computer Ethics. Prentice Hall, New York, 1985.
[11] Andrew Jones, Robert Demolombe and José Carmo. An application of deontic logic to information system constraints. To appear in Fundamenta Informaticae, vol. 34, 2000/2001, 19 pp.
[12] Ron Lee. DX: A deontic expert system shell. EURIDIS internal report 92.10.01b, Erasmus University Rotterdam, 1992.
[13] John-Jules Ch. Meyer and Wiebe van der Hoek. Epistemic Logic for AI and Computer Science. Cambridge University Press, Cambridge, 1995.
[14] John-Jules Ch. Meyer and Roel J. Wieringa, editors. Deontic Logic in Computer Science. John Wiley, Chichester, 1993.
[15] Jim Moor. What is computer ethics? In Bynum (1985), pp. 266-276.
[16] Renate Schmidt. Advances in modal logic. http://www.cs.man.ac.uk/~schmidt/tools/, 2001.
[17] Yao-Hua Tan. A logical model of trust in electronic commerce. Electronic Markets, vol. 10 (2000), pp. 258-263.
