Tuesday, October 29, 2019

Ethical principles for assessing new technologies


Technologies and technology systems have deep and pervasive effects on the human beings who live within their reach. How do normative principles and principles of social and political justice apply to technology? Is there such a thing as "the ethics of technology"?

There is a reasonably active literature on questions that sound a lot like these. (See, for example, the contributions included in Winston and Edelbach, eds., Society, Ethics, and Technology.) But all too often the focus narrows too quickly to the ethical issues raised by a particular example of contemporary technology -- genetic engineering, human cloning, encryption, surveillance and privacy, artificial intelligence, autonomous vehicles, and so forth. These are important questions; but it is also possible to ask more general questions about the normative space within which technology, private activity, government action, and the public live together. What principles allow us to judge the overall justice, fairness, and legitimacy of a given technology or technology system?

There is an overriding consideration that needs to be addressed in every discussion of the ethics of technology. It is a basic principle of liberal democracy that individual freedom and liberty should be respected. Individuals should have the right to act and create as they choose, subject to something like Mill's harm principle. The harm principle holds that liberty should be restricted only when the activity in question imposes harm on other individuals. Applied to the topic of technology innovation, we can derive a strong principle of "liberty of innovation and creation" -- individuals (and their organizations, such as business firms) should have a presumptive right to create new technologies, constrained only by something like the harm principle.

Often we want to go beyond this basic principle of liberty to ask what the good and bad of technology might be. Why is technological innovation a good thing, all things considered? And what considerations should we keep in mind as we consider legitimate regulations or limitations on technology?

Here we can consider three large principles that have emerged in other areas of social and political ethics as a basis for judging the legitimacy and fairness of a given set of social arrangements:
A. Technologies should contribute to some form of human good, some activity or outcome that is desired by human beings -- health, education, enjoyment, pleasure, sociality, friendship, fitness, spirituality, ...
B. Technologies ought to be consistent with the fullest development of the human capabilities and freedoms of the individuals whom they affect. [Or stronger: “promote the fullest development …”]
C. Technologies ought to have population effects that are fair, equal, and just.
The first principle attempts to address the question, "What is technology good for? What is the substantive moral good that is served by technology development?" The basic idea is that human beings have wants and needs, and contributing to their ability to fulfill those wants is itself a good thing (provided that greater harms are not created in the process). This principle captures what is right about utilitarianism and hedonism -- the inherent value of human happiness and satisfaction. This means that entertainment and enjoyment are legitimate goals of technology development.

The second principle links technology to the "highest good" of human wellbeing -- the full development of human capabilities and freedoms. As is evident, the principle offered here derives from Amartya Sen's theory of capabilities and functionings, expressed in Development as Freedom. This principle recalls Mill's distinction between higher and lower pleasures:
Mill always insisted that the ultimate test of his own doctrine was utility, but for him the idea of the greatest happiness of the greatest number included qualitative judgements about different levels or kinds of human happiness. Pushpin was not as good as poetry; only Pushkin was.... Cultivation of one's own individuality should be the goal of human existence. (J.S. McClelland, A History of Western Political Thought: 454)
The third principle addresses the question of fairness and equity. Thinking about justice has evolved a great deal in the past fifty years, and one thing that emerges clearly is the intimate connection between injustice and invidious discrimination -- even if unintended. Social institutions that arbitrarily assign significantly different opportunities and life outcomes to individuals based on characteristics such as race, gender, income, neighborhood, or religion are unfair and unjust, and need to be reformed. This approach derives as much from current discussions of racial health disparities as it does from philosophical theories along the lines of Rawls and Sen.

On these principles a given technology can be criticized, first, if it makes no positive contribution to the things that make people happy or satisfied; second, if it has the effect of stunting the development of human capabilities and freedoms; and third, if it has discriminatory effects on quality of life across the population it affects.

One important puzzle facing the ethics of technology is a question about the intended audience of such a discussion. We are compelled to ask: to whom is a philosophical discussion of the normative principles that ought to govern our thinking about technology addressed? Whose choices, actions, and norms are we attempting to influence? There appear to be several possible answers to this question.

Corporate ethics. Entrepreneurs, corporate boards, and executives have an ethical responsibility to consider the impact of the technologies they introduce into the market. If we believe that codes of corporate ethics have any real effect on corporate decision-making, then we need a basis in normative philosophy for a relevant set of principles to guide business decision-making about the creation and implementation of new technologies. A current example is the use of facial recognition for marketing or store security; does a company have a moral obligation to consider the negative social effects it may be promoting by adopting such a technology?

Governments and regulators. Government has an overriding responsibility to preserve and enhance the public good and to minimize the harmful effects of private activities. This is the fundamental justification for government regulation of industry. Since various technologies have the potential to create harms for some segments of the public, it is legitimate for government to enact regulatory systems to prevent reckless or unreasonable levels of risk. Government also has a responsibility for ensuring a fair and just environment for all citizens, and for enacting policies that serve to eliminate inequalities based on discriminatory social institutions. So here too governments have a role in regulating technologies, and a careful study of the normative principles that should govern our thinking about the fairness and justice of technologies is relevant to this process of government decision-making as well.

Public interest advocacy groups. One way in which important social issues can be debated and sometimes resolved is through the efforts of well-organized advocacy groups such as the Union of Concerned Scientists, the Sierra Club, or Greenpeace. Organizations like these are in a position to argue for or against a variety of social changes, and raising concerns about specific kinds of technologies certainly falls within this scope. There are only a small number of grounds for this kind of advocacy: the innovation will harm the public, the innovation will create unacceptable hidden costs, or the innovation raises unacceptable risks of unjust treatment of various groups. In order to make the last kind of argument, the advocacy group needs to be able to articulate a clear and justified argument for its position about "unjust treatment".

The public. Citizens themselves have an interest in being able to make normative judgments about new technologies as they arise. "This technology looks as though it will improve life for everyone and should be favored; that technology looks as though it will create invidious and discriminatory sets of winners and losers and should be carefully regulated." But for citizens to have a basis for making judgments like these, they need to have a normative framework within which to think and reason about the social role of technology. Public discussion of the ethical principles underlying the legitimacy and justice of technology innovations will deepen and refine these normative frameworks.

As proposed here, the topic of the "ethics of technology" is part of a broader theory of social and political philosophy. It invokes some of our best reasoning about what constitutes the human good (fulfillment of capabilities and freedoms) and about what constitutes a fair social system (elimination of invidious discrimination in the effects of social institutions on segments of the population). Only when we have settled these foundational questions are we able to turn to the more specific issues often discussed under the rubric of the ethics of technology.
