Why is it that the bacon you are about to bite into is an acceptable source of food for you, but possibly not for the person sitting next to you? Perhaps he or she eats according to a religious code, or has a health-related reason for skipping meat products. Maybe he or she is a proponent of animal welfare and has decided to eat only meat from animals slaughtered “transparently and humanely”; or, it could be that he or she has decided not to eat any animal that is conscious on any level.
Such an introduction is not intended to lambast meat eaters or put vegetarians on a pedestal (or vice versa), but to illustrate the many thought paths one might take on the road to deciding whether an action for or against something is “morally acceptable” or “morally unacceptable”. On what do we, as humans, base these moral evaluations?
How Do We Decide Who – and What – is Morally Relevant?
This intellectual sphere is one in which Dr. David Gunkel spends much of his time researching, philosophizing, writing, and teaching. We might even label him a pioneer when it comes to considering these moral equivalencies on a futuristic scale, i.e. moral considerations around the treatment of artificial intelligence (AI).
But going back to just how humans make up their minds when it comes to taking or leaving the bacon, let’s tap into a concept that Gunkel calls the “properties approach”. Throughout history, humans have decided which entities fall inside and outside of certain moral circles based on the attributes and features that make up a particular entity; these qualities might include (but are not limited to) language use, awareness, rationality, and consciousness. Whether or not an entity climbs higher up the moral ladder often “hinges on matters of degree,” says Gunkel.
Have you spotted the obvious paradox yet? Simply put, these properties are more dynamic than static; they change over time to accommodate societal and technological evolutions. In the Greco-Roman period, for example, men owned all the land, and these men had every right, both legal and moral, to deny their wives and children what would be considered fundamental human rights by today’s Western standards. Women and children were viewed as property, much as the land was viewed as property, and were therefore considered by moral and legal code to be less than full human beings.
A more modern example goes (once again) back to the bacon dilemma. The debate about whether or not to be a vegetarian based solely on animal rights and ethics is a movement brought into the spotlight by Peter Singer, Tom Regan, and other notable animal ethicists. Once again, we have stepped down a rung to recognize the moral weight and validity of creatures that are, perhaps, less conscious than humans, but still sentient and able to experience feelings like pain and pleasure.
Reframing Our Moral Approach
As we move forward into an AI-inundated future, the moral landscape appears fuzzy. Perhaps this is always the case, at least for most of us, as we look ahead into unknown territory. We interact with AI on a daily basis, yet do we consider these systems entities deserving of certain fundamental rights? Perhaps not yet, but we’ve been known to change our minds late in the game. Might AI one day reach a level of “awareness” that compels us to reconsider our justifications for moral treatment?
David thinks we may be going about the business of moral evaluation in an inherently flawed manner. He contends that there are two main problems with the properties approach, one ontological and one epistemological.
Ontologically, if we make arguments based on the state of “being or existing”, then we end up having to decide which qualities qualify an entity for a higher set of moral rights. How will we know if the bar is set too high or too low? Who gets the privilege of deciding the answers to these questions? For now, humans get to give the answers, though “those decisions have historically yielded some bad outcomes for humanity”, says Gunkel. While we seem to have gained some enlightenment over time in defining these qualities, the moving bar still poses an issue.
Take the treatment of a lobster compared to that of a dog, for example; can we make a steadfast argument that mammals are more sentient than crustaceans and therefore deserve better treatment? There are many arguments to be made here, of course, but the point is that this approach yields a continuous moral and philosophical struggle both within our own species and in relation to other species.
We can try to separate our beliefs from mere opinions and potentially base moral arguments on the complexity of an entity’s internal states. But in the past, and even today, we have not been able to directly observe such properties; instead, we look primarily to an entity’s behavior. Empiricists, says Gunkel, will draw their conclusions from physical evidence; however, David argues that since each human being is locked into a specific and biased conscious state, there does not appear to be a way to know another’s inner experience beyond any doubt. Behavior is, thus far, the best foundation we have on which to build our morals.
Where does AI fit into all of this? The media recently publicized that a chatbot named “Eugene Goostman” had passed a Turing test. The conclusions are still up for debate, but nonetheless the progress made by this AI presents a new level of moral ambiguity. If we imagine that in the next five or ten years scientists create a machine that can “feel” simulated pain or pleasure, how do we then assign causality to its behavior, asks Gunkel.
All of this conjecturing leads back to the root of the argument, i.e. how we base our moral decision-making. David thinks the first necessary step is to recognize our history of “capricious” judging, which is often based on abstractions like culture, tradition, and individual choice rather than pure rationality. As AI continues to evolve and become more tangible in our daily lives – we may soon have robots in homes as caretakers, for example – we’ll need to start making decisions about how and whether to include or exclude them from certain moral circles. Gunkel believes that now is the time to start giving serious thought to, and drafting, the policy and ground rules for the variations of AI that are already an integrated part of our world.