Social robots are being designed to enter our lives and help in everything from childrearing to elderly care, from household chores to personal therapy, and the list goes on. There is great promise that these machines will further the progress that their predecessors achieved, enhancing our lives and alleviating us of the many tasks with which we would rather not be occupied. On the one hand, these machines are just that: machines. Accordingly, some thinkers propose that we maintain this perspective and relate to social robots as “tools.” Yet, in treating them as such, it is argued, we deny our own natural empathy, ultimately inculcating vicious as opposed to virtuous dispositions. Many thinkers thus apply Kant’s approach to animals (“he who is cruel to animals becomes hard also in his dealings with men”), contending that we must not maltreat robots lest we maltreat humans. On the other hand, because we innately anthropomorphize entities that behave with autonomy and mobility (let alone entities that exhibit beliefs, desires and intentions), we become emotionally entangled with them. Some thinkers actually encourage such relationships. Yet here, too, there are problems. For starters, many maintain that it is imprudent to have “empty,” unidirectional relationships, for we will then fail to appreciate authentic reciprocal relationships. Furthermore, such relationships can lead to our being manipulated, to our shunning of real human interactions as “messy,” to our incorrectly allocating resources away from humans, and more. In this article, I review the various positions on this issue and propose an approach that I believe sits in the middle ground between the one extreme of treating social robots as mere machines and the other extreme of according social robots human-like status. I call the approach “The Virtuous Servant Owner” and base it on the virtue ethics of the medieval Jewish philosopher Maimonides.

“Man is by nature a social animal” (Politics, 1253a). So noted Aristotle almost 2,400 years ago. Interestingly, while Aristotle did conceptualize automatons that might replace the slave labor of his day (ibid., 1253b), he did not envision that humans might interact socially with these automatons. This is because, in addition to living at a time when human slaves were considered animated tools, he never imagined the sophisticated automatons of the twenty-first century: social robots, which today come in a vast and growing array of configurations (Reeves et al., 2020), many designed to be social companions. Indeed, the social robots of today are not merely functional automatons; they are emotionally engaging humanoids. And even those not designed to be so nevertheless manage to trigger our empathy, drawing us to relate to them as if they too were, by nature, “social animals.” 1 It is this “as if” (Gerdes, 2016: 276) condition that brings us to one of the most consternating conundrums in the field of robo-ethics today, what Mark Coeckelbergh calls “the gap problem” (Coeckelbergh, 2013; Coeckelbergh, 2020c). When we interact with a Social Robot (SR), a “gap” exists between what our reason tells us about the SR (i.e., that it is a machine) and what our experience tells us about the SR (i.e., that it is more than a machine). It is this gap that gives rise to the ethical question that is the subject of this essay: how are we to relate morally to social robots, as machines or as more than machines? Before attempting to address this question, it is important to define specifically the type of SR that is the focus of this investigation.