Tech on Earth Episode Transcripts

Below you will find transcripts for episodes of the Tech on Earth podcast. More information about the episodes can be found on the podcast’s homepage.

While we make every effort to produce highly reliable transcripts, if you want to quote from one of our episodes, particularly the words of our guests, please listen to the audio whenever possible. Thank you.

4. Virtue Ethics and Technomoral Futures

Guest: Dr. Shannon Vallor (University of Edinburgh)

Transcript

 

Elizabeth Renieris  0:02  
Welcome to Tech on Earth, a podcast aimed at bringing a practical lens to tech ethics around the world. I'm Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame. Today, I'm so pleased to be joined by Dr. Shannon Vallor, professor of philosophy, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, and director of the Centre for Technomoral Futures at the University of Edinburgh. Dr. Vallor, welcome to the show.

Shannon Vallor  0:30  
Thanks very much for having me. Excited to be here.

Elizabeth Renieris  0:32  
So I'd like to begin with your personal journey to our conversation today. Tell us, how did you come to research and write about technology ethics as a philosopher and what drew your interest to this topic in the first place?

Shannon Vallor  0:44  
That's a great question. Because a lot of people may not know that philosophers have actually been sort of late to the party, historically speaking, in thinking about technology--and one of the things that interests me is this sort of long-standing philosophical prejudice against thinking seriously about technology. There are a few minor exceptions. But starting basically from Plato onward, one of the topics that philosophers have historically neglected is the role of technology and the built world in shaping what it is to be human. So when I was a grad student studying philosophy, studying the philosophy of technology or the ethics of technology was not really an option in the formal sense in most university philosophy programs. There were only a handful in the United States, for example, that would offer a graduate-level course in the philosophy of technology. So I was actually trained as a philosopher of science. But I was really interested in the role of scientific instruments in the way that scientific knowledge is produced. So I was already thinking about technology a lot. I was someone who grew up with the first generation of PCs in the early '80s, and video games, so tech was always something that interested me, since I was a child. So when I was studying the philosophy of science and looking at the role of technologies in scientific thinking, I wasn't planning to really broaden out into the ethics of emerging technologies. But I just so happened to teach a course on science, technology, and society at Santa Clara University, where I was working at the time. And for just one unit in the course, I chose to focus on the ethics of technology and the philosophy of technology, and particularly on emerging technologies. And the way my students came alive during that particular section--the way that they were not only intellectually engaged but tying it directly to struggles that they were having in their own lives with the ways that technologies like Facebook and smartphones were mediating their relationships and their speech and their moods and their attention--that was already something, even in the early, you know, first decade of the century, that was beginning to become apparent. And I realized that there were so many rich philosophical problems that I wanted to better understand in this area. So I began reading and writing more deeply about the role of technology in human experience, and particularly the ways that emerging technologies were changing the human experience and our human habits and capabilities. And then that was all she wrote; from then on, that was really my focus, and it remains so today.

Elizabeth Renieris  3:47  
Yeah, and it's so interesting that you mention this personal dimension and the real lived experience of technology, because I think we've all been acutely aware of that, you know, through the course of the last few years with the pandemic. And this closeness to the technology highlights, I think, both, you know, the positive and negative aspects of it. But just for level-setting and thinking about how I'd like to frame this conversation: so you are an expert in a tradition known as virtue ethics, and you authored a widely renowned, influential book on the topic, Technology and the Virtues. For those who aren't familiar with it, for a general audience, could you give us a brief overview of what virtue ethics is and how it relates to AI and other new and emerging technologies?

Shannon Vallor  4:27  
Sure, absolutely. So virtue ethics is one of several different philosophical traditions of thinking about and talking about ethics. And what's distinct about it--and I also have written about the fact that virtue ethics resonates across different cultural traditions and appears not only in Western philosophy, for example, in Aristotle and in the Catholic moral tradition as well as in more contemporary versions of virtue ethics, but also as a frame in the Confucian ethical tradition, and in others as well. So it's a particularly old and, if not universal, at least widely culturally accessible way of thinking about morality. And it really is rooted in the notion of character, and the way in which our actions and our habits shape our moral character.

So the central intuition of virtue ethics is that you are not made good by a single action, and you're not born good, but you gradually become good if you are able to do this through a practice, through a process of moral self-cultivation, where you build moral strengths, moral skills, moral dispositions through repeated practice in the real world in moral situations that call upon you to act wisely and well. And the idea of virtue is that virtues are precisely these strengths of character that we gradually build up over time. And I gravitated to it very early on simply because I think it speaks to a truth of moral experience that most of us can recognize, which is that we, in the course of our lives, learn to become better through experience, and not by consulting a theory, not by reading a list of moral rules that we carry around with us, but learning from the moral experiences that we've had and gradually strengthening our moral capabilities, and also our moral intelligence or moral wisdom. So I was really interested in this as a way to think about technology precisely because what technologies do when they are new is they transform our habits, they transform the things that we do every day. And virtue ethics says that it's precisely the things that you do every day that determine the shape of your character and your ability to live well with others. So it's your moral habits and the way you embed those in daily practices with others that allows you to become a good or wise person and to flourish in whatever social environment that you're in, and to help others flourish with you. And so if technologies radically reshape our moral habits, then they have the potential to either greatly improve or greatly diminish our capabilities for living well with one another.

Elizabeth Renieris  7:53  
Yeah, and in your book, I believe you call these "technomoral virtues." And in fact, now you direct a center bearing the same term, the Centre for Technomoral Futures. So for those who aren't familiar with the term, what do you mean by technomoral, and why use that specific framing?

Shannon Vallor  8:07  
Yeah, that's another great question. So one of the things that I think has really been a misfortune for the human family is the way that our framing of technology and morality has tended, for a very long time, to treat these as separate and independent aspects of the human experience. And as a philosopher of technology, I benefited from reading a number of philosophical works that taught me how misguided the idea of this separation is. So the separation is usually framed in this way: that morality is about values, about judgments that we make about what's good or bad, and that technologies are just things that are neutral, that they have no values, that they are just instruments that can be used for good or bad purposes, but that those good or bad uses are separate from the technology and what it is. So that's what we call the thesis of technological neutrality. And that's a falsehood. It is deeply, deeply mistaken. And the reason for that, which is quite clear when you think about it, is that humans build technologies for reasons. We don't innovate randomly or arbitrarily; we design things to meet perceived or genuine needs, to satisfy things that we think ourselves or others will want. And we therefore are already injecting values into the very process of innovation that creates and shapes technologies. So they have values baked in from the very beginning.

But in addition to that, technologies also change the way we relate to one another. And in doing so, technologies become themselves influences on our values. Technologies make it easier to do certain things, and harder to do other things. They make it more likely for certain things to happen, sometimes in ways that we didn't intend or design in advance. So values are baked into technologies in the very beginning when we conceive of them, and then they take on a new value shape as they interact with us in our social environment. And they even act upon our own values and reshape them. Think about the ways that social media technologies have reshaped the values around communication, the values around privacy and sharing, the values around one's image and how one presents that to others. There are numerous changes to human values that have been effected by social media and other technologies.

So you can't separate technologies from values. But it's also true that you can't separate morality from technology. Because one way of thinking about morality is that it's a kind of a technique, a social technique for managing complex and contested human relationships. We are social creatures, but we're also competitive creatures, we have a fair amount of aggression in our nature, and yet we're highly interdependent and vulnerable to one another, which means that we need ways of managing these social complexities in such a fashion that we can have stable societies that flourish. And ethics is a technology for doing just that. And it's one of the oldest technologies we have. So it's really important, then, for me, to realize that this habit that we have of treating technology and morality as entirely independent, separate areas of study or interest is actually part of the problem of why our society is struggling right now to align innovation and economic growth and scientific progress with social and political flourishing. It's precisely because we have separated these forms of knowledge artificially, and they need to be brought back together. So I talk about technomoral virtues, I talk about technomoral futures, because I want to remind us that until we begin to understand the integration of technology with our values and the mutually dependent relationship of these domains, we won't be able to solve the problems that are facing us today.

Elizabeth Renieris  12:52  
It's so interesting you say that because the whole genesis of this podcast was really about exploring how religious and secular ethical traditions from around the world can or should inform technology ethics. And to your point that they really have a lot to say about this, even some of the older traditions, as you point out, and one of the things that I loved about your book, and one of the reasons we were so keen to have you on the show, is that it pulls from those different ethical traditions you mentioned, including Aristotelian, Buddhist, Confucian ethics. I guess the question would be how virtue ethics integrates these traditions, and why that matters to future ethical challenges posed by technology. So I'm thinking, you know, does this suggest we need a kind of global morality that integrates these? How do you think about that?

Shannon Vallor  13:36  
Yeah, that's a, it's a difficult question. And here's the best way that I can answer it. So I don't think it's realistic or even desirable to strive for a single, universal ethical framework or set of norms, something that would extinguish all local and cultural difference in the way that we govern our societies or the way that we value our social practices. So I'm a pluralist in that sense, that I think that there are many possible ways to live well together. And they are not all reducible to one pattern or script. That being said, there are definitely some patterns of living that are, in a sense, morally off the table because they undermine our humanity and our capability to flourish as the kinds of dependent social animals that we are. So there's certain kinds of moral patterns that we have to reject because otherwise we destroy one another. But that leaves a lot of territory that can be developed in different ways.

That being said--okay, so I'm a pluralist about morality, and I think there are multiple moral worldviews that we need to accept--at the same time, we are facing, as a species, a set of collective action problems at a global scale unlike any we've ever encountered. We might think that the very first of these was the crisis of nuclear armaments and the potential that arose in the 20th century to exterminate the human race through the application of nuclear technology to warfare, and the political and scientific labor that had to be done and still needs to be done--and that consensus, as we know, especially these days, remains incredibly fragile--the consensus that we must never allow these technologies to destroy the human family. That requires cooperation across different systems of moral understanding. That requires the ability to translate certain moral concepts across national, ethnic, religious, and cultural boundaries. And if we think about the climate crisis or the pandemic--and not even this pandemic, but the next one, which might even be harder for us to manage globally--all of these are presenting existential threats to the human family, where there is a very real question about whether the human family will still exist in 150 or 200 years. If we don't manage these collective moral action problems, we very well may not. And there is no ethical duty higher than to preserve life and the possibility of moral experience itself on this planet. So we have to solve these problems, which are not local problems, but global problems.

So what I argue for in my book is that we need to be able to have a way of thinking about ethics that allows for different representations of a good life, but that also is translatable into different cultural frames. And virtue ethics seems to me, by the historical evidence, to be the moral understanding that most readily allows for this kind of translation. Because what's common across virtue ethics is this notion of moral practice, this notion of practical wisdom that can be cultivated through shared moral experience, and even the virtues themselves, although they do differ in different cultural contexts. We do find these repeated patterns, where notions of courage, honesty, compassion, generosity, humility, wisdom--these virtues do get spoken to by different cultural traditions at different historical periods. They may be expressed differently, in different kinds of norms or actions, but there is a common resonance that I think speaks to our shared human inheritance, and our shared biology, and our shared need for one another. So I want to use virtue ethics as a scaffold for these kinds of collective action challenges that we need to meet. It's a way of kind of climbing up to those challenges together and trying to tackle them.

Elizabeth Renieris  18:15  
Yeah, that's really interesting that you mentioned the sort of future inheritance. I think one of the challenges we have around the collective existential threats is the temporal dimension, right, of sort of negative effects materializing perhaps down the line or with future generations. I know one of the concepts you've talked about and written about is this idea of moral debt. I was hoping we could examine what that means in the specific context of AI, for example, and whether you could maybe give us a practical example from an industry or a specific technology.

Shannon Vallor  18:44  
Sure. That's a great question. Let me see if I can explain first what I mean by moral debt in the technical context, and then maybe we can talk about what it means more broadly for your question about our obligations to future generations. So the concept of moral debt is actually borrowing from an engineering concept of technical debt that we talk about in the software development context, where there are certain things you can do to make a software program more stable over time, easier to maintain and repair, things like documentation of the changes to the code that you're making along the way and so forth. But doing some of these things also slows down the work. So there's often a pressure to cut back on some of that kind of stability infrastructure in our coding practices in order to get a product out to market faster, for example. But that creates what we call technical debt, where now the system has been delivered, but it's not as stable and it's not as easy to repair as it could otherwise have been. And it might have more weaknesses in it that we haven't documented. And what that tends to do over time is pile up with interest. So just like financial debt accrues interest, technical debt accrues interest where the software can become so unwieldy and messy and unworkable and unstable that the costs to fix it later are far higher than if the work had been put in in the earlier stages. 
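
For listeners less familiar with software practice, here is a minimal, hypothetical Python sketch of the trade-off Dr. Vallor describes above: a function shipped quickly with undocumented assumptions versus the slower-to-write, documented version. The function names, the discount rule, and the numbers are illustrative assumptions, not anything discussed in the episode.

    # A minimal, hypothetical illustration of "technical debt" (not from the episode).
    # Version written under deadline pressure: it works today, but the business rule
    # is undocumented and unknown codes fail silently, so later repairs cost more.

    def apply_discount_quick(price, code):
        if code == "SUMMER24":      # magic string; why this code? nobody wrote it down
            return price * 0.9      # magic number; the 10% rule lives only here
        return price                # unknown codes silently do nothing


    # The slower-to-write version: documented, validated, and easier to maintain.
    # The extra up-front effort is exactly what gets cut when a team takes on debt.

    DISCOUNTS = {"SUMMER24": 0.10}  # business rules recorded in one documented place

    def apply_discount(price: float, code: str) -> float:
        """Return the discounted price; raise on bad input so errors surface early."""
        if price < 0:
            raise ValueError("price must be non-negative")
        if code not in DISCOUNTS:
            raise KeyError(f"unknown discount code: {code!r}")
        return price * (1 - DISCOUNTS[code])


    if __name__ == "__main__":
        print(apply_discount_quick(100.0, "SUMMER24"))  # 90.0
        print(apply_discount(100.0, "SUMMER24"))        # 90.0, but maintainable

Either version gives the same answer today; the difference only shows up later, when the rules change and someone has to work out what the quick version was doing. That later cost is the "interest" on the debt.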

So what I wanted to argue is that technologies also add to our moral debt. And I think moral debt is something that societies accrue when we neglect moral problems in ways that we can tolerate at the moment but are going to accumulate interest over time--interest in the sense both of economic expense and its social costs. So think about things like the unsustainability of our environmental practices and pollution. Think about the ways that in the '60s and '70s, we knew we were polluting the oceans, we knew we were using up scarce and nonrenewable resources. But we could afford to do it at the moment, right? But now, the costs are ballooning for us in such a way that the expense of building a sustainable world is now terrifyingly large. Whereas had we had more sustainable practices a hundred or 50 years ago, the overall cost to us socially and economically would have been far lower. Think about economic inequality, and the way we've allowed that to spiral up to the point where it is a threat to the stability of many countries and the stability of the international community as a whole. Well, that's another example of allowing moral debt to pile up. We knew that growing social inequality was going to be a problem back in the 1970s and 1980s, when the trajectory started, and we did nothing about it. And now the costs politically and economically have grown exponentially.

What I think AI is doing potentially is creating yet another sort of moral debt. And it's a combination of the technical debt and the moral debt. So basically, AI systems are being used as Band-Aids or sort of easy technical fixes for big social problems, like the problems involved in distributing public benefits in a fair and equitable way, or the economic challenges of staying competitive in the global economy. A lot of businesses, a lot of governments are rushing to use AI to save time and save costs, but they're often implementing AI in ways that are not robust, not particularly safe, that tend to amplify social injustices and inequalities. And those costs are going to come due; those costs always come due.

And so what I want to get people thinking about is how we can develop AI and other new technologies in ways that are more morally solvent, right, if you use the--that don't begin at the very start of an implementation of a new AI system with this mountain of moral debt that's only going to grow and become unmanageable in a short amount of time. And I have plenty of examples of this, and they don't all involve AI; sometimes they involve relatively simple algorithmic tools. So if you look at the ways that many agencies in the United States, government agencies, have implemented algorithms to automate the process of fraud detection. So in Michigan, a system called MiDAS was implemented a number of years ago that was not properly developed or built with the kinds of guardrails that were needed to make sure that the way that it was flagging cases, applications for unemployment insurance, as fraudulent was in fact accurate. Because it turns out it was actually running at a false positive rate of over 75% for several years, meaning the vast majority of the people that it flagged as committing fraud were innocent. And yet the system was allowed to run for cost reasons in an automated manner where it was generating fines and generating letters, accusing people of fraud, cutting people off from the benefits to which they were entitled as citizens. And, you know, tens of thousands of people had their rights denied and their livelihoods damaged by the application of the system. And the government of Michigan is still, I believe, going through the legal process of paying out for this improperly implemented system. So it's an example of both technical debt and moral debt that also compounded people's distrust in their government, people's distrust in technology, and that has ripple effects and costs down the line, as well. So my argument with respect to AI is that if AI is going to be of any use to us in the long term, we need to start paying attention to the moral debt we're accumulating with these careless applications of it. And we need to develop it in a way that's responsible and accountable and contestable. And that's the only way to prevent it from amplifying the moral debt that we're already carrying at unsustainable levels.
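
To make the scale of that kind of failure concrete, here is a small, hypothetical back-of-the-envelope calculation in Python. Only the greater-than-75% false-flag share comes from the episode; the caseloads and the per-case penalty are invented purely for illustration and are not data about MiDAS.

    # Back-of-the-envelope sketch of how a high false-flag rate compounds harm when
    # an automated system is left to run. Only the >75% share comes from the episode;
    # the caseloads and the per-case penalty below are hypothetical.

    false_flag_share = 0.75      # share of fraud flags that were wrong (cited above)
    penalty_per_case = 5_000     # hypothetical fine assessed per flagged case, in dollars

    for flagged_cases in (10_000, 40_000):
        wrongly_accused = int(flagged_cases * false_flag_share)
        wrongful_penalties = wrongly_accused * penalty_per_case
        print(f"{flagged_cases:>6,} flags -> ~{wrongly_accused:,} innocent people accused, "
              f"~${wrongful_penalties:,} in wrongful penalties to unwind later")

The arithmetic is trivial, but that is the point: once the pipeline runs automatically, every additional case it processes multiplies the moral debt at the same rate.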

Elizabeth Renieris  25:51  
Yeah, and for companies and governments or other stakeholders who are designing and deploying these AI systems who are concerned, right, and who don't want to accumulate these levels of moral debt, are we talking about something like ethics by design? Or does it go beyond that? And perhaps how does virtue ethics inform this approach?

Shannon Vallor  26:08  
Yeah. I think we need a multi-pronged approach. So ethics by design is an important part of the process that ensures, for example, that the values that I spoke about earlier that are always embedded in technologies are made explicit, that we're aware of the values that our technologies are being designed to promote, and aware of the values that they might promote unexpectedly or without us having intended that to happen. And so the values in design approach, for example, can help us ensure that the values don't remain hidden. And ethics in design can make sure that the right values, the values that are compatible with justice and human well-being, are the ones that the systems are designed to facilitate. But as I've mentioned, we don't actually always have the ability to predict what these systems will do when they're out in the world interacting with us. And we've seen that again and again. So it's not enough to just have ethics be involved at the design process, where the virtues of designers might be the most relevant ones. For example, the ability of a designer to apply the virtue of humility to their choices, to recognize that failure, as undesirable as it is, is always a possibility. And then to think about, What are the modes by which this thing that I'm building could fail? And if it fails, who's it going to hurt and how? And how can I mitigate that, how can I protect people from those failures? So the virtues of humility and courage and wisdom are really, really important for designers. But they're not the only ones. 

We need systems that actually are studying how the technologies are working in the world; what are the impacts they're actually having? So people have proposed various forms of algorithmic auditing or other forms of algorithmic governance that can actually see if the expectations that the designers had, even their best intentions, are in fact mirroring how the technology is actually changing the world and changing us. So we need, we need virtues of accountability and responsibility on the side of those who are deploying the technologies, whether or not they actually designed them. So companies that purchase these technologies have to take responsibility for the effects that they're having. And we need regulators and policymakers to take responsibility for protecting the public interest and making sure that these technologies, broadly speaking, enrich the public interest, both in terms of, you know, economic growth, but also in terms of moral and political and environmental sustainability, of human flourishing. 

So everyone has a role to play. And I don't think that the technomoral virtues are something that only software developers or designers or computer scientists and AI researchers are responsible for developing, I think we all engage with technologies and, in one way or the other, have a role to play in making sure that they're restored to a place that I think all technology belongs, which is as a support for human flourishing, something that enhances our capabilities to live well. Not something that takes over from us, not something that replaces us, but something that makes us better and makes us able to live better with one another. So I still very much believe in the power of technology as one way in which humans have always lived well. But we have to be able to understand the role that we play and the responsibilities that we have for ensuring that that alignment of technology and human flourishing is stable.

Elizabeth Renieris  30:02  
So we've come full circle, because we began by talking about habits and things that we do every day, and it feels like expanding that beyond individuals to entities, to society, and thinking about maintenance and repair and iterative processes around this, is really critical. So thank you so much for such a robust conversation, so aligned with what we're trying to do with this podcast. So Dr. Vallor, I really appreciate you joining me in conversation today, and I look forward to continuing the conversation with you in the future.

Shannon Vallor  30:29  
Thank you so much. I've really enjoyed the conversation.

Elizabeth Renieris  30:32  
(voiceover) Tech on Earth is a production of the Notre Dame-IBM Technology Ethics Lab. For more, visit techethicslab.nd.edu.

3. Islam and Postmodern Technology

Guest: Dr. Amana Raquib (Institute of Business Administration, Karachi)

Transcript

 

Elizabeth Renieris  0:03  
Welcome to Tech on Earth, a podcast aimed at bringing a practical lens to tech ethics around the world. I'm Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame. Today, I am joined by Dr. Amana Raquib, a professor at the Institute of Business Administration in Karachi, Pakistan, an expert on the Islamic ethics of technology. Dr. Raquib, thank you so much for being with us today. And for joining me so early in the morning from Karachi. I really appreciate it.

Dr. Amana Raquib  0:34  
Thank you. Thank you for inviting me, it's an honor.

Elizabeth Renieris  0:37  
Great. So I'd like to begin where we typically do on this podcast, which is with your personal journey. Tell us, how did you come to study and publish on Islamic perspectives on technology ethics?

Dr. Amana Raquib  0:49  
Alright. Bismillahir-Rahmanir-Raheem, I will begin with the name of God, who is the Most Beneficent, the Most Merciful. I actually started off my career doing an undergrad in philosophy; I graduated back in 2004. And while I was studying for my philosophy degree, simultaneously, I was enrolled in a program that looked closely at the Islamic Scripture, the texts. So I was doing the two programs simultaneously. And then I used to, you know, come across questions and ideas, which I would then try to sort of situate within the Islamic theology and try to figure out some of the answers. And then also, at the same time, I found out that there are some very pressing contemporary questions or other dilemmas that, you know, the Muslims need to deliberate upon. And while, for instance, I was studying philosophy of science and political theory, I was surprised by the fact that many of the Muslim intellectuals, intelligentsia, scholars, they weren't as aware of that. Because of the, you know, the colonial heritage, the Muslim populace, the more educated people, were too [awed by] science and technology and were not very cognizant of those deeper questions of an ethical nature. So yeah, so that's when sort of, I might say vaguely, that's when I thought that there was a need to work on that. And again, not just as something that requires a deconstruction, because there is a lot of critique already that was being produced within the Western, you know, academic literature in the past few decades. So I thought that there was some constructive work that needed to be done through an Islamic, you know, metaphysical, epistemological, and ethical perspective. So when I started off my doctoral program, I was more bent upon working on science, but somehow, it eventually turned out to be technology.

Elizabeth Renieris  3:17  
Got it. That's really fascinating. I mean, you bring together so many disciplines in your own background. And I think those of us working in this field are recognizing more and more the importance of true interdisciplinarity. Let's dive right into the Islamic perspective. And we can start perhaps from the critical lens, and then turn to the more constructive perspective, as you frame it. So from a critical or more negative frame, I think you've said or you've written that there's a crisis of morality as a result of what you term "postmodern technologies." I was hoping you could help us understand what you mean by this and unpack a bit of what postmodern technologies are and how you view them through an Islamic perspective.

Dr. Amana Raquib  3:58  
What we call modern technology had this built-in optimism, where, you know, there were those higher ideals of, let's say, human progress. There is, you know, meliorism, you know, things that we find in people like Dewey and others. So, there is like a hope and optimism, right, in terms of perhaps some sort of a general human betterment, kind of a utopia. That's like now seriously questioned both on the theoretical and practical grounds, right? So practically, we have seen, you know, some immense destruction in the past century as a result of the technological advancement. But even on a more theoretical level, we are encountering those deeper questions of morality, right, and what should be the goal, the ideal, what sort of a human society do we want to create,  what sort of individuals do we want to have who inhabit those societies? And those questions are difficult to answer because of the perspectives that lack a deeper metaphysical basis, right? In such a situation, we are sort of in a deadlock position, right, where we don't have a strong foundation to provide us with a stable morality to guide our goals and aims towards which we should be designing our technology. 

And in that sort of a moral vacuum, what happens is that technology assumes the higher end--you know, becomes an end in itself. And this is what I call "postmodern," you know--the extreme manifestation of that postmodern condition, right? Where means become ends, right? So you become--I can illustrate that: for instance, I'm talking to you via this technological medium, right? So, the intrinsic end here in our conversation is having a meaningful, intellectual, you know, conversation about some pressing ethical problems that humanity is facing, right? And it could have been through a number of means, right? We could have been talking over the phone, we could have met in person, right? It doesn't matter, right, what means we use. But in a way, if means take over, right, then the kind of sophisticated means that we are using, that can keep us in touch, you know, 24/7--that becomes more important.

Elizabeth Renieris  6:56  
Yeah, let's zoom in on those means versus ends because I know your book is focused on I believe what you term an "objectives" approach, right? Or I don't want to mispronounce this--Maqāṣid? Is that how you pronounce it?

Dr. Amana Raquib  7:08  
Yes. 

Elizabeth Renieris  7:09  
And so, is this what your book is getting at in terms of an objectives approach? Could you elaborate on your view there?

Dr. Amana Raquib  7:15  
Oh, sure. So basically, in trying to address that sort of gap or vacuum that I just mentioned, I have tried to sort of use a traditional, you know, Islamic ethical resource, right? And that is the Maqāṣid, right? And the Maqāṣid al-sharīʿa is primarily a paradigm where the early Muslim theologians, they tried to sort of [excavate] or derive some basic ethical principles or objectives from the Scriptures, that is, from the Quran and the Sunnah of the Prophet Muhammad, alayhi as-salām. So there are lots of particulars that are mentioned in the Scriptures, but they wanted to derive those, you know, general principles that are actually meant to be preserved and protected. And they called them the Maqāṣid, or the objectives--the whole religion of Islam, or the Islamic ethics, is meant to safeguard these certain basic fundamentals, right? And there's some disagreement over that. But you know, so generally speaking, they talk of those five universals. And that's for all humanity; that's not just for Muslims. And that's, like, religion, life, intellect, wealth, and honor, right, or your lineage. And they say that no matter what we are asked to do, or what we are prevented from doing, you know, both the do's and the don'ts--that part of the Islamic ethics is meant to protect these fundamental universals. And that's actually meant to ensure--another term that I have used, that my work is based upon, is the maṣlaḥa, or the well-being--which means that these objectives are meant to secure the collective well-being of humankind. And that's why it's very important to safeguard those.

Elizabeth Renieris  9:37  
Yeah, so this well-being concept is really interesting, as well. So one of the things that our lab is focused on is the values underlying technologies and approaches to technology. And I know you've also talked a lot about values. I want to look at what some of the predominant values are today in the technology conversation and how they compare to core Islamic values. I wanted to ask about this example of efficiency, right? Which is very predominant in the conversation, versus a value like compassion, and how you think about the interplay there.

Dr. Amana Raquib  10:08  
So, efficiency is actually the most important value when it comes to technology today, right? Efficiency, and I would also add convenience, for instance. And that goes very well with the instrumental logic that informs the technological paradigm today, right? And also, because when we don't have any foundation to give us any sort of final values to aim at, what happens is that efficiency becomes the highest value, or rather the norm, right? And it becomes enough to just say, you know, as a justification, or as a rationale behind anything, or any technology or technological application, that it saves time, it's more efficient, it saves on labor, so on and so forth.

Elizabeth Renieris  11:04  
I just want to zoom in a bit, so I want to make this more concrete for listeners, maybe who're not, you know, so deep in this conversation, but let's look at a specific industry or, you know, technology or maybe something like health care. How would these values perhaps play out in a specific context or industry like that? Could you give us some examples perhaps?

Dr. Amana Raquib  11:22  
Yes. Because right now, we were working on artificial [intelligence] ethics, right, and we were looking at, you know, caring bots, for instance, right? Which are supposed to be more efficient than, let's say, human carers. But then there's this element of compassion, and human conscious compassion, that would be lacking in that. And the other thing, looking from the Islamic--and I think we actually share this with other religious traditions like Christianity, and I think almost all other religious traditions--where, when we are caring for other people, let's say sick people, or patients, it's not just meant for their physical recovery, or physical revival; there are lots of other dimensions when we are caring for the patient. Because when we are sort of in such close proximity with a sick person, we really understand the frailty of human life, in and of itself. And then from an ethical perspective, we have this opportunity, this chance of doing good for someone. And then at the same time, when we do good to other people, you know, in terms of taking care of them, we have a chance of raising our own--of rising to our own spiritual heights, right? And being able to become closer and closer to the idea of moral excellence, right? So for instance, in Islam, you know, it's a very highly rewarding thing to go and visit the sick people, right? And it's not just enough to have, you know, a Zoom chat with them, right; it's actually to go there, to be with them. And in this time of alienation and loneliness that we are suffering, you know, where loneliness has become one of the syndromes of our postmodern age, right, people are just, they don't have anyone, and they are always looking forward to having human, real human company. Right?

Elizabeth Renieris  13:45  
Yeah, and we saw this play out very much in the pandemic, right? Where we had, you know, we were keeping people connected, whether in nursing homes or hospitals, with devices, but perhaps lacking the human connection, the physical touch, other elements, you know, that go beyond what can be facilitated by modern or postmodern technology. So I think that's a really vivid example.

Dr. Amana Raquib  14:04  
Right. Can I add one more thing?

Elizabeth Renieris  14:06  
Please.

Dr. Amana Raquib  14:06  
So if I'm talking about the Islamic perspective, then there's this concept of human nature, which is called fitrah, right? And this, we have this belief that, you know, this fitrah is part of our humanness, right, it's part of our metaphysical being. And that fitrah sort of defines who we are, and it's part of human fitrah to be a sociable being, you want other human--and that's how we are designed by God. And that's why God always places us in families. So he has structured the world accordingly, right? So we are never born in a way that, you know, we don't have those human-social connections, right? And that's why it's appropriate, you know, when defining "wellness" or "well-being" or "human well-being," this concept of fitrah is very important, right? Because otherwise it's very difficult to define what's "wellness," right?

Elizabeth Renieris  15:07  
Yeah.

Dr. Amana Raquib  15:07  
So that's the standard, the gold standard, right?

Elizabeth Renieris  15:10  
Yup. 

Dr. Amana Raquib  15:10  
And it plays out especially when we are talking about things like compassion, the human element, and so on and so forth.

Elizabeth Renieris  15:17  
And it's so interesting that you say that because I think one of the things that comes up for me a lot in the technology ethics conversation is just how individualistic it can be, right? And so we talk about individual well-being and values, but not so much the collective or the social or the interpersonal. And I think what you're describing sounds like both dimensions, that you kind of need the individual and the collective to have a complete view of this. Is that an accurate representation?

Dr. Amana Raquib  15:40  
Yes, yes. And that's the beauty of the Islamic ethical system, actually. Because at the same time that you're working on your own personal ethics, morality, spirituality, enhancing your intellect, your intellectual abilities, that's tied to the good of the community, the society, humankind as a whole. And also, I've talked about it in my book--the paradigm that I've used, the objectives paradigm, is very clear in that if an individual good is harming a collective good--it's a principle, it's also a legal maxim in Islam, and again, when I say a legal maxim, it's not just law, it's actually an ethical-slash-legal maxim--that, you know, if the individual good is conflicting with the collective good, then the collective good is to be prioritized, right? And there's, like, there's a consensus over that.

And also, for instance, when I mentioned those five objectives--and it's pertinent to what you asked earlier regarding efficiency and compassion and those sorts of, you know, which value we should prioritize and what should be the formula to decide that, right? So we also have this classification of necessities, needs, and enhancements, right? Again, we have Arabic terms for that, but I won't use those. So sometimes, for instance, we are talking about human life, the objective of human life, and we are talking about wealth, for instance, right? Now, there's a value in protecting wealth. And you know, you could have more efficient means that could perhaps be more economically efficient, right? But is that a necessity or a need or an enhancement, right, with respect to the preservation of wealth and, you know, whatever resources? Versus there's something that's at the level of necessity in terms of human life, right? So if we are talking about compassion, you know, that's a fundamental human value, right? Because if we lose that, we sort of lose something fundamental about being humans, right? So then we have to see, right--so we can give up efficiency because it would perhaps come under enhancements, versus something that would come under necessity, right? So that's always sort of an ongoing debate, right; we have the framework and then, you know, when we talk about these emerging issues or cases or whatever, we sort of apply that.

Elizabeth Renieris  18:37  
Yeah, no, that's a great point. So based on what we've talked about, what I'm hearing is that you're saying, you know, technology has sort of become the means and the ends, you know, it's an end in itself, but it doesn't have these values embedded in it. And this value of efficiency or convenience, these values are overtaking more important, perhaps more human, values that are coming from religion, culture, you know, other traditions. So I'm curious from your perspective, given that you are focused on a constructive approach to technology ethics, what if anything can be done about this given how concerning this phenomenon is?

Dr. Amana Raquib  19:15  
Yes, so I would say that there's a lot that can be done potentially, but only if there are well-intending, well-meaning people. So it's not something that's really out of hand. I viewed this in my book as--you know that strong thesis of technological determinism? I don't agree with that. I think there's, like, a two-way traffic, where there's, like, technology shaping human beings, but human beings, you know, society, shaping technology, right? So it's going hand-in-hand. Technology, it does shape us in many irreversible ways, but only because we allow it to, either deliberately--so it's primarily when we design technology. Prior to my work--and thankfully, now I see a lot of literature coming up, you know, that's focusing on design. Because back in, you know, 2010-11, when I started off with my work, there was a lot of emphasis on, you know, the responsibility of people who consume technology, that they should be responsible users and so on, but not much on the designers' end, right? So again, it has to start there.

Even that, I would say, that's not enough. Because, again, if we look at [it] holistically, right, technology is part of our societal, social dynamics, right? Even saying that we can somehow fix this problem with technology, in isolation--I don't think this would work, and this has[n't] been working. Precisely because, where does technology come from, right? There are human beings who are designing it. And those human beings--whether they are very aware, cognizant, or not--they have some value framework that is informing their decisions. So I'll give you an example, right? So for instance, let's say Steve Jobs, right? He had a vision, right? So this, the vision is either a personal vision, or this vision is informed by some sort of a meta narrative about, you know, what humans should be doing, right? So in that sense, he envisioned everybody, you know, having a smartphone, right? And he translated his vision into reality; now this is what is happening all around us, right? But the question is, that vision steers technology, but the vision itself is not technological, right? There's some informing higher paradigm, right? It's culturally informed, and that's where I believe this postmodern idea comes in, right? Because having lost faith, right, in anything that's, like, really meaningful, ethically driven, it becomes more of sort of innovation for the sake of innovation, right? You just try, and you just sort of put things up there and see how they proceed. That's sort of the logic behind these constant, you know, incessant inventions. So I would say--I'm sorry, I lost track. So what was your original question?

Elizabeth Renieris  22:37  
I think you've answered it, which is, what can be done? And it sounds like--I've heard a number of answers from you. You know, one is obviously to zoom out, to look beyond the technology, right, to contextualize this more and to look at the human values embedded. You've talked about design--I mean, I would like to give you the opportunity--our lab is very focused on applied technology ethics--so do you have maybe one or two very practical, tangible takeaways for, let's say, global companies who really are concerned about this and who do want to think about translating some of your work into practice?

Dr. Amana Raquib  23:11  
Sure, I would be happy to do that. But again, I wouldn't say that, you know, if somebody asked me, you know, We are designing this, you know, this one technology, can you come and help us with that one technology? I don't think it would work this way, right? Because one technology, again, is part of a greater nexus, you know, a web of technologies. And Hans Jonas actually gives a very, you know, pertinent example of, let's say, steam engines, you know, when they started off, back in the 19th century. So it's basically, you know, it's a whole gamut, right? And then, you know, you started digging out coal, right, and you design the steam engine so that you could, you know, dig it out of the mines and take it out. But then the steam engines themselves needed more coal. So you had to dig up more, you know? You see this sort of imperative that starts happening.

Elizabeth Renieris  24:06  
Absolutely.

Dr. Amana Raquib  24:06  
So it's not just when technology--and that's what my work is meant to emphasize, that it's a collectivity of those technologies, and also behind the technology. So if we are designing, again, the same question: what sort of human individuals we want to design, what sort of families we want to design, what sort of communities and societies we are looking forward to, right? So those are some fundamental questions. And actually, this has come up in our own research; think of the whole educational paradigm. So we started off with this idea that the engineers, the design engineers, they are not equipped to actually tackle such questions, right? So when they design, they are more focused on, you know, the productivity, the efficiency side, and they think themselves to be just problem solvers, you know? They give solutions for those technical problems, right, of how to make something more efficient, you know, remove the errors and so on, right? And as to the question--not just the question of how we should design it in the best possible way, for instance, to sort of align with the ethical values, but some very basic key questions of whether there is a need to design it in the first place, right? Those are critical questions that need to be asked, and which I think are not being asked presently. And then, of course, also the consequences that can be enumerated. Because I've emphasized this point in my book, as well, that Islamic ethics is not purely consequentialist. But of course, it does take into consideration consequences. And we know that, you know, what the past few decades have shown us are some very grave consequences, right? So again, there has to be the right intention, there has to be the right goal or objective, then the right sort of design or methodology, and then the right consequences.

Elizabeth Renieris  26:10  
And it sounds like the lack of intention is an intention in some instances, right? (laughs)

Dr. Amana Raquib  26:17  
Postmodern nihilism [inaudible], right? That you don't [have] intention of doing anything. And then anything can happen. (laughs)

Elizabeth Renieris  26:26  
Yeah, exactly. So just to wrap up here, I guess one final question to take us out on an optimistic note is, what makes you hopeful about the future of technology ethics?

Dr. Amana Raquib  26:37  
The disaster that we are experiencing right now. (laughs) I feel that unless there's something really serious to make more and more people question it and not have this very naive optimism regarding, you know, technological progress or technology for the sake of technology, right? So when people see these brutal, you know, killings, the loss of human compassion, you know, the question of, you know, how we can differentiate reality from VR, and those sorts of things, right, then I think we will gather the momentum where these same people will become serious about these things. And the worse it gets, I think, the more hope for the better. (laughs)

Elizabeth Renieris  27:33  
I agree with you. Dr. Raquib, thank you so much again for joining me today, and I really look forward to continuing this conversation with you.

(voiceover) Tech on Earth is a production of the Notre Dame-IBM Technology Ethics Lab. For more, visit techethicslab.nd.edu.

2. A RenAIssance: The Rome Call for AI Ethics

Guest: Father Paolo Benanti (Pontifical Gregorian University)

Transcript

 

Elizabeth Renieris  0:03  
Welcome to Tech on Earth, a podcast aimed at bringing a practical lens to tech ethics around the world. I'm Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame. Today, I am so pleased to be joined by Father Paolo Benanti, a Franciscan monk and professor of moral theology, bioethics, and neuroethics at the Pontifical Academy for Life in Rome, Italy. Father Benanti, thank you so much for being with us today.

Paolo Benanti  0:30  
Thank you for having me. It's really a pleasure and an honor to be here.

Elizabeth Renieris  0:35  
Great. So I'd like to begin this conversation, as I often do on this podcast, with your journey to this conversation. So tell us, how did you come to study technology ethics and what drew your interest to this topic in the first place?

Paolo Benanti  0:49  
Well, before I was a monk, I was a student at the university in mechanical engineering. After my college degree, I would like to understand the world, and, well, engineer[ing] look[ed] like a subject that can help us to understand the world. But, to be really honest, probably I ha[d] to follow the suggestion of my philosophy professor, [who] told me that I was much more useful with philosophy, more than engineer[ing]. But making a long story short, after three years, four years of mechanical engineer[ing], I f[ound] myself like if something was missing in my life. I ha[d] a good scheme to understand, to simply approach the world, but still something was missing. Well, that was the point in which I started to find, to look for, something that was broader, not simply a job. And at the end, I joined the order, the Franciscan order.

As a Franciscan, I was invited to continue my study, and then I d[id] theology and philosophy. And I make a model of theology and ethics. And at the end of [the] STL, I appl[ied] for a Ph.D. And for the Ph.D., I would simply like to make a bridge, a connection, between what I was before and what I am now. And this is the reason why I started to have as a topic technology and [the] ethics of technology. The subject of my Ph.D. dissertation was the cyborg, the junction between the human body and technology, in the perspective of human enhancement; that could be cognitive or simpl[y] muscular or something like that. Part of my Ph.D. dissertation was made at Georgetown University because I g[ot] a grant from Georgetown. And so I went to the States to make this dissertation, and I was back [in] Rome, and I defended at [the] Pontifical University Gregoriana. And this was my entry point to [the] ethics of technology. So here I am, trying to decode AI as a way to show a displacement of power inside society.

Elizabeth Renieris  3:09  
It's a really fascinating journey. I want to contextualize this in the context of a book you wrote, where you describe something you called the techno-human condition. Can you tell us what you mean by that, and how it relates to the ethics of AI and other technologies?

Paolo Benanti  3:24  
Well, technology is one of the face[s] of being a human. We know that human beings lived in some place on the earth because we found trace[s] of technology. One of the oldest technologies that we w[ere] able to trace was the place where we have the burying of human beings, where the casket and the remains of human beings are displayed. The other animals simply don't do that; they don't have cemeter[ies]. We leave [a] trace of our past in technology like that. Or when you have painting[s] in the cave, or when we have some kind of tools made with stones. So, technology is something that [has] always been with human beings. But what is technology? You know, we have philosopher[s] that simply say that because we are lacking [in] ability, we are in need of some kind of technology. We cannot fly like a bird, we cannot run like a leopard, we cannot jump like a frog. And so, we develop car[s], airplane[s], and things like that. Well, this vision, it's really controversial. In which sense? That if we lack something, in which way [can] we develop something more [than] what we lack?

And from that, I start[ed] an inquiry in[to] what it means to be human and what is the role of technology. Well, [to go] back to really common technology: a pencil and a notebook. It's something that is for our memory, of course. But if we look at an elephant, the elephant is really well known to have a huge memory. One of the funniest experiences that I [had] as a Franciscan was going to the island of Sri Lanka, where our friars have more than one house. And during the journey, I w[ent] to the elephant orphanage--[for] the elephant[s] that remain[ed] alone when they [were] kids during the civil war. So because the elephant is s[uch an] important animal in [the] Sri Lankan community, they keep this orphanage. I g[a]ve milk to a young elephant; it was really young, but was big like a little car, like [a] little European car, and he was able to finish three gallons of milk in [a] few seconds. It's an impressive experience, I can swear on that. But what happened is the elephant will never forget what I did for him. Sri Lanka is full of stor[ies] of these huge elephant[s] that at one point are running like crazy between people, just to give a kiss with the[ir] huge nose to people that [did] something good [for] them.

We are not like the elephant. We can forget. If I don't take notes of what I have to do, or which kind of stuff I have to buy for my monastery, my brothers probably do not eat, because I forget. Well, so my human condition is different from the elephant condition. An elephant has in his biology all the things that he need[s] to live as an elephant. My condition is spiritual, and my spiritual condition is more than my biological condition. So if we look at technology, we are looking [at] an exceedancy of the human being's own biology. When we see an artificial intelligence, when we see AlphaZero that is able to defeat someone else in a game, we are looking at the prodigy of a human being that is not enough to himself, that is an exceedancy of his own nature. This is the techno-human condition. [It] is impressive that the machine that we are building now is an expression of this kind of condition. So it's not a threat to the human being, but it's a huge possibility to better understand ourselves and reality.

Elizabeth Renieris  7:28  
Yeah, that makes sense. And on the spiritual condition, you know, obviously, you're a Franciscan, you're part of the Pontifical Academy at the Vatican: how does your religious background and training inform your view on technology ethics? What are maybe some of the key themes or values that come up for you?

Paolo Benanti  7:43  
I would like to say in multiple way[s]. As a Christian Catholic, believing that we are creature[s] made by God, that mean[s] that everything that we are is in the desire of God. So our reason, our ability to understand, our ability to project and do things are not just an accident; it's something that [is] given [to] us to take care [of] and to allow the land, the promised land, to [become] fruitful. So it's something that we can use to produce much more wellness for everyone. As a Franciscan, you know, Franciscan[s were] always connected with the city and with the counter, with the Middle Age[s]. One Franciscan was the one that simply invented the way to [keep] account[s] with two columns, red and black. So actually, let me confess that: if we call Black Friday, Black Friday, it's because it's the first day in the year in which the fiscal earnings of a store move from red, passive, to black, active. So, last Black Friday was actually Franciscan connected. So we w[ere] really present as Franciscans [in] what happened inside society.

And so, this positive view of the ability of technology, this idea of being present as a Franciscan inside the society, plus one more thing: I'm a Franciscan of T-O-R, Third Order Regular. Well, one of my grand-grandfathers, one of my friars in the Middle Age[s], was Ramon Lullius. Ramon Lullius was the inventor of the logic that was behind Leibniz's logic, that is behind the computer, that is behind artificial intelligence. So, more than one direction if you would like to find [a] connection. Of course, this is not behind my decision, as I told you before, of studying this stuff. But I feel myself really in the right place looking at the relative[s] that I have in my order.

Elizabeth Renieris  9:56  
Yeah, that's great. And building on the sort of religious and theological foundations, approximately two years ago, you were part of the signing of a document known as the Rome Call, signed by the Vatican and other signatories in February of 2020 in Rome. Can you tell us about what the Rome Call is? How it came about? How it relates to technology ethics?

Paolo Benanti  10:19  
Actually, AI technologies are technologies that could simply surrogate human beings. And we could have some machine[s] that ha[ve] the agency to take action without human beings, and that produces a lot of issues that I'm sure you will drive through during this podcast series. So the ethics of AI is simply the idea that we would like to give a sort of guardrail, [an] ethical guardrail, to keep this machine inside the street that we would like to produce. Well, there is this idea to build up a square where different voices can [come] together to try to define some principle[s] to develop AI. This is what [is] behind the Rome Call.

So basically, if you go [to] Silicon Valley, you can hear from a lot of programmers that being in Silicon Valley today, it's like [being] in Florence during the Renaissance; there is a lot of [this] idea of something new that they are producing. Try to write "Renaissance" with [a] capital AI in the middle. So if the Renaissance was the time when we discovered again the centrality of the human beings, RenAIssance in AI means to start to develop an AI system that is human-centered, that has the human beings as the core and as the ends. So it's not an evolutionary fight between Homo sapiens versus Machina sapiens, but could be the start of a new alliance in which Homo plus Machina can produce a new way to discover drugs to treat sickness, to have a better, just society, and so on. [The] Rome Call tried to be the blueprint of this new RenAIssance. And what was really surprising was that companies--really big companies, like IBM, like Microsoft--international organizations--like FAO, [the] Food and Agriculture Organization of the United Nations--[and] religions--[would] find them[selves] together to say human beings have to be the center. And it's much more impressive that what we sign[ed] with IBM, Microsoft, FAO, [the] Italian government, [in] the presence of the president of the European Parliament in 2020 in Rome, now these things would like to be signed by two [more] religions, Hebrews and Muslims, in Abu Dhabi in May 2022.

So the idea of having a shared position, looking at the future, looking at the innovation presented by artificial intelligence, is something that was really impressive, you know? Probably we cannot find an agreement if we discuss about politics, if we discuss about society, if we discuss about tax[es], and things like that. But looking at the present and the future of the younger [generation], of the most fragile people, we found [ourselves] all together on the same side of the pond. We would like to see a better future for the next generation. So [the] Rome Call for AI Ethics is simply a series of principles that, voluntarily, some governments and institutions and tech companies would like to put inside their own product[s].

Elizabeth Renieris  13:58  
Thank you. It's a very comprehensive overview. I noticed a really interesting term. I believe it's used in the Rome Call, but also I've seen [it] in your work and research; the term, I believe, is "algorethics." What is algorethics? How is this different from AI ethics or tech ethics more generally? Why did you use that framing?

Paolo Benanti  14:18  
Well, you know, ethics is a really core part of human action because we are not instinct-driven; we have a base of freedom. This morning, every one of us can choose--forgive me my Italian bias--between a cappuccino or a caffè or, if you like, a flat white (laughs), this new, not really Italian innovation with milk. Well--forgive my joke--well, this is connected to the freedom that every one of us has. Well, machines [are] deterministic things, and ethics is of human beings.

But now, if [a] machine has agency, if [a] machine has some degrees of freedom, if a machine can give or deny you [the ability] to borrow money from a bank, if a machine can give or deny you some kind of constitutional right--like in the trial in the tribunal--well, this machine has not only to execute code; it [has] also to understand human-produced ethical value[s]. But these ethical values, this moral law, now has to be computable in an algorithmical way [so] as to be--if you allow me to express it in this way--understandable by the machine. Algorethics is this new chapter of this old journey of the human beings on the earth that is traced in ethics. Every time that we face some kind of dilemma, we produce a new chapter of ethics. Well, algorethics is a new chapter of computer dilemma[s]. But now, this chapter is produced by human beings but is executable by a machine, [and it] is written also in an algorithmical way so the machine could stick to the ethical directive[s]. And to do that we produce[d] this new word, that is algorethics, that actually became really famous not because I use[d it] first but because Pope Francis also used [it] in one of his speech[es]. And that made me lucky in some way because I was connected to a word that the pope liked to describe what we could understand as a mission from engineering.

Elizabeth Renieris  16:53  
Yeah, it's a really interesting evolution, as you say, a sort of new chapter, a new phase of implementing and operationalizing AI ethics. To get more practical on that, I was hoping you could maybe give us one or two examples from, I don't know, a specific industry or sector or a specific technology about how you might apply this idea of algorethics in practice.

Paolo Benanti  17:13  
Okay, let's start from the most simple one. Okay, the most simple one: we can imagine an autonomous car, okay? If I jump in the car, or you jump in the car, we are also emotional beings. So the way in which the car drives could be really boring for me, [since] I'm used to the really chaotic Roman traffic, but could be really horrific to you. (laughs) So, because the human being is the value part of the autonomous car, the way in which the car drives has not only [to] be safe, has not only to be efficient, [it] has also to be tailored for the human side, for the emotion of the human person. That could be a really low level of algorethics.

Well, we could also imagine some more complex level of algorethics, for example. AI works on data, and data sets are the collection[s] that can give a trustable or not-trustable AI prediction. Well, we can simply have [inaudible] sentences, yes or no, given by the machine. Or we could produce a sort of explainable sentence that tr[ies] to justify something. So imagine, imagine that I go to a bank, and I ask to borrow money, 5,000 bucks. Well, I could have an irregular sentence, yes or no, or I could have, yes, and it's good, or no, and the machine has to produce a sort of causative justification of [the] no. So if the justification is simply in data and the machine produces a table with 5,000 data sets, this is useless for me; probably I cannot understand [it]. But if the machine says, No, but if you have 2,000 in your bank account, that could be yes. That gives me a parameter in which I can say, or I can see, if there is a bias or an error. So for example, [if] I have 2,500, there is an error in the data. I go to the director and say, Please check the data again. Or suppose that [I ask], May I borrow 5,000 bucks? [And the machine says,] No, but if your zip code was [different]--well, that's a bias. And I can say, Look, there is a bias inside the data. So the second level of algorethics, it simply makes transparent for the human being, when an automatic decision is made on [them], whether there are or could be some kind of bias that make[s] the decision [unfair].
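What Benanti sketches here is close to what the explainable-AI literature calls a counterfactual explanation: rather than a bare yes or no, the system names the smallest change to the applicant's data that would flip the decision. Below is a minimal Python sketch of the idea; the rule, threshold, and figures are illustrative inventions, not taken from the episode or from any real lending system.

```python
# Toy "credit decision" with a counterfactual explanation, in the spirit of
# the second level of algorethics described above. All rules and numbers
# are invented; a real system would use a trained, audited model.

def decide_loan(balance: float, requested: float, min_balance: float = 2000.0):
    """Return (approved, explanation) for a hypothetical loan request."""
    if balance >= min_balance:
        return True, "Approved."
    # Counterfactual explanation: the minimal change that would flip the answer.
    return False, (
        f"Denied. If your bank balance were at least {min_balance:.0f} "
        f"instead of {balance:.0f}, the request for {requested:.0f} "
        "would be approved."
    )

approved, reason = decide_loan(balance=1500.0, requested=5000.0)
print(reason)
```

Because the explanation names the deciding feature, a customer who actually has 2,500 in the account can spot a data error, and a zip code showing up in the justification would immediately expose a biased criterion, which is exactly the transparency Benanti is describing.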

Then let's go to a third level of algorethics. Suppos[e] that we applied that to a medical device. In this medical device, we could have two things; we could have some kind of operational boundaries that are given by the algorithm. So suppose that we have an insulin machine that gives you the insulin depending on some kind of vitals that the sensor reads on your body. Well, should the machine operate 100 percent of the time on its own? We can imagine [putting] inside the algorithm a sort of statistical library that gives the machine a percentage of confidence [i]n the kind of data that [the] machine is work[ing with]. When the percentage of confidence drops below a threshold, the machine connects [to] a real doctor. In this form, algorethics becomes something that gives the machine a sort of sense of uncertainty. And with uncertainty, when the machine is not statistically sure enough [of] what it's doing, human beings come in[to] action, and so on.
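This third level corresponds to what machine-learning practice often calls a reject option, or human-in-the-loop deferral: the system acts on its own only while its confidence stays above a threshold, and otherwise hands the case to a person. A minimal sketch under that reading follows; `predict_dose` is a hypothetical stand-in for the device's statistical model, and the threshold, dose formula, and alerting step are illustrative assumptions, not a real or safe clinical protocol.

```python
# Sketch of confidence-gated autonomy for a hypothetical insulin device.
import random

CONFIDENCE_THRESHOLD = 0.90  # illustrative value, not a clinical standard

def predict_dose(vitals: dict) -> tuple:
    """Stand-in for the statistical model: returns (dose_units, confidence)."""
    dose = max(0.0, (vitals["glucose"] - 100) / 30)   # invented formula
    confidence = random.uniform(0.7, 1.0)             # placeholder for a calibrated score
    return dose, confidence

def handle_reading(vitals: dict) -> str:
    dose, confidence = predict_dose(vitals)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Deliver {dose:.1f} units automatically (confidence {confidence:.2f})."
    # Below the threshold the machine "knows that it doesn't know" and defers.
    return f"Confidence {confidence:.2f} is below the threshold: alert a real doctor before dosing."

print(handle_reading({"glucose": 180}))
```

The only design point the sketch tries to capture is the one Benanti names: giving the machine a measurable way to say "I don't know" and to defer to a human when it does.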

So we can have a lot of different situations in which we can apply or develop a [inaudible] algorethics. But I g[a]ve you three examples: in one, it was the human being [that was] the ethical part. In another one, it's the judgment of human beings that is the problem and the criteri[on] of judgment. In the third example, it's algorithmic ethics in the sense that [we] give to the machine the ability to ponder, to have a measurement of confidence [i]n what the algorithmic data are indicating to the machine. And why this? The answer, it's really old, you know? Because Socrates, one of the father[s] of ethics, say[s], "I know that I don't know." Well, if you have something to Google, Google will never tell you "I don't know." Always there is an answer for the algorithms. And so the machine will always give an answer. The problem is the quality of [the] answer. Algorethics is something that bridge[s] numerical values with ethical values.

Elizabeth Renieris  22:49  
And it's really interesting, as well, very vivid examples--for me, what that conjures up is also this difference between information and wisdom, moving from an information age to wisdom, and the ability to interject more human wisdom into computational processes. On that note, and just in the vein of starting to wrap up here, although we could talk for days, I want to turn to the role of universities and educational institutions, since we're both situated in the academy, to think about how we can prepare students to become the future stewards of these technologies and to have this ethical, human-centered perspective embodied in algorethics, in the Rome Call, and [in] the work that you've been doing. What are your thoughts there?

Paolo Benanti  23:29  
Well, first of all, education is one of the core points of the Rome Call for AI Ethics. That means asking ourselves what education can do. It's something that put[s] us in [the same] line in which the different subjects that sign[ed] the Rome Call for AI Ethics find themselves. We need a new generation of people that is able to have critical thinking on stuff. So the first meaning is to produce ethical, critical thinking. Well, an algorithm is a linear process, in which, given an input, I g[e]t an output. [An] ethical process is never so linear; it's always a circular process, for a lot of reasons. You know, when I try to answer the question, What should I do? The first line is "I," with my story, with my identity, and the last line, it's again "I," because what I should do is something that I could do; I cannot do what is impossible for me. Well, because in the machine there is no I, we have to think before[hand about] what education can do. So let me try with my geek side: you know, if you put some code, open code, in GitHub, you probably have also a folder that is the folder with the [documentation]. One of the problem[s] with open software is that someone that is developing, for example, AI code for image recognition really doesn't want people [to] use that kind of software, for example, to pilot a drone [in] an attack in war. So my dream is that we have also an ethical folder in which ethical documentation arise[s] with the production of software. So what education can do [is] open up a culture, the ethical culture; open up the ability to have critical thinking on which kind of stakeholder values, principles, and virtues are touched by the piece of innovation that we are producing; and [have] also the ability to make transparent the ethical constraints, the ethical formal constraints, of the software innovation that someone is producing. So it's not a direct impact on the quality of the product, but [it] is the ability to [contribute] to the creation of a new culture that can give much more values, ethical values, to the innovation that is produced by companies and by engineers.
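As a rough illustration of the "ethical folder" Benanti imagines sitting beside the usual documentation of an open-source repository, such a layout might look something like the following; the directory and file names are purely hypothetical, and neither GitHub nor the Rome Call prescribes this structure.

```
image-recognition-project/
    src/                   model and training code
    docs/                  technical documentation
    ethics/                the "ethical folder"
        INTENDED_USE.md    what the software is meant to be used for
        PROHIBITED_USE.md  e.g., not for weapons targeting or piloting attack drones
        STAKEHOLDERS.md    who is affected; which values, principles, and virtues are touched
        KNOWN_RISKS.md     known biases, failure modes, and mitigations
```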

Elizabeth Renieris  26:13  
Well, Father Benanti, I too look forward to a folder for ethical documentation. I think that's a brilliant idea. I want to thank you so much for joining me today and taking the time, and we look forward to continuing the conversation with you in the future.

Paolo Benanti  26:25  
Thank you very much.

Elizabeth Renieris  26:28  
(voiceover) Tech on Earth is a production of the Notre Dame-IBM Technology Ethics Lab. For more, visit techethicslab.nd.edu.

1. A Buddhist Lens on Tech Ethics

Guest: Venerable Tenzin Priyadarshi (MIT)

Transcript

 

Elizabeth Renieris  0:03  
Welcome to Tech on Earth, a podcast aimed at bringing a practical lens to tech ethics around the world. I'm Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame. Today, I am so pleased to be joined by the Venerable Tenzin Priyadarshi, president and CEO of The Dalai Lama Center for Ethics and Transformative Values at MIT. So I'd like to begin with your journey to this conversation. Could you tell us how a Buddhist monk ends up running an ethics lab at MIT?

Venerable Tenzin Priyadarshi  0:36  
Well, thank you, it's a delight to be here. I think, you know, one of the sort of aspirations of a Buddhist monk is, of course, [to] contribute to both the conversations and doing of [a] better world, a world where people are more kind, empathetic, and compassionate. And a world where people are more concerned about aspects of ethical framing and justice. And so MIT was in many ways a natural home for such a platform and such a conversation.

Elizabeth Renieris  1:15  
So the title of your lab references what you call "transformative values." I'm interested, what are transformative values, and how might they relate to a conversation about AI or technology ethics?

Venerable Tenzin Priyadarshi  1:28  
I think one of the challenges with sort of traditional understanding of ethics is that it has largely been theoretical. It has also largely been, at least in academia and certain settings, more of, you know, the history of ethics or the sociology of ethics, the philosophy of ethics, and things of that nature. So one of the things that we aspired to do with the center was to make the conversation more relevant to the ideas of leadership, to the ideas of innovation, and so on. And so transformative values was simply a framework to invite individuals to actually think of, you know, both ethical imagination and values-framing in practical terms that primes their day-to-day decision-making and their outlook o[n] the world.

Elizabeth Renieris  2:25  
Sounds like there are a lot of synergies between our labs and our missions. As you know, our lab is also very much focused on bringing a practical and applied lens to things. For purposes of this conversation, for those who aren't familiar, I was hoping you might also give us a brief introduction--appreciating this is a large body of work--into Buddhist ethics, and how, in your mind, they relate to AI or technology ethics more broadly. And of course, we'll dive into more specific examples in a few minutes.

Venerable Tenzin Priyadarshi  2:54  
I think what Buddhist ethics does is simply sort of set aspirations and goals, both for individuals and for communities at large, in terms of how to create a fabric, a social fabric, that allows for [the] flourishing and well-being of [the] majority of its members. So it's not just the idea that, Oh, the world should be or ought to be in a certain way. But it also begs the question of what should individuals be doing in that role? And how should individuals contribute to such a setting? The other thing, in the framing of Buddhist ethics, is that it's not just a normative approach. It's a very didactic, reflective approach to things in terms of its process. So the idea is not that, you know, let's just abide by certain rules and regulations that [are] created by a certain group of people, but sort of an ongoing, healthy conversation about what ethical imagination is--again, you know, reminding us of the fact of the complexity of the world that we live in, that not everything that is legal may be ethical.

Elizabeth Renieris  4:12  
Well said. Could you maybe break it down a little further and give us one or two practical examples of a key value in Buddhist ethics that, in your mind, is relevant in this conversation?

Venerable Tenzin Priyadarshi  4:23  
I think, you know, one of the things is, you know, with regards to, for example, technology, you know, one of the challenges is to be cautious of the narratives and the storytelling. So Buddhist discipline is very emphatic on the idea of checking your biases. It's very emphatic about the idea of how not to get into false storytelling or false narratives, no matter how attractive it may seem. And one of the things that we are encountering in the current landscape of technology is virtue signaling. We see a lot of virtue signaling happening from all walks of tech companies and so on. And so, you know, just the reflective mechanism that sort of demands a kind of radical honesty on [the] part of individuals and the companies to say, You know, are you really true to the narrative, or are you simply weaving a narrative for public perception as opposed to the actual content of things?

You know, the second thing could be around the idea of, how do we sort of frame ethical aspirations in sort of innovation? So for example, you know, historically, when we look at ethics, it has often been a conversation around restraint: Do not do this, don't do that, and so on. Or if you do this, you'll get punished. But when you're looking at it, partly in terms of how we frame Buddhist ethics, it has to do with a process of, you know, if we were to follow certain kinds of guidelines, how could we nurture sort of well-being for the maximum [number] of people participating in this process?

Elizabeth Renieris  6:10  
Yeah, that makes sense. Thank you. And well-being is really interesting, I think particularly at the time that we're having this conversation, right? We're on the heels of more than two years of a near-virtual experience for many of us. We're on the cusp potentially of another world some are referring to as the metaverse. From a Buddhist standpoint, you know, what are for you some of the key lessons or values that are featuring most prominently in the role that technology is playing in our lives at this time, including from a well-being standpoint?

Venerable Tenzin Priyadarshi  6:38  
I think it again, you know, sort of begs the question of both the short- and long-term implications of technologies that we use. We haven't quite fathomed the behavioral shifts that happen in individuals, even with the usage of something that has almost become second nature to us in terms of our response to cell phones and cellular technology. [Think of] how it has contributed to a sense of impatience, to a sense of urgency, to a sense of irritation, and mild irritations that have [a] cumulative effect over a period of time. How it has, you know, kind of challenged our well-being by virtue of just sleep, you know, or lack thereof. I don't think, historically, we have ever been such a sleep-deprived society.

You know, so those are, you know, some of the questions that sort of highlight the emotional well-being of things, meaning that, you know, despite N number of means of bringing the world together virtually, social isolation and loneliness [are] becoming the new epidemic. So part of the Buddhist framing of things is, the paradoxes are right in front of your eyes; look at [them], and see if there's a resolution to that rather than sort of taking sides with a particular set of narratives.

Elizabeth Renieris  8:06  
Great, thank you. I do want to turn to the more practical and applied conversation, as we noted earlier. So let's examine how a Buddhist perspective might inform different technologies and practice. I know you've been a big proponent or advocate of something you call ethics by design. So maybe we start with what that means to you, and whether you might give us a practical example.

Venerable Tenzin Priyadarshi  8:28  
Sure, I think, you know, one of the sort of challenges--and I would say, a rather sort of expensive thing for most companies--was that, you know, when they would design products, they would simply design products with particular sort of data sets or efficiency quotient[s] in mind. And then, you know, traditionally what they would do is they would run it through their compliance team, just to make sure that, you know, there were no legal loopholes; then they w[ould] deploy the project. And then, you know, if something went wrong, they w[ould] try to fix it or try to find legal loopholes. And that has been the method with which we have been operating in the name of efficiency, in the name of scaling up certain kinds of technology, and so on.

But when we talk about, you know, certain kinds of technology, one of the key issues is that if we ramp up things so fast, the negative cost of it on our society is perhaps so expansive that it's difficult to get back, it's difficult to ramp back, meaning you cannot really undo certain kinds of deployments. And so part of my push was that why can't we have the conversation around ethical framing at the design stage? Meaning rather than just having engineers in the room or marketing psychologists in the room, why not also have certain kinds of individuals who can at least inform us creatively as to what could go wrong? And not only what could go wrong in legal sense of the things, but what could go wrong in terms of the social well-being, in terms of disruption of certain kinds of desirable behaviors, and so on. And then at least it opens up the room for us to design better because we know and we are able to frame certain kinds of problems very early on, rather than waiting until stage one of deployment or stage two of deployment. 

Elizabeth Renieris  10:26  
Right. So moving from a sort of post-hoc compliance mindset to a more proactive ethical framework. You know, as a lawyer, I appreciate that that's a really different orientation, and I think a very important one to note in this conversation. I would also like to examine how this works in specific technologies or industries. So for example, you know, I'm familiar with some of your work around the automotive industry, or self-driving cars or trucks. How would you apply the sort of Buddhist ethical framing to that type of technology?

Venerable Tenzin Priyadarshi  10:56  
So I think, you know, one of the examples that I often give is--again, in terms of the distinction between legal and cultural framing of things--you know, one common example is around how car algorithms can respond to, you know, certain unusual scenarios on the street. So for example, if you were running a scenario in the US, and you said, you know, there's a car going at a certain velocity, and it needs to swerve left or right in order to save, say, five lives. Because many of us do sort of lean towards those kinds of utilitarian calculations when making such decisions. And let's say, on the right side, you have a guy on a motorbike with a helmet on. On the left side, you have a guy without a helmet on. Which way should the car swerve, you see? And in the US, most people would suggest that the car should swerve to the right because the guy on the right is wearing a helmet, has extra protection, so in case the car hits the guy, at least the guy is protected. But in most sort of Asian contexts, it raises--you know, people would say, No, the car should swerve to the left, because the guy wasn't following the rule to begin with. So why should we penalize the guy who was actually following the rule?

So you see it poses certain kinds of challenges, both in terms of, you know, the cultural norms of what people expect, how we should sort of program certain algorithms to make certain kinds of decisions versus not. And again, you know, we come into sort of similar kinds of scenarios, as you probably are aware. For example, in Germany, when the idea was given that the car should, you know, hit somebody who's older in age, for example, because they have lived much of their life as opposed to a pregnant woman, or as opposed to a young kid. And it already sort of shows certain kinds of biases towards ageism and things of that nature. So when you sort of insert the Buddhist lens, it still says that, you know, you need to sort of respect life for the potential that it has, rather than what the historicity of that life is, rather than what the person has actually done.

Elizabeth Renieris  13:12  
Yeah, thank you, that's a really vivid example. And for me, it raises the question of whether that could ever be a rule-based or computational decision baked into code, right? It feels like there's a real qualitative, contextual piece to this that perhaps can't be addressed so easily in something like code.

Venerable Tenzin Priyadarshi  13:33  
Yes, yes. And, you know, ultimately, it does sort of raise the issue of moral agency, right? Because, you know, most commoners, when they're thinking about, you know, self-driving cars, they're thinking that they would not be responsible because they are not driving; they're just being in the car. But the issue still becomes, you know, who [i]s ultimately responsible in case of a mishap? And that's--you know, we generally look at moral agency in cases of mishap. And so, you know, should it be the car company, should it be the algorithm writers, should it be the software, should it be the sensors? Who do you hold responsible at the end of the day?

Elizabeth Renieris  14:15  
Yeah, that's a great point, as well. I want to look at another example, this time from healthcare. So, as you probably saw and noted, as many of us did, during the pandemic, we had an example out of Stanford where there was a question about an algorithm prioritizing vaccinations among certain health care workers. Increasingly, AI and other tools are being used to assign or allocate care--you know, things like prioritizing vaccinations but other decisions, as well. Again, how would you bring a Buddhist lens to the question and that conversation?

Venerable Tenzin Priyadarshi  14:47  
One of the things that the AI community has perhaps talked and over-talked [about] is the set of data on which the current algorithms are based. And there is a wide acceptance that many of these data sets are corrupt. Many of these data sets are biased, whether it is for healthcare or [the] criminal justice system or whatnot. So I think [the] first thing is an aspirational mode, where we at least try to sort of quality control the data sets on which we are building such things. The second thing is sort of, you know, actually thinking about a non-biased sense of approach to care. You know, we are increasingly entering into a territory where we are not only talking about gendered notions of care or non-gendered notions of care, or who is sort of actually interfacing with the patients and so on. So one of the things with the AI sort of intervention is actually to see whether we are able to sort of create something that truly fosters this unbiased idea of providing care, really sort of being agnostic in terms of who's in front of you, in certain ways, but at the same time being able to sort of recognize the context of the individual in making certain kinds of recommendations. But the second stage of challenge still remains whether our physicians and health care workers are trained enough to follow those recommendations.

Elizabeth Renieris  16:23  
Right. That's always an important consideration, as well. I've got one more case study or example for you, and then I do want to turn back to some of the more philosophical questions. So a topic that's coming up a lot now is in the defense industry around things like autonomous weapons or killer robots, as they're sometimes termed. What is the Buddhist perspective on those developments?

Venerable Tenzin Priyadarshi  16:45  
The whole idea of a killer robot is an oxymoron from [a] Buddhist perspective, in the sense that, why would you actually design something that only adds to efficiency and creates some kind of challenge around moral agency, especially when taking lives? So one of the things that we have to recognize [is] that, any time we discussed going into war, or stopping war, one of the major factors, besides budgetary constraints and things of that nature, was lives lost. Even a country like the United States, when it would go into war, one of the major sort of data [points] look[ed at] was the number of lives lost for US soldiers, and so on. And so, again, it becomes a matter of convenience when we transfer that kind of agency into killer robots. And you forget about the number of lives lost on the other side, the civilians and so on, because now this data set is no longer relevant to us, because it's just robots. So that's one of the key things I think that we need to sort of be mindful of: that we may make the argument that it would serve as a deterrent for many to engage in these things, but from a societal perspective, it may actually create more readiness or more willingness for us to engage in war, especially with countries that do not have such means. And then, of course, you know, the entire concern and fear of hacking those robots and who uses [them] for what purposes--that's sort of another ballgame.

Elizabeth Renieris  18:33  
Thanks for taking on the case studies with me. I think, you know, given that we're both situated at universities that are going to produce a lot of the future leaders who will be making some of the decisions around these technologies, I want to get your view on education. So particularly the skills and disciplines that we should emphasize in preparing students to become ethical leaders as they go on to shape these tools and technologies. What's your perspective on what's needed there?

Venerable Tenzin Priyadarshi  18:58  
I think we need to sort of seriously consider the role of ethical learning in university settings, and not just university but even sort of, you know, in high school and other forms of tertiary education. I mean, that was one of the aspirations with which the Center for Ethics was founded at MIT. Because if you look at sort of a trajectory of education, unless and until you were a philosophy major, a declared philosophy major, you never actually took an ethics course, unless you went to sort of some of the religious universities, where they might give you a little bit of education on [the] traditional sort of religious role of ethics and so on. So I think, you know, one of the major challenges is that we provide all these different kinds of quote-unquote "relevant skill sets" to individuals, but we don't actually provide them with the skill sets to engage in ethical decision-making. We don't actually provide them with the skill sets to actually factor in aspects of kindness, honesty, truthfulness, transparency in how they look at the world and how they make decisions. And we have sort of encountered that on a variety of levels--you know, the financial meltdown, most recently in 2008-2009, and the issue with ethics boards [at] certain kinds of tech companies, where the engineering mindset wants a blueprint; they're not sort of keen on looking at ethical explorations of how the world functions.

Elizabeth Renieris  20:29  
Yeah, it's interesting, it's almost--we have a lot of emphasis on the ethics of AI, or the ethics of technologies, or ethics by design in technologies, but what you're talking about is really the ethics of the individual, the ethical decision-making of the individual. [These] seem to be non-technology-related qualities that then inform and shape the way the technologies ultimately develop.

Venerable Tenzin Priyadarshi  20:51  
Yeah, I mean, you know, I'll tell you an incident, like, you know, without naming names. But like, two years or three years ago, I was a fellow at Stanford, and one of the things I was trying to do [was] talk with a bunch of tech leaders around the area. And, you know, I raised a simple question in the room: What sort of technology or product design are you working on today that you are completely comfortable with your kids or grandkids using? There wasn't a single product in the room. And it raises the issue where people are actually, you know, when you sort of challenge them personally on some of these ideas, they reveal their discomfort. But as opposed to sort of, you know, working for any of the companies and designing certain things--it's sort of, you know, those things [don't] cross their mind, that actually they're designing it for [the] masses in a certain way. So I think those are sort of key issues. And we have to recognize, as educators, that learning ethics is not magic. You know, learning ethics is not genetics, so to speak, that you will have certain individuals who would wake up one day and become ethical all of a sudden. And it is the responsibility of education[al] institutions to pay as much attention to ethical learning as we are paying to business leadership and tech leadership and designing products, either for consumer orientation or for [the] military and so on.

Elizabeth Renieris  22:23  
All great points that I strongly agree with. And I think, again, I think your lab is doing great work towards that end. I want to start to wrap up here by talking about another prominent feature of our times--namely, uncertainty. So we're speaking, you know, during a time of great uncertainty in every sense, from geopolitical uncertainty to uncertainty about how new and emerging technologies like AI will develop and unfold. Buddhism has a lot to say about uncertainty, right? So what can it teach us at this time? And how might it help us navigate some of this uncertainty?

Venerable Tenzin Priyadarshi  22:57  
I think what Buddhism is constantly reminding us is, uncertainty is reality. All the, I think, recent events with [the] pandemic and so on ha[ve] done is perhaps increase our aperture to experience or accept that sense of uncertainty--not so much even experience, but accept it in certain ways. And so there are a bunch of sort of tools, I think--not just exclusive to Buddhism, but in terms of the contemplative mindset--that allow us to both adapt [to] and embrace uncertainty. And I think we'll be better as humans, we'll be better as a society, we'll be better as a civilization, if we are able to sort of cultivate those tools to embrace uncertainty, and not try to sort of just lean towards a fixated view of things.

Elizabeth Renieris  23:49  
The Venerable Tenzin Priyadarshi, thank you so much for joining us today. It's been a real pleasure.

Venerable Tenzin Priyadarshi  23:54  
Likewise, thank you so much for having me.

Elizabeth Renieris  23:58
(voiceover) Tech on Earth is a production of the Notre Dame-IBM Technology Ethics Lab. For more, visit techethicslab.nd.edu.