<h1>Tech Ethics Lab | News</h1>
<h2>Tech Ethics Lab Announces Inaugural Cohort of Graduate Fellows</h2>
<p><em>February 15, 2024</em></p>
<p>The Notre Dame-IBM Technology Ethics Lab is pleased to announce the selection of five Notre Dame graduate students for its Tech Ethics Graduate Fellowship program. After a competitive application and evaluation process, the students were selected based on their outstanding research proposals on topics such as fair methods of evaluating human labor in the digital age, the efficacy of privacy policies, and the role of technological advancement in violent conflicts.</p>
<p>During their two-year fellowships, in addition to continuing their dissertation research, Graduate Fellows will form interdisciplinary research collaborations with Notre Dame faculty, mentor Notre Dame undergraduates, and develop their research into a suite of applied deliverables such as discussion-starting green papers.</p>
<p>“We’re very excited to welcome our first cohort of Graduate Fellows,” said <a href="https://techethicslab.nd.edu/people/nuno-moniz/">Nuno Moniz</a>, Director of the Tech Ethics Lab and Associate Research Professor at the Lucy Family Institute for Data &amp; Society. “They are a promising and diverse group of young researchers demonstrating impressive creative thinking capabilities. We’re honored to have them and look forward to witnessing and supporting their important research moving forward.”</p>
<p>Graduate Fellows will receive a stipend of $5,000 per semester for participating in the fellowship program. They will be active in the intellectual life of the Tech Ethics Lab and the <a href="https://strategicframework.nd.edu/initiatives/ethics-initiative/">University Ethics Initiative</a> throughout their fellowships.</p>
<p>The Tech Ethics Graduate Fellows are:</p>
<ul>
<li>
<a href="https://techethicslab.nd.edu/people/noah-karger/">Noah Karger</a> - Ph.D. Student, Theology</li>
<li>
<a href="https://techethicslab.nd.edu/people/perla-khattar/">Perla Khattar</a> - J.S.D. Student, Law</li>
<li>
<a href="https://techethicslab.nd.edu/people/william-o-brien/">Will O’Brien</a> - Ph.D. Student, Peace Studies and History</li>
<li>
<a href="https://techethicslab.nd.edu/people/emma-schmidt/">Emma Schmidt</a> - Ph.D. Student, Political Science</li>
<li>
<a href="https://techethicslab.nd.edu/people/kesavan-thanagopal/">Kesavan Thanagopal</a> - Ph.D. Student, Philosophy </li>
</ul>
<p>A new call for Graduate Fellowships with the Technology Ethics Lab will launch at the end of the Spring 2024 semester. Learn more about fellowship programs at the Technology Ethics Lab at <a href="https://techethicslab.nd.edu/fellowships/">techethicslab.nd.edu/fellowships</a>.</p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h2>Postdoc Georgina Curto chairs IJCAI AI for Good Symposium</h2>
<p><em>December 11, 2023</em></p>
<p>Notre Dame-IBM Technology Ethics Lab Postdoctoral Fellow <a href="https://techethicslab.nd.edu/people/georgina-curto-rex/"><strong>Georgina Curto</strong></a> chaired the <a href="https://2023.sacair.org.za/ijcai-symposium/">International Joint Conference on Artificial Intelligence (IJCAI) AI for Good Symposium in Southern Africa</a> in Johannesburg, South Africa, on December 5 and 6, 2023. The symposium was the first of a series that will generate bidirectional lines of collaboration between the international AI research community and the vibrant AI for Good research ecosystem in the Global South, including a grassroots movement of young computer scientists focusing on impactful solutions for their local communities.</p>
<p>This first symposium gathered the representation of critical stakeholders in the African AI research ecosystem and specialists in informal settlements, sustainability, and climate change, with whom joint follow-up actions have been defined. The Johannesburg Recommendations for Applied AI and Social Good are to follow. Topics discussed during the symposium included the consultation of local communities in the definition of the goals and design of AI projects, the role of the African culture towards new ways of understanding fundamental AI research, and the need for alternative education models in computer science that can offer accreditation to the fast-growing young African population.</p>
<p>The panels included speakers such as <strong>Sibusisiwe Makhanya</strong> (Senior Research Manager, IBM Research Africa), <strong>Girmaw Abebe Tadesse</strong> (Principal Research Scientist and Manager, Microsoft AI for Good Research Lab), <strong>Vukosi Marivate</strong> (ABSA UP Chair of DATA Science, University of Pretoria),<strong> Avishkar Bhoopchand</strong> (Senior Research Engineer, DeepMind and Executive Boar and co-General Chair, 2023 Deep Learning Indaba) and <strong>Emma Ruttkamp-Bloem</strong> (Professor of Philosophy, University of Pretoria; Chair, UNESCO World Commission on the Ethics of Scientific Knowledge and Technology; Ethics of AI Lead, Center for AI Research; and member of the Advisory Board on AI to the UN Secretary-General).</p>
<p>“Africa is not a topic on the table regarding AI research. Africa is sitting at the discussion table and can lead the movement of applied AI research for social good internationally,” said Georgina Curto.</p>
<h2>ND-TEC and Notre Dame-IBM Technology Ethics Lab announce new leadership</h2>
<p><em>November 1, 2023</em></p>
<p><a href="https://techethicslab.nd.edu/people/nuno-moniz/">Nuno Moniz</a>, associate research professor at the Lucy Family Institute for Data &amp; Society, has been named managing director of <a href="https://techethicslab.nd.edu/">the Notre Dame-IBM Technology Ethics Lab</a>. He will report to Professor <a href="https://philosophy.nd.edu/people/faculty/meghan-sullivan/">Meghan Sullivan</a>, who has been appointed acting director of the <a href="https://techethics.nd.edu/">Notre Dame Technology Ethics Center</a> (ND TEC).</p>
<p>Connecting some 30 faculty from more than a dozen academic units on campus, ND TEC was established in 2019 to advance interdisciplinary research and education concerned with the impact of technology on humanity.</p>
<p>As the applied arm of ND TEC, the Notre Dame-IBM Technology Ethics Lab promotes human values in technology through tangible, applied, and interdisciplinary research that addresses core ethical questions.</p>
<p>Moniz, who will also continue as the associate director for the <a href="https://lucyinstitute.nd.edu/centers-and-labs/data-inference-analytics-and-learning-dial-lab/">Data, Inference, Analytics, and Learning (DIAL) Lab</a>, studies machine learning, looking into problems such as rare event detection, responsible AI, data privacy, and model interpretability. He is particularly interested in interdisciplinary efforts to understand the real-world impact of intelligent systems. Moniz received his Ph.D. in Computer Science from the University of Porto and previously worked as a senior researcher at Porto’s Institute for Systems and Computer Engineering, Technology, and Science.</p>
<p>Sullivan is the Wilsey Family College Professor of Philosophy and director of the <a href="https://ndias.nd.edu/">Notre Dame Institute for Advanced Study</a>, a university-wide research institute that supports faculty, doctoral students, undergraduates and visiting fellows pursuing cross-disciplinary research on major ethical themes. She was <a href="https://strategicframework.nd.edu/news/notre-dame-announces-leadership-for-new-strategic-initiatives-on-democracy-ethics-and-poverty/">recently appointed to direct the Notre Dame Ethics Initiative</a>, which arose from the University’s new <a href="https://strategicframework.nd.edu/">Strategic Framework</a>. The initiative’s goal is to make Notre Dame a preeminent global destination for the study of ethics, offering rigorous training for future generations of ethicists and moral leaders, a platform for engagement of the Catholic moral tradition with other modes of inquiry, and an opportunity to forge insights into some of the most significant ethical issues of our time.</p>
<p>“Meghan has shown herself extremely capable of bringing together faculty from across disciplines to address matters of great importance for Notre Dame’s mission,” said Rev. Robert Dowd, C.S.C., the vice president and associate provost for interdisciplinary initiatives. “Outgoing director <a href="https://techethics.nd.edu/people/kirsten-martin/">Kirsten Martin</a>, the William P. and Hazel B. White Center Professor of Technology Ethics, has built a great foundation during her time as director and I have no doubt that Meghan will provide the leadership that will allow us to take our efforts in tech ethics to the next level.</p>
<p>“And we are very pleased to have someone of Nuno’s caliber managing the ND-IBM Tech Ethics Lab. He combines both technical knowledge and a deep interest in ethical questions pertaining to AI and other technological innovations. This is an exciting time for the ND-IBM Tech Ethics Lab.”</p>
<p>As he begins the new role, Moniz said he looked forward to working with the lab’s team of experts and enhancing collaboration with colleagues at IBM. “As we observe the growing opportunities and challenges of AI with regard to its widespread deployment and use in society, we need to redouble our attention to discussing, researching, and promoting applied tech ethics,” he said. “The Notre Dame-IBM Technology Ethics Lab can be a leader in the dynamic and historic moment we live in.”</p>
<p class="attribution"><em>Kate Garry</em></p>
<h2>Postdoc Wins Outstanding Paper Award at ACL Workshop on Online Abuse and Harms</h2>
<p><em>August 10, 2023</em></p>
<figure class="image-right"><a href="https://techethics.nd.edu/people/georgina-curto-rex/"><img src="https://techethics.nd.edu/assets/523871/georgina_woah_presentation.jpg" alt="Georgina Curto Rex speaking in front of a podium" width="600" height="440"></a>
<figcaption>Georgina Curto Rex presenting her and her coauthors’ paper at the 2023 ACL Workshop on Online Abuse and Harms in Toronto.</figcaption>
</figure>
<p><a href="https://techethics.nd.edu/people/georgina-curto-rex/">Georgina Curto Rex</a>, a postdoctoral fellow at the Notre Dame Technology Ethics Center (ND TEC), and several coauthors were awarded one of two outstanding paper awards at <a href="https://2023.aclweb.org/" target="_blank" rel="noopener">ACL 2023</a> (61st Annual Meeting of the Association for Computational Linguistics) within the <a href="https://www.workshopononlineabuse.com/programme.html" target="_blank" rel="noopener">Workshop on Online Abuse and Harms</a>. The meeting took place in Toronto in July.</p>
<p>Their study, which was exploratory in nature, focused on aporophobia, a term coined by philosopher Adela Cortina meaning “rejection, aversion, fear, and contempt for the poor.” Titled <a href="https://aclanthology.org/2023.woah-1.12/" target="_blank" rel="noopener">“Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor,”</a> the paper examined several months’ worth of English-language posts on Twitter for terms and topics that regularly surface in tweets about people who are either “poor” or “rich.”</p>
<p>This analysis demonstrated the presence of aporophobic attitudes, with the 100 words that had the highest association with the group “poor” including many terms related to alcohol and drug abuse and mental disorders. The research team went on to show that most current natural language processing (NLP) models designed to identify social biases—for instance, discrimination against women or immigrants—in online text are much less successful at doing so when it comes to language disparaging poor people.</p>
<p>In calling for attention to aporophobia to be more explicitly added to NLP research on toxic language, the team noted that it should not be considered as wholly separate from but rather interrelated with racism, sexism, and xenophobia.</p>
<p>Curto joined ND TEC in summer 2022 after earning her Ph.D. in AI ethics in a joint program offered by the Universities of Ramon Llull (IQS School of Management), Deusto, and Pontificia Comillas (ICADE).</p>
<p>She is spending summer 2023 as a <a href="https://techethics.nd.edu/news-and-events/news/tech-ethics-postdoc-named-visiting-scholar-at-uc-berkeleys-kavli-center/">visiting scholar at the Kavli Center for Ethics, Science, and the Public</a> at the University of California, Berkeley, before returning to Notre Dame for the fall semester. Her position at Notre Dame is generously supported by the <a href="https://techethicslab.nd.edu/" target="_blank" rel="noopener">Notre Dame-IBM Tech Ethics Lab</a>.</p>
<p>Curto’s coauthors on the paper are Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen Fraser, all of the National Research Council Canada.</p>
<p class="attribution"><em>Originally published by <span class="rel-author">Notre Dame Technology Ethics Center</span> at <span class="rel-source"><a href="https://techethics.nd.edu/news-and-events/news/postdoc-wins-outstanding-paper-award-from-workshop-on-online-abuse-and-harms/">techethics.nd.edu</a></span> on August 10<span class="rel-pubdate">, 2023</span>.</em></p>
<h2>Lab’s Auditing AI Workshop Brings Together CFP Recipients to Discuss Projects</h2>
<p><em>June 30, 2023</em></p>
<p>Last August, the Notre Dame-IBM Technology Ethics Lab released its second annual Call for Proposals (CFP) for the purpose of funding practical and applied interdisciplinary research in tech ethics, ultimately selecting <a href="https://techethicslab.nd.edu/call-for-proposals/">19 projects from investigators representing nine countries</a> for awards. Projects are being completed throughout 2023.</p>
<p>The theme for this CFP, Auditing AI, speaks to the need to rigorously examine artificial intelligence systems and to take seriously the variety of concerns around them. These concerns range from well-documented issues with current tools to speculative threats to the existence of humanity itself, even as the vast majority of people working in the field still consider human extinction as a direct result of AI to be in the realm of science fiction.</p>
<p>While award-winners from the first CFP had the chance to interact with each other through virtual sessions, this second cohort is the inaugural group to experience a new element of the program: an on-campus workshop designed to allow recipients to connect in person, share progress on their projects, and learn from one another.</p>
<p>The workshop kicked off on Tuesday, June 13, with dinner and a keynote address from Marianna Bergamaschi Ganapini titled “Building an Infrastructure of Trust for Autonomous Systems.” An assistant professor of philosophy at Union College, Ganapini will begin a one-year term as a <a href="https://techethicslab.nd.edu/news/notre-dame-ibm-tech-ethics-lab-announces-visiting-fellows/">visiting fellow of the lab</a> on September 1.</p>
<figure class="image-left"><img src="https://techethicslab.nd.edu/assets/521339/hd_keynote_for_web.jpg" alt="Heather Domin addresses participants at the Auditing AI workshop" width="600" height="338">
<figcaption>IBM’s Heather Domin delivers a keynote on “Auditing AI in Practice” on day two of the workshop.</figcaption>
</figure>
<p>Day two started with a keynote on “Auditing AI in Practice” from Heather Domin, program director for AI Ethics and Tech Ethics by Design at IBM and the IBM associate director of the lab.</p>
<p>“The diversity of perspectives and issues highlighted by the group reflects the current complexity of auditing AI,” Domin said. “It is important that we remember why we audit AI and continue to press through the current challenges to help ensure trustworthy AI systems. The award recipients' work is also instrumental to understanding and identifying cutting-edge solutions and potential paths forward.”</p>
<p>Domin’s talk was followed by two morning sessions of presentations from CFP project principal investigators (PIs), an interactive keynote on “TRUST: Building a Sustainable Future for People and Machines” led by Michael Hemenway and Jeni Rinner of the Iliff School of Theology, and then two more rounds of presentations by project PIs in the afternoon.</p>
<p>After Wednesday night’s dinner, attendees engaged in an open information-sharing and brainstorming plenary session. Over the course of the workshop, there were also opportunities for informal networking and tours of the Notre Dame campus.</p>
<p>Among the PIs who presented were Sharanya Shanmugam, Willow Wong, and Zhang Wenxi of the Centre for AI and Data Governance at Singapore Management University (SMU). Together with their SMU colleague Mark Findlay, they are pursuing a project on “AI Audits for Whom? Asian Perspectives on Rebuilding Public Trust via Community Ethics and Conflict Resolution Mechanisms.”</p>
<p>“It was truly a joy to converse with researchers from such a diverse range of academic backgrounds, all seeking to ground the purpose of auditing AI within the unique contexts of their domains,” Wong said. “As an early career researcher, I felt very supported in sharing my thoughts and receiving generous feedback from the other participants during the workshop. I also appreciated the opportunity to have more informal conversations over a meal, which gave everyone a chance to think about the intricate relationship between technology and society outside of—quite literally—the seminar room.”</p>
<p>This CFP is part of the lab’s broader <a href="https://techethicslab.nd.edu/auditing-ai-initiative/">Auditing AI Initiative</a> that also has led to the creation of an <a href="https://techethicslab.nd.edu/news/from-hacking-to-ai-audits-tech-ethics-courses-let-students-explore-timely-topics/">undergraduate course on the topic</a> that will be offered for the first time this fall and can be counted toward <a href="https://techethics.nd.edu/education/tech-ethics-minor/" target="_blank" rel="noopener">Notre Dame’s undergraduate minor in tech ethics</a>.</p>
<p>Project descriptions for all 19 projects are available at <a href="https://techethicslab.nd.edu/call-for-proposals/">techethicslab.nd.edu/call-for-proposals</a>. The lab’s next Call for Proposals and instructions for applying for funding will be published later this summer.</p>
<h2>Event Replay: Notre Dame-IBM Tech Ethics Lab Symposium on Foundation Models in AI</h2>
<p><em>June 9, 2023</em></p>
<p>ChatGPT and generative AI tools like it have captured the public’s attention in recent months, driving ongoing debate about their potential to help or harm individuals and society.</p>
<p>Less often discussed, though, is the ethical use of foundation models, which underlie many generative AI applications in enterprises.</p>
<p>On June 1, the <a href="https://techethicslab.nd.edu/">Notre Dame-IBM Technology Ethics Lab</a> hosted a virtual <a href="https://techethicslab.nd.edu/events/2023/06/01/symposium-on-the-ethical-use-of-foundation-models-in-enterprises/">Symposium on the Ethical Use of Foundation Models in Enterprises</a>, covering issues such as how organizations have already been using these powerful technologies, what ways they might be deployed in the future, and how we ensure this type of AI actually serves to make our lives better.</p>
<p>Drawing on a variety of perspectives from academia, civil society, and industry, the symposium was divided into two parts, each initiated with a keynote address and followed by a panel discussion.</p>
<p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube-nocookie.com/embed/kIZGrqoiGZ8" title="YouTube video player" width="560"></iframe></p>
<p>The first half of the event focused on the fundamentals of foundation models in order to better understand them and how they’re used in enterprise contexts.</p>
<ul>
<li>
<strong>Arvind Karunakaran</strong> of Stanford University delivered a keynote titled “Foundation Models in the Workplace: Implications for Organizations and Governance” (2:10 in the video) highlighting some of the ways foundation models were being deployed in business prior to all the headlines surrounding tools such as ChatGPT. His talk drew on his own research involving a law firm that used a “lawbot” to automate certain tasks.</li>
<li>IBM’s <strong>Saishruthi Swaminathan</strong> then moderated a panel with <strong>Alex Engler</strong> of The Brookings Institution and <strong>Manish Goyal</strong> of IBM Consulting (26:08). They shared examples illustrating how foundation models may be used in specific applications of generative AI and both the uses and current limitations of these rapidly advancing tools, with Goyal noting he’d never seen a level of innovation like we’ve experienced over the last six months or so.</li>
</ul>
<p>Engler also began to touch on how regulators might govern the use of foundation models, leading directly into the second part of the symposium, where the primary topic was the ethical challenges they raise, particularly as they are adopted in enterprise settings.</p>
<ul>
<li>In her keynote “Generative AI’s Ethical Debt” (1:12:37), <strong>Casey Fiesler</strong> of the University of Colorado Boulder encouraged attendees to view thoughtful and informed critique of technology as an obligation. She called for more diversity among not only those who are developing tech but also those who are analyzing its consequences. Fiesler, who will be a <a href="https://techethicslab.nd.edu/news/notre-dame-ibm-tech-ethics-lab-announces-visiting-fellows/">visiting fellow at the lab</a> beginning July 1, also emphasized that there are plenty of challenges from AI to address right now without waiting for a science fiction-like crisis (e.g., super intelligence) with which to contend.</li>
<li>
<strong>Cody Turner</strong>, a tech ethics postdoctoral fellow at Notre Dame, moderated the day’s second panel, which featured <strong>Pin-Yu Chen</strong> of IBM Research and <strong>Triveni Gandhi</strong> of Dataiku (1:36:42). They discussed topics including the necessity of defining human values before you can develop responsible AI, the importance of the entire AI life cycle for creating tools that do what we want them to, and the problem with treating interactions with artificial intelligence systems like we’re interacting with a human.</li>
</ul>
<p class="text-left"><a class="btn btn-more" href="https://techethicslab.nd.edu/events/2023/06/01/symposium-on-the-ethical-use-of-foundation-models-in-enterprises/">Speaker Bios</a></p>
<h2>Notre Dame-IBM Tech Ethics Lab Announces Visiting Fellows</h2>
<p><em>May 1, 2023</em></p>
<p>The <a href="https://techethicslab.nd.edu/">Notre Dame-IBM Tech Ethics Lab</a> has named <a href="https://www.colorado.edu/cmci/people/information-science/casey-fiesler" target="_blank">Casey Fiesler</a> and <a href="https://www.union.edu/philosophy/faculty-staff/marianna-bergamaschi-ganapini" target="_blank">Marianna Bergamaschi Ganapini</a> visiting fellows, with their appointments effective July 1 and September 1, respectively. Running for a term of 12 months, the lab’s visiting fellowships are intended to help fund the tech ethics work of outstanding scholars on leave from their academic institutions.</p>
<figure class="image-right"><img alt="Casey Fiesler" height="300" src="https://techethicslab.nd.edu/assets/513610/fullsize/casey_fiesler_for_web.jpg" width="300">
<figcaption>Casey Fiesler</figcaption>
</figure>
<p>Fiesler is an associate professor in the Department of Information Science (and Computer Science, by courtesy) at the University of Colorado Boulder. The director of the Internet Rules Lab, she is particularly interested in big data research ethics, ethics education, ethical speculation in technology design, technology empowerment for marginalized communities, and broadening participation in computing. Much of Fiesler’s work is supported by the National Science Foundation, Mozilla, and Omidyar, and she gave one of the keynote addresses at last fall’s <a href="https://techethicslab.nd.edu/news/looking-back-at-the-rome-call-for-ai-ethics-a-global-university-summit/">global university summit on the Rome Call for AI Ethics</a> hosted by the lab.</p>
<p>She is also a public scholar, with her research frequently covered in the media, and her project at the lab, titled “AI Ethics for All,” seeks to make engaging and accessible AI and tech ethics content available for everyone. This will include producing videos for TikTok—where she has more than 100,000 followers and has already created a <a href="https://docs.google.com/document/d/1tWdqYqYBHARbZXFQX4cybe88S-0twqvUu1xLhYnLgU4/edit" target="_blank">TikTok-based tech ethics class</a>—and YouTube as well as visiting schools and interacting with the computer science community to encourage more attention to public scholarship and basic algorithmic literacy education.</p>
<p>“Over the past few years, I’ve had the opportunity to teach and engage with many more people than I typically reach in the classroom and have seen how much general interest there is in learning about issues of technology ethics and justice,” said Fiesler, who holds a Ph.D. in human-centered computing from Georgia Tech and a JD from Vanderbilt University. “Though it’s important this content is taught in classrooms as well, it can also be fun, engaging, and occasionally show up in between dance videos on someone’s TikTok feed or recommended after a video game stream on YouTube. I’m so pleased to have the opportunity during my sabbatical to be able to spend more time engaging in public communication around technology ethics.”</p>
<figure class="image-left"><img alt="Marianna Bergamaschi Ganapini" height="300" src="https://techethicslab.nd.edu/assets/513794/fullsize/marianna_ganapini.jpeg" width="300">
<figcaption>Marianna Bergamaschi Ganapini</figcaption>
</figure>
<p>An assistant professor of philosophy at Union College, Ganapini spent this spring as a visiting scholar at the New York University Center for Bioethics. Primarily focused on philosophy of mind, epistemology (i.e., how human knowledge works), and technology ethics, she has related interests in the epistemology and ethics of AI. She is part of the “Thinking Fast and Slow in AI” project, which is led by IBM Research in collaboration with several academic partners and seeks to leverage cognitive theories of human decision-making to advance artificial intelligence.</p>
<p>Ganapini’s project, “New Tools for Ethical Risk Analysis and Risk Mitigation in the Use of AI,” is the second that she’s pursued through the lab. She previously developed an <a href="https://arxiv.org/abs/2304.14338" target="_blank">“Audit Framework for Adopting AI-Nudging on Children”</a> with support from the <a href="https://techethicslab.nd.edu/call-for-proposals/2021-cfp/">2021–22 Call for Proposals</a> (CFP).</p>
<p>“I am extremely excited to join the lab for a year, as it is a stimulating environment with a cutting-edge vision about the role of ethical thinking in the development of new technologies,” said Ganapini, who received her Ph.D. from Johns Hopkins University. “My goal is to work on the role of risk mitigation in preventing harm: How can organizations determine which risks their artificial intelligence poses to people and society? And how can they find actionable ways to address and prevent those risks? I will tackle these questions by producing a practical, step-by-step tool that will guide companies through risk assessment and mitigation connected to their use of AI. I will use a methodology that draws from philosophy and ethical theory and the work already being done in AI ethics auditing.”</p>
<p>The Notre Dame-IBM Tech Ethics Lab is the applied arm of the <a href="https://techethics.nd.edu/" target="_blank">Notre Dame Technology Ethics Center</a>. Established in 2020 as a partnership between the University of Notre Dame and IBM, the lab is funded by a 10-year, $20 million IBM commitment.</p>
<h2>From Hacking to AI Audits, Tech Ethics Courses Let Students Explore Timely Topics</h2>
<p><em>April 6, 2023</em></p>
<p>On a gray afternoon in February, <a href="https://techethics.nd.edu/people/luis-felipe-rosado-murillo/">Luis Felipe Murillo</a> puts a slide up on the classroom screen before beginning his lesson. It’s a title card for the course, which pops far more than the gloomy weather outside the window:</p>
<p>“<a href="https://computerhackerscourse.nd.edu/" target="_blank">The Archaeology of Hacking</a>: Everything You Wanted to Know About Hacking but Were Afraid to Ask.”</p>
<p>Murillo, an anthropologist, is team-teaching the course this semester with <a href="https://techethics.nd.edu/people/walter-scheirer/">Walter Scheirer</a>, a computer scientist. Both are faculty affiliates of the <a href="https://techethics.nd.edu/">Notre Dame Technology Ethics Center</a> (ND TEC).</p>
<p>The topic on this particular day? Video game hacking.</p>
<p>“In anthropology we study the interplay of technologies in our social lives, but also, and this is the key part, how we are ‘shaped’ by the digital toys, tools, and infrastructures that we build,” says Murillo, who researches questions of ethics, openness, sharing, and collaboration in contemporary science and technology projects.</p>
<p>“This is something we do in this course by bringing anthropological and computational perspectives to interpret the history of hacking as it intersects with the history of electronic games and game consoles, but also with communication technologies that mediate people’s professional, familial, and educational—as well as spiritual and personal—experiences today.”</p>
<p>By this point in the semester, students have already had their own elementary experience of hacking via a technical homework assignment that called on them to modify a game for the original Nintendo Entertainment System.</p>
<p>They are also at work on a semester-long group project in which they pick a hacking event from the news to examine in depth. One thing Scheirer and Murillo want the groups to dig into is the ethical implications of the hacks they’re studying, so that students move beyond mainstream conceptions of hacking.</p>
<p>“Hacking has always been about pushing the boundaries of technology,” says Scheirer, a specialist in media forensics, computer security, data security, human biometrics, and digital privacy. “And while sometimes the outcome is good, other times it isn’t. Understanding that dichotomy in social, legal, and technological terms helps students grasp the big picture when it comes to technology ethics.”</p>
<h4>The Minor in Technology Ethics</h4>
<figure class="image-right"><img alt="the words Undergraduate Minor in Tech Ethics next to a globe emanating a beam of light and lines of computer code and covered in various digital images" height="338" src="https://techethics.nd.edu/assets/469293/undergraduate_minor_1600x900_.jpg" width="600"></figure>
<p>Cross-listed with the Departments of <a href="https://anthropology.nd.edu/" target="_blank">Anthropology</a> and <a href="https://cse.nd.edu/" target="_blank">Computer Science and Engineering</a>, “The Archaeology of Hacking” has been approved as an integration course for the University’s <a href="https://corecurriculum.nd.edu/" target="_blank">Core Curriculum</a> and can be counted toward several undergraduate minors at the University, including ND TEC’s minor in technology ethics, which launched last year.</p>
<p>Open to students from all colleges and schools at Notre Dame, the <a href="https://techethics.nd.edu/education/tech-ethics-minor/">tech ethics minor</a> consists of five courses totaling 15 credits: a required gateway course, a required advanced seminar on a current issue in tech ethics, and three electives. Electives are taught by faculty members from various disciplines—but especially those from the College of Arts and Letters, College of Engineering, Mendoza College of Business, and Keough School of Global Affairs.</p>
<p>“The Archaeology of Hacking” is one of nine courses taught this semester that are designated as tech ethics electives. Scheirer and Murillo tentatively plan to offer the class again next spring.</p>
<h4>A New Elective: “Auditing AI”</h4>
<p>A new elective debuting this fall is “Auditing AI: An Introduction.” Developed with support from the <a href="https://techethicslab.nd.edu/" target="_blank">Notre Dame-IBM Tech Ethics Lab</a>, ND TEC’s applied arm, the course is part of the lab’s broader Auditing AI Initiative, through which it also sponsored its <a href="https://techethicslab.nd.edu/news/notre-dame-ibm-tech-ethics-lab-announces-award-winners-from-second-annual-cfp/" target="_blank">second annual call for proposals</a> and awarded a total of more than $930,000 to 19 research projects.</p>
<p>Ju Yeon Jung, a postdoctoral fellow at the lab, and Cameron Kormylo, a research associate in the Department of <a href="https://mendoza.nd.edu/research-faculty/academic-departments/information-technology-analytics-operations/" target="_blank">Information Technology, Analytics, and Operations</a>, designed the “Auditing AI” course and will teach it together. Jung notes how the increasingly widespread use of AI makes the course timely.</p>
<blockquote>
<p>“Students from both technical and non-technical disciplines will work in groups to apply an AI auditing toolkit to assess a range of AI systems developed from real-world datasets and examples.” —Ju Yeon Jung</p>
</blockquote>
<p>“AI audits are emerging as an important solution to systematically identify, evaluate, and address risks associated with AI systems,” Jung says. “This course will offer an interdisciplinary introduction to AI auditing, viewing AI systems as sociotechnical systems that need to be responsive to diverse stakeholders. Students from both technical and non-technical disciplines will work in groups to apply an AI auditing toolkit to assess a range of AI systems developed from real-world datasets and examples.”</p>
<p>In addition to the “Auditing AI” class, the Notre Dame-IBM Tech Ethics Lab is creating a practicum course where student teams will get real-world experience on a capstone project under the guidance of a Notre Dame faculty member and a professional from IBM. The practicum is expected to be offered for the first time in Spring 2024 and will count toward the tech ethics minor. </p>
<h4>Tech Ethics Gateway Course as a Second Philosophy</h4>
<p>The tech ethics minor’s gateway course, “Fundamentals of Technology Ethics and Society,” is taught every fall and spring semester. Covering topics such as bias and fairness in algorithms, privacy, data governance and civil liberties, surveillance and power, social media, and the ethics of artificial intelligence, it can be taken as a student’s second philosophy course within the Core Curriculum.</p>
<p>Clayton O’Dell, a sophomore majoring in computer science, says she decided to pursue the tech ethics minor so she’s prepared to ask the kinds of questions about technology that will allow us to build a better future. She’s found the gateway course to be valuable in that regard.</p>
<blockquote>
<p>“Students from a range of disciplines help shape the course by providing unique perspectives about moral decisions.” —Clayton O’Dell</p>
</blockquote>
<p>“The ‘Fundamentals of Technology Ethics and Society’ is a fascinating course that uses reading assignments to draw students into spirited dialogue on a variety of ethical issues,” O’Dell says. “Students from a range of disciplines help shape the course by providing unique perspectives about moral decisions. Through the course, I’ve enjoyed being challenged to listen and learn from others, allowing me to strengthen and better inform my own beliefs.”</p>
<p>Undergraduate tech ethics courses being offered in fall 2023 are listed below. More information about any of them can be found on Notre Dame’s <a href="https://classsearch.nd.edu/" target="_blank">Class Search website</a> by going to the “Any Department” dropdown menu and selecting “Technology Ethics.” Details about the minor in tech ethics are available at <a href="https://techethics.nd.edu/minor">techethics.nd.edu/minor</a>.</p>
<table>
<caption><strong>Undergraduate Tech Ethics Courses: Fall 2023</strong></caption>
<thead>
<tr>
<th scope="col" style="background-color: rgb(211, 159, 16); text-align: center;">Course</th>
<th scope="col" style="background-color: rgb(211, 159, 16); text-align: center;">Instructor</th>
<th scope="col" style="background-color: rgb(211, 159, 16); text-align: center;">Schedule</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<strong>Fundamentals of Technology Ethics and Society</strong><br>
(TEC 20101; counts as second philosophy)</td>
<td style="vertical-align:middle"><a href="https://techethics.nd.edu/people/warren-von-eschenbach/">Warren von Eschenbach</a></td>
<td>TTh<br>
11:00 a.m.–12:15 p.m.</td>
</tr>
<tr>
<td>
<strong>Fundamentals of Technology Ethics and Society</strong><br>
(TEC 20101; counts as second philosophy)</td>
<td style="vertical-align:middle"><a href="https://techethics.nd.edu/people/cody-turner/">Cody Turner</a></td>
<td>MW<br>
2:00 p.m.–3:15 p.m.</td>
</tr>
<tr>
<td>
<strong>Science, Technology, & Society</strong><br>
(TEC 20112/STV 20556; counts as second philosophy or writing-intensive course)</td>
<td style="vertical-align:middle"><a href="https://reilly.nd.edu/people/faculty/anna-geltzer/" target="_blank">Anna Geltzer</a></td>
<td>TTh<br>
12:30 p.m.–1:45 p.m.</td>
</tr>
<tr>
<td>
<strong>The Language of Science</strong><br>
(TEC 23201/STV 23201)</td>
<td style="vertical-align:middle">Sahana Srinivasan</td>
<td>TTh<br>
9:30 a.m.–10:45 a.m.</td>
</tr>
<tr>
<td>
<strong>Auditing AI: An Introduction</strong><br>
(TEC 30114/CDT 30614)</td>
<td style="vertical-align:middle"><a href="https://mendoza.nd.edu/mendoza-directory/profile/cameron-kormylo/" target="_blank">Cam Kormylo</a></td>
<td>MW<br>
11:00 a.m.–12:15 p.m.</td>
</tr>
<tr>
<td>
<strong>Generative AI in the Wild</strong><br>
(TEC 30750/CDT 30750)</td>
<td style="vertical-align:middle">
<p><a href="https://techethics.nd.edu/people/ranjodh-singh-dhaliwal/">Ranjodh Singh Dhaliwal</a><br>
<a href="https://learning.nd.edu/who-we-are/team-bios/john-behrens/" target="_blank">John Behrens</a></p>
</td>
<td>MW<br>
2:00 p.m.–3:15 p.m.</td>
</tr>
<tr>
<td>
<strong>Internet Ethics</strong><br>
(TEC 33997/CDT 30797; counts as second philosophy)</td>
<td style="vertical-align:middle"><a href="https://techethics.nd.edu/people/cody-turner/">Cody Turner</a></td>
<td>MW<br>
11:00 a.m.–12:15 p.m.</td>
</tr>
</tbody>
</table>
<p class="attribution"><em>Originally published by <span class="rel-author">Notre Dame Technology Ethics Center</span> at <span class="rel-source"><a href="https://techethics.nd.edu/news-and-events/news/from-hacking-to-ai-audits-tech-ethics-courses-let-students-explore-timely-topics/">techethics.nd.edu</a></span> on <span class="rel-pubdate">April 06, 2023</span>.</em></p>
<h3>Tech Ethics Postdoc Named Visiting Scholar at UC Berkeley’s Kavli Center</h3>
<p><a href="https://techethics.nd.edu/people/georgina-curto-rex/">Georgina Curto Rex</a>, a postdoctoral fellow at the <a href="https://techethics.nd.edu/">Notre Dame Technology Ethics Center</a> (ND TEC) who specializes in applied AI, has been appointed as a visiting scholar at the <a href="https://kavlicenter.berkeley.edu/" target="_blank">Kavli Center for Ethics, Science, and the Public</a> at the University of California, Berkeley. Her appointment will begin in mid-May and run until shortly before the fall 2023 semester starts at Notre Dame.</p>
<p>The mission of the Kavli Center is to provide an inclusive, democratic, and multidisciplinary framework for understanding the ethical implications of science and technology. Curto will use her time in residence to present her work to the Kavli Center community while exploring opportunities for ongoing collaboration with the center’s scholars.</p>
<p>Focusing on issues of fairness and inclusion, she pursues projects that aim to create new paths for poverty reduction by taking advantage of the insights offered by AI, design AI systems that counteract inequality, and, more broadly, advance interdisciplinary research towards the achievement of the UN Sustainable Development Goals (SDGs).</p>
<p>Curto joined ND TEC in summer 2022 after earning her Ph.D. in AI ethics in a joint program offered by the Universities of Ramon Llull (IQS School of Management), Deusto, and Pontificia Comillas (ICADE). Her position at Notre Dame is generously supported by the <a href="https://techethicslab.nd.edu/" target="_blank">Notre Dame-IBM Tech Ethics Lab</a>.</p>
<p class="attribution"><em>Originally published by <span class="rel-author">Notre Dame Technology Ethics Center</span> at <span class="rel-source"><a href="https://techethics.nd.edu/news-and-events/news/tech-ethics-postdoc-named-visiting-scholar-at-uc-berkeleys-kavli-center/">techethics.nd.edu</a></span> on <span class="rel-pubdate">March 27, 2023</span>.</em></p>
<h3>Tech Ethics Postdoctoral Fellows Program Advances Emerging Voices in the Field</h3>
<p>Ask someone to describe the academic work at the heart of a university, and you’re likely to get an answer focused on faculty research, student learning, or a combination of the two.</p>
<p>Postdoctoral fellows rarely garner much attention in this type of conversation. And yet the scholars who hold these positions play an integral role in allowing a university to fulfill both its research and educational missions.</p>
<p>That’s why the Notre Dame Technology Ethics Center (ND TEC) earlier this year prioritized the creation of the <a href="https://techethics.nd.edu/about/technology-ethics-postdoctoral-fellows-program/">Technology Ethics Postdoctoral Fellows Program</a>, appointing <a href="https://techethics.nd.edu/people/georgina-curto-rex/">Georgina Curto Rex</a>, <a href="https://techethics.nd.edu/people/cody-turner/">Cody Turner</a>, and <a href="https://techethics.nd.edu/people/carolina-villegas-galaviz/">Carolina Villegas-Galaviz</a> as its inaugural cohort of fellows.</p>
<p>“We couldn’t be happier that Georgina, Cody, and Carolina accepted our offer to join ND TEC,” said <a href="https://techethics.nd.edu/people/kirsten-martin/">Kirsten Martin</a>, the center’s director as well as William P. and Hazel B. White Center Professor of Technology Ethics and a professor of IT, analytics, and operations in Notre Dame’s Mendoza College of Business. “The opportunity to host such outstanding early career scholars for a year or two around the time they have completed or are completing their Ph.D. enlivens our research community and expands the number and types of courses we can make available to our undergraduate students. We get so much out of having our postdocs here, and our goal is for them to get just as much out of their time at Notre Dame.”</p>
<p>The way in which ND TEC connects an interdisciplinary group of <a href="https://techethics.nd.edu/people/">faculty affiliates</a> is one of the aspects of the program the fellows find most valuable.</p>
<figure class="image-right"><img alt="Cody Turner" height="600" src="https://techethics.nd.edu/assets/493589/cody_turner_updated_for_web.jpg" width="600">
<figcaption>Cody Turner</figcaption>
</figure>
<p>“The ability to communicate and collaborate with scholars from other academic disciplines who are also researching emerging technologies has functioned to broaden my conceptual horizons and, in the case of scholars from the hard sciences, help keep my research empirically grounded,” said Turner, who earned his Ph.D. in philosophy from the University of Connecticut. “Beyond ND TEC, I have also found the <a href="https://cds.library.nd.edu/" target="_blank">Navari Family Center for Digital Scholarship</a> in the Hesburgh Library to be especially valuable for my teaching and research endeavors.”</p>
<p>While postdocs may contribute to faculty projects, they spend much of their time pursuing their own lines of research, which they then have the chance to share through lunchtime workshops. Turner, for instance, is examining how emerging wearable and implantable AI assistant devices—e.g., smartwatches, smart glasses, smart contact lenses, and neural implants—are poised to affect the human mind from a metaphysical, ethical, and epistemological perspective.</p>
<p>Curto, who received her Ph.D. in AI ethics in a joint program offered by the Universities of Ramon Llull (IQS School of Management), Deusto, and Pontificia Comillas (ICADE Business School), focuses on issues of fairness and inclusion.</p>
<p>“I am using artificial intelligence to find alternative ways to mitigate poverty and discrimination,” she said. “Poverty reduction policies based on the redistribution of wealth have proved insufficient in recent decades, and artificial intelligence offers non-invasive ways to explore the impact of a new generation of policies, contributing to Goal 1 of the UN Sustainable Development Goals.”</p>
<figure class="image-left"><img alt="Georgina Curto Rex" height="600" src="https://techethics.nd.edu/assets/486989/georgina_curto_rex_for_web.jpeg" width="600">
<figcaption>Georgina Curto Rex</figcaption>
</figure>
<p>Curto’s and Turner’s positions are funded by the <a href="https://techethicslab.nd.edu/" target="_blank">Notre Dame-IBM Tech Ethics Lab</a>, the applied arm of ND TEC and the center’s partner in developing the postdoc program. As part of the fellowship, they will both teach an undergraduate seminar in the spring semester that will count toward ND TEC’s <a href="https://techethics.nd.edu/education/tech-ethics-minor/">minor in tech ethics</a>. Curto’s course is called “AI for Good.”</p>
<p>“I am very excited about it because the students attending the course come from different disciplines, and I am sure that, working together, we can come up with projects that provide real-world local solutions to global challenges,” she said.</p>
<p>Even if students aren’t pursuing the minor, they can use Turner’s class, “Internet Ethics,” to fulfill Notre Dame’s requirement for a second course in philosophy.</p>
<p>“Topics we’ll cover include, but are not limited to, internet censorship, surveillance capitalism, echo chambers, fake news, online shaming, online anonymity, the digital divide, the right to be forgotten, the ethics of hacking, the metaverse, and intellectual property rights in the digital age,” he said.</p>
<p>Villegas-Galaviz’s path to Notre Dame was a little different from those of Turner and Curto. Specializing in business ethics, AI ethics, and the ethics of care, she received her doctorate from ICADE Business School at Pontificia Comillas but spent the last year of her Ph.D. studies in residence at Notre Dame, with ND TEC Director Martin serving as one of her dissertation advisors. In addition, the two have collaborated on several papers, and Villegas-Galaviz contributed a chapter to Martin’s recent book <a href="https://techethics.nd.edu/news-and-events/news/new-anthology-by-nd-tec-director-kirsten-martin-explores-ethics-of-data-and-analytics/"><em>Ethics of Data and Analytics: Concepts and Cases</em></a>.</p>
<figure class="image-right"><img alt="Carolina Villegas-Galaviz" height="600" src="https://techethics.nd.edu/assets/486990/carolina_villegas_galaviz_for_web.jpeg" width="600">
<figcaption>Carolina Villegas-Galaviz</figcaption>
</figure>
<p>“The focus of my research is on the ethical implications of the introduction of AI to firms,” said Villegas-Galaviz, whose position is supported by a grant from Microsoft. “To understand the morality within AI, there needs to be a comprehensive study that goes beyond social impact and centers on philosophical analysis. Such an approach focuses on those who design, develop, and deploy AI and the ethical issues they may encounter in their role in those processes.”</p>
<p>She and Martin will each teach two sections of “Ethics of Data Analytics” for undergraduates in the spring.</p>
<p>“ND TEC-affiliated faculty have worked on technology ethics for years,” Villegas-Galaviz said. “The value that this has for emerging scholars is incomparable. Here we receive feedback, help, and guidance from people who really know our fields, understand issues from a comprehensive perspective, and want the best for our academic careers.”</p>
<p>The tech ethics postdoctoral fellowships are open to individuals who are (a) enrolled in a doctoral program or a graduate program leading to a terminal degree in their field (e.g., law) or (b) recent graduates of such a program (within the past two years). ND TEC anticipates next seeking applications in fall 2023.</p>
<p class="attribution"><em>Originally published by <span class="rel-author">Notre Dame Technology Ethics Center</span> at <span class="rel-source"><a href="https://techethics.nd.edu/news-and-events/news/tech-ethics-postdoctoral-fellows-program-advances-emerging-voices-in-the-field/">techethics.nd.edu</a></span> on <span class="rel-pubdate">December 16, 2022</span>.</em></p>
<h3>Notre Dame-IBM Tech Ethics Lab Announces Award Winners From Second Annual CFP</h3>
<p>Following the release of its second annual <a href="https://techethicslab.nd.edu/call-for-proposals/">Call for Proposals (CFP)</a> in August, the Notre Dame-IBM Technology Ethics Lab today (Dec. 15) announced nearly 20 projects that have been selected for awards. The lab will provide more than $930,000 in total funding to this year’s winners.</p>
<p>The 2022–23 CFP is focused on the theme of “Auditing AI,” with the call making up the research component of the lab’s broader <a href="https://techethicslab.nd.edu/auditing-ai-initiative/">Auditing AI Initiative</a>. Project teams, which represent nine countries and every continent but Antarctica, will produce deliverables such as training modules, proof-of-concept tools, and audit frameworks related to the use of AI in areas ranging from education and medicine to hiring processes and the delivery of social services.</p>
<p>“The Notre Dame-IBM Technology Ethics Lab is thrilled to support a distinguished set of projects that will define the newly emerging field of AI auditing,” said <a href="https://techethicslab.nd.edu/people/erin-flynn-klawitter/">Erin Klawitter</a>, the lab’s associate director. “We believe the deliverables from these proposals will clarify best practices that will support practitioners and policymakers as they seek to develop and regulate artificial intelligence.”</p>
<p>“To unlock the potential of data to change our world for the better, society must maintain trust in AI systems,” said Betsy Greytok, IBM Vice President, Ethics & Policy, and co-director of the Notre Dame-IBM Tech Ethics Lab. “AI auditing is a key component of a holistic AI strategy that helps identify any risks such a system may pose and helps to maximize fairness and accuracy.”</p>
<p>Projects will be undertaken and completed throughout 2023, and final deliverables will be accessible through the lab’s website.</p>
<p class="text-left"><a class="btn btn-more" href="https://techethicslab.nd.edu/call-for-proposals/">Read Abstracts of the Projects</a></p>
<p>The lab’s inaugural CFP, which awarded more than $500,000 in funding to 27 proposals this past January, focused on six core themes: the ethics of scale, automation, identification, prediction, persuasion, and adoption. Deliverables from those projects will be linked from the <a href="https://techethicslab.nd.edu/call-for-proposals/2021-cfp/">2021–22 CFP page</a> in the coming months.</p>
<h3>Looking Back at the Rome Call for AI Ethics: A Global University Summit</h3>
<p>Co-organized by the Pontifical Academy for Life, IBM, and the University of Notre Dame and hosted by the Notre Dame-IBM Technology Ethics Lab, the <a href="https://techethicslab.nd.edu/news-and-events/rome-call-for-ai-ethics-a-global-university-summit/">Rome Call for AI Ethics: A Global University Summit</a> was convened in Notre Dame’s McKenna Hall October 26–27.</p>
<p>The <a href="https://www.romecall.org/" target="_blank">Rome Call for AI Ethics</a> is a commitment around ethics, rights, and education, aiming to promote an ethical approach to the design, development, and deployment of AI. It seeks to advance a sense of shared responsibility among international organizations, governments, institutions, and the private sector to create a future in which digital innovation and technological progress are centered around humanity.</p>
<p>Published in February 2020, the Rome Call was originally signed by the Pontifical Academy for Life, Microsoft, IBM, the Food and Agriculture Organization of the United Nations (FAO), and the Italian Ministry of Innovation.</p>
<p>Representatives of <a href="#map">36 universities</a> attended the summit to learn more about the Rome Call, discuss how its principles might be put into practice in the context of higher education, and explore opportunities for collaboration.</p>
<figure class="image-right"><img alt="logo for Rome Call for AI Ethics Global University Summit" height="288" src="https://techethicslab.nd.edu/assets/488633/romecallforaiethics_logo_fullcolor_rgb_800px_72ppi.jpg" width="600"></figure>
<p>The summit featured four keynote addresses:</p>
<ul>
<li>“Who is Responsible for Responsible AI?” – Pascale Fung, Hong Kong University of Science and Technology</li>
<li>
<a href="https://techethicslab.nd.edu/assets/496191/slides_casey_fiesler_final.pdf">“AI Ethics for All: A Broader Perspective on AI Education” (8 MB PDF)</a> – Casey Fiesler, University of Colorado Boulder</li>
<li>
<a href="https://techethicslab.nd.edu/assets/496190/slides_alpesh_shah_final.pdf">“Trustworthy & Responsible AI Considerations: Principles | Policies | Practice” (4 MB PDF)</a> – Alpesh Shah, IEEE</li>
<li>“Ethics-Based Auditing of AI: What It Is and Why It Matters” – Luciano Floridi, University of Oxford</li>
</ul>
<p>Themes that emerged from the keynotes and the panel discussions that followed them included the importance of advancing interdisciplinary approaches to AI ethics, both in research and the education of undergraduate and graduate students, as well as the need to develop strategies to better reach industry and communities.</p>
<p>Participants in the summit had the chance to brainstorm ways the group could pursue these goals. Ideas included creating a common, shared catalogue of courses and modules; jointly advocating with accrediting bodies to include AI ethics in required curricula; pooling resources among collaborating universities; and increasing incentives for faculty to undertake AI ethics research. It was proposed that the universities could provide some of these incentives themselves while also partnering to encourage funding agencies to recognize the importance of this type of work.</p>
<p>Among the 36 universities represented, eight attended the summit for the additional purpose of formally signing the Rome Call:</p>
<ul>
<li>Catholic University of Croatia</li>
<li>Chuo University (Japan)</li>
<li>Schiller International University</li>
<li>SWPS University of Social Sciences and Humanities (Poland)</li>
<li>University of Florida</li>
<li>University of Johannesburg</li>
<li>University of Navarra (Spain)</li>
<li>University of Notre Dame</li>
</ul>
<p>The signing ceremony, which concluded the day-and-a-half event, began with remarks from Dario Gil, IBM Senior Vice President and Director of Research, and Archbishop Vincenzo Paglia, President of the Pontifical Academy for Life, followed by comments from campus leaders at each of the eight universities.</p>
<p>More information about the Rome Call for AI Ethics is available at <a href="https://www.romecall.org/" target="_blank">romecall.org</a>.</p>
<figure class="image-default"><a id="map" name="map"></a><img alt="a map of the world showing the location of universities signing the Rome Call for AI Ethics (denoted with an S) or participating in the summit (denoted with a P)" height="1080" src="https://techethicslab.nd.edu/assets/492118/fullsize/rome_call_summit_map.png" width="1920"></figure>
<p><em>To see the full list of attending universities, go to <a href="https://techethicslab.nd.edu/romecall">techethicslab.nd.edu/romecall</a>.</em></p>
<h3>Notre Dame to Sign Rome Call for AI Ethics, Host Global University Summit</h3>
<p>The University of Notre Dame will formally sign the <a href="https://www.romecall.org/">Rome Call for AI Ethics</a> on Thursday (Oct. 27), together with the University of Navarra in Spain, Catholic University of Croatia, SWPS University in Poland, Schiller International University in Spain, Chuo University in Japan, the University of Johannesburg, and the University of Florida.</p>
<p>The signing will coincide with a <a href="https://techethicslab.nd.edu/news-and-events/rome-call-for-ai-ethics-a-global-university-summit/">Global University Summit</a> on the Rome Call, held Oct. 26–27 and co-organized by the Pontifical Academy for Life, <a href="https://www.ibm.com/artificial-intelligence/ethics">IBM</a>, and Notre Dame. Hosted by the <a href="https://techethicslab.nd.edu/news-and-events/rome-call-for-ai-ethics-a-global-university-summit/">Notre Dame-IBM Technology Ethics Lab</a> and held in person and virtually, the summit will explore ways in which universities can use the complementary roles of research, education, and policy in the development of human-centered approaches to artificial intelligence (AI).</p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"><span style="background:white"></span></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt">“It’s an honor for the University of Notre Dame to host this global summit and support efforts to promote an ethical approach to artificial intelligence,” said <a href="https://www.nd.edu/about/leadership/council/john-t-mcgreevy/" style="position:relative; top:0.5pt; vertical-align:baseline">John T. McGreevy</a>, the University’s Charles and Jill Fischer Provost. “Notre Dame has long recognized the importance of incorporating responsibility and accountability into our teaching and research. As the world’s technological capabilities increase in areas such as AI, we will continue to identify new ways in which we can advance knowledge in service to humankind.”<span style="background:white"></span></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt">Msgr. Vincenzo Paglia, president of the Pontifical Academy for Life, said, “Education is the key process that enables people, especially the fragile (young and old), not to be subjected to the innovative process but to be able to be participatory actors in it. This initiative is the moment to translate these theoretical value instances into academic practices that can produce adequate guidance and social transformation.”</span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt"></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt"><span style="background:white">Darío Gil, senior vice president and director of research at IBM, said, “At IBM, we believe that creating and deploying cutting-edge technologies like AI will transform how we live and work and that this future must be developed responsibly and ethically. As one of the first signers of the Rome Call for AI Ethics, IBM is proud to continue its collaboration with the Pontifical Academy for Life and other like-minded institutions across industry, academia, government and society to ensure we are collectively building a future that is supportive and inclusive of every single person.”</span></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt"><span style="background:#fff2cc"></span></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt">Along with a formal signing ceremony of the new partners, the two-day gathering will include <a href="https://techethicslab.nd.edu/news-and-events/rome-call-for-ai-ethics-a-global-university-summit/" style="position:relative; top:0.5pt; vertical-align:baseline">keynote speakers</a>, roundtables and networking events </span></span><span style="position:relative"><span style="top:0.5pt">to define collaborative strategies universities can take around the Rome Call for AI Ethics. </span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt">The event will be emceed by Francesca Rossi, IBM fellow and AI ethics global leader, together with Rev. Robert A. Dowd, C.S.C., vice president and associate provost for interdisciplinary initiatives at Notre Dame, and Erin Klawitter, associate director of the Notre Dame-IBM Technology Ethics Lab. Archbishop Paglia and IBM’s Gil will provide remarks at the signing ceremony.</span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"><span style="background:white">The Rome Call for AI Ethics – established in 2020 and originally signed by the Pontifical Academy for Life, IBM, Microsoft, FAO and the Italian Ministry of Innovation – is a commitment to promote an ethical approach to the design, development and deployment of AI. It seeks to advance a sense of shared responsibility among international organizations, governments, institutions and the private sector to create a future in which digital innovation and technological progress are focused on humanity. </span></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt">A complete summit agenda is available <a href="https://techethicslab.nd.edu/news-and-events/rome-call-for-ai-ethics-a-global-university-summit/" style="position:relative; top:0.5pt; vertical-align:baseline">here</a>.<span style="background:white"> A full list of participating organizations is available </span><a href="https://www.romecall.org/organisations/" style="position:relative; top:0.5pt; vertical-align:baseline"><span style="background:white">here</span></a><span style="background:white">.</span><span style="background:white"></span></span></span></p>
<p style="text-indent:0in"><span style="position:relative"><span style="top:0.5pt"><span style="background:white"></span></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt"><span style="background:white">Following the summit, the network of participating universities will collaborate regularly to share updates, discuss innovative ideas and democratize AI ethics solutions.</span></span></span></p>
<p style="text-indent:-.15pt"><span style="position:relative"><span style="top:0.5pt"><span style="background:white"></span></span></span></p>
<p style="text-indent:-.15pt"><em><span style="position:relative"><span style="top:0.5pt"><span style="background:white"><strong>Contact:</strong> Shannon Roddel, </span><a href="mailto:Chapla.1@nd.edu" style="position:relative; top:0.5pt; vertical-align:baseline"><span style="background:white">Chapla.1@nd.edu</span></a><span style="background:white"></span></span></span></em></p>
<p class="attribution"><em>Originally published by <span class="rel-author">Shannon Roddel and Carrie Gates</span> at <span class="rel-source"><a href="https://news.nd.edu/news/notre-dame-to-sign-rome-call-for-ai-ethics-host-global-university-summit/">news.nd.edu</a></span> on <span class="rel-pubdate">October 20, 2022</span>.</em></p>
<h4>Lab Seeking “Auditing AI” Proposals as Part of Annual CFP</h4>
<p class="attribution"><em>Published August 30, 2022.</em></p>
<p><em><strong>*Note: The application period for the 2022-2023 CFP has closed. Check back later this fall for award announcements.</strong></em></p>
<p>Each year, the Notre Dame-IBM Tech Ethics Lab releases a Call for Proposals (CFP) for the purpose of funding practical and applied interdisciplinary research in tech ethics.</p>
<p>The focus of the 2022-23 CFP is “Auditing AI.” Potential areas for research and scholarship include, but are not limited to, the following:</p>
<ul>
<li>Scope of AI audits</li>
<li>Regulatory frameworks for AI audits</li>
<li>Methodologies for AI audits</li>
<li>Skills for future AI auditors</li>
<li>Teaching methodologies for AI audits</li>
<li>Impact of AI audits on various sectors and industries</li>
<li>Suggested best practices for AI audits</li>
<li>Adoption and deployment of AI audits</li>
</ul>
<p>Successful applications will propose a defined deliverable (such as, but not limited to, research papers, draft policy, model legislation, teaching materials, and impact assessments) that addresses the above challenges, to be completed between January 1, 2023, and December 31, 2023. Applicants may apply for up to $60,000 USD in funding.</p>
<p>Information on how to apply is available at <a href="https://techethicslab.nd.edu/call-for-proposals/">techethicslab.nd.edu/call-for-proposals</a>.</p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h4>Dr. Shannon Vallor Explains ‘Virtue Ethics and Technomoral Futures’ on Lab Podcast</h4>
<p class="attribution"><em>Published June 2, 2022.</em></p>
<p>A philosopher whose research explores the ethics of emerging science and technologies, Dr. Shannon Vallor is particularly well known for her work in virtue ethics. She describes this as a tradition “rooted in the notion of character, and the way in which our actions and our habits shape our moral character,” a principle that resonates across a number of cultures.</p>
<p>“So I was really interested in this as a way to think about technology precisely because what technologies do when they are new is they transform our habits, they transform the things that we do every day,” Dr. Vallor said on the fourth episode of the Notre Dame-IBM Tech Ethics Lab podcast <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">Tech on Earth</a>. “And virtue ethics says that it's precisely the things that you do every day that determine the shape of your character and your ability to live well with others.”</p>
<p>Dr. Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, where she directs the Centre for Technomoral Futures. Her use of the term “technomoral” is a way to emphasize, counter to the thesis of technological neutrality, that technology cannot in fact be separated from our values, and vice versa.</p>
<p>“This habit that we have of treating technology and morality as entirely independent, separate areas of study or interest is actually part of the problem of why our society is struggling right now to align innovation and economic growth and scientific progress with social and political flourishing. … So I talk about technomoral virtues, I talk about technomoral futures, because I want to remind us that until we begin to understand the integration of technology with our values and the mutually dependent relationship of these domains, we won't be able to solve the problems that are facing us today.”</p>
<p>Host Elizabeth Renieris also asked Dr. Vallor about the idea of moral debt—accrued when society puts off dealing with moral problems that it’s deemed tolerable at the moment—as it relates to artificial intelligence.</p>
<p>“AI systems are being used as Band-Aids or sort of easy technical fixes for big social problems, like the problems involved in distributing public benefits in a fair and equitable way, or the economic challenges of keeping competitive with the global economy,” Dr. Vallor said. “A lot of businesses, a lot of governments are rushing to use AI to save time and save costs, but they're often implementing AI in ways that are not robust, not particularly safe, that tend to amplify social injustices and inequalities. And those costs are going to come due; those costs always come due.”</p>
<p>Dr. Vallor noted that paying attention to ethics at the design stage is one important strategy for limiting this debt but that appropriate regulation, corporate responsibility in deployment, and systems that study how technologies are actually working out in the world are equally critical to restoring tech to its proper place.</p>
<p>“Not something that takes over from us,” she said, “not something that replaces us, but something that makes us better and makes us able to live better with one another. So I still very much believe in the power of technology as one way in which humans have always lived well. But we have to be able to understand the role that we play and the responsibilities that we have for ensuring that that alignment of technology and human flourishing is stable.”</p>
<p>A member of the faculty at the University of Edinburgh since 2020, Dr. Vallor was previously Regis and Dianne McKenna Professor of Philosophy at Santa Clara University. She studies how human character is being transformed by rapid advances in artificial intelligence, robotics, new social media, surveillance, and biomedical technologies.</p>
<p>Dr. Vallor is the author of the book <em>Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting</em> and the editor of <em>The Oxford Handbook of Philosophy of Technology</em>, both published by Oxford University Press. Currently the chair of the Scottish government’s Data Delivery Group, she was the winner of the 2015 World Technology Award in Ethics from the World Technology Network.</p>
<p>Tech on Earth is a podcast aimed at bringing a practical lens to tech ethics around the globe. You can listen to the episode with Dr. Vallor by using the player below, visiting the podcast’s homepage at <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">techethicslab.nd.edu/news-and-events/tech-on-earth</a> (includes a written transcript), or finding Tech on Earth in your favorite podcast app.</p>
<div id="buzzsprout-player-10646486"> </div><script src="https://www.buzzsprout.com/1940484/10646486-virtue-ethics-and-technomoral-futures.js?container_id=buzzsprout-player-10646486&player=small" type="text/javascript" charset="utf-8"></script>
<p><strong>Find the Podcast</strong></p>
<p style="font-size:x-large"><a href="https://podcasts.apple.com/us/podcast/tech-on-earth/id1611036939" target="_blank"><span class="icon" data-icon="apple-podcasts"></span></a> <a href="https://open.spotify.com/show/1mbZ047KpiZ2Mosuo9xCih" target="_blank"><span class="icon" data-icon="spotify"></span></a> <a href="https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS8xOTQwNDg0LnJzcw==" target="_blank"><span class="icon" data-icon="google-podcasts"></span></a> <a href="https://music.amazon.com/podcasts/8e522768-107d-4bee-baa3-484de17f9d03/tech-on-earth" target="_blank"><span class="icon" data-icon="amazon-music"></span></a> <a href="https://www.stitcher.com/show/tech-on-earth" target="_blank"><span class="icon" data-icon="stitcher"></span></a> <a href="https://www.podchaser.com/podcasts/tech-on-earth-4253711" target="_blank"><span class="icon" data-icon="podchaser"></span></a></p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h4>Tech on Earth Features Dr. Amana Raquib on ‘Islam and Postmodern Technology’</h4>
<p class="attribution"><em>Published May 12, 2022.</em></p>
<p>When Dr. Amana Raquib began studying ethical issues associated with science and technology, she was struck by the need for scholarship informed by ethical traditions familiar to her as a Muslim.</p>
<p>“Not just as something that require[d] a deconstruction because there [was] a lot of critique already that was being produced within the Western … academic literature in the past few decades,” Dr. Raquib said on the third episode of the Notre Dame-IBM Tech Ethics Lab podcast <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">Tech on Earth</a>. “So I thought that there needs to be some constructive work that needs to be done through an Islamic … metaphysical, epistemological, and ethical perspective.”</p>
<p>An assistant professor of social sciences and liberal arts at the Institute of Business Administration Karachi in Karachi, Pakistan, Dr. Raquib is the author of the book <em>Islamic Ethics of Technology: An Objectives’ (Maqāṣid) Approach</em>.</p>
<p>She explained how the Maqāṣid is a paradigm consisting of several fundamental objectives that early Muslim theologians derived from the Islamic Scriptures, and that from an Islamic perspective, whatever human beings do or don’t do—with respect to technology or anything else—should honor these principles to secure the well-being of all humankind. She contrasted this against what she calls our era of postmodern technology, where innovation and new technologies are often pursued for their own sake, without being guided by any moral tradition.</p>
<p>“When we don't have any foundation to give us any sort of final values to aim to … what happens is that efficiency becomes the highest value or rather the norm, right?” Dr. Raquib said. “And it becomes enough to just say, you know, as a justification, or as a rationale behind anything, or any technology or technological application, that it saves up on time, it's more efficient, it saves up on labor, so on and so forth.”</p>
<p>In addition, she and host Elizabeth Renieris discussed the relationship between the individual and the collective good in Islam.</p>
<p>“So also the paradigm that I've used, the objectives paradigm … it's very clear in that … if the individual good is conflicting with the collective good, then the collective good … is to be prioritized,” Dr. Raquib said.</p>
<p>She went on to tell Renieris that she doesn’t believe that tackling ethical issues with a specific technology is possible without a more holistic examination of the value frameworks in which tech is being developed.</p>
<p>“If somebody asked me, you know, We are designing this … one technology, can you come and help us with that one technology, I don't think it would work this way, right? Because one technology, again, is part of a greater nexus, you know, a web of technologies.”</p>
<p>Dr. Raquib, who holds a Ph.D. in religion, philosophy, and ethics from the University of Queensland, has been a faculty member at the Institute of Business Administration Karachi since 2015. Among other courses, she teaches “Are We Becoming Post-Human: Technology, Society, Ethics.”</p>
<p>She recently delivered two talks at the International Conference on Islamic Ethics and AI, including “Developing Human Beneficial AI Using Guidance from Islamic Maqasid.” In 2020, her project “Culturally Informed Pro-Social AI Regulation and Persuasion Framework” received a grant under the Facebook Research Ethics in AI Research Initiative for the Asia Pacific.</p>
<p>Tech on Earth is a podcast aimed at bringing a practical lens to tech ethics around the globe. You can listen to the episode with Dr. Raquib by using the player below, visiting the podcast’s homepage at <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">techethicslab.nd.edu/news-and-events/tech-on-earth</a> (includes a written transcript), or finding Tech on Earth in your favorite podcast app.</p>
<div id="buzzsprout-player-10516687"> </div><script src="https://www.buzzsprout.com/1940484/10516687-islam-and-postmodern-technology.js?container_id=buzzsprout-player-10516687&player=small" type="text/javascript" charset="utf-8"></script>
<p><strong>Find the Podcast</strong></p>
<p style="font-size:x-large"><a href="https://podcasts.apple.com/us/podcast/tech-on-earth/id1611036939" target="_blank"><span class="icon" data-icon="apple-podcasts"></span></a> <a href="https://open.spotify.com/show/1mbZ047KpiZ2Mosuo9xCih" target="_blank"><span class="icon" data-icon="spotify"></span></a> <a href="https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS8xOTQwNDg0LnJzcw==" target="_blank"><span class="icon" data-icon="google-podcasts"></span></a> <a href="https://music.amazon.com/podcasts/8e522768-107d-4bee-baa3-484de17f9d03/tech-on-earth" target="_blank"><span class="icon" data-icon="amazon-music"></span></a> <a href="https://www.stitcher.com/show/tech-on-earth" target="_blank"><span class="icon" data-icon="stitcher"></span></a> <a href="https://www.podchaser.com/podcasts/tech-on-earth-4253711" target="_blank"><span class="icon" data-icon="podchaser"></span></a></p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h4>Father Paolo Benanti Explores Rome Call for AI Ethics on Tech on Earth</h4>
<p class="attribution"><em>Published April 19, 2022.</em></p>
<p>Sponsored by the Pontifical Academy for Life (PAV), the <a href="https://www.romecall.org/" target="_blank">Rome Call for AI Ethics</a> was signed in Rome on February 28, 2020, by the PAV, IBM, Microsoft, the UN’s Food and Agriculture Organization, and the Italian government’s Ministry of Innovation.</p>
<p>To hear Father Paolo Benanti, a member of the PAV, describe the Rome Call is to gain an appreciation for both the simplicity of its purpose and the loftiness of its goals.</p>
<p>“If you go in Silicon Valley, you can hear from a lot of programmers that being in Silicon Valley today, it's like to be in Florence during the Renaissance, there is a lot of idea of something new that they are producing,” Fr. Benanti said on the second episode of the Notre Dame-IBM Tech Ethics Lab podcast <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">Tech on Earth</a>.</p>
<p>“Try to write ‘Renaissance’ with capital AI in the middle. So if the Renaissance was the time when we discovered again the centrality of the human beings, RenAIssance in AI means to start to develop an AI system that is human-centered, that has the human beings as the core and as the ends. … [The] Rome Call [has] tried to be the blueprint of this new RenAIssance.”</p>
<p>A Franciscan of the Third Order Regular, Fr. Benanti is Extraordinary Professor of Moral Theology, Bioethics, Neuroethics, and Ethics of Technologies at the Pontifical Gregorian University in Rome. He spoke with Tech on Earth host Elizabeth Renieris, the Lab’s founding director, about the Rome Call’s promotion of what he and others (including Pope Francis) have termed “algorethics.”</p>
<p>“If a machine can give you or deny to you to borrow money from a bank, if a machine can give or deny you some kind of constitutional right—like in the trial in the tribunal—well, this machine has not only to execute a code; it's also to understand human-produced ethical value,” Fr. Benanti said. “But these ethical values, this moral law, now has to be computable in an algorithmical way [so] as to be … understandable by the machine. Algorethics is this new chapter of this old journey of the human beings on the Earth that is traced in ethics.”</p>
<p>Fr. Benanti started his academic career in mechanical engineering before entering the Franciscan order and pursuing theology and philosophy. He holds a doctorate in moral theology from the Pontifical Gregorian University and won the university’s Vedovato Award for his dissertation “The Cyborg: Corpo e corporeità nell’epoca del postumano.”</p>
<p>His research focuses on the management of innovation, particularly as it relates to the internet and the impact of the Digital Age, biotechnologies for human improvement and biosecurity, and neuroscience and neurotechnology. Among his many publications is the ebook <em>Homo Faber: The Techno-Human Condition</em> (EDB 2018).</p>
<p>For Fr. Benanti, the study of technology and tech ethics is a natural extension of his faith.</p>
<p>“As a Christian Catholic, believing that we are creature[s] made by God, that mean[s] that everything that we are is in the desire of God,” he said. “So our reason, our ability to understand, our ability to project and do things are not just an accident; it's something that [is] given us to take care and to allow the land, the promised land, to [become] fruitful. So it's something that we can use to produce much more wellness for everyone.”</p>
<p>Tech on Earth is a podcast aimed at bringing a practical lens to tech ethics around the globe. You can listen to the episode with Fr. Benanti by using the player below, visiting the podcast’s homepage at <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">techethicslab.nd.edu/news-and-events/tech-on-earth</a> (includes a written transcript), or finding Tech on Earth in your favorite podcast app.</p>
<div id="buzzsprout-player-10391617"> </div><script src="https://www.buzzsprout.com/1940484/10391617-a-renaissance-the-rome-call-for-ai-ethics.js?container_id=buzzsprout-player-10391617&player=small" type="text/javascript" charset="utf-8"></script>
<p><strong>Subscribe to the Podcast</strong></p>
<p style="font-size:x-large"><a href="https://podcasts.apple.com/us/podcast/tech-on-earth/id1611036939" target="_blank"><span class="icon" data-icon="apple-podcasts"></span></a> <a href="https://open.spotify.com/show/1mbZ047KpiZ2Mosuo9xCih" target="_blank"><span class="icon" data-icon="spotify"></span></a> <a href="https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS8xOTQwNDg0LnJzcw==" target="_blank"><span class="icon" data-icon="google-podcasts"></span></a> <a href="https://music.amazon.com/podcasts/8e522768-107d-4bee-baa3-484de17f9d03/tech-on-earth" target="_blank"><span class="icon" data-icon="amazon-music"></span></a> <a href="https://www.stitcher.com/show/tech-on-earth" target="_blank"><span class="icon" data-icon="stitcher"></span></a> <a href="https://www.podchaser.com/podcasts/tech-on-earth-4253711" target="_blank"><span class="icon" data-icon="podchaser"></span></a></p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h4>Podcast Discusses Buddhist Ethics With the Venerable Tenzin Priyadarshi</h4>
<p class="attribution"><em>Published March 29, 2022.</em></p>
<p>“I think what Buddhism is constantly reminding us is, uncertainty is reality.”</p>
<p>Those are the words of the Venerable Tenzin Priyadarshi, founding president and CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. He was the guest for the first episode of <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">Tech on Earth</a>, the Notre Dame-IBM Tech Ethics Lab podcast aimed at bringing a practical lens to tech ethics around the globe.</p>
<p>A Buddhist monk, Venerable Tenzin described Buddhist ethics as “not just a normative approach” but also as didactic and reflexive.</p>
<p>“So the idea is not that, let's just abide by certain rules and regulations that [are] created by a certain group of people, but sort of an ongoing, healthy conversation about what ethical imagination is … reminding us of the fact of the complexity of the world that we live in, that not everything that is legal may be ethical.”</p>
<p>In the course of his conversation with host Elizabeth Renieris, the Lab’s founding director, Venerable Tenzin offered a primer on Buddhist ethics generally and what they mean in the context of technology specifically. This included a discussion of our increasingly virtual lives during COVID as well as a principle he calls “ethics by design.”</p>
<p>“One of the key issues is that if we ramp up things so fast, the negative cost of it on our society is perhaps so expansive that it's difficult to get back, it's difficult to ramp back, meaning you cannot really undo certain kinds of deployments,” Venerable Tenzin said. “And so part of my push was that why can't we have the conversation around ethical framing at the design stage? Meaning rather than just having engineers in the room or marketing psychologists in the room, why not also have certain kinds of individuals who can at least inform us creatively as to what could go wrong?”</p>
<p>The episode covered how Venerable Tenzin would apply Buddhist ethical framing to case studies involving technology in the automotive, healthcare, and defense industries and the role of educational institutions in preparing students to be ethical leaders.</p>
<p>“We have to recognize, as educators, that learning ethics is not magic,” he said. “You know, learning ethics is not genetics, so to speak, that you will have certain individuals who would wake up one day and become ethical all of a sudden. And it is the responsibility of education institutions to pay ... attention to ethical learning as much as we are paying attention to business leadership and tech leadership and designing products, either for consumer orientation or for [the] military and so on.”</p>
<p>In addition to his current role leading The Dalai Lama Center for Ethics and Transformative Values, Venerable Tenzin has served as director of the ethics initiative at the MIT Media Lab and as a fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University. He is also the founding director and president of the Prajnopaya Institute and Foundation, a worldwide humanitarian organization which provides care for all regardless of ethnicity, religion, or gender by developing innovative health, education, and social welfare programs.</p>
<p>He entered a Buddhist monastery at the age of 10, studying traditional Indo-Tibetan and Japanese Buddhism. He was ordained by His Holiness The Dalai Lama, who is his spiritual mentor.</p>
<p>You can listen to the episode with Venerable Tenzin by using the player below, visiting the podcast’s homepage at <a href="https://techethicslab.nd.edu/news-and-events/tech-on-earth/">techethicslab.nd.edu/news-and-events/tech-on-earth</a> (includes a written transcript), or finding Tech on Earth in your favorite podcast app.</p>
<div id="buzzsprout-player-10254292"> </div><script src="https://www.buzzsprout.com/1940484/10254292-a-buddhist-lens-on-tech-ethics.js?container_id=buzzsprout-player-10254292&player=small" type="text/javascript" charset="utf-8"></script>
<p><strong>Subscribe to the Podcast</strong></p>
<p style="font-size:x-large"><a href="https://podcasts.apple.com/us/podcast/tech-on-earth/id1611036939" target="_blank"><span class="icon" data-icon="apple-podcasts"></span></a> <a href="https://open.spotify.com/show/1mbZ047KpiZ2Mosuo9xCih" target="_blank"><span class="icon" data-icon="spotify"></span></a> <a href="https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS8xOTQwNDg0LnJzcw==" target="_blank"><span class="icon" data-icon="google-podcasts"></span></a> <a href="https://music.amazon.com/podcasts/8e522768-107d-4bee-baa3-484de17f9d03/tech-on-earth" target="_blank"><span class="icon" data-icon="amazon-music"></span></a> <a href="https://www.stitcher.com/show/tech-on-earth" target="_blank"><span class="icon" data-icon="stitcher"></span></a> <a href="https://www.podchaser.com/podcasts/tech-on-earth-4253711" target="_blank"><span class="icon" data-icon="podchaser"></span></a></p>
<p class="attribution"><em>Tech Ethics Lab</em></p>
<h4>Notre Dame-IBM Tech Ethics Lab Announces Projects Recommended for CFP Funding</h4>
<p class="attribution"><em>Published January 28, 2022.</em></p>
<p>On Friday, Jan. 28, the Notre Dame-IBM Tech Ethics Lab announced more than two dozen projects recommended for funding totaling over $500,000. The projects were submitted in response to the lab’s initial <a href="https://techethicslab.nd.edu/call-for-proposals/">Call for Proposals (CFP)</a>.</p>
<p>The call sought proposals focused on at least one of six core themes related to the ethics of: scale, automation, identification, prediction, persuasion, and adoption. More than 100 proposals were received, representing every continent but Antarctica, with North America, Africa, and Europe leading the way.</p>
<p>“We are humbled by the robust response to our inaugural Call for Proposals from applicants from every corner of the globe,” said Elizabeth Renieris, the lab’s founding director. “We are happy to provide funding to the awarded projects, confident they will provide valuable insights into these critical themes in technology ethics, and look forward to sharing those learnings with the lab’s wider community.”</p>
<p>“New technologies are unlocking insights and innovative solutions with potential to solve some of society’s biggest challenges, but ethics and responsibility must remain at the heart of how these technologies are built and deployed," said Betsy Greytok, IBM Vice President, Ethics & Policy, and co-director of the Notre Dame-IBM Tech Ethics Lab. "We look forward to the work these projects will deliver and are confident they will advance global thinking and best practices for how organizations can maintain public trust in these new solutions.”</p>
<p>Projects will be undertaken and completed this year, and final deliverables will be accessible through the lab’s website.</p>
<p>The titles of the proposals recommended for funding are included below along with the names of the awardees and brief descriptions of the planned deliverables.</p>
<h4>Artificial Justice</h4>
<p><strong>Awardees: Halsey Burgund</strong> (MIT Open Documentary Lab), <strong>Sarah Newman</strong> (Harvard University), <strong>Jessica Silbey</strong> (Boston University)</p>
<p><em>Deliverables: Interactive website and short article that imagine what it would be like if U.S. Supreme Court decisions were handed down by natural language processing models</em></p>
<h4>Assessing Africa’s Policy Readiness Towards Responsible Artificial Intelligence</h4>
<p><strong>Awardee: Erick Otieno</strong> (Reallink Ltd.)</p>
<p><em>Deliverables: Desktop analysis, interviews, micro-ethnography, stakeholder workshops, and policy brief exploring understanding of responsible AI among the African population</em></p>
<h4>An Audit for Children-Nudging: Games and Social Media</h4>
<p><strong>Awardees: Marianna Ganapini </strong>(Union College), <strong>Enrico Panai</strong> (ForHumanity)</p>
<p><em>Deliverable: Audit framework for evaluating the ethical use of nudging AI technologies in gaming and social media aimed at children, including best practices and risk-mitigation strategies</em></p>
<h4>Comparative Analysis of Risks and Benefits of Digital Identification Systems in DRC, Gabon, Cameroon and Republic of Congo</h4>
<p><strong>Awardees: Divine Enkando </strong>(Data Rights Lab), <strong>Narcisse Mbunzama</strong> (Digital Security Group)</p>
<p><em>Deliverables: Report and workshops on the risks and benefits of digital ID systems used in four Central African countries, specifically focused on activists, journalists, and NGO workers in a high-risk environment</em></p>
<h4>The Complete Picture Project</h4>
<p><strong>Awardees: Devangana Khokhar</strong> (Outsight International), <strong>Dan McClure </strong>(Outsight International), <strong>Denise Soesilo</strong> (Outsight International)</p>
<p><em>Deliverables: Open-source test dataset and roadmap that can be used to detect algorithmic gender biases</em></p>
<h4>Developing Model Legislation for the Operationalization of Information Fiduciaries for AI Governance</h4>
<p><strong>Awardees: Josh Lee</strong> (ETPL.Asia), <strong>Lenon Ong</strong> (ETPL.Asia), <strong>Elizaveta Shesterneva</strong> (ETPL.Asia)</p>
<p><em>Deliverables: Roundtable sessions, model legislation, and a policy paper identifying scope for information fiduciaries in the context of AI</em></p>
<h4>Diagnosis and Mitigation of Bias from Latin America Towards the Construction of Tools and a Framework for Latin American Ethics in AI</h4>
<p><strong>Awardees: Luciana Benotti</strong> (Universidad Nacional de Córdoba), <strong>Beatriz Busaniche</strong> (Universidad de Buenos Aires), <strong>María Lucía Gonzalez Dominguez</strong> (Universidad Nacional de Córdoba)</p>
<p><em>Deliverables: Techniques, survey, and best practices for detecting, preventing, and mitigating biases in Spanish-language natural language processing models</em></p>
<h4>Duty of Data Loyalty Model Legislation</h4>
<p><strong>Awardees: Woodrow Hartzog</strong> (Northeastern University), <strong>G.S. Hans</strong> (Vanderbilt University), <strong>Neil Richards</strong> (Washington University in St. Louis)</p>
<p><em>Deliverable: Model U.S. federal and state legislation that would impose a duty of data loyalty on companies holding human information</em></p>
<h4>Ethical Issues Associated With Pervasive Eye-Tracking </h4>
<p><strong>Awardees: Shaun Foster</strong> (Rochester Institute of Technology), <strong>Evan Selinger</strong> (Rochester Institute of Technology)</p>
<p><em>Deliverables: Virtual reality (VR) scenarios for use with eye-tracking hardware and a public service video raising awareness of the dangers of gaze-tracking in VR environments</em></p>
<h4>The Ethical Radicals</h4>
<p><strong>Awardee: Freyja Van den Boom</strong> (Bournemouth University)</p>
<p><em>Deliverables: Tool and ethical guidelines for adopting and monitoring fair automated decision-making tools for the insurance industry</em></p>
<h4>Ethics Experiment on Designing Character for AI</h4>
<p><strong>Awardees: Charles Ikem</strong> (PolicyLab Africa), <strong>Sudha Jamthe</strong> (Stanford University)</p>
<p><em>Deliverables: White paper and webinar focused on a framework for building AI with characteristics, or “personality traits,” of trust, fairness, and transparency </em></p>
<h4>Examining Dark Patterns in Apps Used by Adolescents</h4>
<p><strong>Awardee: Sundaraparipurnan Narayanan </strong>(Independent Researcher)</p>
<p><em>Deliverables: Workshop and white paper examining the impact of digital nudging in mobile apps used by adolescents</em></p>
<h4>Explainable and Auditable AI in the Nexus of Climate Change and Food Security</h4>
<p><strong>Awardees: Winston Ojenge</strong> (African Centre for Technology Studies),<strong> Catherine Kilelu</strong> (African Centre for Technology Studies), <strong>Joel Onyango</strong> (African Centre for Technology Studies)</p>
<p><em>Deliverables: Framework and policy brief for the use of ML/AI tools to monitor how climate change influences crop yields in Kenya, including recommendations for governing data</em></p>
<h4>Exploring Local Post-Hoc Explanation Methods in Tax-Related AI Systems</h4>
<p><strong>Awardees: Marco Almada</strong> (European University Institute), <strong>Błażej Kuźniacki</strong> (University of Amsterdam), <strong>Kamil Tylinski</strong> (Mishcon de Reya LLP)</p>
<p><em>Deliverable: White paper focused on designing transparent AI systems to help taxpayers understand AI tax-administration decisions, such as risk profiling and determinations of fraud</em></p>
<h4>A Framework for Identification, Review, and Resolution of Ethical Issues in Healthcare Machine Learning Projects</h4>
<p><strong>Awardees: </strong><strong>Moses Thiga</strong> (Kabarak University), <strong>Pamela Kimeto</strong> (Kabarak University), <strong>Jeremiah Fadugba</strong> (University of Ibadan)</p>
<p><em>Deliverables: Workshops and guidelines for machine learning practitioners working on healthcare ML solutions to encourage consideration of bioethics concerns</em></p>
<h4>From Ethical Models to Good Systems: A Data Labeling Service for AI Ethics</h4>
<p><strong>Awardees: Andrew Brozek</strong> (Craftinity), <strong>Thomas Gilbert</strong> (Cornell Tech), <strong>Megan Welle</strong> (Daios)</p>
<p><em>Deliverable: White paper outlining a service for monitoring how training data translates to AI model outputs and enabling recognition and correction of errors</em></p>
<h4>Human-Beneficial Decision-Making by Means of Augmented Reality Serious Gaming</h4>
<p><strong>Awardee: Ida Romana Helena Rust</strong> (University of Twente)</p>
<p><em>Deliverable: White paper or product design principles exploring how serious gaming elements in augmented reality can help to distinguish independently made decisions from those nudged or imposed by smart technologies</em></p>
<h4>Identifying Common Typologies of Harm in Forecasting Systems</h4>
<p><strong>Awardees: Nathaniel Raymond</strong> (Yale University), <strong>Bahman Rostami-Tabar</strong> (Cardiff University)</p>
<p><em>Deliverable: White paper on potential harms of forecasting technologies and recommendations to address them</em></p>
<h4>Increasing Venture Capital Investment in Ethical Tech</h4>
<p><strong>Awardees: Ravit Dotan</strong> (University of Pittsburgh), <strong>Leehe Skuler</strong> (Global Impact Tech Alliance – GITA)</p>
<p><em>Deliverable: A framework for VC stakeholders seeking to incorporate ethical AI criteria in investment strategies, accelerating the adoption of ethical AI standards across the tech industry</em></p>
<h4>InterpretMe 2.0: A Web Tool for Community-Centered Interpretation of Social Media Posts</h4>
<p><strong>Awardees: Siva Mathiyazhagan</strong> (Columbia University), <strong>Desmond Patton</strong> (Columbia University)</p>
<p><em>Deliverables: Web-based tool, article, and design principles for promoting a community-centered response to possible biases against and harms to vulnerable communities on social media</em></p>
<h4>A Manual of Ethical UX Design Principles</h4>
<p><strong>Awardees: Shyam Krishnakumar</strong> (Pranava Institute), <strong>Titiksha Vashist</strong> (Pranava Institute)</p>
<p><em>Deliverable: Ethical framework for identifying design choices that promote dark patterns and best practices to avoid them </em></p>
<h4>Promoting Human Values in the Design, Development, and Policies of Brain-Machine Interfaces</h4>
<p><strong>Awardees: Margot Hanley</strong> (Cornell Tech), <strong>Karen Levy </strong>(Cornell University), <strong>Guy Wilson</strong> (Stanford University)</p>
<p><em>Deliverables: White paper and industry/government/civil society roundtables on policy recommendations for brain-machine interfaces</em></p>
<h4>A Responsible Development Biometric Deployment Handbook</h4>
<p><strong>Awardees: James Eaton-Lee</strong> (Simprints), <strong>Alexandra Grigore</strong> (Simprints), <strong>Stephen Taylor</strong> (Simprints)</p>
<p><em>Deliverables: Handbook, presentations, and webinar for responsibly using biometric technologies in international development contexts</em></p>
<h4>Reversing the Mirror: Toward Ethical, Community-Centric Biometric Governance</h4>
<p><strong>Awardees: Shankar Narayan</strong> (Independent Researcher), <strong>Nandini Ranganathan</strong> (CETI, Portland State University), <strong>Hanson Hosein</strong> (HRH Media Group LLC)</p>
<p><em>Deliverables: Convening, toolkit, literature review, and white paper connecting community members potentially impacted by biometric surveillance technologies with regulators, lawmakers, and technology producers to create a community-centered agenda for biometric technologies</em></p>
<h4>Solving Ethical Challenges in the Design of Open-Source Environments: Scaling Urban Mapping Models in View of the Locus Charter</h4>
<p><strong>Awardees: Lorraine Oliveira</strong> (Independent Researcher),<strong> Julio Pedrassoli</strong> (MapBiomas Project), <strong>Monika Kuffer </strong>(University of Twente)</p>
<p><em>Deliverables: Open-source script and metadata on addressing data misuse and bias when geospatially mapping deprived areas in low-to-middle-income urban environments</em></p>
<h4>What Really Works? A Study of the Effectiveness of AI Ethical Risk-Mitigation Initiatives</h4>
<p><strong>Awardees: Ali Hasan</strong> (BABL AI), <strong>Ben Lange</strong> (BABL AI), <strong>Shea Brown</strong> (BABL AI)</p>
<p><em>Deliverables: Dashboard and best practices report focusing on the effectiveness of various AI ethical risk-mitigation initiatives</em></p>
<h2>2021 at the Lab</h2>
<p>The Notre Dame-IBM Technology Ethics Lab was founded with a mission to promote broad-based, far-reaching interdisciplinary research, thought, and policy leadership in artificial intelligence and other areas of technology ethics. It does so by engaging with relevant stakeholders to examine real-world challenges and provide practical models and applied solutions.</p>
<p>This mission played out in a number of ways in 2021, our first full year of activity. Starting with a foundational leadership hire and continuing through a call for research proposals that will culminate in award announcements in early 2022, the lab began to develop its identity in the tech ethics space. </p>
<p>Here’s a look back at these and other highlights.</p>
<h3>Appointment of the Lab’s Founding Director</h3>
<p>In January, Elizabeth M. Renieris, a technology and human rights fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government and a practitioner fellow at Stanford University’s Digital Civil Society Lab, was appointed the lab’s founding director.</p>
<p>An internationally recognized expert in law and policy whose work and research focus on data governance and the human rights implications of advanced and emerging technologies, Renieris is a leading authority on digital identity, cross-border data protection and privacy laws, and technologies such as blockchain and AI.</p>
<p><a href="https://news.nd.edu/news/elizabeth-m-renieris-appointed-founding-director-of-the-notre-dame-ibm-tech-ethics-lab/" target="_blank"><strong>Read the News Release</strong></a></p>
<h3>The Launch of TEC Talks</h3>
<figure class="image-right"><img alt="TEC Talks logo" height="300" src="https://techethicslab.nd.edu/assets/443581/tectalk_600x300_resized.jpg" width="600"></figure>
<p>TEC Talks, a virtual speaker series created in partnership with the Notre Dame Technology Ethics Center, got started last spring with seven sessions on the theme of “Misinformation and Disinformation.” This fall, seven more conversations examined different aspects of “Technology and Power.”</p>
<p>Many thanks to our friends at ThinkND for distributing these first two TEC Talks series through their website, where you can find videos of the events. Audio of the sessions is also available through the ThinkND podcast.</p>
<p><strong><a href="https://think.nd.edu/bq/tec-1/" target="_blank">Series 1: Misinformation and Disinformation</a></strong></p>
<p><strong><a href="https://think.nd.edu/bq/tec-2/" target="_blank">Series 2: Technology and Power</a></strong></p>
<h3>Inaugural Event: “Ethics in Action” Panel</h3>
<p>According to the UN Secretary General's Roadmap on Digital Cooperation, there are more than 160 distinct organizational, national, and international sets of AI ethics and governance principles worldwide, with even more addressing technology ethics more broadly.</p>
<p>This April panel convened a group of experts from across academia, industry, standards-setting bodies, the public sector, and civil society to share their perspectives on translating these frameworks, principles, and guidelines into action.</p>
<p><a href="https://techethicslab.nd.edu/news/blog-ethics-in-action-panel/" target="_blank"><strong>Read the Recap</strong></a></p>
<p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/OYlVrQp_x3A" title="YouTube video player" width="560"></iframe></p>
<h3>Renieris Testimony for the U.S. House of Representatives Financial Services Committee</h3>
<p>On Friday, July 16, Renieris testified before the U.S. House of Representatives Financial Services Committee’s Task Force on Artificial Intelligence in a virtual hearing titled “I Am Who I Say I Am: Verifying Identity While Preserving Privacy in the Digital Age.”</p>
<p>The hearing concerned the future of digital identity frameworks in the United States; the development of secure, reliable, and interoperable digital identity solutions that minimize fraud and identity theft while respecting individual privacy and security; and the proposed Improving Digital Identity Act of 2020, a bipartisan bill seeking to set government-wide policy to modernize U.S. digital identity infrastructure.</p>
<p><a href="https://techethicslab.nd.edu/news/blog-house-financial-services-committee-testimony/" target="_blank"><strong>Read the Story</strong></a></p>
<h3>Call for Proposals</h3>
<p>This fall, the lab released an inaugural call for proposals with the aim of funding practical and applied interdisciplinary projects focused on six core themes: </p>
<ul>
<li>
<strong>Scale: </strong>Projects that address the limits of networked technologies; the risks of large data models; frameworks for mitigating systemic risks; or methods for scaling safely and responsibly</li>
<li>
<strong>Automation: </strong>Projects that address how we preserve autonomy in the face of automation; the risks of automated processing/algorithmic decision-making; or how to revive and apply the right to the freedom of thought to digital technologies</li>
<li>
<strong>Identification: </strong>Projects that address how to design ethical digital ID schemes; the ethics of reputational or scoring systems; ethical frameworks for the use of biometrics; or the ethics of immunity certificates/passports</li>
<li>
<strong>Prediction:</strong> Projects that address the ethical limits of prediction; ethical frameworks for the use of predictive technologies; or policy guidance for accountability and recourse with respect to predictions and predictive technologies</li>
<li>
<strong>Persuasion:</strong> Projects that examine when it’s acceptable or unacceptable to nudge or persuade; the line between persuasion and manipulation; how to design ethical frameworks for neurotechnologies; or the role of design and defaults to avoid dark patterns and the like</li>
<li>
<strong>Adoption: </strong>Projects that address how to design ethical frameworks for procurement; establish guardrails for public-private collaborations; or develop governance models and oversight</li>
</ul>
<p>The call for proposals generated more than 100 submissions from every continent but Antarctica, with North America (46), Africa (28), and Europe (19) leading the way. Awards will be announced in late January 2022.</p>