2021-2022 CFP

In fall 2021, the Notre Dame-IBM Tech Ethics Lab released its inaugural Call for Proposals (CFP) to fund practical, applied, interdisciplinary projects focused on at least one of six core themes: the ethics of scale, automation, identification, prediction, persuasion, and adoption.

Information about the 27 projects selected for funding is included below. Links to final deliverables (e.g., model legislation, code, white papers, product design principles, visualizations, mixed media) are added to this page after the projects have been completed.

Funded Proposals


Scale

The Complete Picture Project

Awardees: Devangana Khokhar (Outsight International), Louis Potter (Outsight International), Denise Soesilo (Outsight International)

Final Deliverable – Project Report

The Complete Picture Project (CPP) addresses hidden and pervasive biases in AI and machine learning algorithms by constructing complete test datasets that better represent the true diversity of human societies and communities.

The global data landscape disproportionately represents already empowered individuals and demographics. The reasons for this are varied, but the result is that AI and machine learning (ML) models trained on such incomplete datasets are also biased, and in turn amplify any biases that already exist. This can have severe impacts on the lives of millions of people who are already subject to under-representation and discrimination.

According to Forbes, the global AI-driven machine learning market will reach $20.83B in 2024. Low- and middle-income countries have already seen a rapid expansion in applications using this technology. The humanitarian and development sectors increasingly use machine learning models to reach beneficiaries faster, understand needs better, and make key decisions about the form and execution of life-saving programs. How can developers and users ensure that Artificial Intelligence (AI) algorithms serve all the members of a community equitably and fairly?

The solution that CPP proposes is to use certified balanced datasets to develop, test, and validate inclusive AI models and to identify potential bias. CPP builds independent, broadly diverse datasets, representative of both individuals and contexts, that can be applied to a variety of algorithms to address biases in all phases of the development process: 1) early design, 2) testing, and 3) adaptation and adoption.
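
As a rough illustration of the kind of bias check a balanced test set enables, here is a minimal sketch, assuming scikit-learn and entirely hypothetical data and group labels (not CPP's actual datasets or tooling), that compares a model's accuracy across two groups:

    # Minimal sketch: using a balanced test set to surface per-group
    # performance gaps. The data, group labels, and model are hypothetical
    # stand-ins, not CPP's actual datasets or tooling.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Train on data drawn from one population ("group A" conditions only).
    X_train = rng.normal(size=(1000, 5))
    y_train = (X_train[:, 0] > 0).astype(int)
    model = LogisticRegression().fit(X_train, y_train)

    # Balanced test set: equal samples per group, but group B follows a
    # decision rule the model never saw during training.
    X_test = rng.normal(size=(400, 5))
    groups = np.array(["A"] * 200 + ["B"] * 200)
    y_test = np.concatenate([
        (X_test[:200, 0] > 0).astype(int),    # group A: same rule as training
        (X_test[200:, 0] > 0.7).astype(int),  # group B: shifted rule
    ])

    y_pred = model.predict(X_test)
    for g in ("A", "B"):
        mask = groups == g
        print(f"group {g} accuracy: {accuracy_score(y_test[mask], y_pred[mask]):.3f}")

A balanced test set makes the per-group comparison meaningful: with equal representation, a gap like this reflects the model, not the sample sizes.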


InterpretMe 2.0: A Web Tool for Community-Centered Interpretation of Social Media Posts

Awardees: Siva Mathiyazhagan (Columbia University), Desmond Patton (Columbia University)

Social media posts can be incredibly difficult to understand because they are often highly contextualized and hyper-local in nature, and misunderstandings have real consequences. Social media monitoring and interpretation processes play a role, among many other systemically racially biased factors, in compromising the lives and livelihoods of community members. InterpretMe aims to humanize social media content and help stakeholders learn about restorative alternatives to punitive action. InterpretMe 2.0 will be an e-guide for preventing misinterpretation and promoting a holistic approach that enables stakeholders to look at people beyond their social media posts. This holistic, community-centered approach will enable law enforcement, the judicial system, and other reporting professionals to reduce punitive criminal justice responses associated with social media surveillance and prevent the use of social media for mass incarceration.


Solving Ethical Challenges in the Design of Open-Source Environments: Scaling Urban Mapping Models in View of the Locus Charter

Awardees: Monika Kuffer (University of Twente), Lorraine Oliveira (Independent Researcher), Julio Pedrassoli (MapBiomas Project)

Final Deliverable on GitHub

Deprived areas in Low- to Middle-Income Countries (LMICs) are a major urban challenge that requires consistent and updated information about living conditions. However, due to the high costs of very-high-resolution (VHR) imagery and its computational constraints, little research addresses the characterization of such areas, and even less offers city-wide analysis. Considering these challenges, Oliveira developed an innovative unsupervised machine learning model to spatialize and capture intra-urban deprivation in São Paulo using solely open data sources. The results pointed to four types of deprived areas in the city and acknowledged the differences among them. Through the award, the project's goal is to expand Oliveira's research to a regional scale, the Metropolitan Region of São Paulo, and to improve the model's level of automation. In view of the Locus Charter principles, the team aims to deliver a workflow that incorporates the needs of LMIC policymakers and facilitates comprehension and application of the model. Considering the inequalities exacerbated by the COVID-19 pandemic, the project has high societal relevance: it does not require field surveying and rapidly generates useful information for decision-makers. During the following months, the team will access, collect, and process open-source datasets, including the spatial features developed with stakeholders. They will then develop, optimize, and assess the algorithm, dealing with possible transparency, safety, and responsibility issues.
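
As a rough sketch of the unsupervised approach described above (the features and the choice of four clusters, echoing the four deprivation types found in São Paulo, are illustrative assumptions rather than the project's actual pipeline), one might cluster open-data indicators for city grid cells like this:

    # Illustrative sketch only: clustering open-data features for urban
    # grid cells into deprivation types with k-means. Feature names and
    # values are hypothetical, not the project's São Paulo datasets.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    n_cells = 500

    # Hypothetical per-cell indicators derived from open sources
    # (e.g., OpenStreetMap density, nighttime lights, census access data).
    features = np.column_stack([
        rng.poisson(30, n_cells),     # building density
        rng.uniform(0, 1, n_cells),   # share of unpaved roads
        rng.normal(50, 15, n_cells),  # travel time to services (minutes)
        rng.uniform(0, 1, n_cells),   # vegetation index
    ])

    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print("cells per cluster:", np.bincount(labels))

Because the method is unsupervised, it needs no costly labeled training data, which is what makes a city-wide or regional analysis from open sources feasible.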


Automation

Artificial Justice

Awardees: Halsey Burgund (MIT Open Documentary Lab), Sarah Newman (Harvard University), Jessica Silbey (Boston University)

What would it look like if the legal decisions made by the US Supreme Court were “handed down” not by a group of nine experienced justices, but rather by an algorithm? What could go wrong? And how is this any less arbitrary or “just” than the current US Supreme Court's interpretations of the Constitution?

We know that humans are biased, and that machines built by humans and trained on human-collected data are biased, too. And yet, predictive systems are increasingly prevalent. We’re now seeing advances in natural language processing and generation that even a few years ago seemed like science fiction. Using a language model trained on Supreme Court decisions, Artificial Justice explores the possibilities of such technological “advances.”
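
Purely as a hedged illustration of the underlying technique, generating text from a pretrained causal language model, the sketch below uses Hugging Face's transformers library with the stock "gpt2" checkpoint as a stand-in; the project's actual model, trained on Supreme Court decisions, is not published here.

    # Minimal sketch of the technique: sampling text from a causal language
    # model. "gpt2" and the prompt are stand-ins, not the project's model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "The judgment of the Court of Appeals is"
    result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    print(result[0]["generated_text"])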

In a time when so many legal battles are both political and sharply divided, and the most highly trained legal minds in our country can vehemently disagree about the same set of facts, some may hope for a technological solution. But how could we trust that this technology would exhibit benevolence or fairness? Can we gain new insights into the shortcomings of our own legal system by exploring such questions through a technological lens? By investigating the current capabilities of AI-enhanced “decision-making,” what can we learn that might help us influence development in a prosocial way?

Algorithms already control much of what we consume online and they manipulate us in ways that are both ethically dubious and often inscrutable. From social media companies to politicians to foreign actors, the instances of AI-enabled influence-peddling are nearly endless. Artificial Justice will take shape as a creative multimedia work that shines a light on these everyday manipulations by applying them to the most significant legal decisions made in the US, including those from the past, the present, and even the future.


Developing Model Legislation for the Operationalization of Information Fiduciaries for AI Governance

Awardees: Josh Lee (ETPL.Asia), Lenon Ong (ETPL.Asia), Elizaveta Shesterneva (ETPL.Asia)

The idea of technology companies as information fiduciaries has been well-received by academics, legislators and technology executives alike. Facebook's Mark Zuckerberg, for instance, has publicly acknowledged the “idea of [Facebook] having a fiduciary relationship with the people who use our services” as “intuitive” and consistent with Facebook's “own self-image.”

While the information fiduciary concept is a promising legal doctrine that has seen traction in the US and other jurisdictions, the scope of duties imposed on information fiduciaries in fostering responsible innovation, specifically in the field of AI, remains less clear. This is especially so because the extensive duties developed in United States legal literature are constrained by First Amendment considerations that do not apply elsewhere. It is also less clear how recognition of information fiduciaries as a legal doctrine should best come about – through private law, regulation, or an interaction of the two – and how it could interact with upcoming AI regulation such as the recently proposed EU AI Act.

Balancing these duties against the utility that AI brings to society, we aim to identify an appropriate scope and extent for information fiduciary duties in the context of responsible AI, and to propose model legislation that jurisdictions can refer to.


The Ethical Radicals

Awardee: Freyja Van den Boom (Bournemouth University)

Final Deliverable – Project Report

Many industries are being disrupted by ongoing digitization, and the insurance industry is no exception. With the help of big data and AI, insurers have been able to improve their risk assessment and claims management and to provide consumers with more personalized insurance that meets their needs. Despite the benefits, the adoption of automated decision-making (ADM) processes also poses serious risks. Research shows the potential for insurers to discriminate unlawfully, whether deliberately (based on people's willingness to pay) or unintentionally. The increased risk of proxy discrimination is a good example: a seemingly objective factor such as a postcode or the color of a person's car can turn out to be a proxy for otherwise protected characteristics such as race or gender.

There is still much uncertainty about what is ethical and what is lawful when it comes to ADM which may stifle the uptake of otherwise beneficial innovations.

The Ethical Radicals project therefore aims to test the boundaries of what is legal and ethical and to help clarify the grey areas so that insurers can make informed decisions about their use of ADM.

To help insurers ensure their use of algorithms remains lawful and ethical, this project will prototype a tool insurers can use to assess their algorithms internally, and will design a high-level framework of issues to consider when deciding on the development, adoption, and use of automated decision-making, so that insurance practices remain lawful, ethical, and in the best interest of consumers. The project consists of stakeholder interviews, literature analysis, and workshops. If you are interested in contributing, do not hesitate to contact the PI of the project. We look forward to presenting the results this summer.


Ethics Experiment on Designing Character for AI

Awardees: Charles Ikem (PolicyLab Africa), Sudha Jamthe (Stanford University)

Final Deliverable – Working Paper

Today, AI is designed without any character basis: its behaviors and personality are random, and there is no transparency about how it will behave in certain situations. As AI becomes an integral part of our lives and performs useful functions like senior care or engages with people in serious situations, its behavior must be thoughtfully designed and grounded in clear values, defined as its character.

We propose an ethics experiment to collect data to test the hypothesis that adding character built on AI ethical tenets will ensure transparency and users' trust in AI.

This will involve testing the personality traits of an existing AI, checking the AI's gender and humanizing features such as tone and inclusiveness in its engagement, against an AI confusion matrix for false positives. We'll collect empirical data to show that the character tenets that drive personality traits make AI more trustworthy and fairer to users.
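
As a hedged illustration of the confusion-matrix check mentioned above (the labels, predictions, and groups are invented for the example, not the project's data), the sketch below computes a per-group false positive rate:

    # Hedged illustration: comparing false-positive rates across groups via
    # confusion matrices. Labels, predictions, and groups are hypothetical.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1])
    y_pred = np.array([0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1])
    group  = np.array(list("AAAAAABBBBBB"))

    for g in ("A", "B"):
        m = group == g
        tn, fp, fn, tp = confusion_matrix(y_true[m], y_pred[m], labels=[0, 1]).ravel()
        print(f"group {g}: false positive rate = {fp / (fp + tn):.2f}")

A large gap in false positive rates between groups is one concrete, measurable signal of the kind of unfairness the experiment is designed to detect.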

Our project will aim to 1) validate or invalidate our framework for engaging the UX designer throughout the AI lifecycle to ensure that the AI has character development and remains ethical; and 2) establish evidence and theory that the current process of character development for AI is limited, in that it does not ensure that AI is designed with ethical principles, and that the AI's personality is biased by random genderizing and humanizing based on data gathered without respect for users' privacy. The output will be a reference AIX framework that designers can use freely to build ethical AI.


Exploring Local Post-Hoc Explanation Methods in Tax-Related AI Systems

Awardees: Marco Almada (European University Institute), Błażej Kuźniacki (University of Amsterdam), Kamil Tylinski (Mishcon de Reya LLP)

Final Deliverable – Conference Paper

Final Deliverable – Journal Article

Final Deliverable – Conference

The project aims to answer the question of how to design AI systems in tax law that can help taxpayers understand the decisions of tax authorities and thus avoid litigation such as the Dutch SyRI case. We want to achieve this by prototyping explanation solutions for AI systems (XAI) used by tax administrations for detecting tax fraud, risk profiling, and auditing (selecting tax inspections). The XAI solution will be designed for taxpayers who, as subjects of decisions rendered fully or partially by an AI system, are primarily concerned with “why” questions. The system's behaviour must therefore be interpreted in order to provide certainty about the relevant factors contributing to a particular outcome. For these stakeholders, local post-hoc explanation methods that embody counterfactuals appear most suitable. We will present these XAI solutions to a randomly chosen group of taxpayers for evaluation via questionnaires. The XAI solution they find most comprehensible will be evaluated as the best in terms of fair and transparent use, thereby contributing to the use of responsible AI systems by tax administrations.
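
As a toy illustration of the local post-hoc, counterfactual style of explanation the project favours (the classifier, the “flag for audit” rule, and the features are hypothetical stand-ins, not the prototype), consider:

    # Toy sketch of a local post-hoc counterfactual explanation: find the
    # smallest change to one input feature that flips a classifier's
    # decision. The model and "taxpayer" features are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))            # e.g., income, deductions, ratio
    y = (X[:, 0] - X[:, 1] > 0).astype(int)  # hypothetical "flag for audit" rule
    clf = LogisticRegression().fit(X, y)

    x = np.array([[0.5, 0.1, 0.0]])
    print("original prediction:", clf.predict(x)[0])  # expected: 1 (flagged)

    # Search for the smallest increase in feature 1 that un-flags x; the
    # resulting delta answers the taxpayer's "why" question in
    # counterfactual form ("had your deductions been X higher, ...").
    for delta in np.linspace(0, 2, 201):
        x_cf = x.copy()
        x_cf[0, 1] += delta
        if clf.predict(x_cf)[0] == 0:
            print(f"raising feature 1 by {delta:.2f} would flip the decision")
            break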

The project also aims to prompt the data science and tax law communities to strive together to design an ideal AI system for tax-related tasks, i.e., one that combines high explanatory capability with low knowledge-engineering effort while remaining highly accurate.


From Ethical Models to Good Systems: A Data Labeling Service for AI Ethics

Awardees: Andrew Brozek (Craftinity), Thomas Gilbert (Cornell Tech), Megan Welle (Daios)

Final Deliverable – White Paper

Technical approaches to AI ethics presently focus on encoding abstract ethical values directly into the models being trained. Less attention has been paid to context-specific norms and risks. For example, a self-driving car fleet that recognizes pedestrians and cars but not potholes would do enormous damage to roads even if the model is perfectly safe in an abstract sense. At present, we are unable to track the relationship between training data and the resulting behavior of a deployed AI system.  We are developing a system that automatically monitors how training data will be relied on by the system to conduct particular activities. This will permit 1) real-time monitoring of model outputs; 2) recognition and correction of unethical AI system behavior; 3) feedback between 1) and 2) so that context-specific norms are sustained over time.
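
As a hedged sketch of what such monitoring could look like (this is not the team's system; the wrapper, the perception model, and the pothole rule are all hypothetical), the snippet below wraps a toy model so that every output is checked against a context-specific constraint and violations are logged for correction:

    # Hedged sketch, not the team's prototype: wrap a model so each output
    # is checked against a context-specific constraint and logged, enabling
    # the real-time monitoring and correction loop described above.
    from dataclasses import dataclass, field
    from typing import Any, Callable

    @dataclass
    class MonitoredModel:
        predict: Callable[[Any], Any]
        constraint: Callable[[Any, Any], bool]  # (input, output) -> ok?
        violations: list = field(default_factory=list)

        def __call__(self, x):
            y = self.predict(x)
            if not self.constraint(x, y):
                self.violations.append((x, y))  # queue for review/retraining
            return y

    # Hypothetical perception model that never reports potholes.
    model = MonitoredModel(
        predict=lambda scene: [o for o in scene if o != "pothole"],
        constraint=lambda scene, out: ("pothole" in out) == ("pothole" in scene),
    )
    print(model(["car", "pedestrian", "pothole"]))
    print("violations logged:", len(model.violations))

The logged violations are the feedback channel: they tie a concrete, context-specific harm (missed potholes) back to gaps in the training data, rather than to an abstract safety score.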

Our intended deliverable is a whitepaper that frames the prototype’s technical contributions as implementable, marketable, and significant for AI ethics. We intend this document to present a case study of automated vehicles and the types of harm present in it, with particular focus on visualizing route navigation for automated vehicle driving behaviors. It will address the feedback between the trained model and salient types of harm: How does the choice of features impact forms of model bias? How often should the model be retrained? How costly is data collection and (re)training? What level of performance is desirable for particular tasks?


Promoting Human Values in the Design, Development, and Policies of Brain-Machine Interfaces

Awardees: Margot Hanley (Cornell Tech), Helen Nissenbaum (Cornell Tech), Meg Young (Cornell Tech)

Brain-machine interfaces (BMIs) are used to treat a range of cognitive and sensory-motor conditions, including brain injuries and paralysis. With continued advances in neuroscience and machine learning, we are likely to see BMIs grow in both their technical capabilities and their reach. While BMIs provide important benefits, they also pose a set of pressing ethical challenges. At some point they will likely be able to access our intimate inner lives: our mental processes, decision making, and emotions, terrain within humans that is as yet inaccessible. With this access, however, come fraught questions around autonomy, agency, and accountability.

This project seeks to support the policymakers who will need to respond to the coming wave of ethical concerns presented by BMIs. The research will consist of two strands of work: a literature review and semi-structured interviews. Our literature review will examine definitions of different BMIs for a policy audience, inform vignettes on how BMIs may mediate human experience, and synthesize work-to-date on threats that BMIs pose to human agency and autonomy across application domains. We will also conduct interviews with a broad range of stakeholders, including neuroscientists, technologists, ethicists, and policymakers. From these two strands of work, we will develop a white paper that identifies and defines the relevant technologies, highlights the domains where such technologies are likely to be applied, surfaces the key ethical challenges, and recommends policy considerations and possible interventions. 


Identification

Comparative Analysis of Risks and Benefits of Digital Identification Systems in DRC, Gabon, Cameroon and Republic of Congo

Awardees: Divine Enkando (Data Rights Lab), Narcisse Mbunzama (Digital Security Group)

This study aims to conduct a comparative analysis of the risks and benefits of the different digital identification systems used in the DRC, Gabon, Cameroon and the Republic of Congo with an overview of the different technologies used, data rights, privacy, security and the laws in each of these countries.

Indeed, with the development of new digital identification technologies and recent progress in artificial intelligence, such as facial recognition tools, it has become possible to manage thousands of personal records in record time and to identify people precisely just by having access to their data, such as fingerprints, photos, videos, and voices.

Although these systems make positive contributions, such as in the fight against terrorism, the identification of criminals, and the management of population records, in authoritarian and non-democratic countries they can be abused to silence dissenting voices, identify opponents, and track activists and human rights defenders.


Ethical Issues Associated With Pervasive Eye-Tracking 

Awardees: Shaun Foster (Rochester Institute of Technology), Evan Selinger (Rochester Institute of Technology)

Final Deliverable – Video

Our project, “Ethical Issues Associated With Pervasive Eye Tracking,” aims to raise public awareness of the dangers eye tracking poses in virtual reality, especially when Big Tech companies like Meta are heavily involved with designing and servicing the “metaverse.” We’ll create a video for the Notre Dame-IBM Tech Ethics Lab to host for public dissemination that features the two grant principal investigators, Evan Selinger and Shaun Foster, having their eyes tracked in virtual reality as they explore different scenarios. The video will include narration that explains some of the assumptions companies might make if they possessed this eye tracking data. Since the footage will feature the principal investigators, the project avoids the privacy concerns that can arise when relying on volunteers. Furthermore, since the video will popularize assumptions made in the eye tracking literature, it will not advance eye tracking studies and doesn’t involve the ethical issues that pertain to human subjects. We’ll build the virtual reality application featured in the video using a customized virtual reality headset that’s configured to perform eye tracking functions.

We can’t commit in advance to specific VR scenarios. Potential ones include the following:

  • A picture viewing room: the user’s gaze is tracked to discuss assumptions about disclosing intimate information.
  • A shop: while users consider making purchases, their eyes are tracked to discuss assumptions about interest and intent.
  • Quiz questions: while users consider questions that vary in degrees of difficulty, their eye movements are tracked to discuss assumptions about task performance. 

A Responsible Development Biometric Deployment Handbook

Awardees: James Eaton-Lee (Simprints), Alexandra Grigore (Simprints), Stephen Taylor (Simprints)

Final Deliverable – Handbook

Biometrics are increasingly used in digital development contexts to enhance effectiveness and efficiency. While there are some policies, tools, and frameworks specific to biometric technology, and many outstanding broad tools on data responsibility, many of these resources are either high-level and generalist or extremely academic. There is relatively little translational material aimed specifically at biometrics.

We would like to produce a handbook, including a tool for assessing suitability and “right fit,” picking the right technology, understanding technical prerequisites, assessing privacy impact, and monitoring safety and effectiveness. The handbook will provide generalist tech4dev or development practitioners with a set of tools to safely and effectively assess whether biometrics are a useful tool for their projects, ask the right questions about how to deploy them safely, roll out the right policies and procedures for governed, safe use, and incorporate them into their projects with the right level of oversight for ethical use.

As part of this piece of work, we will engage a small cohort of INGOs and form a steering group for input and review, ensuring our work is well matched with their needs.


Prediction

Explainable and Auditable AI in the Nexus of Climate Change and Food Security

Awardees: Catherine Kilelu (African Centre for Technology Studies), Winston Ojenge (African Centre for Technology Studies), Joel Onyango (African Centre for Technology Studies)

We propose a project that illuminates the core theme of ethics in machine learning-based prediction. Since AI is fast gaining ground on the continent, we intend to address the ethical limits of prediction, ethical frameworks for the use of predictive technologies, and policy guidance for accountability and recourse with respect to predictions and predictive technologies.

We shall piggyback the proposed study on our current studies at ACTS, which collect crop management and yield data, including weather variations, from smallholder farmers and use machine learning to monitor, on an evidence basis, how climate change influences food yields within select staple-grain-growing areas of Kenya.

We shall combine desktop study with evidence-based experimentation to establish:

  1. Knowledge of how such data is governed;
  2. The most common data and algorithmic biases, and the errors due to such biases, in an African context;
  3. How existing tools perform in measuring the biases;
  4. A map of which machine learning algorithms record the least errors for which predictive scenarios (see the sketch below);
  5. A framework based on the above information, and a policy brief proposal.
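
The sketch below illustrates item 4 only: comparing cross-validated prediction error across candidate algorithms. The synthetic “yield” data and the two models shown are assumptions for the example, not the project's actual datasets or pipeline.

    # Illustrative sketch for item 4: comparing cross-validated error
    # across algorithms. Synthetic data stands in for the project's
    # smallholder-farm datasets, which are not public here.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 4))  # e.g., rainfall, temperature, inputs
    y = 2 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)

    for name, model in [("linear", LinearRegression()),
                        ("random forest", RandomForestRegressor(random_state=0))]:
        scores = cross_val_score(model, X, y, cv=5,
                                 scoring="neg_mean_absolute_error")
        print(f"{name}: MAE = {-scores.mean():.3f}")

Repeating such a comparison across predictive scenarios is one way to build the error map the list describes.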

Identifying Common Typologies of Harm in Forecasting Systems

Awardees: Nathaniel Raymond (Yale University), Bahman Rostami-Tabar (Cardiff University)

Forecasting plays a critical role in guiding decisions and developing business strategies in many organisations. Despite a considerable body of research and practice in the area of forecasting, the focus has largely been on the potential benefits of forecasting and how indispensable its methods are; far less has been contributed, in both research and practice, on the potential harm caused by forecasting. The life cycle from forecast design to forecast implementation is not yet generally agreed upon and described in the literature, but we hypothesize that, regardless of sector or decision, the life cycle is routinized and predictably stable across forecast types and applications. The aim of this project is to investigate the typologies of harm and the mechanisms by which they may occur in the forecasting process, which can be generalized, identified, and modified once the life cycle is commonly described. The project will produce a catalogue of where forecasting may cause harm and provide recommendations to address issues of potential harm in the forecasting process.


Persuasion

An Audit for Children-Nudging: Games and Social Media

Awardees: Marianna Ganapini (Union College), Enrico Panai (ForHumanity)

Final Deliverable – Audit Framework

Deliverable: Audit framework for evaluating the ethical use of nudging AI technologies in gaming and social media aimed at children, including best practices and risk-mitigation strategies


Examining Dark Patterns in Apps Used by Adolescents

Awardee: Sundaraparipurnan Narayanan (Independent Researcher)

Final Deliverable – White Paper

Setting the Context: A nudge is any attempt at influencing people's judgment, choice, or behaviour in a predictable way. Nudges tap into cognitive heuristics (the “System 1” mechanism, as described by Kahneman) to execute the influence.

While the ethics of nudging has been examined from the perspectives of autonomy, rational choice, and transparency, the impact on children (specifically adolescents) is not widely examined, especially in the case of dark patterns.

Dark patterns: deceptive user interface (UI)/user experience (UX) interactions, including nudges that are non-transparent, that constrain or limit choices, and/or that are not in the best interest of users. Essentially, nudges that give preferential weight to one of the choices, hide choices from the user, induce a false sense of urgency, conceal information, or limit user choices are negative nudges, or dark patterns.

Motivation: There are two key reasons for considering the adolescent age group in our study on nudges and dark patterns:

  1. Adolescents are in the normative stages of decision-making development and could be prone to influence by nudges.
  2. Adolescents do not have an age-appropriate app category; app platforms classify children's apps only for age groups under 12.

Project proposition: The research intends to examine dark patterns in popular apps (Android and iOS) used by adolescents, in categories including Education, Gaming, Communication, Social, and Dating.


Human-Beneficial Decision-Making by Means of Augmented Reality Serious Gaming

Awardee: Ida Romana Helena Rust (University of Twente)

Final Deliverable – White Paper

I will explore how adding serious game (SG) elements to augmented reality (AR) can stimulate self-awareness in our relations with smart (AI-infused) technologies that have a human-machine interface.

The disappearing boundaries between humans and smart technologies may mean that people will not be able to distinguish between choices that come from ‘the heart’ and choices that are subliminally imposed by these smart technologies.

I hypothesize that, in order to make human-beneficial decisions while being part of a smart technological environment, one must first become aware of one's relation to the environment, then develop an experiential understanding of one's self, and consequently consciously decide how to act human-beneficially.

SG is an effective means of reaching cognition through the affects. The decision to combine SG with AR is based on the latter's increasing popularity in various (professional) settings.

The methodology of this research is initially philosophical in nature: I will use (post)phenomenological theory, applied to examples of SG in AR, to understand how engagement with these games alters the experience of the user. I will provide a theoretical understanding of self-awareness in a smart technological environment and of the experiential alterations within the individual that result from one's engagement with AR and SG. In the second half, I aim to formulate rudimentary guidelines on how AR SG can be developed to increase self-awareness and enable human-beneficial decision-making.


A Manual of Ethical UX Design Principles

Awardees: Shyam Krishnakumar (Pranava Institute), Titiksha Vashist (Pranava Institute)

Final Deliverable – Practitioner’s Manual

Modern UX principles can influence as well as predict a user's behaviour while on a digital platform, but they are limited in that they do not take into account the user's overall interface with the physical world, or even the user's mental, physical, and emotional wellbeing in the digital realm. For example, many social media platforms are designed with Aza Raskin's “infinite scroll” as a core feature. This one feature has been widely adopted by social media, e-commerce, and OTT platforms. While vastly “improving” user experience on the platform, it has been one of the major factors behind social media overuse and addiction, leading to loneliness and depression. Raskin himself apologised to the public in 2019, stating he “designed the service to create the most seamless experience possible for users, but did not foresee the consequences.” These and many other design choices (gamification, for instance) warrant new ethical principles to serve as fundamentals when creating new digital experiences. As we move forward in the 21st century, we are blurring the lines between the physical and digital worlds. It is therefore pertinent to reinvent user experience to aid and improve life both online and offline.

This project aims to understand which design choices promote dark patterns and may have long-term, multi-sided harms baked into them. At a conceptual level, the fundamental challenge is to find the ethical line between persuasion and dark patterns, given that the widespread application of persuasive design has fundamentally changed user behaviour in its favour. We will engage in multidisciplinary research to create a manual of ethical UX design principles that keeps the human at the centre and takes into account not just metrics like performance, but also behavioural, cognitive, and emotional wellbeing. We seek to engage deeply with research in the fields of design, cognitive science, social theory, and psychology, and finally to bring together a community of designers to apply these principles in real-world use cases.


Adoption

Assessing Africa’s Policy Readiness Towards Responsible Artificial Intelligence

Awardee: Erick Otieno (Reallink Ltd.)

Final Deliverable – Policy Brief

Final Deliverable – Paper

Statistics show an estimated African population of 1,340,598,147 (Worldometers, 2021). Of this total, the 0-14 age bracket is estimated to be about two-fifths, and the 15-24 bracket one-fifth, of the total African population (Economic Commission for Africa, 2016). The data show a population that is important for future Artificial Intelligence strategies, especially when looking at the ethical dimension of Artificial Intelligence. Sustainability is contextualized here to mean the ability to have long-lasting residual Artificial Intelligence interventions that positively impact generations to come, because the demographic that will have the longest experience with Artificial Intelligence is the younger generation. One emerging question, therefore, is whether Africa is ready, in terms of policy infrastructure, to dive into the world of Artificial Intelligence for the benefit of its population. With this in mind, understanding the policy intervention ecosystem is an important undertaking as Artificial Intelligence interventions continue to be a platform for offering solutions to the African continent. There is a need for evidence that informs policy development and deployment strategies towards the successful development of responsible Artificial Intelligence that is regulated and adoptable by its intended recipients. Consequently, this research will be an important contributor to the existing conversation on responsible Artificial Intelligence both within Africa and beyond. The research will adopt an exploratory approach in attempting to address the research question.


Diagnosis and Mitigation of Bias from Latin America Towards the Construction of Tools and a Framework for Latin American Ethics in AI

Awardees: Luciana Benotti (Universidad Nacional de Córdoba), Beatriz Busaniche (Universidad de Buenos Aires), María Lucía Gonzalez Dominguez (Universidad Nacional de Córdoba)

Final Deliverable – Paper

The general goal of our project is to make available, adapt, and develop tools and frameworks for detecting, preventing, and mitigating unwanted biases in Natural Language Processing applications. We will focus our work on word embeddings: a widely used but very opaque building block of many NLP models and applications. In parallel, we will develop a good-practice guide based on Human Rights principles for local developers of natural language-based systems in Spanish. The tools developed in this project will help developers and non-technical stakeholders evaluate, detect, and mitigate unwanted biases in models and data, contributing to building a Latin American AI ethics.
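
As a minimal sketch of one such check (the toy three-dimensional vectors are invented for the example; the project's tools would operate on real Spanish-language embeddings), one can compare cosine similarities of target words to gendered anchor words, in the style of a word-embedding association test:

    # Minimal sketch of a bias check on word embeddings: comparing cosine
    # similarities of target words to gendered anchor words. The toy
    # vectors are hypothetical stand-ins for real embeddings.
    import numpy as np

    vectors = {  # hypothetical 3-d embeddings
        "ella":      np.array([0.9, 0.1, 0.0]),
        "él":        np.array([0.1, 0.9, 0.0]),
        "enfermera": np.array([0.8, 0.2, 0.1]),
        "ingeniero": np.array([0.2, 0.8, 0.1]),
    }

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    for word in ("enfermera", "ingeniero"):
        bias = cos(vectors[word], vectors["ella"]) - cos(vectors[word], vectors["él"])
        print(f"{word}: female-vs-male association = {bias:+.3f}")

A score far from zero signals that a profession word sits closer to one gendered anchor, which is exactly the kind of unwanted association the project's tools aim to surface for non-technical stakeholders.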

We are an interdisciplinary team within the Vía Libre Foundation with a background and long experience in social science and computer science research. To this we add our strength in integrating academic work with public policy advocacy, an excellent relationship with civil society in Latin America, and the ability to give increasing visibility to work in our field. In addition, our team has a strong focus on gender diversity.

We want Latin American NLP practitioners to be able to use the existing tools and frameworks to build fairer, more accountable, and more transparent systems. Nowadays there is no go-to, off-the-shelf tool that assists practitioners in assessing the systems they create: we intend to change this. Success for this project is the integration of ethics and fairness principles into the software industry's development lifecycle.


Duty of Data Loyalty Model Legislation

Awardees: Woodrow Hartzog (Northeastern University), G.S. Hans (Vanderbilt University), Neil Richards (Washington University in St. Louis)

Current data privacy laws fail to stop companies from engaging in opportunistic, self-serving behavior at the expense of those who trust them with their data. A legal duty of loyalty would be a revolution in data privacy law, which is exactly what is needed to break the cycle of self-dealing that is ingrained into the current internet. Data collectors bound by this duty of loyalty would be obligated to act in the best interests of people who expose their data and online experiences, up to the extent of their exposure.

The team will draft model United States federal and state legislation that would impose a duty of data loyalty on companies with respect to the human information they hold. The model legislation will prohibit information processors from designing digital tools and processing data in ways that conflict with trusting parties' best interests. It will also set rebuttable presumptions of disloyal activity and create a private right of action. The team will convene meetings, in person and via videoconference, with stakeholder groups including academics, regulators, and members of the private sector to comment on the model legislation. The team will also produce an explanatory white paper for legislators.


A Framework for Identification, Review, and Resolution of Ethical Issues in Healthcare Machine Learning Projects

Awardees: Jeremiah Fadugba (University of Ibadan), Pamela Kimeto (Kabarak University), Moses Thiga (Kabarak University)

The increase in the development of Machine Learning (ML) solutions for healthcare continues to raise key ethical concerns in bioethics, such as beneficence, non-maleficence, autonomy, and justice. An additional concern is explicability, occasioned by the ‘black box’ nature of ML.

However, ML practitioners generally lack the capacity to identify and address these ethical issues in the ML algorithm development, testing, and deployment process. Ethics review committee members drawn from the medical field, on the other hand, have little, if any, understanding of ML, and therefore lack the capacity to identify ethical issues in ML and to guide practitioners and researchers on them.

This project therefore seeks to develop a framework for assessing and addressing ethical issues in Healthcare Machine Learning projects.


Increasing Venture Capital Investment in Ethical Tech

Awardees: Ravit Dotan (University of Pittsburgh), Leehe Skuler (Global Impact Tech Alliance – GITA)

Final Deliverable – Guidebook

The field of artificial intelligence (AI) is currently evolving faster than regulatory bodies can manage. Therefore, we look to venture capital as an influential stakeholder that can develop and enforce the ethical governance necessary to align the field with human-centric values.

To date, however, only a handful of VCs claim to consider tech ethics in their investment decisions and management. This is due, in part, to a lack of coordination over how to assess the ethical dimensions of AI ventures, as well as the absence of tools for external oversight. 

Our team will address this barrier by producing a practical framework for VC stakeholders seeking to incorporate ethical AI criteria in investment strategies, accelerating the adoption of ethical AI standards across the tech industry.


Reversing the Mirror: Toward Ethical, Community-Centric Biometric Governance

Awardees: Hanson Hosein (HRH Media Group LLC), Shankar Narayan (Independent Researcher), Nandini Ranganathan (CETI, Portland State University)

Discussions of biometrics rarely include, let alone take as a starting point, the perspectives of communities that have historically been impacted by surveillance technologies. Those communities often lack the capacity and fluency to discuss the full range and rapid adoption of biometric technologies, yet are likely to be heavily impacted by them.

To address these challenges, the Reversing the Mirror project will create a multi-modal convening that will demonstrate a new model of engagement, moving beyond community “input” to relationship-based collaborations that recognize and account for structural barriers, and appropriately incentivize and resource diverse participants. The convening will consist of two intentionally structured and executed parts—a first part in which a diverse set of impacted community leaders will build a community-centric approach and related policy proposals for biometric governance; and a second in which other decision-makers, including lawmakers, regulators, and technologists, engage with these community-centric policies and consider implementation pathways.

The project will result in increased capacity among all stakeholders to engage with one another in the context of biometric governance as well as on broader issues of ethics in the technological space; new substantive ideas and implementation pathways for biometric governance; documentation of relevant scholarship; and a toolkit of takeaways for future convenings, among other outcomes. Ultimately, we hope to demonstrate that careful attention to power structures in the tech space—and appropriate interventions in response—can truly change this important conversation, making it more inclusive and reflective of BIPOC and other impacted communities.


A Roadmap for Ethical AI Standardization

Awardee: Christine Galvagna (Technical University of Munich)

Final Deliverable – Discussion Paper

Deliverables: White paper and website helping policymakers and civil society incorporate interdisciplinary expertise into standards-setting for AI


What Really Works? A Study of the Effectiveness of AI Ethical Risk-Mitigation Initiatives

Awardees: Ali Hasan (BABL AI), Ben Lange (BABL AI), Shea Brown (BABL AI)

Final Deliverable – Project Report

We propose to conduct an empirical study of the effectiveness of various AI ethical risk-mitigation initiatives. In almost all industries, from banking and HR to health care and edtech, leadership is waking up to the fact that the AI they are using could cause harm, and that this could be a significant risk for their organization. As such, large organizations are implementing a wide range of policies, initiatives, and governance changes in an attempt to be proactive and use AI responsibly. However, as there is little evidence yet as to which interventions truly work, and little regulatory guidance, these initiatives are often best guesses rather than established best practices.

We hope to provide insight into what is working and what is not, as well as preliminary explanations of why these interventions fail or succeed. Through desk research, interviews, and industry surveys, our research will lead to a framework for assessing the effectiveness of ethical risk-mitigation initiatives in organizations. The framework will identify successful interventions and connect them to the main features of the institutions and socio-technical settings involved, providing a list of initiatives that worked, and why, in particular settings.
