Call for Proposals


Each year, the Lab releases a Call for Proposals (CFP) to fund practical, applied, and interdisciplinary research in tech ethics.

The inaugural 2021-2022 CFP sought projects addressing at least one of six core themes: the ethics of scale, automation, identification, prediction, persuasion, and adoption. A total of 27 projects were recommended for funding.

The focus of the 2022-2023 CFP is “Auditing AI.” Information on how to apply for funding is below.

2022-2023 Call for Proposals

Artificial intelligence increasingly undergirds, digitizes, and automates important processes in our society and economy, such as processing loan applications, making medical diagnoses, informing hiring decisions, surfacing information, and piloting autonomous vehicles. As recent headlines make clear, with such consequential outcomes on the line for individuals, organizations, and society at large, it is critical that AI can be trusted to make fair and accurate decisions.

Globally, legislation has been proposed, and in some cases passed, to regulate AI; however, there is not yet broad consensus on the most effective ways to assess the implications of a given system before and during deployment. The objectives that inform financial audits may translate to artificial intelligence: just as a financial auditor gathers and inspects evidence to determine whether a company’s practices are free from material misstatement or fraud, an AI auditor may examine design documents, code, and training data to determine whether a company’s algorithms are free from material bias, inaccuracy, or other potentially consequential harms. Financial audits are just one example; other kinds of audits, such as IT, privacy, security, and operational audits, may also provide models for examining the impact of decisions made by AI systems.
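
By way of illustration, one narrow technical check an AI auditor might run is a comparison of a model’s positive-decision rates across demographic groups (a demographic parity check). The sketch below is hypothetical: the column names, toy data, and the 0.2 tolerance are assumptions chosen only to show the shape of such a measurement, not a prescribed audit methodology.

```python
# Minimal, illustrative sketch of a demographic parity check on decision records.
# Column names, data, and threshold are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Return the largest absolute difference in positive-decision rates between groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Toy audit log of loan decisions (1 = approved, 0 = denied).
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "applicant_group", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 in this toy example

if gap > 0.2:  # hypothetical tolerance; real thresholds are context-dependent
    print("Potential disparity flagged for further review.")
```

A number like this is only one piece of audit evidence; an auditor would weigh it alongside design documents, training data provenance, and the system’s context of use.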

The predicament remains daunting, however. As AI becomes more sophisticated, auditing it will pose increasingly complex ethical, social, and regulatory challenges. The dimensions that require auditing must be identified, agreed upon, and measured. AI auditors must be trained. Policies must be developed to govern the operations, credentialing, and impact of audits.

We invite proposals for projects that grapple with these challenges and suggest innovative solutions. Potential areas for research and scholarship include, but are not limited to, the following:

  • Scope of AI audits
  • Regulatory frameworks for AI audits
  • Methodologies for AI audits
  • Skills for future AI auditors
  • Teaching methodologies for AI audits
  • How AI audits may impact various sectors and industries
  • Suggested best practices for AI audits
  • Adoption and deployment of AI audits

Successful applications will propose a defined deliverable (such as, but not limited to, research papers, draft policy, model legislation, teaching materials, or impact assessments) that addresses the above challenges and will be completed between January 1, 2023 and December 31, 2023.

Key words: AI Ethics, Privacy, Controls, Risk assessment, Risk mitigation, Governance, Policy, Compliance, Evidence, Opinion, Security, Availability, Confidentiality, Integrity, Fairness, Bias, Accuracy, Trust, Impact assessment, Transparency

Download the application packet for more information about timelines, eligibility, project awards, and selection criteria.

Apply now