ChatGPT and generative AI tools like it have captured the public’s attention in recent months, driving ongoing debate about their potential to help or harm individuals and society.
Less often discussed, though, is the ethical use of foundation models, which underlie many generative AI applications in enterprises.
On June 1, the Notre Dame-IBM Technology Ethics Lab hosted a virtual Symposium on the Ethical Use of Foundation Models in Enterprises. The event explored how organizations are already using these powerful technologies, how they might be deployed in the future, and how to ensure this type of AI actually improves our lives.
Drawing on a variety of perspectives from academia, civil society, and industry, the symposium was divided into two parts, each initiated with a keynote address and followed by a panel discussion.
The first half of the event focused on the fundamentals of foundation models: what they are and how they are used in enterprise contexts.
- Arvind Karunakaran of Stanford University delivered a keynote titled “Foundation Models in the Workplace: Implications for Organizations and Governance” (2:10 in the video), highlighting some of the ways foundation models were being deployed in business before the headlines surrounding tools such as ChatGPT. His talk drew on his own research involving a law firm that used a “lawbot” to automate certain tasks.
- IBM’s Saishruthi Swaminathan then moderated a panel with Alex Engler of The Brookings Institution and Manish Goyal of IBM Consulting (26:08). They shared examples of how foundation models underpin specific generative AI applications and discussed both the capabilities and the current limitations of these rapidly advancing tools, with Goyal noting he had never seen a pace of innovation like that of the past six months or so.
Engler also began to touch on how regulators might govern the use of foundation models, leading directly into the second part of the symposium, where the primary topic was the ethical challenges they raise, particularly as they are adopted in enterprise settings.
- In her keynote “Generative AI’s Ethical Debt” (1:12:37), Casey Fiesler of the University of Colorado Boulder encouraged attendees to view thoughtful and informed critique of technology as an obligation. She called for more diversity not only among those developing tech but also among those analyzing its consequences. Fiesler, who will be a visiting fellow at the lab beginning July 1, also emphasized that there are plenty of AI challenges to address right now, without waiting to contend with a science fiction-style crisis such as superintelligence.
- Cody Turner, a tech ethics postdoctoral fellow at Notre Dame, moderated the day’s second panel, which featured Pin-Yu Chen of IBM Research and Triveni Gandhi of Dataiku (1:36:42). Their topics included the necessity of defining human values before responsible AI can be developed, the importance of attending to the entire AI life cycle when creating tools that do what we want them to, and the problem of treating interactions with artificial intelligence systems as though we were interacting with a human.