Virtual
Working Group

All day

Our campus is closed to the public for this event.

As artificial intelligence (AI) and machine learning (ML) models become capable of increasingly general and powerful inferences, ensuring that AI/ML is developed responsibly has emerged as a pressing social concern. In recent years, research on responsible AI has converged on the conclusion that development can fail when the complex causal dynamics that characterize social systems are not sufficiently represented in the explicit and implicit world-models of AI developers. Specifically, irresponsible and fragile AI is more likely to result when overly simplistic models elide key aspects of the societal context surrounding real-world problems, leading to inaccuracy, bias, and violations of fairness and autonomy.

To avoid these negative outcomes, we need strong frameworks for representing the causality and complexity that characterize the societal landscape of AI development. At the same time, we must account for the cognitive abilities and limitations of both the agents who develop AI and the agents affected by its deployment. This representational goal faces a crucial challenge, however: some degree of abstraction is necessary for any scientific representation of any system. We must therefore find a “Goldilocks zone” for representing societal context: representations must be complex enough to avoid failures of responsibility, but not so complex as to be intractable.

In this working group, we’ll bring together computer scientists, responsible AI practitioners, cognitive scientists, systems scientists, experts in causal inference, and philosophers of science, with the goal of generating novel frameworks for representing societal context in the development of responsible AI. The schedule will prioritize shorter presentations and longer discussion periods to foster genuine collaboration and communication across disciplines. Our intention is for this working group to be an initial step toward building a community of researchers across academia and industry that regularly collaborates on the development of safe, reliable, and responsible AI.

Organizers

Donald Martin, Head of Societal Context Understanding Tools & Solutions at Google Research
David Kinney, Lecturer, Yale University; Visiting Faculty Researcher at Google Research
Melanie Mitchell, Professor, Science Board Co-Chair, and Science Steering Committee Member at SFI
