"Still Life" by Earl Horter (1939). Courtesy of Smithsonian Open Access.
Zoom
CounterBalance Seminar
  US Mountain Time
Speakers: 
Tina Eliassi-Rad, Renee DiResta, and Karine Mellata

Our campus is closed to the public for this event.

Overview: 

During the next year, SFI’s CounterBalance seminars will run a special series of meetings, called Deconstructing Meaning, examining the technical and ethical complexities of content parsing in the age of AI. Co-hosted by Siegel Family Endowment, the series is organized as a collaboration among the Santa Fe Institute, the Trust & Safety Professional Association, and Google. The first virtual 90-minute session will take place on September 10, 2024, at 9 AM (US Mountain Time) and will explore the building blocks of content parsing. In addition to a panel discussion, this session will feature flash talks by Melanie Mitchell (SFI) and Vinay Rao (Anthropic).

Background:

The proliferation of digital content presents significant challenges for technology platforms, which bear the onus of curating content responsibly. Accurate parsing of this content is key to upholding content responsibility, a broad term for maintaining healthy online communities, protecting users, and preserving platforms’ reputations. Failure to parse content accurately can lead to the spread of misinformation, hate speech, and harmful content. Additionally, regulators striving to prevent societal harms increasingly scrutinize tech platforms’ content moderation practices; reliable and robust parsing techniques help demonstrate due diligence and compliance with emerging regulations surrounding online content. Despite advances in content moderation techniques and technologies, accurate content parsing has proven elusive, as distinguishing among intent, sentiment, and context poses intricate technical and ethical dilemmas. The rise of generative AI further complicates this landscape, with its ability to produce human-quality text that can both illuminate and obfuscate meaning.

Natural language processing (NLP) and machine learning form the backbone of content parsing. Challenges arise due to the nuances of human expression and the increasing sophistication of generative AI models. Intent, for example, may be shrouded in sarcasm or disguised as humor, while sentiment can be multifaceted and easily misconstrued. Understanding context demands significant knowledge about cultural references, current events, and individual backgrounds. Additionally, real-time content parsing poses substantial computational demands. 

Addressing the complex challenges of content parsing, with its intertwined hardware, software and ethical considerations, holds profound significance for tech platforms, regulators, and the future of every online participant’s experience. 


Second Session

The second virtual 85-minute session will take place on November 22 at 9 AM US Mountain Time and will explore the ethical issues that can arise when applying AI to content parsing. The session is structured as a salon discussion, with initial remarks by speakers Tina Eliassi-Rad (SFI & Northeastern), Renee DiResta (Stanford Internet Observatory), and Karine Mellata (Intrinsic), followed by a roundtable discussion with a panel of experts.

Speakers

Tina Eliassi-Rad, Professor of Computer Science, Northeastern University; Science Steering Committee Member and External Professor at SFI
Renee DiResta, Research Manager, Stanford Internet Observatory
Karine Mellata, Co-founder and CEO at Intrinsic

Panelists

Laura Weidinger, Staff Research Scientist at Google DeepMind
Johnny Hartz Søraker, Ethics Lead at Google
Aaron Rodericks, Head of Trust and Safety at Bluesky
Michael Muller, Senior Research Scientist in the Human-Centered AI group at IBM

Moderator

Seungwoong Ha, Applied Complexity Fellow and Siegel Research Fellow, Santa Fe Institute

Organizing Committee

Amanda Menking, Research and Program Director at Trust and Safety Foundation
Charlotte Willner, Executive Director at Trust and Safety Professional Association
Jan Eissfeldt, Global Head, Trust & Safety at Wikimedia Foundation
Sujata Mukherjee, Head of Research, Trust and Safety at Google
William Tracy, Vice President for Applied Complexity, SFI
