Building Pluralistic Artificial Intelligence – UROP Symposium

Prisha Agnihotri

Pronouns: She/Her

Research Mentor(s): Eric Gilbert
Research Mentor School/College/Department: Information
Authors: Joshua Ashkinaze, Prisha Agnihotri
Session: Session 5: 2:40 pm – 3:30 pm
Poster: 9

Abstract

Our study addresses a critical issue facing Large Language Models (LLMs) today: inherent bias. LLMs are often trained on widely used datasets that may reflect or amplify societal biases. We propose a novel approach to reducing these biases by integrating multiple explicitly biased LLM agents, similar to the structure of democratic systems. Each agent weighs in on a given issue, and disagreements are resolved through deliberation to build a concrete solution. By chaining these agents together, our approach mirrors the diverse viewpoints of a democratic society. We believe that a system simulating the perspectives of many different people, built from nationally representative simulated personas, will foster more nuanced and deliberative interaction than reliance on a single language model. This method aims to create a more balanced and equitable representation of knowledge and opinions, potentially leading to more impartial outcomes in LLM applications.
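As a rough illustration of the chaining idea described above, the pipeline can be sketched as a loop in which each persona-conditioned agent contributes its view in turn before the contributions are aggregated. This is a minimal, runnable sketch only: the `agent_opinion` function, the persona strings, and the round-robin deliberation loop are all hypothetical stand-ins (in the actual system, each opinion would come from an LLM queried with a nationally representative persona, and aggregation would involve genuine conflict resolution rather than a simple tally).

```python
from collections import Counter

def agent_opinion(persona, issue):
    # Hypothetical stand-in for an LLM call conditioned on a persona.
    # A real implementation would prompt a language model here.
    return f"As a {persona}, my view on {issue} reflects my background."

def deliberate(personas, issue, rounds=2):
    """Chain persona-conditioned agents: in each round, every agent adds
    its perspective to a shared transcript; a final pass aggregates."""
    transcript = []
    for _ in range(rounds):
        for persona in personas:
            transcript.append((persona, agent_opinion(persona, issue)))
    # Naive aggregation for the sketch: count contributions per persona
    # to confirm each simulated viewpoint was represented equally.
    counts = Counter(p for p, _ in transcript)
    return transcript, counts

personas = ["rural teacher", "urban nurse", "retired engineer"]
transcript, counts = deliberate(personas, "school funding")
```

The design intent this sketch captures is that no single model's output dominates: every simulated persona is guaranteed a turn in each deliberation round before any aggregation occurs.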

Interdisciplinary, Social Sciences
