Abstract

Can AI agents learn what you think — and represent you in a discussion you never attended? As multi-agent AI systems become increasingly capable of deliberating on complex issues, a new possibility emerges: delegating your voice in collective decision-making to an agent that speaks on your behalf. But this raises hard questions about faithfulness, representation, and what it even means to elicit a human perspective. This talk presents "Delegating Deliberation to Agents," research exploring how AI agents can elicit and faithfully represent human perspectives in multi-agent deliberation settings — and which architectures do this best. The speakers introduce Habermolt, a system that lets users send AI agents to deliberate on their behalf, and discuss early directions toward Habersim, an evaluation suite for testing different deliberation architectures.

Bio

Joseph Low and Oscar Duys are Cooperative AI Research Fellows.