Artificial Intelligence (AI), with its increasing capability and connectivity, extends beyond limited, well-defined contexts and is being integrated into broader societal domains. AI algorithms now steer autonomous vehicle fleets, shape political beliefs through news filtering, and oversee the allocation of resources and labor. Establishing trust between humans and their AI counterparts is therefore essential for effective cooperation. Trust profoundly influences how individuals use, communicate with, and collaborate with AI systems; measuring and managing trust in human-AI cooperation is thus indispensable for ensuring safety, efficiency, and overall success. This talk focuses on trust in human-AI interactions, addressing three primary questions: (1) How can we measure people’s trust in human-AI conversations? (2) How does trust change over time within human-AI conversations? (3) How can we effectively manage instances of overtrust or undertrust through conversational cues to enhance human-AI cooperation? The talk highlights critical advances in measuring and managing trust dynamics in human-AI cooperation, with implications for the future of AI integration into broader societal domains.