Do we trust chatbots? New tool makes it easier to gauge

Image: ChatGPT. Credit: Airam Dato-on from Pexels

As artificial intelligence tools like ChatGPT are integrated into our everyday lives, our interactions with AI chatbots online become more frequent. Are we welcoming them, or are we trying to push them away?

New research from Binghamton University is trying to answer those questions through VizTrust, a visual analytics tool that makes user trust dynamics in human-AI communication visible and understandable.

Xin “Vision” Wang, a Ph.D. student at the Thomas J. Watson College of Engineering and Applied Science’s School of Systems Science and Industrial Engineering, is developing VizTrust as part of her dissertation.

She presented her current work and findings in April at the Association for Computing Machinery (ACM) CHI 2025 conference in Yokohama, Japan. The paper is available in the Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems.

VizTrust was born out of a pressing challenge: User trust in AI agents is highly dynamic, context-dependent, and difficult to quantify using traditional methods.

“Most studies rely on post-conversation surveys, but those can only capture trust states before and after the human-AI interaction,” Wang said. “They miss the detailed, moment-by-moment signals that show why a user’s trust may rise or fall during an interaction.”

To address this, VizTrust evaluates user trust based on four dimensions grounded in social psychology: competence, benevolence, integrity and predictability. Additionally, VizTrust analyzes trust-relevant cues from user messages—such as emotional tone, engagement level and politeness strategies—using machine learning and natural language processing techniques to visualize changes in trust over the course of a conversation.
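The paper itself does not include code, but the idea can be sketched in a few lines. In the hypothetical Python below, simple keyword heuristics stand in for VizTrust's trained NLP models: each user message nudges the four dimension scores up or down based on surface cues, yielding a per-turn trust trajectory. The cue lists, weights and names are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: simple keyword heuristics stand in for
# VizTrust's machine-learning classifiers of emotional tone, engagement
# and politeness. The four dimensions come from the paper; the cue
# lists and weights below are made up for demonstration.

from dataclasses import dataclass, field

DIMENSIONS = ("competence", "benevolence", "integrity", "predictability")

# Hypothetical cue lexicons; a real system would use trained NLP models.
POSITIVE_CUES = ("thanks", "that helps", "good idea", "makes sense")
NEGATIVE_CUES = ("not helpful", "you already said", "wrong")
POLITE_CUES = ("please", "could you", "would you")

@dataclass
class TurnScore:
    turn: int
    scores: dict = field(default_factory=dict)  # dimension -> value in [0, 1]

def score_turn(turn, message, prev):
    """Nudge every trust dimension up or down based on surface cues."""
    text = message.lower()
    pos = sum(cue in text for cue in POSITIVE_CUES)
    neg = sum(cue in text for cue in NEGATIVE_CUES)
    polite = any(cue in text for cue in POLITE_CUES)
    delta = 0.1 * (pos - neg) + (0.05 if polite else 0.0)
    scores = {d: min(1.0, max(0.0, prev[d] + delta)) for d in DIMENSIONS}
    return TurnScore(turn, scores)

def trust_trajectory(user_messages):
    """Return one TurnScore per user message, starting from neutral (0.5)."""
    prev = {d: 0.5 for d in DIMENSIONS}
    out = []
    for i, msg in enumerate(user_messages, start=1):
        ts = score_turn(i, msg, prev)
        out.append(ts)
        prev = ts.scores
    return out

for ts in trust_trajectory([
    "Could you suggest ways to handle my workload? It is stressing me out.",
    "Thanks, that helps a lot.",
    "You already said that. Not helpful.",
]):
    print(ts.turn, {d: round(v, 2) for d, v in ts.scores.items()})

In this toy run, all four dimensions rise on the polite opening and the positive feedback, then fall on the third turn, where the user pushes back; VizTrust's real models infer these cues statistically rather than from fixed word lists.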

“The power of large language models and generative AI is rising, but we need to understand the user experience when people use different conversational applications,” Wang said. “Without diving in to see exactly what happened to cause a bad experience, we can never really find the best way to improve the AI model.”

The research paper illustrates VizTrust’s functionality through a use case involving a software engineer stressed by his job and a therapy chatbot designed to support workers. The two discuss his work-related stress, and the chatbot offers him advice on how to cope with it.

By analyzing subtle linguistic and behavioral shifts in user language and interaction, VizTrust pinpoints the moments when trust is built or eroded. For example, it flags one moment when the trust level drops because the chatbot repeats suggestions the user has already rejected. This kind of insight matters not only for academic understanding but also for practical improvements to conversational AI design.
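Given such a per-turn trajectory, spotting erosion moments can be framed as watching for sharp drops in an aggregate trust score. The short sketch below illustrates only that framing; the threshold and the sample values are assumptions, not figures from the paper.

# Again an illustration of the concept, not the authors' method: flag
# turns where an aggregate trust score (for example, the mean of the four
# dimensions from the sketch above) falls sharply between turns.
# The 0.1 threshold and the sample scores are assumptions.

def flag_trust_drops(trust_scores, threshold=0.1):
    """Return (turn, drop) pairs where trust fell by more than threshold."""
    drops = []
    for i in range(1, len(trust_scores)):
        drop = trust_scores[i - 1] - trust_scores[i]
        if drop > threshold:
            drops.append((i + 1, round(drop, 2)))  # turns are 1-indexed
    return drops

# Trust dips at turn 4, where the chatbot repeated advice the user had
# already turned down (the values here are made up).
print(flag_trust_drops([0.50, 0.60, 0.65, 0.45, 0.50]))  # -> [(4, 0.2)]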

“Trust is not just a user issue—it’s a system issue,” Wang said. “With VizTrust, we’re giving developers, researchers and designers a new lens to see exactly where trust falters, so they can make meaningful upgrades to their AI systems.”

VizTrust has already gained recognition: it was accepted as a late-breaking work at CHI 2025, standing out among more than 3,000 late-breaking submissions from around the world at an acceptance rate of just under 33%.

Co-authors on the project include SSIE Assistant Professors Sadamori Kojaku and Stephanie Tulk Jesso, as well as Associate Professor David M. Neyens from Clemson University and Professor Min Sun Kim from the University of Hawaii at Manoa.

Wang is now moving VizTrust to the next stage of development, with a focus on making it more adaptable to individual differences.

“When people interact with AI agents, they may have very different attitudes,” she said. “We may need to take a specific, individual perspective to understand their trust—for example, their personal characteristics, their implicit trust level, and even their previous interactions with AI systems can influence their attitudes.”

Looking ahead, Wang envisions deploying VizTrust as a publicly available tool online to support broader research and development.

“By making VizTrust accessible,” she said, “we can begin to bridge the gap between technical performance and human experience and make AI systems more human-centered and responsible.”

More information:
Xin Wang et al, VizTrust: A Visual Analytics Tool for Capturing User Trust Dynamics in Human-AI Communication, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706599.3719798

Provided by
Binghamton University


