
How trustworthy is AI?

Image: AI chip. Credit: Pixabay/CC0 Public Domain

Artificial intelligence is everywhere—writing emails, recommending movies and even driving cars—but what about the AI you don’t see? Who (or what) is behind the scenes developing the algorithms that go unnoticed? And can it be trusted?

We spoke with two UC San Diego experts from the Halıcıoğlu Data Science Institute, part of the School of Computing, Information and Data Sciences (SCIDS). We asked them what’s next for AI, including its challenges, opportunities and natural limitations.

David Danks, MA ’99, Ph.D. ’01, professor of data science, philosophy and policy, examines not just how AI systems are built, but how they shape society. Lily Weng, assistant professor, leads the Trustworthy Machine Learning Lab, which works to ensure AI systems are robust, reliable, explainable and worthy of our trust.

How would you define AI at its most fundamental level?

Danks: AI is any system that replaces, enhances or augments human cognitive labor. Much like machines replaced physical labor, AI assists or improves our thinking tasks. While others may focus on the technical aspects, I emphasize a human-centered view—what AI enables us to do.

Weng: You can think of it as a system that works differently from humans but is designed with the goal of assisting us. While making AI smarter or more efficient is part of the process, its purpose is to benefit humanity.

As more people are adopting AI in their daily lives, what do you find most surprising?

Danks: For me, it’s people’s willingness to experiment and play with these systems. People seem very willing to spend their time and energy trying them out. Yet experimentation does not often translate into continued use or trust in the system. And that’s important, because we probably shouldn’t trust many of these systems right now. It has surprised me that in a society where so many people act as though they are technology-averse, they’re nonetheless very willing to experiment and interact with AI systems, at least when the stakes are low.

Where does ‘responsible AI’ fit in?

Weng: We want AI to be responsible and trusted by users and developers. For example, you want AI to tell you what it does and how it makes a decision. Then, we can monitor or audit whether there are biases or potential concerns when it reaches those decisions. We also want AI to be robust—you don’t want it to be sensitive to outside noise or to an adversary that tries to manipulate it. In short, we want AI to follow the principles we expect of it.

The field of AI is changing so quickly; what should we be thinking about? What are you thinking about?

Weng: An issue that my lab has focused on is the opacity of AI. For example, the architecture of a deep-learning-based system is very complicated. So even if it has been tested and works well, there are many scenarios it can encounter and many unexpected errors that can occur. My lab is working to ensure that an AI system is interpretable and, if it’s not, to find ways to make it more explainable.

Danks: I spend a fair amount of time thinking about all of the uses of AI that aren’t obvious. When I go to ChatGPT, I know I’m using AI. But when I drive my car—which these days is like a computer on wheels—I don’t know how much AI has been put in there by the car manufacturer. I don’t know to what extent they are using AI to control the engine or for surveillance purposes—perhaps they are selling my data to an insurance company.

So, while it’s great when AI can happen behind the scenes and make everything better, it also creates additional risks for which we will not have the appropriate recourse, particularly if we don’t even know that a harm has occurred. We need transparency about whether AI is being used, and we need transparency about what the AI is doing when it’s being used.

How will the newest school at UC San Diego, SCIDS, address some of the issues and questions you have presented?

Danks: I think the school creates enormous opportunities for us as researchers and educators because it starts to break down the silos between the fundamental and experimental research that occurs at the Halıcıoğlu Data Science Institute and the enterprise-grade commercial-level software deployments that are possible through the San Diego Supercomputer Center.

I think the school has the opportunity to see research go all the way from the lab to commercial and societal impact. And this creates not only opportunities, but also new obligations for us to do this responsibly and ethically and to show that responsible AI is not an oxymoron.

What is something that AI could never replace?

Danks: Any job task that requires some sort of emotional or empathetic connection with another human being will be very hard for AI to take over. AI systems can simulate an empathetic connection, but building that long-term connection is something the systems will struggle with a lot.

Another area where AI systems will continue to struggle is dealing with situations where it’s not entirely clear what success means. These systems are built to optimize solutions to problems where we know what success is, but there are times when we don’t know what actually counts as success, and we figure it out by muddling through our lives.

What is exciting for you about the future of AI?

Weng: I’m very excited about the potential of AI in health care. For instance, how can we provide a higher quality of health care with the assistance of AI? But a major bottleneck is trustworthiness, as health care applications carry significantly higher risks and require stringent safety measures. They have a much higher standard than, say, a chatbot or the tools we use daily.

That’s why a trustworthy diagnosis becomes very important, especially when it comes to interpretability—we need to ensure the AI-driven decisions are transparent and reliable. I’m excited about the research in my lab and the work of my HDSI colleagues as we strive to develop AI systems that people can trust.

Any final thoughts on AI for our readers?

Danks: There is sometimes a temptation to think of AI as a hurricane that’s just bearing down on us: It’s going to transform our lives, maybe even wreck our lives, and there’s nothing we can do about it. But I think that’s the wrong way to think about technology, because AI is built by us. AI is a future that we are building right now. We should view it as an opportunity rather than as something we merely hope to survive.

Provided by University of California – San Diego



