
If you have always wanted a personal shopping assistant, you may be in luck: The tech industry is creating digital ones that can buy things for you.
These agents, powered by artificial intelligence, might purchase groceries, book plane tickets, or get your Aunt Gertrude a birthday present. All you will have to do is hand over your credit card or other payment information and set a budget and preferences.
The shopping assistants are one application of a technology called agentic AI. The digital “agents” are empowered to make autonomous decisions and take actions to achieve goals, rather than simply responding to prompts the way that chatbots do.
Amazon debuted an AI-powered shopping assistant called Buy for Me (available to a limited set of beta users) in April, the same month that Visa and Mastercard introduced their own agentic commerce initiatives and PayPal released its Agentic Toolkit. Other entries in the field include Google AI Mode, which is set to link search to Google Pay in the coming months.
If you’re ready to try AI-assisted commerce, though, be forewarned: “Like almost anything in life which brings convenience, which makes us happier—sometimes—and more productive, it comes with its own risks,” says Bhaskar Chakravorti, dean of global business at The Fletcher School and executive director of the Institute for Business in the Global Context. Chakravorti chairs the Digital Planet program at Fletcher, which is dedicated to understanding the impact of digital innovations on the world.
Tufts Now spoke with Chakravorti to learn more about the potential risks and benefits of agentic AI.
Many people are familiar with generative AI like ChatGPT. What’s different with agentic AI?
To use the example of booking plane tickets, let’s say you’re planning a trip to the Canadian Rockies. You’ve got a week in August, and you are going to start in Vancouver.
You can enter the information into your favorite generative AI model, and it will map out an itinerary for you. It will include a few suggestions for places to stay, restaurants, and so on.
You can add a few more prompts and it’ll refine the search. That is the, shall we say, old-fashioned AI.
Now agentic AI is saying, OK, you’ve got this itinerary mapped out, but now you’ve got tickets to be bought, you’ve got room reservations to be made, you’ve got to make some restaurant reservations because it’s the peak season. And the weather is always unpredictable, so we need to keep an eye out for the weather.
As you get closer to the date, you start getting alerts and maybe some flight changes because the weather is going to be iffy.
That is not just you asking questions and the model spitting answers back. Agentic AI starts to take control over decisions that you haven’t necessarily triggered.
It has a degree of autonomy; it has a degree of knowledge about you and a degree of awareness of what other sources of information should be tapped into to start making choices on your behalf. But it’s doing it on its own.
Can agentic AI use your credit card or digital payment system and book the tickets?
Yes, of course. You’d have to authorize that, so it can make these decisions on your behalf.
It knows you well enough to recognize that you don’t like to connect through such and such airports, or that you don’t like flights with a layover. It will use that to book your tickets from Boston to Vancouver, and then it’ll do a whole bunch of other things in terms of the kinds of hotels you like and the kinds of restaurant reservations you want to make.
It’s also dynamic, because it starts changing some of those decisions as new information comes to light.
It can say there is a potential thunderstorm in this part of the Rockies, so it will divert your itinerary on its own to this other part. And then it will give you an update.
If you don’t agree with it, you can intervene. It’s like somebody who works with you and is making some of the decisions. You have authority to go in and course correct.
In addition to saving time, what are the advantages of using agentic AI as a personal assistant?
Because it has access to so much more information and can do the processing very quickly relative to a human being, you could have a much better set of choices made available to you. Theoretically, you end up with better outcomes.
What are the potential disadvantages?
Using agentic AI exposes many sensitive aspects of our lives. It could be credit card information, health information, information about what kind of work we’re in, where we live, what clubs we belong to.
This agent has access to many of these things, and with digital systems, the more access we give them, the more open they are to being breached.
Another downside is that the system is autonomous to some degree, and it could make a bad decision.
In the case of groceries, if it substitutes broccoli for beans, you may be annoyed, but it is not the end of the world. But in other contexts, we may not want to rely on these systems.
Of course, people who are working on these agentic AI systems are trying to get ahead of problems, but still, we have lots of issues with the quality of the data, the accessibility of the data, the lack of ethical frameworks, and so on.
Most of all, even old-fashioned AI, which constitutes some of the building blocks of agentic AI, has a problem with “hallucinations”—a cute way of describing how it sometimes makes up information—that could lead to a really crazy mistake. If that false information gets tucked into a set of decisions an agentic system makes, and the underlying logic of the system isn’t clear (as it usually isn’t to most users), you may end up booked on a Blue Origin flight to the edge of space instead of a trip to Vancouver, with a very spiffy-looking space suit pre-ordered for you. The chances are low, but it’s not impossible.
What would make agentic AI use safer?
We need guardrails. We need principles and frameworks, especially for highly sensitive issues, such as financial information or health care.
It seems that the federal regulatory system is not going to help us in the next few years. An alternative could be the states, but they would need to coordinate to have the same checks and balances, and Alabama is politically very different from Massachusetts.
There are three other levers that we should turn to.
One is technology itself, because the companies that are producing AI realize that if their systems are untrustworthy or they’re dangerous, people won’t adopt them.
Another pathway has to do with institutions, such as universities, other nonprofits, and training programs, that can develop better practices and cultures to create safeguards among users.
And the third is markets. Markets reward good behavior, responsible behavior, and users feeling positively disposed toward the machinery. So that creates an incentive system for the companies themselves to invest in these things even without regulatory oversight.
Citation: AI personal assistants could buy your groceries and book your plane tickets (2025, July 8), retrieved 8 July 2025.