
Every time you open an app, visit the doctor, or make an online purchase, you’re generating data. That data feeds the artificial intelligence (AI) systems that help businesses improve services, doctors detect diseases faster, and governments make informed decisions.
But as AI becomes more powerful and reliant on personal information, concerns about how our data is being used—and whether it’s being kept safe—are growing louder. At the heart of this tension is a critical question: can we continue to benefit from smarter technology without giving up our privacy?
Sonakshi Garg, a doctoral student at Umeå University, believes the answer is yes. In her thesis, “Bridging AI and Privacy: Solutions for High-Dimensional Data and Foundation Models,” Garg presents a set of strategies aimed at ensuring AI can be both intelligent and respectful of personal data. She frames the tension as a “privacy paradox”: must we choose between strong AI and strong privacy?
“We no longer have to choose one or the other; we can have both,” she argues.
To solve this issue, Garg uses manifold learning to simplify high-dimensional data while maintaining its meaningful structure. “Imagine unfolding a crumpled map without losing the roads and landmarks—this is what manifold learning does for complicated datasets,” says Garg.
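The article does not spell out which manifold learning algorithm the thesis uses, but a standard off-the-shelf technique such as Isomap illustrates the “unfolding” idea. In the sketch below, the swiss-roll dataset, neighbor count, and target dimension are illustrative choices only, not details from Garg’s work:

```python
# A minimal sketch of manifold learning with scikit-learn's Isomap.
# This is a generic illustration, not Garg's specific method.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# High-dimensional points that actually lie on a 2D surface
# (the "crumpled map" in Garg's analogy)
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# "Unfold" the 3D roll into 2 dimensions while preserving distances
# along the surface (the roads and landmarks)
embedding = Isomap(n_neighbors=10, n_components=2)
X_2d = embedding.fit_transform(X)

print(X.shape, "->", X_2d.shape)  # (1000, 3) -> (1000, 2)
```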
She also introduces a hybrid privacy model that combines the strengths of two existing approaches, allowing users to better control how much information is protected while preserving more of the data’s usefulness. “It creates highly realistic ‘fake’ data that behaves like the real thing but doesn’t reveal any actual person’s identity. This means researchers and developers can safely train AI systems without needing to access sensitive data,” Garg argues.
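Garg’s hybrid model and generator are not detailed in the article. As a loose illustration of the synthetic-data idea only, the sketch below fits a simple multivariate Gaussian to a toy set of “sensitive” records and samples look-alike rows from it; the column names and numbers are invented for the example:

```python
# Toy synthetic-data sketch: fit a multivariate Gaussian to real records
# and sample look-alike "fake" records from it. Real privacy-preserving
# generators (and Garg's hybrid model) are far more sophisticated; this
# only illustrates training on data that mimics the original
# distribution without copying any individual row.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are sensitive records: (age, income, clinic visits)
real = rng.multivariate_normal(
    mean=[45, 52_000, 4],
    cov=[[90, 1_000, 2], [1_000, 4e7, 50], [2, 50, 3]],
    size=500,
)

# "Fit" the model: just the empirical mean and covariance here
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# Sample synthetic records that share the statistics but match no one
synthetic = rng.multivariate_normal(mu, cov, size=500)

print(synthetic[:3].round(1))
```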
Finally, she addresses the privacy risks posed by large AI models like GPT and BERT, which can accidentally “memorize” private information. Her method compresses these models to make them smaller and more efficient while adding layers of privacy protection—allowing them to run securely even on personal devices like smartphones. Most importantly, Garg’s research empowers everyday people.
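Neither the compression method nor the privacy layer is specified in the article. The schematic below combines two generic stand-ins, magnitude pruning for compression and Gaussian weight perturbation in the spirit of differential privacy, purely to show how the two ingredients fit together:

```python
# Sketch of the two ingredients named above: compress a weight matrix by
# magnitude pruning, then add calibrated Gaussian noise as a privacy
# layer. Garg's actual pipeline is not described here; this is schematic.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(768, 768))  # stand-in for one layer of a large model

# 1. Compression: zero out the 90% of weights with smallest magnitude
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Privacy layer: perturb the surviving weights with Gaussian noise,
#    in the spirit of the Gaussian mechanism from differential privacy
sigma = 0.01
W_private = W_pruned + rng.normal(scale=sigma, size=W.shape) * (W_pruned != 0)

kept = np.count_nonzero(W_pruned) / W.size
print(f"weights kept: {kept:.0%}")  # ~10%: much smaller if stored sparsely
```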
“It proves that it’s possible to benefit from personalized services and smart systems without giving up control over your personal life. Privacy isn’t an obstacle to progress—it’s a foundation for building better, more trustworthy AI.”
As technology becomes increasingly integrated into our lives, Sonakshi Garg’s research provides a much-needed blueprint for a future where AI and privacy can thrive side by side.
“My research is a bold and timely reminder that smart innovation should never come at the expense of human dignity, and with the right tools, it doesn’t have to,” says Garg.
This thesis addresses the growing tension between the power of AI and the need to protect personal privacy in an age of high-dimensional data. It identifies the weaknesses of existing privacy methods like k-anonymity and differential privacy when used on high-dimensional datasets and proposes improved solutions using manifold learning, synthetic data generation, and privacy-preserving model compression.
The research introduces advanced, scalable frameworks that enhance both data utility and privacy. Overall, the thesis offers a well-rounded approach to building ethical, privacy-aware AI systems that are practical for real-world applications.
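For readers unfamiliar with the differential privacy named in the summary, its core mechanism can be illustrated in a few lines: release a query answer with random noise scaled so that any single person’s record barely shifts the output. The epsilon value and toy data below are arbitrary choices for the example:

```python
# Minimal Laplace-mechanism sketch for differential privacy: release a
# count with noise scaled to sensitivity/epsilon, so any one person's
# presence changes the answer's distribution only slightly.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, epsilon=1.0):
    """Return a noisy count; adding or removing one record shifts the
    true count by at most 1, so the sensitivity is 1."""
    sensitivity = 1.0
    return len(values) + rng.laplace(scale=sensitivity / epsilon)

patients_with_condition = [1] * 42  # toy dataset: 42 matching records
print(round(dp_count(patients_with_condition, epsilon=0.5)))
```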
More information:
Thesis: Bridging AI and Privacy: Solutions for High-Dimensional Data and Foundation Models