Indian Researchers Develop Technology to Safeguard Identities in AI Photo Editing
A team of researchers from India has developed a patent-pending technology designed to prevent identity leaks during AI photo editing.
The project was led by Dipesh Tamboli, Vaneet Aggarwal, and Vineet Punyamoorty of Purdue University, who designed the core architecture.
They collaborated with technical associate Atharv Pawar from the University of Michigan on a research framework that protects personal images before they are submitted to third-party AI services.
The team discussed their project in a conversation with HindustanTimes.com. Dipesh recounted that the idea arose from a pivotal moment in early 2025, when AI-generated “Ghibli-style” filters became a viral sensation.
These Ghibli-style portraits captured online attention as users clamored for AI to transform their images into an enchanting anime aesthetic.
While many enthusiasts embraced OpenAI’s new feature, critics condemned the trend as “disrespectful” to Studio Ghibli co-founder Hayao Miyazaki. Miyazaki had previously denounced AI-generated animation as “an insult to life itself” and said he would never integrate such technology into his creative work.
Origin and Unique Attributes of the Technology
“Millions of users were uploading personal photographs to be turned into cartoons, just as governments, including the Indian government, were issuing advisories about the dangers of submitting biometric data to external servers,” Dipesh told HindustanTimes.com.
“This creates a significant ‘privacy tax’: access to these creative tools comes at the cost of your face. I realized that once users upload high-resolution biometric data, they lose all control over it.
“That realization sparked the question: how can we achieve remarkable AI results while preserving privacy? That question led to PrivateEdit.”
With the technology still new, a natural question is what sets it apart from existing privacy tools. Dipesh explained.
“Current privacy solutions are predominantly ‘reactive’: they try to fix things after your data has already been transmitted. PrivateEdit, by contrast, embodies ‘Privacy by Design.’ We have devised a way to ‘decouple’ your identity from the image itself.
“Our technology is novel in that it works with existing AI models, such as Midjourney or ChatGPT, without requiring any modifications to them.
“We have also introduced a ‘Trust Slider’ that puts users in control: each person can decide how much information to conceal, depending on how much they trust a given platform. This personalized safeguard is unprecedented,” he explained.
Mechanism of the Technology
Vineet, a doctoral candidate in computer and electrical engineering, explained how the system works.
“We built a pipeline that acts as a ‘secure filter’ between the user and the AI. Instead of sending raw images to the cloud, our solution first processes photos locally on the user’s device,” he told HindustanTimes.com.
“Using sophisticated segmentation, it identifies the ‘identity-sensitive’ attributes of your face, the unique markers that constitute your identity, and overlays a digital mask. We then transmit only the ‘background’ alongside the masked version to the AI.
“After the requested edits are made, the edited photo returns to your device, where your authentic facial details are restored. The AI completes its task without ever seeing your true likeness.”
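The round trip Vineet describes, mask locally, send only the masked image, then restore the face once the edit comes back, can be sketched in a few lines of Python. This is a toy illustration under loud assumptions: a rectangular face box and a flat fill stand in for the team's segmentation and masking, the "cloud edit" is a no-op, and every function name here is invented for the sketch rather than taken from PrivateEdit:

```python
import numpy as np

def mask_identity_region(image, face_box):
    """Hypothetical on-device step: replace the identity-sensitive region
    with a neutral fill before anything leaves the device.
    `face_box` is (top, left, height, width)."""
    top, left, h, w = face_box
    region = image[top:top + h, left:left + w].copy()  # kept locally, never uploaded
    masked = image.copy()
    masked[top:top + h, left:left + w] = region.mean()  # flat fill stands in for a real mask
    return masked, region

def restore_identity(edited, original_region, face_box):
    """Hypothetical on-device step: paste the locally kept facial pixels
    back into the AI-edited result."""
    top, left, h, w = face_box
    restored = edited.copy()
    restored[top:top + h, left:left + w] = original_region
    return restored

# Demo on a synthetic 64x64 grayscale "photo".
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, (64, 64), dtype=np.uint8)
box = (16, 16, 24, 24)

masked, kept = mask_identity_region(photo, box)
edited = masked  # stand-in for the remote AI edit; only `masked` ever leaves the device
final = restore_identity(edited, kept, box)
```

Because the stand-in "edit" changes nothing, `final` matches the original photo exactly; in a real pipeline only the unmasked background would change.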
A natural concern is whether the technology is usable by non-experts or only by tech-savvy users. Dipesh emphasized the team’s commitment to an intuitive interface.
“The goal is for this to work like an ordinary photo editing app. Users do not need to understand AI or ‘segmentation’; they simply move a slider to choose their privacy level, and the app handles the intricate ‘masking’ and ‘reconstruction’ quietly in the background. Protecting your privacy should not be a burden; it should feel as easy as applying a filter,” he said.
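As a rough illustration of how a “Trust Slider” might work, the slider reading could map to a fraction of identity-sensitive features to conceal. The linear rule below is purely an assumption for this sketch; the article does not specify PrivateEdit's actual mapping, and the function name is invented:

```python
def mask_fraction(trust: int) -> float:
    """Map a 0-100 Trust Slider reading to the fraction of
    identity-sensitive features to conceal. Hypothetical linear rule:
    the more you trust the platform, the less is masked."""
    if not 0 <= trust <= 100:
        raise ValueError("trust must be between 0 and 100")
    return 1.0 - trust / 100.0

# Per-platform settings a user might pick, per Dipesh's description.
settings = {"unknown-app": 0, "somewhat-trusted": 60, "own-device": 100}
fractions = {name: mask_fraction(t) for name, t in settings.items()}
```

An untrusted platform gets a fully masked face; a fully trusted one gets the photo untouched.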
Significant Privacy Threats Associated with AI Editing Tools
Vaneet, a University Faculty Scholar and the Reilly Professor of Industrial Engineering with affiliations in the Departments of Computer Science and Electrical and Computer Engineering, outlined the most pressing risks: data persistence and function creep.
“Many users mistakenly believe their images are deleted once the filter has been applied. In reality, this data often becomes an enduring digital footprint, used for surveillance, profiling, or training future models without explicit authorization.
“Today, personal biometric identities are commodified. Moving toward ‘Privacy-by-Design’ frameworks such as ours is essential to ensure the AI revolution does not encroach on fundamental human liberties,” Vaneet said, crediting the research group’s collective effort, which included Dipesh and Vineet.
Atharv, the technical collaborator, explained the risks the Purdue researchers’ approach mitigates.
“When users upload unprocessed images, those images risk indefinite storage, exposure through server breaches, or misuse in training ‘deepfakes’ without the user’s consent. With our masking solution, sensitive data is never transmitted to cloud infrastructure.
“This technology also benefits organizations: it lets them offer AI photo editing without the heavy legal and ethical responsibilities of storing the private facial data of large numbers of people,” he explained.
Atharv suggested the technology holds promise for major companies such as Adobe, Apple, or Google, describing that as “the ideal future for this technology.”
“Because our pipeline is compatible with existing AI models, it can slot into current applications as a ‘Privacy Layer.’
“That would let large technology firms deliver exceptional generative features while assuring users: ‘We never actually see your raw photos.’ It is a win for both corporate reputation and user security,” he said.
Implications of This Research for Future AI Regulations and Legislation
Vaneet noted that governments worldwide are struggling to regulate AI technologies.
“Most legislative frameworks concentrate on what happens after data has been collected. Our innovation provides a technical approach to ‘data minimization’, a cornerstone of privacy laws such as GDPR.
By demonstrating that we can attain high-quality outcomes without accumulating sensitive data from the outset, we are outlining a paradigm for future AI regulatory frameworks,” he explained.
Vineet acknowledged the primary challenge of masking techniques: the risk that the final AI-edited image looks inauthentic or loses quality.
Over-masking can lead to context loss, resulting in distorted images, while insufficient masking compromises privacy.
“We developed a ‘smart blending’ technique that gives the AI enough context about the lighting and shadows in the scene without exposing actual biometric identifiers.
“The result is a high-quality, professional image in which the ‘seams’ between your true face and the AI’s edits are imperceptible,” he elaborated.
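The trade-off Vineet describes, keeping enough lighting context while leaving no visible seam, is closely related to alpha feathering in image compositing. The sketch below shows a generic linear feather as a stand-in; it is not the team's actual “smart blending” method, and the function name is invented:

```python
import numpy as np

def feathered_paste(ai_patch, local_face, feather=4):
    """Paste the locally kept face over the AI-edited patch using a soft
    alpha ramp at the border, so lighting transitions stay smooth and no
    hard seam appears where the mask ended. Grayscale arrays for brevity."""
    h, w = local_face.shape
    alpha = np.ones((h, w))                    # 1 = keep the real face
    ramp = np.linspace(0.0, 1.0, feather)      # 0 at the border, 1 inside
    alpha[:feather, :] *= ramp[:, None]        # fade in from the top edge
    alpha[-feather:, :] *= ramp[::-1, None]    # fade out toward the bottom
    alpha[:, :feather] *= ramp[None, :]        # left edge
    alpha[:, -feather:] *= ramp[None, ::-1]    # right edge
    return alpha * local_face + (1.0 - alpha) * ai_patch

# Flat test patches: the centre keeps the real face, the border blends
# into whatever the AI produced around it.
face = np.full((16, 16), 200.0)   # locally kept facial pixels
ai = np.full((16, 16), 50.0)      # what the AI returned for that region
blended = feathered_paste(ai, face)
```

The design intuition matches the quote: the border rows carry the AI's lighting, the interior carries the real face, and the ramp between them hides the seam.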
In conclusion, Dipesh emphasized a pivotal reminder for users engaging with AI tools moving forward: “Innovation should not necessitate compromising your identity.”

Historically, we’ve been conditioned to believe that optimal technology demands data relinquishment. Our research validates that this is a fallacy. One can harness the power of cutting-edge AI while safeguarding their privacy.
The dichotomy between creativity and safety should never exist, he concluded, stating that the forthcoming significant milestone will be Verifiable Data Sovereignty.
“It is not enough for a company to merely pledge not to use your data; we need technical systems that let users mathematically verify that their data is used only for its stated purpose and promptly discarded.
“Combining this with on-device processing will be crucial to building an AI landscape where innovation and personal security coexist harmoniously,” Dipesh said.
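Dipesh's “Verifiable Data Sovereignty” goal can be illustrated, in a deliberately simplified form, as a deletion receipt the user can check: the service signs a hash of exactly the bytes it received. This toy uses a shared HMAC key and invented function names purely for illustration; a real system would need asymmetric signatures, independent audits, or trusted hardware:

```python
import hashlib
import hmac

SERVICE_KEY = b"demo-key"  # toy shared secret; real systems would use public-key signatures

def deletion_receipt(payload: bytes) -> dict:
    """Service side: acknowledge deletion of `payload` by signing its hash."""
    digest = hashlib.sha256(payload).hexdigest()
    tag = hmac.new(SERVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify_receipt(payload: bytes, receipt: dict) -> bool:
    """User side: confirm the receipt covers exactly the bytes that were sent."""
    digest = hashlib.sha256(payload).hexdigest()
    expected = hmac.new(SERVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return receipt["sha256"] == digest and hmac.compare_digest(
        receipt["signature"], expected
    )

photo_bytes = b"masked-image-bytes"   # stand-in for the uploaded masked image
receipt = deletion_receipt(photo_bytes)
```

A receipt over different bytes fails verification, which is the kernel of the “mathematically verify” idea: the user does not have to take the service's word for which data was covered.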
Source link: Hindustantimes.com.