This essay is based on an interview with Harsh Varshney, 31, a Google employee based in New York. It has been edited for length and clarity.
AI has quickly become part of our everyday lives, and I find it hard to imagine getting by without AI tools.
They help me every day with in-depth research, coding, note-taking, and online searches.
However, my role has made me more aware of the privacy issues around using AI. When I joined Google in 2023, I started out as a software engineer on the privacy team, building infrastructure to safeguard user data. I'm now on the Chrome AI security team, where I work on defending Google Chrome against threats such as hackers and people who exploit AI agents for phishing.
AI models generate responses from the data they're given, so it's crucial for users to safeguard their private information and keep it out of the hands of malicious actors such as cybercriminals and data brokers.
I’ve adopted four practices I consider vital for protecting my data when using AI.
Treat AI Like a Public Postcard
A false sense of familiarity with AI can lead people to divulge personal information they normally wouldn't share. Even though some AI companies have teams working to strengthen privacy, it's unwise to give AI chatbots sensitive details like credit card numbers or personal medical history.
Information shared with public AI chatbots may be used to train future models, which creates risks such as "training leakage," where a model recalls personal information from one user's conversations and inadvertently reveals it to another. Data breaches could also expose whatever you've shared.
I treat AI chatbots like public postcards. If I wouldn't write something on a postcard that anyone could read, I don't share it with a public AI tool, because I can't be sure how that data will be used down the line.
Understand the ‘Room’ You’re In
It's essential to know whether you're using a public AI tool or an enterprise-grade model. It isn't always clear whether public AI models train on your conversations, but companies can pay for "enterprise" models that are specifically designed not to train on user conversations, which lets employees discuss work-related matters more safely.
Think of it as the difference between talking in a noisy coffee shop and holding a confidential meeting in your office. There have been reports of employees inadvertently leaking company information to models like ChatGPT. For proprietary projects or patents, it's best to avoid discussing them with public chatbots at all, because that information could leak.
I refrain from discussing my Google projects with public chatbots and use an enterprise model for even minor tasks. This approach allows me to share information more securely, though I still limit personal data disclosure.
Regularly Delete Your History
AI chatbots typically keep a history of your interactions, so I recommend regularly deleting that history on both enterprise and public models. It's a worthwhile precaution even when you're confident you haven't shared anything sensitive.
I was startled when an enterprise Gemini chatbot accurately gave me my home address, which I didn't remember sharing. It turned out I had previously asked it to help refine an email that contained my address, and its long-term memory features let it recall that detail from an earlier conversation.
Use Well-Known AI Tools
It's best to stick with well-known AI tools, which are more likely to have established privacy safeguards in place. Besides Google's own offerings, I like using OpenAI's ChatGPT and Anthropic's Claude.
It's also worth reviewing the privacy policies of the tools you use; these often explain how your data contributes to model training. In the privacy settings, look for options to keep your conversations from being used for training.
AI technology holds remarkable potential, but we have to prioritize keeping our data and identities safe as we use it.
If you have a unique experience using AI at work, reach out to this reporter at [email protected].

