Chatbots, from customer service agents to AI assistants like ChatGPT and Gemini, have become part of our daily lives.
You might worry that everything you type is stored and used to train these models. Thankfully, that's not the case, at least not with the major players.
OpenAI and Google clearly state that they don't use user inputs to train their chatbots.
Highly unlikely, but not impossible, and it's why I might think twice next time.
And then there are the bigger risks.
Not all chatbots follow the same data practices. Other bots, like DeepSeek, do train directly on user data. And while your unfinished novel probably won't raise eyebrows, threats or dangerous language might.
If the chatbot's data is breached, hackers could steal your identity.
If you're using a chatbot to polish your resume, leave personal contact details out and add them back in at the end.
Users may unknowingly include sensitive financial details when asking a chatbot to summarize a document from their credit card company or bank.
Instead, use similar scenarios without revealing personal info.
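If you handle this kind of text often, you can automate part of the scrubbing. Below is a minimal sketch (my own illustration, not tied to any particular chatbot or vendor) that redacts a few obvious sensitive patterns before you paste text into a chat. The patterns are illustrative only; a real redaction tool would need far broader coverage.

```python
import re

# Illustrative patterns for common sensitive data. These are a
# starting point, not an exhaustive or production-grade list.
PATTERNS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # likely card numbers
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Card 4111 1111 1111 1111, SSN 123-45-6789, jane@example.com"))
```

Even with a helper like this, it's safest to skim what you're about to paste; regexes miss plenty, and no filter substitutes for simply leaving personal details out of the prompt.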
Legitimate services will never ask for this information via chat.
For example, security-question answers like your mother's maiden name or a childhood pet should never enter the chat.
Even as a joke ("How do I hide a body?"), some AI systems flag and report such content to authorities.
Don't enter medication prescriptions or upload medical charts.
Instead, try a prompt such as "What types of exercises build muscle for an anemic woman aged 25-30?"
Be general about yourself within the prompt.
Final thoughts
AI chatbots are incredible tools, but they're not journals, therapists, or secure vaults. While companies like Google and OpenAI have guardrails, it's still wise to be selective about what you share. Understanding how different bots scrape for data and handle inputs is the first step in protecting your privacy.