Millions of Australians now use ChatGPT for everything from writing emails to debugging code to helping their kids with homework. But as it becomes part of daily life, a very reasonable question keeps coming up: is it actually safe?
The short answer is that ChatGPT itself is not dangerous — it won't hack your phone or steal your banking details. But how you use it, what you tell it, and what you download thinking it's ChatGPT: that's where the real risks live.
This guide breaks down exactly what ChatGPT knows about you, what it doesn't, and where the genuine dangers are.
What ChatGPT Actually Stores About You
When you create an OpenAI account and use ChatGPT, here's what they collect and retain:
Account Information
Your email address, name, and phone number (if provided for verification). If you're a ChatGPT Plus subscriber, your payment information is processed through Stripe — OpenAI stores billing details but not your full card number.
Conversation History
Every conversation you've had with ChatGPT is stored on OpenAI's servers, linked to your account. This includes every question you've asked and every response it gave. You can view and delete these in your settings.
Usage Data
Device type, browser, IP address, approximate location, and interaction patterns. This is similar to what most web services collect and is used for security, abuse prevention, and analytics.
Files and Uploads
Any images, documents, or files you upload to ChatGPT are stored temporarily for processing. Files shared in conversations may be retained as part of your conversation history until you delete them.
What ChatGPT Does NOT Do
There's a lot of misinformation floating around social media about what ChatGPT can access. To be clear, ChatGPT cannot:
- Read files, photos, or messages stored on your device
- Access your camera, microphone, or other apps
- Log into your email, banking, or social media accounts
- Track your precise location (it only sees the approximate location inferred from your IP address)
ChatGPT only knows what you tell it. It has no special ability to reach into your device, accounts, or personal life. The privacy risk is entirely about what information you choose to share in your conversations.
The Real Privacy Risks
The genuine concerns aren't about ChatGPT spying on you. They're about the data you voluntarily hand over without thinking about it.
Things You Type Into Conversations
This is the big one. People paste passwords, API keys, confidential business documents, medical records, legal correspondence, and deeply personal information into ChatGPT without thinking about where that data goes. Once you type it, it's on OpenAI's servers.
Real example: Samsung banned employees from using ChatGPT after engineers pasted proprietary source code and internal meeting notes into conversations. That data became part of OpenAI's systems.
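One practical habit is to scrub text before pasting it into any AI chatbot. The sketch below is a minimal, illustrative example (the patterns are assumptions, not a complete secret detector) of redacting email addresses, API-key-like strings, and card-number-like digit runs:

```python
import re

# Hypothetical patterns for common secrets; extend these to suit your own data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

safe = scrub("Contact me at jo@example.com, key sk-abcdefghijklmnop1234")
```

A simple pass like this won't catch everything (names, medical details, or business context slip straight through), but it removes the most obviously machine-readable secrets before they ever reach a third-party server.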
Training Data Usage
By default, OpenAI may use your conversations to train and improve future AI models. This means something you said in a private conversation could theoretically influence how the model responds to other users. You can opt out of this, but training is switched on by default for consumer accounts, so you have to disable it yourself.
Data Breaches
In March 2023, a bug in ChatGPT briefly exposed some users' conversation titles and payment information to other users. OpenAI patched it quickly, but it demonstrated that no platform is immune to security incidents. The more sensitive data you store in conversations, the higher the impact if a breach occurs.
Third-Party Plugins and GPTs
Custom GPTs and plugins created by third parties can request access to your conversation data. Some may send your inputs to external servers. Always check what permissions a custom GPT requires before using it, and avoid entering sensitive information in third-party GPTs.
How to Use ChatGPT Safely
You don't need to stop using ChatGPT — you just need to treat it like any other cloud service. Here's how:
- Never paste passwords, API keys, financial details, or confidential documents into a conversation
- Turn off model training in Settings > Data Controls if you don't want your chats used to improve the model
- Delete conversations that contain anything sensitive
- Stick to the official website and apps, and be cautious with third-party GPTs and plugins
Fake ChatGPT Apps — The Real Danger
Here's the irony: the biggest safety risk related to ChatGPT isn't ChatGPT itself — it's the flood of fake apps and websites pretending to be ChatGPT.
The real ChatGPT is only available at chat.openai.com, through the official OpenAI app on iOS and Android, or via the OpenAI API. Anything else claiming to be ChatGPT is either unofficial or outright malware.
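Before clicking a "ChatGPT" link, it's worth checking that the hostname actually belongs to OpenAI. This sketch uses an allow-list based on the domains named above; the list is an illustrative assumption, not an exhaustive registry of OpenAI's web presence:

```python
from urllib.parse import urlparse

# Official hosts as named in this guide; treat anything else as suspect.
OFFICIAL_HOSTS = {"chat.openai.com", "openai.com"}

def is_official(url: str) -> bool:
    """Return True only if the link's hostname is an official OpenAI host."""
    host = urlparse(url).hostname or ""
    return host in OFFICIAL_HOSTS or host.endswith(".openai.com")
```

Note that the check inspects the parsed hostname, not the raw string: a scam URL like `https://evil.example/chat.openai.com` fails, and so does a lookalike domain like `fakeopenai.com`, because neither actually ends in `.openai.com`.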
Since ChatGPT exploded in popularity, scammers have been capitalising on the hype with fake mobile apps, paid "premium" clones of the free service, malicious browser extensions that steal your data, and lookalike websites that harvest logins or push malware.
If you've installed a suspicious app or extension that claimed to be ChatGPT, your device may already be compromised. Get it professionally checked before continuing to use it for banking or sensitive accounts.
ChatGPT-Powered Scams
Beyond fake apps, scammers are also using ChatGPT and similar AI tools to supercharge their operations:
AI-Written Phishing Emails
Scammers use ChatGPT to write flawless phishing emails that mimic your bank, the ATO, Medicare, or your internet provider. No more broken English or obvious formatting errors — these emails can be virtually indistinguishable from legitimate correspondence, and scammers can generate hundreds of personalised variations in minutes.
Fake Customer Service Chatbots
Scam websites now deploy AI chatbots that sound just like real customer support agents. They patiently walk you through "verifying your account" by collecting your personal details and login credentials, and even talking you into granting remote access to your computer — all while sounding perfectly professional and helpful.
AI-Generated Romance and Investment Scams
Scammers use AI chatbots to maintain convincing romantic conversations with dozens of victims simultaneously, never forgetting a detail, always available, always saying the right thing. The same technology powers fake investment advisors that build trust before directing you to fraudulent platforms.
For a deeper look at how AI is being weaponised by scammers in Australia, read our full guide: AI Scams Targeting Australians — How They Work & How to Stay Safe.
Kids and ChatGPT — What Parents Need to Know
Children and teenagers are among the heaviest users of AI chatbots. Here's what parents should understand:
- Age requirement: OpenAI's terms require users to be at least 13 years old, and users under 18 need parental consent. There's no robust age verification in practice — any child with an email address can sign up.
- Inappropriate content: While ChatGPT has safety filters, determined users (including children) can sometimes find ways around them. The filters are not a substitute for parental oversight.
- Over-reliance for schoolwork: Many students use ChatGPT to write assignments, generate answers, and complete homework. Beyond academic integrity concerns, this can undermine genuine learning. Schools increasingly use AI detection tools.
- Personal information sharing: Children may not understand the implications of sharing personal details, school names, addresses, or family information with an AI chatbot. That data is stored on servers they have no control over.
- Emotional dependency: Some children form quasi-personal relationships with AI chatbots, treating them as friends or counsellors. While ChatGPT can be a useful tool, it is not a substitute for human connection or professional support.
Parental action: Talk to your children about what ChatGPT is (a text prediction tool, not a person), what not to share with it (personal details, school information, photos), and check their conversation history periodically. Consider using OpenAI's parental controls if available.
How to Check and Control Your ChatGPT Data
OpenAI provides tools to manage your data. Here's how to use them:
- Open ChatGPT Settings — Log into chat.openai.com and click your profile icon in the bottom-left corner, then select "Settings."
- Go to Data Controls — Click "Data Controls" in the settings menu. This is your privacy command centre.
- Disable model training — Toggle off "Improve the model for everyone." This prevents your future conversations from being used to train OpenAI's models.
- Export your data — Click "Export data" to receive a complete download of everything OpenAI has stored about you — conversations, account details, and usage data. You'll receive an email with a download link.
- Delete conversations — You can delete individual conversations from the sidebar, or go to Settings > General > "Clear all chats" to wipe everything at once.
- Delete your account entirely — If you want everything gone, go to Settings > General > "Delete account." OpenAI states that deletion requests are processed within 30 days.
OpenAI's Privacy Policy in Plain English
OpenAI's privacy policy is lengthy and written in legal language. Here's what it actually means for you:
- They collect what you give them — Account info, conversations, files you upload, and usage data. Nothing unusual for a cloud service, but the conversations make it uniquely personal.
- They can use your data for training — Unless you opt out. Training is on by default for free and Plus accounts; Team and Enterprise accounts have it disabled by default.
- They share data with service providers — Hosting providers, payment processors, analytics services. They state they don't sell your personal data to third parties.
- They may disclose data to authorities — If required by law, subpoena, or to prevent harm. This is standard for all tech companies.
- Data is stored in the United States — Your conversations are processed and stored on US servers, which means government access requests are governed by US law rather than Australian law.
- Retention is vague — OpenAI states they retain data "as long as necessary" for providing the service. Deleted conversations may persist in backups for a period before permanent deletion.
Australian Privacy Act Implications
As an Australian user, you have rights under the Privacy Act 1988 and the Australian Privacy Principles (APPs). Here's how they apply to ChatGPT:
- Right to know: You can ask OpenAI what personal information they hold about you (APP 12). Their data export tool satisfies this, but you can also make a formal request.
- Right to correction: You can request corrections to inaccurate personal information (APP 13).
- Cross-border data transfer: OpenAI transfers your data to the US. Under APP 8, they remain accountable for how your data is handled overseas.
- Complaints: If you believe OpenAI has mishandled your data, you can lodge a complaint with the Office of the Australian Information Commissioner (OAIC).
- Ongoing reform: The Australian government is actively reviewing AI regulation and privacy law reform. New rules around AI transparency and data handling are expected in coming years, which may impose stricter requirements on services like ChatGPT.
Storing your data in the US does not put it beyond Australian law. The OAIC has jurisdiction over how overseas companies handle Australian users' personal information, and OpenAI must comply with the Privacy Act when dealing with Australians' data.
The Bottom Line
ChatGPT is a useful tool, and for most people, it's perfectly safe to use — as long as you treat it like what it is: a cloud service run by an American company that stores everything you type on their servers.
The real dangers aren't ChatGPT itself. They're the fake apps and extensions that install malware on your device, the scammers using AI to write better phishing emails, and the sensitive information people voluntarily paste into conversations without thinking about where it goes.
Use it, enjoy it, but keep your passwords out of it, keep an eye on your kids' usage, and for the love of all things digital — only use the official website (chat.openai.com) or the official OpenAI apps.
Think you've installed a fake ChatGPT app?
If you've downloaded a suspicious AI app, browser extension, or clicked a dodgy link claiming to be ChatGPT — bring your device in. We'll scan for malware, remove any threats, and make sure your accounts haven't been compromised.
Book a Security Checkup