The Privacy Peril of AI: How ChatGPT, Gemini, and DeepSeek Raise Privacy Concerns for Australian Users' Data

Kamal Hossain: Artificial Intelligence (AI) is transforming the way we interact with technology, making our lives more efficient and interconnected. AI-powered tools like ChatGPT, Gemini, and DeepSeek are at the forefront of this digital revolution, offering advanced conversational capabilities, personalized assistance, and content generation. Behind the convenience, however, lies a critical issue: privacy. For Australians, these AI tools pose significant risks, because their data collection practices, lack of transparency, and potential for misuse raise serious concerns about online privacy and security.

Chatbots and AI-driven systems are becoming increasingly sophisticated. They can draft emails, generate essays, assist with research, and even provide mental health support. These capabilities make them invaluable in both professional and personal settings. However, the trade-off for such convenience is often the surrender of personal data, sometimes unknowingly.

ChatGPT, developed by OpenAI, is a popular AI model known for its ability to generate human-like text responses. OpenAI claims that ChatGPT does not store user conversations permanently, but details on temporary data retention remain vague. Users engaging with ChatGPT may unwittingly provide data that can be analyzed for model improvement, raising questions about data security and potential misuse.
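To make the risk concrete, consider how a typical exchange actually travels. With a chatbot accessed through an API, the full conversation history is transmitted to the provider's servers on every turn, so anything a user types leaves their device immediately. The minimal sketch below assumes OpenAI's publicly documented Python client (the openai package); the model name is illustrative, and what happens to the payload after it arrives is governed by the provider's retention policy, not visible in the code.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The entire message history is sent to the provider's servers on every
# request; nothing typed here stays local to the user's machine.
history = [
    {"role": "user",
     "content": "Draft a sick-leave email to my manager, Jane Smith, at Acme Pty Ltd."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=history,
)

# Keeping context for the next turn means re-sending everything again later.
history.append({"role": "assistant",
                "content": response.choices[0].message.content})
print(response.choices[0].message.content)
```

Note that this single prompt already discloses a colleague's name and an employer: exactly the kind of incidental detail that vague retention policies make worrying.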

Google’s Gemini, formerly known as Bard, is an AI chatbot integrated within Google’s vast ecosystem, drawing from data across Search, YouTube, Gmail, and other Google services. While this allows for highly personalized interactions, it also creates an enormous privacy risk. Google has a history of utilizing user data for targeted advertising, meaning that Gemini’s responses could be used to refine ad profiles, further blurring the line between AI assistance and corporate surveillance.

DeepSeek, a rising AI tool developed in China, has gained attention as an alternative to Western AI models. However, concerns over China’s strict data regulations and government oversight raise red flags about data privacy. Unlike OpenAI and Google, Chinese companies operate under national security laws that may allow government access to user data. This poses potential surveillance risks, especially for Australians who engage with such AI models.

With AI tools processing millions of interactions daily, Australian users are particularly vulnerable to data exploitation. One major privacy concern is data collection and storage. AI models require vast datasets to function effectively. However, the extent to which data is stored, analyzed, and repurposed is often unclear. Companies behind AI models may claim to anonymise data, but metadata and behavioral patterns can still be used to track user activity.
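The weakness of naive anonymisation is easy to demonstrate. The sketch below, built on a small synthetic dataset (all records and field names are invented for illustration), counts how many records share each combination of quasi-identifiers such as postcode, birth year, and gender. When that count is one, the "anonymous" record corresponds to exactly one person and can be re-identified by linking it with any public dataset carrying the same fields.

```python
from collections import Counter

# Synthetic "anonymised" records: names removed, quasi-identifiers kept.
records = [
    {"postcode": "2000", "birth_year": 1985, "gender": "F"},
    {"postcode": "2000", "birth_year": 1985, "gender": "M"},
    {"postcode": "3121", "birth_year": 1992, "gender": "F"},
    {"postcode": "3121", "birth_year": 1992, "gender": "F"},
    {"postcode": "6005", "birth_year": 1978, "gender": "M"},
]

def anonymity_set_sizes(rows, keys):
    """Count how many records share each combination of quasi-identifiers."""
    return Counter(tuple(row[k] for k in keys) for row in rows)

sizes = anonymity_set_sizes(records, ("postcode", "birth_year", "gender"))
for combo, count in sizes.items():
    status = "UNIQUE - re-identifiable" if count == 1 else f"hidden among {count}"
    print(combo, "->", status)
```

Three of the four combinations above are unique. The same arithmetic applies to behavioral metadata such as topics of interest and usage times, which is why stripping names alone does not anonymise a dataset.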

Most AI developers maintain proprietary models, offering limited disclosure on how data is used. While companies like OpenAI and Google provide some level of transparency, their policies remain ambiguous on data retention, external sharing, and third-party access. This lack of clarity makes it difficult for users to make informed decisions regarding their digital privacy.

AI chatbots have the capability to analyze user inputs and infer behavioral patterns, preferences, and even identities. For example, Google’s Gemini could cross-reference AI interactions with existing user profiles, creating more refined ad-targeting strategies. DeepSeek, operating under Chinese data policies, could theoretically allow government agencies to access user information, raising concerns over international surveillance.
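A crude version of this kind of inference takes only a few lines. The sketch below (the keyword lists and category names are invented for illustration; real systems use trained classifiers rather than keyword matching) scans a user's prompts and accumulates an interest profile of the kind an ad-targeting pipeline could consume.

```python
import re
from collections import Counter

# Invented keyword-to-interest mapping; real profilers use ML classifiers,
# but the principle - prompts reveal preferences - is the same.
INTEREST_KEYWORDS = {
    "finance": {"mortgage", "loan", "superannuation", "shares"},
    "health": {"symptoms", "anxiety", "diet", "insomnia"},
    "travel": {"flights", "hotel", "visa", "itinerary"},
}

def profile_user(prompts):
    """Build an interest profile by counting topic keywords across prompts."""
    profile = Counter()
    for prompt in prompts:
        words = set(re.findall(r"[a-z]+", prompt.lower()))
        for interest, keywords in INTEREST_KEYWORDS.items():
            profile[interest] += len(words & keywords)
    return profile

prompts = [
    "Compare mortgage rates and help me consolidate a personal loan",
    "What are common symptoms of anxiety and insomnia?",
    "Find cheap flights and a hotel in Bali for my itinerary",
]
print(profile_user(prompts).most_common())
# -> [('health', 3), ('travel', 3), ('finance', 2)]
```

Even three casual prompts yield a ranked profile of financial, health, and travel interests, which is precisely the raw material that ad-targeting systems refine.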

Australia has strong privacy laws, including the Privacy Act 1988 and the Notifiable Data Breaches scheme, designed to protect users from unauthorized data collection and breaches. However, enforcing these laws against international AI providers remains a challenge. Many AI companies operate outside Australian jurisdiction, making it difficult to ensure compliance with local data protection standards.

While AI privacy risks cannot be eliminated entirely, proactive measures can help safeguard user data. Users can be selective with data sharing by avoiding entering sensitive personal information into AI chatbots; a simple pre-screening filter, sketched below, can automate part of this. Privacy-enhancing tools such as VPNs, privacy-focused browsers, and ad blockers can also minimize data tracking. Finally, users should read AI privacy policies and understand the terms and conditions of AI tools before using them.
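As a concrete example of being selective with data sharing, prompts can be passed through a redaction filter before they ever reach a chatbot. The sketch below uses regular expressions to mask obvious identifiers such as email addresses and Australian-style mobile numbers; the patterns are deliberately simple illustrations, not a complete PII detector.

```python
import re

# Illustrative patterns only; robust PII detection needs more than regexes
# (names, street addresses, account numbers, and context all matter).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"(?:\+61|0)4(?:[ -]?\d){8}"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Mask likely identifiers before a prompt is sent to any AI service."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Email jane.doe@example.com or call me on 0412 345 678 "
          "about the account issue.")
print(redact(prompt))
# -> Email [EMAIL] or call me on [PHONE] about the account issue.
```

Organisations can apply the same idea as a gateway in front of an approved chatbot, so staff prompts are screened centrally rather than relying on individual discipline.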

Policymakers, for their part, should strengthen AI regulations by implementing stricter laws that hold AI companies accountable for data misuse. Government can also enhance transparency requirements by mandating that companies disclose how AI tools collect, store, and use data. Both efforts will require international cooperation, working with global regulatory bodies to enforce data protection standards across borders, since many providers operate offshore.

AI tools like ChatGPT, Gemini, and DeepSeek are undeniably shaping the future of digital communication. Yet, as their influence grows, so do the risks associated with user privacy. Australians must be cautious about how they engage with these tools, understanding that AI’s convenience comes at the potential cost of personal data exposure. Without stronger regulatory safeguards and heightened public awareness, AI-powered platforms may evolve from helpful assistants to intrusive monitors. It is up to users, policymakers, and AI developers to strike the right balance between innovation and privacy protection, ensuring that Australians retain control over their digital identities in the AI age.

Author: Kamal Hossain is an Assistant Professor at the College of Business Administration (CBA), International University of Business Agriculture and Technology (IUBAT), and an enthusiast of technology, privacy, and cybersecurity.
