Explainer

'I'd be very careful': What you're handing over when you ask AI for help

Chinese chatbot DeepSeek excited the world this week. But what's the cost of using it?


There are concerns that AI could be used to profile and survey its users. Source: SBS News / Michael Dwyer/AP

A Chinese generative artificial intelligence (AI) application that surged in popularity this week has sparked fresh questions over the safety of such platforms.

DeepSeek's chatbot overtook OpenAI's ChatGPT to become the top-rated free app on Apple's App Store in the United States on Monday.

The app's skyrocketing popularity also triggered a series of warnings, including some from Australian government ministers, about data privacy.

"There are a lot of questions that will need to be answered in time on quality, consumer preferences, data and privacy management," Ed Husic told the ABC earlier this week.

"I would be very careful about that. These types of issues need to be weighed up carefully," the minister for industry and science said.

Clare O'Neil, who was minister for cyber security until July 2024, said Australia's national security agencies would be examining how DeepSeek works before issuing formal guidance to Australians.

In addition, authorities in several countries — including South Korea, Italy and France — have called on DeepSeek to explain how it manages the personal information of users.

What is DeepSeek?

The startup DeepSeek was founded in 2023 in Hangzhou, China, and released its first AI large language model later that year.

Its CEO Liang Wenfeng also co-founded one of China's top hedge funds, High-Flyer, which focuses on AI-driven quantitative trading.

The fund, by 2022, had amassed a cluster of 10,000 of California-based Nvidia's high-performance A100 graphics processor chips, which are used to build and run AI systems, according to a post that summer on Chinese social media platform WeChat.

Soon after, the US restricted sales of those chips to China.
DeepSeek became the top-rated free app on Apple's App Store in the United States on Monday. Credit: Christoph Dernbach/picture alliance/dpa via Getty
DeepSeek has said its recent models were built with Nvidia's lower-performing H800 chips, which are not banned in China, sending a message that the fanciest hardware might not be needed for cutting-edge AI research.

DeepSeek began attracting more attention in the AI industry last month when it released a new AI model that it boasted was on par with similar models from US companies such as ChatGPT maker OpenAI, and was more cost-effective in its use of expensive Nvidia chips to train the system on troves of data.

The chatbot became more widely accessible when it appeared on Apple and Google app stores early this year.

But it was a follow-up research paper published last week that drew the most attention. That paper described another DeepSeek AI model, called R1, which showed advanced "reasoning" skills — such as the ability to rethink its approach to a math problem — and was significantly cheaper than a similar model sold by OpenAI called o1.

What data does DeepSeek collect?

Dr Raffaele Ciriello, senior lecturer of business information systems at the University of Sydney, said generative AI is a "goldmine for profiling and potential surveillance".

"These platforms capture highly sensitive information, including health-related inquiries, legal concerns, and even users' emotional states," he told SBS News.

"Many people treat these chatbots as a personal confidant, therapist, or even romantic partner, making them a goldmine for profiling and potential surveillance."

DeepSeek's privacy policy states that it gathers personal information from users, which is subsequently stored on "secure servers" in China.

This includes your email address, phone number and date of birth, entered when creating an account. It also stores any user input including text and audio, as well as chat histories.

Information can be shared with third parties, including service providers, advertising partners, and its corporate group, and will be retained "for as long as necessary".

Dr Saeed Rehman, a senior lecturer in cybersecurity and networking at Flinders University, said data control policies in China could be "troubling".

"From a privacy and security perspective, DeepSeek's terms and conditions are similar to those of other AI providers," he said in a statement.

"[But] the fact that user data is stored on servers in China, a country known for its stringent data control policies, could be troubling for users (and governments) wary of their data privacy.

"This situation may evoke similar concerns to those raised for TikTok, where data privacy and security have been hotly debated and led to bans in some Western countries."

'False sense of security'

Ciriello raised similar concerns about the popular United States-based AI company OpenAI.

"OpenAI's privacy policy is vague about where data is stored, simply mentioning that it is kept on 'trusted service provider systems'. This lack of transparency leaves users unaware of who might access their data," he said.

"DeepSeek is clearer on this front, explicitly stating that data is stored in China and may be accessible to the Chinese government if requested."

Ciriello said OpenAI could also be subject to US laws.

"This does not mean OpenAI is necessarily safer — US laws, including the Clarifying Lawful Overseas Use of Data Act and Patriot Act, allow government agencies to access data stored by American companies, even if the data is stored overseas," he said.

"So, while DeepSeek is upfront about government access, OpenAI's lack of clarity creates a false sense of security."

Do we need stronger regulation?

Australia is working on regulating AI to manage potential risks to the community.

The federal government published a proposals paper and a voluntary AI Safety Standard last year and is gathering feedback on the need for mandatory safeguards for high-risk AI systems.

Dr Dana McKay from the School of Computing Technologies at RMIT University said there is a need for legislation.

"Data laws may need to address some of the challenges brought by generative AI — if you come up with a good idea by using these tools, who owns the idea?" she told SBS News.

"If someone creates a bestselling novel based on data from your novel, which you didn't agree to add to the model, whose novel is it?

"If you want your data removed from a model, how does that work? These are areas of data that aren't well covered even by existing law."

With reporting by Reuters.

Published 1 February 2025 3:06pm
By Cameron Carr
Source: SBS News
