The emergence of DeepSeek, a Chinese-developed Large Language Model (LLM), has attracted considerable global attention. While its cost efficiency and open-source nature have sparked discussions about AI accessibility and competitiveness, DeepSeek also poses significant security threats. From data privacy concerns to its susceptibility to cyber exploitation, the model has raised red flags among governments, enterprises and security experts worldwide.
DeepSeek’s Rise and Its Geopolitical Implications
DeepSeek made headlines in early 2025 with claims of delivering a high-performance AI at a fraction of the cost of its Western counterparts. However, its rapid adoption has led to growing concern about security risks, particularly its ties to China’s data laws. Unlike AI models developed by Western companies, DeepSeek’s user data is stored on Chinese servers and is subject to China’s National Intelligence Law, which mandates cooperation with government authorities.
This geopolitical backdrop has led several governments, including Italy, Taiwan, Australia and South Korea, to ban or restrict DeepSeek's use on government devices. U.S. agencies such as NASA and the Navy have also advised against its use, citing national security concerns. The Trump administration has initiated an investigation into DeepSeek's security risks, adding to the scrutiny surrounding the model.
Security Risks Associated with DeepSeek
LLMs have already been used in cyberattacks, but DeepSeek’s vulnerabilities present an even greater threat. Several risks stand out:
1. Data Privacy and Espionage Risks
DeepSeek’s data storage policies raise major red flags for enterprises and cybersecurity professionals. According to its privacy policy, user data, including chat history, IP addresses, keystroke patterns and device information, is stored in China and governed by PRC law. This means sensitive data shared with the platform could be accessed by Chinese intelligence agencies without legal recourse, and organizations using DeepSeek may unknowingly expose proprietary information, trade secrets or regulated data to foreign surveillance. The platform’s integration with China Mobile, a state-linked entity, further heightens concerns about government oversight of data transmission.
2. Open-Source Exploitation and Jailbreaking Risks
While DeepSeek’s open-source nature is attractive to developers, it also introduces severe security vulnerabilities. Unlike proprietary models such as OpenAI’s GPT-4, which implement strict safeguards, DeepSeek allows unrestricted modifications, making it easier for attackers to bypass its safety mechanisms. In practice, DeepSeek has been found to have virtually no effective barriers against harmful content generation: Cisco’s security tests showed a 100% failure rate in blocking malicious prompts, compared with 86% for GPT-4o and 64% for Google Gemini. Check Point researchers have confirmed that DeepSeek is being actively exploited to generate infostealer malware, ransomware and phishing scripts that can evade traditional detection methods. Attackers can also use the model to develop hacking tools, automate social engineering campaigns and refine malicious code, reducing the expertise required to carry out sophisticated cyberattacks.
3. Poor Encryption and Data Security Practices
DeepSeek’s weak security posture extends beyond its open-source risks. Research from cybersecurity firms Wiz and Lasso Security has identified major vulnerabilities. For example, DeepSeek’s iOS app transmits sensitive user and device data over unencrypted channels, making it susceptible to interception. The app also employs 3DES encryption with hardcoded keys, a practice that leaves data vulnerable to decryption by threat actors. Lastly, a security audit revealed over a million lines of unsecured chat logs, backend data and user interactions, exposing sensitive information to potential attackers.
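Hardcoded keys are a well-known anti-pattern because anyone who downloads the app can extract them from the binary and decrypt everything the key "protects." One common way auditors hunt for embedded secrets is entropy scanning. The sketch below is a simplified illustration of that general technique, not the method the researchers used; the character class, minimum length and entropy threshold are assumptions:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s (0.0 for a repeated character)."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_candidate_secrets(blob: bytes, min_len: int = 16, min_entropy: float = 3.5):
    """Flag printable base64/hex-style runs whose entropy suggests key material."""
    pattern = rb"[A-Za-z0-9+/=]{%d,}" % min_len
    candidates = []
    for match in re.finditer(pattern, blob):
        s = match.group().decode("ascii")
        if shannon_entropy(s) >= min_entropy:  # random-looking, likely a key
            candidates.append(s)
    return candidates
```

A 3DES key recovered this way defeats the encryption for every user of the app at once, which is why per-session keys negotiated over TLS, rather than static keys shipped in the client, are standard practice.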
4. AI-Powered Social Engineering and Phishing Threats
DeepSeek’s ability to generate realistic, context-aware phishing emails is a major concern for enterprises. Cybercriminals can leverage the model to personalize spear phishing attacks: AI-enhanced lures can mimic legitimate corporate communications, increasing the likelihood of success. Because DeepSeek generates highly convincing text in multiple languages, attackers can also translate campaigns for targets around the world. Its AI-powered research capabilities further allow attackers to gather detailed intelligence on their victims, making social engineering attacks more precise and effective.
Industry Response and Mitigation Strategies
Given the escalating concerns, companies and governments have taken swift action to mitigate DeepSeek-related risks:
1. Government Bans and Enterprise Restrictions
As noted above, Italy, Taiwan, South Korea and Australia have blocked DeepSeek from government devices, while numerous U.S. agencies have issued warnings against using the model. Tech giants such as Microsoft and Amazon Web Services (AWS) have begun integrating DeepSeek into secure, access-controlled environments.
2. Security Audits and Risk Assessments
Organizations must proactively assess AI models before deploying them in critical workflows. Recommended actions include conducting security audits to evaluate DeepSeek’s risk exposure and potential vulnerabilities, monitoring data flows to detect unauthorized transmissions to DeepSeek servers, and enforcing zero-trust policies to limit shadow AI usage and unauthorized access to sensitive corporate data.
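The data-flow monitoring step can begin with something as simple as scanning egress logs for traffic to known DeepSeek endpoints. The sketch below assumes a simplified proxy-log format of whitespace-separated `timestamp source_ip destination_host` fields and a hypothetical one-entry watchlist; both would need to be adapted to your actual telemetry:

```python
from collections import Counter

# Hypothetical watchlist; extend with whatever domains your telemetry shows.
WATCHLIST = {"deepseek.com"}

def on_watchlist(host: str, watchlist=WATCHLIST) -> bool:
    """True if host is a watchlisted domain or a subdomain of one."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in watchlist)

def flag_traffic(log_lines):
    """Count requests per source IP to watchlisted hosts."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue  # skip malformed lines
        _ts, src, host = fields[:3]
        if on_watchlist(host):
            hits[src] += 1
    return hits
```

Sources that show up in the resulting counts are natural candidates for the zero-trust follow-up: block the egress, then determine what data, if any, was shared.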
3. Strengthening AI Governance and Compliance
To mitigate these risks, cybersecurity teams can take a few concrete steps. First, demand transparency: require AI vendors to disclose training data sources, security protocols and model governance policies. Second, implement counter-AI defenses: deploy tools that detect AI-generated phishing attempts, deepfakes and malicious code. Third, invest in continuous AI monitoring: regularly update security policies to reflect emerging AI threats and ensure compliance with industry standards.
As with any form of AI, DeepSeek represents both an opportunity and a significant risk in cybersecurity. While its low-cost, open-source model has disrupted the AI industry, its security vulnerabilities and ties to Chinese data laws make it a high-risk tool for enterprises and government entities. The bottom line? Approach DeepSeek with caution.