DeepSeek or DeepLeak?

Share

When DeepSeek exploded onto the AI scene in early 2025, it sent shockwaves across the tech industry. The Chinese AI company claimed to have developed a large language model (LLM) that was not only more cost-effective but also more energy-efficient than its Western counterparts—all without relying on high-end chips. Almost overnight, DeepSeek’s R1 model surged to the top of the AI charts, outpacing OpenAI’s ChatGPT in downloads and triggering a massive sell-off of U.S. AI stocks.

But beneath the surface of this so-called AI revolution lies a dangerous reality: DeepSeek isn’t just a disruptor—it’s a ticking time bomb for global cybersecurity.

A Breeding Ground for Cyber Threats

DeepSeek’s open-source nature is a double-edged sword. While open-source AI models provide flexibility and cost savings, they also introduce major security risks. Unlike OpenAI’s GPT-4o or Google’s Gemini, which have built-in safeguards, DeepSeek allows users to modify its core functions—including its safety mechanisms. This means bad actors can easily manipulate the model, bypassing any weak protections that exist.

DateEventDetails
January 3–4, 2025First wave of DDoS attacksDistributed Denial-of-Service (DDoS) attacks targeted DeepSeek’s infrastructure. Initial strikes overwhelmed servers, marking the beginning of sustained cyberattacks.
20-Jan-25Release of DeepSeek R1 modelDeepSeek gained global attention after launching R1, attracting massive traffic and hackers.
January 27–28,

2025
Escalation in attack complexityAttack intensity increased by over 100 times, forcing DeepSeek to temporarily limit new registrations for users outside China.
29-Jan-25Database exposure reportedSecurity researchers discovered an unsecured ClickHouse database containing over a million sensitive records.
30-Jan-25Botnet attacksMirai botnets (HailBot and RapperBot) launched large-scale attacks, pointing to professional attackers.
31-Jan-25Poor performance on security benchmarksDeepSeek R1 performed poorly on WithSecure’s Spikee benchmark, highlighting vulnerabilities in prompt injection defenses.
Feb-25Cross-Site Scripting (XSS) vulnerabilities uncoveredXSS vulnerabilities were found in DeepSeek V3 due to improper input sanitization and origin verification in its postMessage implementation.
4-Feb-25Data leak falloutCybersecurity firms confirmed that sensitive user data had been exposed online due to database misconfiguration.

· Jailbreaking Tests: DeepSeek had a 100% jailbreak success rate, failing to block any harmful prompts related to cybercrime, misinformation, or illegal activities, as tested by Cisco researchers.

· Prompt Injection Tests: DeepSeek performed poorly on WithSecure’s Simple Prompt Injection Kit for Evaluation and Exploitation (Spikee), exposing its susceptibility to prompt manipulation24.

· Database Security: Exposed ClickHouse databases left sensitive information, including chat histories and API keys, publicly accessible due to a lack of authentication and encryption24.

· Model Poisoning: Attackers exploited vulnerabilities in DeepSeek’s API to inject adversarial samples, manipulating the model’s behavior and outputs23.

· Code Generation Safety: DeepSeek was found four times more likely to generate insecure or harmful code compared to OpenAI’s models, as reported by Enkrypt AI.

· Cross-Site Scripting (XSS) Vulnerability: Improper input sanitization allowed attackers to inject and execute arbitrary JavaScript code within the platform2.

· Red Teaming Failures: A red-teaming study revealed that DeepSeek is significantly more prone to security failures compared to leading AI models like OpenAI’s or Anthropic’s

Weaponizing AI for Cybercrime

DeepSeek’s vulnerabilities aren’t just theoretical—they’re already being exploited by malicious actors. Security researchers at Check Point confirmed that cybercriminal networks are actively using DeepSeek to generate fully functional malware, including ransomware—with zero coding expertise required. Hackers have also used DeepSeek to bypass banking anti-fraud systems, automate financial theft, and extract login credentials and payment data from compromised devices.

In short, DeepSeek is handing cybercriminals an arsenal of AI-powered tools on a silver platter.

A Trojan Horse?

Privacy and Data Security Risks

Data Storage: The platform stores user data on servers, which may be subject to local laws allowing government access. This raises concerns about potential misuse of sensitive user information.

Extensive Data Collection: The platform collects a wide range of user data, including text inputs, device details, IP addresses, and keystroke patterns. Users may not have the option to opt out of this data collection.

Cybersecurity Vulnerabilities

Weak Encryption and Flaws: Independent audits revealed weak encryption methods, SQL injection vulnerabilities, and unauthorized data transmissions.

Vulnerability to Exploits: The platform is highly vulnerable to exploits, allowing it to generate harmful content. It is reportedly more likely than other models to produce harmful outputs.

Insecure Design: Researchers identified unencrypted data transmissions and publicly accessible backend systems, exposing sensitive information to potential interception.

General Concerns

Potential for Misuse: The platform’s vulnerabilities and data handling practices have raised concerns about its potential misuse. This includes the risk of unauthorized access to user data and the generation of harmful content.

Comparisons to Other Risks: The platform has been likened to a “Trojan Horse” due to its potential to pose significant risks to users’ privacy and security while appearing as a useful tool.

The Global Response: Bans, Restrictions, and Rising Tensions

Countries around the world aren’t taking DeepSeek’s rise lightly. Italy, Taiwan, Australia, and South Korea have already banned or restricted the app on government devices due to security concerns. In the U.S., federal agencies like NASA and the U.S. Navy have warned employees against using DeepSeek.

With AI playing an increasingly vital role in government and business operations, the risks posed by DeepSeek demand a proactive cybersecurity response.

What’s Happening Now With DeepSeek

DeepSeek is experiencing significant growth and attention in the AI sector. As of February 2025, the app had been downloaded over 21.66 million times worldwide, with a monthly active user base of 61.81 million. Despite its popularity, DeepSeek has faced cybersecurity concerns and bans in several countries. The platform has also faced large-scale malicious attacks, which temporarily limited new user access. Despite these challenges, DeepSeek continues to be a major player in the AI market, offering an open-source model that rivals more established competitors like ChatGPT.

The Acceleration Problem: When AI Innovation Outpaces Security

The race to develop more powerful Generative AI (GenAI) and Large Language Models (LLMs) has outpaced security considerations, creating a breeding ground for cyber threats. With every breakthrough, we see AI models becoming smarter, more autonomous, and more deeply integrated into critical business and government systems. However, this rapid progress comes at a cost—security is often an afterthought.

Unchecked AI Development = A Hacker’s Playground

Lack of Guardrails: Many GenAI models are designed for maximum capability rather than safety, making them susceptible to jailbreaks, misuse, and manipulation by bad actors.

Open-Source Risks: While open AI models foster innovation, they also allow cybercriminals to weaponize AI at scale—whether for automated phishing, AI-generated malware, or deepfake fraud.

Regulations Are Lagging: Governments and security frameworks struggle to keep up with AI evolution, leaving organizations exposed to AI-driven threats with little guidance on mitigation.

Security Must Lead, Not Follow

The push for bigger, faster, more capable AI without foundational security measures is like constructing a skyscraper without a blueprint for structural integrity—it will inevitably collapse under its own risks. The future isn’t about who builds the most powerful AI model; it’s about who builds the most secure and responsible one.

No Standard, No Safety: The AI Security Gap

Despite the explosive growth of AI, there is no universally accepted framework to ensure its safe adoption. International bodies like the EU, NIST, and ISO are actively working on AI governance, but their guidelines remain fragmented, evolving, and voluntary rather than enforceable. This means the responsibility for secure AI adoption falls directly on organizations, but many are unprepared.

Innovation Without Security is a Gamble

In today’s highly competitive industry, businesses are racing to integrate AI for automation, decision-making, and customer engagement. The pressure to innovate is immense, but so is the risk.

· Unsecured AI pipelines can be exploited – from model poisoning to data leakage.

· AI-driven decisions can be manipulated if biases and adversarial attacks aren’t accounted for.

· Regulatory uncertainty means organizations must proactively set their own security standards.

Security is a Shared Responsibility

Every organization leveraging AI must take ownership of its security posture. This means:

· Implementing Zero Trust principles to prevent AI from being exploited.

· Embedding security into AI development, not bolting it on later.

· Regularly auditing AI models for vulnerabilities, biases, and compliance risks.

AI advancement isn’t just about pushing the boundaries of what’s possible – it’s about ensuring what’s possible is also secure, ethical, and resilient.

The Bottom Line: A Wake-Up Call for AI Security

While Deepseek’s low-cost, open-source approach might seem appealing, the lack of safeguards makes it a breeding ground for cyber threats and potential state-sponsored espionage.

Businesses and governments must double down on AI security strategies, investing in robust monitoring, enhanced encryption, and strict access controls. While generative AI holds immense potential, it also proves that not all AI tools are created equal – and some may pose more risks than rewards.

Join us

Download Your Free Thought Paper

Leave your details below and get your free Thought Paper

Download Your Zero Trust Checklist

Leave your details below and get your free Thought Paper