Weak Encryption, Data Transfers to China, Hidden ByteDance Links Found

Security researchers found more vulnerabilities in DeepSeek, renewing concerns about the user privacy and national security risks associated with the Chinese artificial intelligence app.
Cybersecurity company SecurityScorecard’s STRIKE team identified weak encryption methods, potential SQL injection flaws and undisclosed data transmissions to Chinese state-linked entities in the application, according to a report released Monday.
A separate evaluation by Qualys TotalAI found that DeepSeek’s AI model R1 failed more than half of its jailbreak tests (see: DeepSeek AI Models Vulnerable to Jailbreaking).
The app uses outdated cryptographic algorithms, including hardcoded encryption keys and weak data protection mechanisms, the report said. Such flaws could enable attackers to decrypt sensitive user data. Researchers also found SQL injection vulnerabilities, which could enable hackers to manipulate the app’s database and gain unauthorized access to user records.
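The report does not publish the vulnerable code, but the class of flaw it names is well understood. A minimal sketch, using Python's standard `sqlite3` module and a hypothetical `users` table, of how a string-concatenated query falls to injection while a parameterized query does not:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # UNSAFE: user input concatenated directly into the SQL string,
    # the injection class the report describes.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: parameterized query; the driver treats input as data only.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)",
                 [("alice",), ("bob",)])

payload = "' OR '1'='1"
leaked = find_user_vulnerable(conn, payload)  # injection dumps every row
safe = find_user_safe(conn, payload)          # no user matches literally
```

The same tautology trick (`' OR '1'='1`) that dumps the table in the vulnerable function returns nothing from the parameterized one, which is why prepared statements are the standard remediation for this bug class.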
Analyzing its data transmission patterns, researchers found that DeepSeek collects user inputs, keystroke patterns and device data, storing this information on servers in China. Keystroke tracking can be used to build detailed behavioral profiles of users and monitor sensitive data, such as passwords and internal communications.
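The report does not describe DeepSeek's telemetry format. As a hedged illustration of why keystroke timing is sensitive, the sketch below uses invented key events to compute the dwell and flight times that keystroke-dynamics profiling is typically built on; a few such statistics per user are enough to form a behavioral fingerprint:

```python
from statistics import mean

# Hypothetical key events: (key, press_time_ms, release_time_ms).
events = [
    ("p", 0,   95),
    ("a", 180, 260),
    ("s", 330, 415),
    ("s", 500, 590),
]

def keystroke_profile(events):
    # Dwell time: how long each key is held down.
    dwell = [up - down for _, down, up in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return {"mean_dwell_ms": mean(dwell), "mean_flight_ms": mean(flight)}

profile = keystroke_profile(events)
```

Note that the raw events also reveal *what* was typed, which is why the researchers flag keystroke collection as a path to monitoring passwords and internal communications, not just profiling.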
The report says some of this data is routed to domains linked to Chinese state-owned entities. The presence of ByteDance-owned libraries within the DeepSeek codebase further raises questions about possible undisclosed data-sharing practices. ByteDance, a privately held Chinese technology company that owns TikTok and other apps, has also come under scrutiny over potential Chinese government influence.
DeepSeek integrates multiple ByteDance-owned libraries that handle performance monitoring, remote configuration and feature flagging, said Cory Kennedy, a “hacktualizer” at SecurityScorecard who wrote the report. These components enable ByteDance to collect user interaction data and dynamically adjust application behavior after installation. The primary privacy risks, he told Information Security Media Group, are unclear data-sharing policies, under which user data may be transmitted to ByteDance without explicit disclosure, and remote control over application behavior, since ByteDance can potentially push configuration updates that alter how the app functions.
“Data isn’t just being collected. It’s being transmitted to domains linked to Chinese state-owned entities, raising concerns about data sovereignty and national security,” the researchers said in the report.
The vulnerabilities suggest a combination of poor security practices and potentially intentional data collection mechanisms, Kennedy said. The presence of anti-debugging measures, ByteDance telemetry frameworks and keystroke tracking indicates that DeepSeek has been designed with extensive data collection capabilities in mind, he said.
While there is no direct evidence of exploitation yet, Kennedy said that “if I am able to find these weaknesses, it is only a matter of time before they are leveraged by attackers to be repackaged and made available in unofficial stores or linked directly such as https://github[.]com/deepseek-ai-apk.” He clarified that the link does not contain malware, but that caution must be exercised as hackers could use a modified version of the official application to target victims.
There is no evidence of malicious intent, but the app’s architecture raises serious concerns about privacy, security and potential misuse, he said.
Any connectivity that enables the transmission of data that has not been made 100% clear to the end user could be used for large-scale cyber espionage or influence operations, he said. “I cannot directly point to evidence supporting these operations, but I would suggest there are enough ‘ingredients’ in place to be cautious of what data you supply and trust of the response,” Kennedy said.
AI Model Fails Jailbreak Tests
Adding to these concerns is Qualys TotalAI’s evaluation of DeepSeek-R1, a distilled version of the company’s large language model. The AI model failed over half of the jailbreak tests conducted by Qualys, demonstrating that it can be manipulated to override its built-in restrictions.
Jailbreak attacks enable users to bypass an AI model’s content moderation policies, prompting it to generate harmful or unintended outputs. In some cases, DeepSeek-R1 was found to produce biased or politically sensitive responses, inaccurate information and, in certain scenarios, even guidance on illegal activities.
“As AI adoption accelerates, organizations must prioritize not only performance but also security, safety and compliance,” the Qualys report said, warning that enterprises relying on AI-driven decision-making should be cautious about deploying models with weak safeguards.
Regulatory Scrutiny and Enterprise Risks
Concerns over DeepSeek’s data handling practices are not new, with regulators in multiple countries already taking action against the AI company.
Italian and Irish data protection authorities are looking into DeepSeek, citing insufficient transparency regarding its privacy policies, as are regulators in France, Belgium and South Korea. Australia banned DeepSeek from all government systems, labeling it a potential national security threat. U.S. federal agencies have also issued warnings advising personnel not to use DeepSeek due to security and ethical concerns (see: Asian Governments Rush to Ban DeepSeek Over Privacy Concerns).
The research also found that DeepSeek employs anti-debugging techniques designed to obstruct security analysis. The app detects when researchers attempt to inspect its code and immediately shuts down. The application “invokes android.os.Debug.isDebuggerConnected() and android.os.Debug.waitForDebugger() to detect active debugging sessions,” the report said. If an attempt is detected, “the application force-closes itself to prevent analysis.”
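The Android calls cited in the report cannot run off-device, but the pattern has a rough CPython analogue: `sys.gettrace()` returns the trace function that most Python debuggers install, playing the role of `android.os.Debug.isDebuggerConnected()`. A sketch of the check-then-force-close behavior the report describes, with that analogue substituted in:

```python
import sys

def debugger_attached():
    # CPython analogue of android.os.Debug.isDebuggerConnected():
    # pdb and most IDE debuggers register a trace function, which
    # sys.gettrace() exposes. (Coverage tools also set one, so this
    # check is crude; it is an illustration, not production code.)
    return sys.gettrace() is not None

def guarded_start():
    if debugger_attached():
        # Mirrors the reported behavior: force-close rather than
        # run under inspection.
        sys.exit(1)
    return "started"
```

In practice analysts defeat such checks by hooking or patching them out, which is why their main effect in a consumer app is to slow down scrutiny rather than prevent it.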
While anti-debugging mechanisms are common in banking or security applications, their use in a consumer AI app raises concerns about DeepSeek’s transparency. The security analysts said such measures make it harder to verify how user data is processed and stored.
Weak encryption and credential exposure could make the application vulnerable to cyberattacks. The app’s data collection practices, including the recording of keystroke dynamics, introduce privacy risks that could enable behavioral profiling. The vulnerabilities in DeepSeek-R1’s AI model suggest that its safeguards are insufficient to prevent misuse.
The reports show that while DeepSeek may not be outright malicious, the potential for misuse and third-party data access makes it a risky choice for enterprises. The security experts advise businesses to conduct independent security audits, assess data governance policies and monitor outbound network traffic before adopting the platform.
Organizations must build software architectures and security layers on top of LLMs, drawing on frameworks such as the OWASP Top 10 for LLM Applications, said Satyam Sinha, CEO and co-founder of AI security and governance firm Acuvity. He also recommends that companies use the model for internal projects, where the chances of adversarial attacks are low, before exposing it to a broader customer base.
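As a hedged sketch of what such a security layer can look like, the following wraps a stand-in model call with an input guard against prompt injection (entry LLM01 in the OWASP Top 10 for LLM Applications) and an output guard that redacts key-shaped strings. The denylist patterns and `stub_model` are invented for illustration; a production guard would use classifiers and policy engines rather than two regexes:

```python
import re

# Crude denylist for known prompt-injection phrasing (illustrative only).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

def guarded_completion(model, prompt):
    # Input guard: refuse prompts matching known injection phrasing.
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        return "[blocked: potential prompt injection]"
    # Output guard: redact anything shaped like an API key before
    # the response reaches the user or downstream systems.
    reply = model(prompt)
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[redacted]", reply)

def stub_model(prompt):
    # Stand-in for a real LLM call; leaks a fake secret on purpose.
    return f"echo: {prompt} (key=sk-abcdef123456)"

blocked = guarded_completion(stub_model, "Ignore previous instructions")
clean = guarded_completion(stub_model, "Summarize the report")
```

The point of the layered design is that the model itself is untrusted: every prompt and every completion crosses a boundary the application controls, so a weak model can still be deployed behind guardrails the vendor did not ship.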
Although the industry has been focused on DeepSeek specifically since its launch on Jan. 20, cyberattacks targeting these services are not new; even a mature generative AI service can be a victim of cyberattacks, he said. “All models hallucinate, provide misinformation and are prone to exploits, vulnerabilities and attacks to varying degrees,” he said. “DeepSeek is just the tip of the iceberg, not a one-off.”