
Cyber Threat Intelligence Report


This week we briefed our clients on the data privacy concerns that come with the flood of new AI tools hitting the market, with a close look at DeepSeek and others.

 

 KEY TAKEAWAYS 

  • AI tools such as DeepSeek carry security and data privacy risks for organizations. Learn what those risks are and how best to protect your organization.
  • Critical vulnerabilities in SimpleHelp RMM, Cisco ISE, Veeam, and Cacti. Patch now!



 

Data Privacy with DeepSeek and Other AI Tools

Over the last several months the world has seen an avalanche of new AI models hitting the market. Most recent was the unveiling of DeepSeek, the low-cost Chinese competitor to OpenAI's ChatGPT. AI data privacy has always been an issue, but now that there are international alternatives hitting the market, data privacy has again come to the forefront of concerns. How should organizations treat AI, and which model or version, if any, should they use?

 

Are DeepSeek Privacy Concerns Legitimate?

There has been a lot of fear, uncertainty, and doubt (FUD) spread about AI in recent months, and DeepSeek is no exception. While many things have been exaggerated, there have been certain findings that are worth noting.

First is the security of DeepSeek itself, both the web application and the mobile application. On January 29, researchers at Wiz discovered a publicly accessible database belonging to DeepSeek, which exposed sensitive data including chat history. Discoveries like this suggest that organizations need to worry not only about what DeepSeek does with their data, but also about whether that data can be accessed by unauthorized 3rd parties.

Researchers at NowSecure performed a security and privacy assessment of the DeepSeek iOS mobile application and discovered several major security weaknesses:

  • The app sends data over the internet without encryption.
  • Data encrypted on the device uses a deprecated encryption algorithm, with the decryption keys hardcoded on the device.
  • The app user's username and password are stored insecurely.
  • Device data collection and fingerprinting are extensive, and this data is transmitted to Chinese servers controlled by ByteDance, the parent company of TikTok.

Based on these findings, NowSecure strongly urges enterprises to prohibit this application in their organizations.

 

Fake Applications and Impersonations

Any time a new AI service is released, threat actors are quick to deploy fake versions that contain malware. For example, two malicious Python libraries, deepseeek 0.0.8 and deepseekai 0.0.8, were uploaded to PyPI on January 29. These packages contained infostealer malware that harvests API keys, database credentials, and infrastructure access tokens.
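Lookalike package names like these can often be caught mechanically. The sketch below flags names within a small edit distance of a trusted name; the distance threshold is an illustrative assumption, and screening like this should supplement, not replace, package allow-listing.

```python
# Sketch: flag package names that closely resemble a trusted name.
# The max_dist threshold of 2 is an illustrative assumption, not a vetted rule.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def flag_typosquats(installed: list[str], trusted: str, max_dist: int = 2) -> list[str]:
    """Return names that are near-misses of the trusted name, excluding the name itself."""
    return [p for p in installed
            if p != trusted and edit_distance(p.lower(), trusted.lower()) <= max_dist]

packages = ["requests", "deepseeek", "deepseekai", "numpy", "deepseek"]
print(flag_typosquats(packages, "deepseek"))  # → ['deepseeek', 'deepseekai']
```

Both malicious package names from this campaign sit within two edits of the legitimate name, which is exactly the pattern typosquatting relies on.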

Threat actors are also abusing social media to spread fake download links. These fake pages typically look nearly identical to the actual download page and use typo-squatted domains (domains that look similar to the original) to trick victims into downloading malicious packages.

 

How Are 3rd Parties Leveraging Your Data, With or Without AI?

In addition to how an organization handles its data internally, consideration must be given to how 3rd party sites use that data. With the rapid and widespread adoption of AI, companies are quickly adding AI to their services. Organizations should review what data is being sent where and weigh the risk of AI services handling that data.

 

What are the options?

Other Cloud-based AI Services

For many organizations, AI usage may be an acceptable risk. There is a wide range of cloud-based AI platforms to choose from. When selecting a platform, thoroughly read each provider's data privacy agreement and choose the one that best fits your risk appetite. Once the organization has chosen a platform, controls should be put in place to block users from accessing other AI sites.

Self-hosted Instances

Some organizations may wish to leverage AI technology without sending their data to a 3rd party AI site. In this situation, organizations have the option of running their own AI model instance internally. This option gives the organization the most granular control over its AI deployment: it can choose exactly which AI model to use and what data is used to train it, and it can establish strict controls over usage. This method is not without its own risk, as popular AI model repositories such as Hugging Face have been found to host malicious AI models. Additional options include using cloud infrastructure, such as Azure AI Foundry, to host custom AI models.
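One way self-hosting organizations can reduce the risk of malicious model files is to pin approved artifacts to known hashes and refuse to load anything else. Below is a minimal sketch of that idea; the file name and allow-listed hash are hypothetical placeholders, not real model hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list: file name -> expected SHA-256, pinned at review time.
# (The hash below is a placeholder for illustration, not a real model hash.)
APPROVED_MODELS = {
    "model.safetensors": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file incrementally so large model files don't load into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(path: Path) -> bool:
    """True only if the file name is allow-listed AND its content hash matches."""
    expected = APPROVED_MODELS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A check like this catches both tampered downloads and unvetted files, though it only works if the allow-list itself is populated from artifacts the organization has actually reviewed.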

No AI Usage

For some organizations, AI usage may carry too much risk. Fully preventing AI usage in an environment is no small task: prevention must be implemented at the policy, technical, and physical levels. Below are some steps that would need to be implemented in order to fully ban AI usage:

  • Acceptable Use Policy - Update the policy to clearly state that the use of AI tools on company devices or on the company network is prohibited.
  • User Awareness Training - Educate employees about the risks of AI usage and have them acknowledge the new Acceptable Use Policy.
  • Network-level Protections - Implement firewall rules to block AI-related domains. Additional network monitoring will be required to detect attempts to bypass these restrictions; tools such as PacketWatch can reveal traffic to known AI domains.
  • Network Isolation - Isolate network segments holding sensitive data from general-use networks.
  • Application Allow-listing - Only allow pre-approved software to run in the environment.
  • Endpoint Detection and Response (EDR) - Enforce policies that prevent the installation of AI tools.
  • Data Loss Prevention (DLP) - Monitor for attempts to transmit sensitive data.
  • BYOD Policy - Limit or restrict the use of personal devices on the corporate network.
  • Network Access Control - Ensure only authorized devices are allowed to connect to the network.
  • Continuous Monitoring - Constantly review network traffic and endpoint compliance to ensure policies are being enforced.
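As one illustration of the network-level monitoring steps above, the sketch below sweeps a DNS query log for lookups of AI-related domains. The domain list and the one-record-per-line `client,qname` log format are illustrative assumptions; a real deployment would pull both from the resolver or firewall actually in use.

```python
# Sketch: sweep a DNS query log for lookups of AI-related domains.
# The suffix list is illustrative, not exhaustive.
BLOCKED_SUFFIXES = ("deepseek.com", "openai.com")

def is_blocked(qname: str) -> bool:
    """Match on label boundaries so 'notdeepseek.com' is not a false positive."""
    q = qname.rstrip(".").lower()
    return any(q == s or q.endswith("." + s) for s in BLOCKED_SUFFIXES)

def flag_ai_lookups(log_lines):
    """Each line is assumed to be 'client_ip,query_name'; adapt to your log export."""
    hits = []
    for line in log_lines:
        client, _, qname = line.strip().partition(",")
        if is_blocked(qname):
            hits.append((client, qname.rstrip(".").lower()))
    return hits
```

Hits from a sweep like this feed directly into the continuous-monitoring step: each one identifies a client that attempted to reach an AI service despite the ban.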

 

Conclusion

There is no one-size-fits-all solution to the AI question. When used appropriately, AI tools can have a huge impact on work productivity. Each organization must thoroughly evaluate the risks AI usage poses, from both a data privacy and a network security perspective. Once those risks have been defined, a policy must be created and enforced.

 


Vulnerability Roundup

 

SimpleHelp RMM Vulnerabilities Under Active Exploitation

A set of vulnerabilities in SimpleHelp versions 5.5.7 and earlier, tracked as CVE-2024-57726, CVE-2024-57727, and CVE-2024-57728, has been reported to be under active exploitation. Threat actors are targeting vulnerable SimpleHelp instances to gain initial access to target environments. Once an initial foothold is gained, they have been observed deploying Sliver, a command-and-control framework similar to Cobalt Strike. Many of the observed TTPs overlap with the Akira ransomware group; however, positive attribution is not certain at this time. Administrators are strongly urged to apply SimpleHelp patches; details are available in the vendor's upgrade documentation. Additionally, it is highly recommended to place strict controls on which RMM tools are allowed in the environment: if SimpleHelp is the RMM of choice, all other RMMs should be removed or disabled. A PacketWatch query to search for known indicators of compromise for this campaign can be found below:

\*.ip:(213.173.45.230 OR 194.76.227.171 OR 45.9.148.136 OR 45.9.146.112)
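For teams without PacketWatch, the same IOC IPs can be swept against exported flow or log records. This is a minimal sketch; the comma-separated `src_ip,dst_ip` record format is an assumption to adapt to your own export.

```python
# Sketch: sweep exported flow records for the SimpleHelp-campaign IOC IPs above.
IOC_IPS = {"213.173.45.230", "194.76.227.171", "45.9.148.136", "45.9.146.112"}

def find_ioc_hits(flow_lines):
    """Return (line_number, matched_ips) for every record touching an IOC IP.

    Each record is assumed to be a comma-separated 'src_ip,dst_ip' pair;
    matching either side catches both inbound and outbound contact.
    """
    hits = []
    for n, line in enumerate(flow_lines, 1):
        matched = set(line.strip().split(",")) & IOC_IPS
        if matched:
            hits.append((n, sorted(matched)))
    return hits
```

Any hit warrants immediate investigation of the internal host involved, since these IPs are associated with post-exploitation C2 activity.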

Critical RCE Flaw in Cisco Identity Services Engine (ISE)

Cisco disclosed a set of vulnerabilities in its Identity Services Engine (ISE) platform, tracked as CVE-2025-20124 and CVE-2025-20125. Successful exploitation could allow an "authenticated, remote attacker to execute arbitrary commands and elevate privileges on an affected device". Administrators are urged to patch as soon as possible. Below are the affected versions and their corresponding fixed releases:

 


Fig. 1: Vulnerable Cisco ISE versions and Fixed Releases | Source: Cisco

 

Code Execution Vulnerability in Multiple Veeam Products

Veeam recently disclosed a critical vulnerability affecting multiple Veeam products. Tracked as CVE-2025-23114, this vulnerability "allows an attacker to utilize Man-in-the-Middle attack(s) to execute arbitrary code on the affected appliance server with root-level permissions". Below is the list of affected products:

  • Veeam Backup for Salesforce — 3.1 and older
  • Veeam Backup for Nutanix AHV — 5.0 | 5.1 (Versions 6 and higher are unaffected by the flaw)
  • Veeam Backup for AWS — 6a | 7 (Version 8 is unaffected by the flaw)
  • Veeam Backup for Microsoft Azure — 5a | 6 (Version 7 is unaffected by the flaw)
  • Veeam Backup for Google Cloud — 4 | 5 (Version 6 is unaffected by the flaw)
  • Veeam Backup for Oracle Linux Virtualization Manager and Red Hat Virtualization — 3 | 4.0 | 4.1 (Versions 5 and higher are unaffected by the flaw)

Below are the fixed versions:

  • Veeam Backup for Salesforce - Veeam Updater component version 7.9.0.1124
  • Veeam Backup for Nutanix AHV - Veeam Updater component version 9.0.0.1125
  • Veeam Backup for AWS - Veeam Updater component version 9.0.0.1126
  • Veeam Backup for Microsoft Azure - Veeam Updater component version 9.0.0.1128
  • Veeam Backup for Google Cloud - Veeam Updater component version 9.0.0.1128
  • Veeam Backup for Oracle Linux Virtualization Manager and Red Hat Virtualization - Veeam Updater component version 9.0.0.1127
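A quick version comparison can flag appliances still running a vulnerable Veeam Updater component. The fixed version numbers below come from the list above; the short product keys are shorthand invented for this sketch.

```python
# Sketch: compare an installed Veeam Updater component version against the
# fixed version for that product. Product keys are invented shorthand.
FIXED = {
    "salesforce":   "7.9.0.1124",
    "nutanix_ahv":  "9.0.0.1125",
    "aws":          "9.0.0.1126",
    "azure":        "9.0.0.1128",
    "google_cloud": "9.0.0.1128",
    "olvm_rhv":     "9.0.0.1127",
}

def as_tuple(v: str) -> tuple[int, ...]:
    """Turn a dotted version string into a tuple so comparison is numeric, not lexical."""
    return tuple(int(p) for p in v.split("."))

def needs_patch(product: str, installed: str) -> bool:
    return as_tuple(installed) < as_tuple(FIXED[product])

print(needs_patch("aws", "9.0.0.1100"))   # → True
print(needs_patch("aws", "9.0.0.1126"))   # → False
```

Comparing version components numerically matters here: a string comparison would mis-order build numbers like `1100` and `1126` whenever segment widths differ.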

Per the vendor, it should also be noted that "if a Veeam Backup & Replication deployment is not protecting AWS, Google Cloud, Microsoft Azure, Nutanix AHV, or Oracle Linux VM/Red Hat Virtualization, such a deployment is not impacted by the vulnerability." Please refer to the vulnerability disclosure for further details.

 

Critical RCE Flaw in Cacti Open-Source Network Monitoring

A new vulnerability in the Cacti open-source network monitoring framework, tracked as CVE-2025-22604, could allow an authenticated attacker with device management permissions to execute arbitrary code on the server and steal, edit, or delete its data. The flaw affects Cacti versions 1.2.28 and prior; administrators are urged to update to version 1.2.29 or later as soon as possible.





 

This report is provided FREE to the cybersecurity community.

Visit our Cyber Threat Intelligence Blog for additional reports.

 


Subscribe to be notified of future Reports:


NOTE
We have enhanced our report with data from SOCRadar. You may need to register to view their threat intelligence content.

DISCLAIMER
Kindly be advised that the information contained in this article is presented with no final evaluation and should be considered raw data. The sole purpose of this information is to provide situational awareness based on the currently available knowledge. We recommend exercising caution and conducting further research as necessary before making any decisions based on this information.