For decades, cybersecurity has relied on detection.
From antivirus signatures to behavior analytics and anomaly detection, defenders have tried to spot suspicious activity before it causes harm. But today's attackers are stealthier than ever, using zero-days, fileless malware, and "living off the land" techniques that blend into normal system behavior.
Detection tools generate floods of alerts, but many are false positives, and the real threats often slip through. The result: security teams are overworked, adversaries stay hidden for months, and the cost of breaches continues to rise.
It’s no wonder organizations are looking for something more effective, and they’re finding it in deception technologies.
Deception technologies flip the model.
Instead of waiting to catch suspicious activity, defenders seed their environments with decoys, traps, and lures — files, credentials, servers, or processes that look real but are never supposed to be touched.
If an attacker interacts with one, it’s a clear, high-fidelity signal of malicious behavior. No legitimate user would ever fall into a decoy. That means fewer false positives, earlier detection of lateral movement, and detailed forensic insight into attacker tactics.
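To make the idea concrete, here's a minimal sketch of one classic deception primitive: a decoy credentials file that no legitimate user or process should ever touch. The file name, bait contents, and simple polling approach are illustrative assumptions for the sketch, not any particular product's implementation.

```python
import os
import time
import logging

# Illustrative decoy: a "credentials" file planted where no legitimate
# workflow ever reads it. Path and contents are assumptions for this sketch.
DECOY_PATH = "./finance_passwords_backup.txt"
POLL_SECONDS = 5

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def plant_decoy(path: str) -> None:
    """Create the decoy file with fake bait credentials if it doesn't exist."""
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("svc_backup:Hunter2!\n")  # fake credentials, pure bait

def watch_decoy(path: str) -> None:
    """Alert whenever the decoy is read or modified.

    Note: st_atime only updates reliably on filesystems mounted with
    strictatime; on relatime mounts, pair this idea with OS-level file
    auditing (e.g., auditd on Linux or SACLs on Windows) instead.
    """
    stat = os.stat(path)
    last_atime, last_mtime = stat.st_atime, stat.st_mtime
    while True:
        time.sleep(POLL_SECONDS)
        stat = os.stat(path)
        if stat.st_atime != last_atime or stat.st_mtime != last_mtime:
            # Any touch is a high-fidelity signal: nothing legitimate uses this file.
            logging.warning("DECOY TRIPPED: %s was accessed", path)
            last_atime, last_mtime = stat.st_atime, stat.st_mtime

if __name__ == "__main__":
    plant_decoy(DECOY_PATH)
    watch_decoy(DECOY_PATH)
```

Even a sketch this small shows why decoy alerts are so clean: there is no legitimate access pattern to distinguish from, so every trigger is worth investigating.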
Deception has evolved far beyond traditional honeypots. Today's deception platforms are automated, dynamic, and integrated: they rotate decoys, embed them into cloud and container environments, and feed intelligence directly into security operations.
The result: defenders not only stop attackers sooner, they also make the attacker's job harder.
Every step becomes riskier; every move could lead to a trap.
Certain cyberattack tactics are bound to stick around because they've proven to work. These are the threats our experts believe still pose a serious cybersecurity risk.
The malware threat continued to evolve in 2024, becoming more pervasive and specialized. As the number of threat actors increased, so did the frequency, scope, and sophistication of malware attacks.
A key shift was the growing adoption of modular malware designs, enabling threat actors to adapt attacks to new environments and targets quickly. At the forefront of this trend was the rise of malware-as-a-service (MaaS) platforms. These platforms have significantly lowered the skill barrier for launching advanced attacks, allowing even inexperienced cybercriminals to deploy devastating malware.
We mentioned social engineering last year, and it remains just as relevant. Since social engineering relies on human error, it can effectively target even well-secured organizations, making it a persistent and challenging threat to mitigate.
Users can and will continue making mistakes that lead to data loss. We’ll continue to see social engineering and phishing attacks, but we’ll likely see more complexity there as social engineers make greater use of AI and similar technologies.
After all, social engineering hinges on crafting legitimate-sounding messages that lure victims into clicking a link. Beyond the typical "password reset" or "mailbox full" scams, AI will allow threat actors to become far more sophisticated with their messages.
Our cybersecurity analysts continuously research, investigate, and uncover emerging threats and attack tactics. Here are some key cybersecurity trends we've observed recently and expect to continue seeing.
Advanced persistent threats (APTs) are highly sophisticated, well-funded groups—often state-sponsored—that target specific organizations or sectors to gather intelligence or disrupt operations.
In 2024, APTs demonstrated an increased use of proximity-based and infrastructure-specific attack methods. For instance, in the "nearest neighbor attack," APT28 breached Wi-Fi networks by targeting devices physically close to its high-value targets. This physical proximity allowed the attackers to bypass certain technical security measures by exploiting a weakness in the target's physical environment.
Organizations should consider non-digital attack vectors and implement zero-trust architectures that extend to physical environments. This approach will be especially critical in sectors such as energy, finance, and government.
In 2024, Field Effect reported on a new adversary-in-the-middle (AiTM) attack, which allowed threat actors to intercept and manipulate communications between two parties, often without detection, to capture credentials and session tokens.
This interception allowed attackers to bypass multi-factor authentication and gain unauthorized access to accounts.
During our investigation, we identified a campaign where attackers used Axios-based lookalike M365 login pages to harvest credentials. Victims were directed to these fraudulent pages, which proxied authentication requests, capturing passwords and MFA codes. The attackers then used the Axios HTTP client to log into M365 accounts, effectively bypassing MFA.
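A simple hunting heuristic falls out of that detail: by default, the Axios HTTP client identifies itself with an "axios/<version>" user-agent string, which rarely belongs in legitimate interactive sign-ins. Below is a minimal sketch that scans a JSON-lines export of sign-in events for that marker; the field names (userAgent, userPrincipalName, ipAddress, createdDateTime) are assumptions about the export format, not a fixed schema.

```python
import json
import sys

# Hunting sketch: flag M365 sign-in events whose user agent looks like the
# Axios HTTP client. Field names are assumed from a typical JSON-lines
# sign-in log export; adjust them to match your actual schema.

SUSPICIOUS_UA_MARKERS = ("axios/",)

def scan_signin_log(path: str) -> None:
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            ua = (event.get("userAgent") or "").lower()
            if any(marker in ua for marker in SUSPICIOUS_UA_MARKERS):
                print(
                    f"[!] Possible AiTM token replay: "
                    f"{event.get('userPrincipalName', '?')} from "
                    f"{event.get('ipAddress', '?')} at "
                    f"{event.get('createdDateTime', '?')} (UA: {ua})"
                )

if __name__ == "__main__":
    scan_signin_log(sys.argv[1])
```

User-agent strings are trivially spoofable, so a hit like this is a starting point for investigation rather than proof of compromise.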
Additionally, the rise of platforms like the Mamba MFA Phishing Kit, a phishing-as-a-service tool, made it easier for cybercriminals to replicate AiTM attacks. For a small subscription fee, threat actors could capture authentication tokens, circumvent MFA, and compromise M365 accounts.
The cyber insurance market has faced many challenges, most notably the difficulty of assessing and pricing cyber risk due to the lack of historical data, the dynamic and evolving nature of cyber threats, and the potential for systemic and catastrophic losses.
To ease this burden, we expect cyber insurance providers to require or incentivize their clients to undergo cybersecurity assessments as part of the underwriting process or the policy conditions. This could help insurers evaluate a client's risk profile, price premiums accordingly, and provide recommendations and guidance for improving their cybersecurity.
These assessments can demonstrate a client’s compliance with the cyber insurance policy requirements or lower their premiums by showing their security maturity and use of best practices.
In 2024, attackers shifted their initial access focus from endpoint devices to critical network infrastructure, such as routers, firewalls, and VPN gateways.
There are a couple of reasons why these devices make attractive targets. The ArcaneDoor campaign, in which state-sponsored cyber actors targeted perimeter network devices from several vendors, is just one example of this shift. Targeting edge devices such as firewalls, switches, and routers is popular among threat actors seeking initial access to targets of interest.
Control of these devices could allow threat actors to monitor and reroute traffic, obtain credentials that could provide access to more sensitive systems and accounts, or launch AiTM attacks.
We can’t talk about 2024, 2025, and beyond without highlighting AI.
It has been a huge year for artificial intelligence, with tools like ChatGPT and DALL-E enjoying more mainstream use and integration into powerhouse ecosystems like Microsoft's Copilot.
It's clear that threat actors use AI in their cyberattacks, but defenders rely on this new technology too. Here are just a few ways AI is improving cybersecurity for the future:
AI models, built from vast amounts of data, will help identify patterns and anomalies associated with cyber threats. By learning from this data and from historical attack information, AI will help detect new attacks quickly and precisely.
Defenders will use AI to study user and system behavior and establish baselines. Deviations from these baselines will help trigger cybersecurity alerts, detecting potentially malicious behavior earlier than before.
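As a toy illustration of the baselining idea, the sketch below learns each user's typical daily login count and flags sharp deviations. Real platforms model far richer behavior across many features; the data, single feature, and threshold here are illustrative only.

```python
import statistics

# Toy baseline model: learn each user's typical daily login count, then
# flag days that deviate sharply from it. History values are made up.
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3, 4, 2, 3],
    "bob":   [1, 0, 2, 1, 1, 0, 1, 2, 1, 1],
}

Z_THRESHOLD = 3.0  # alert beyond 3 standard deviations from the baseline

def is_anomalous(user: str, todays_count: int) -> bool:
    baseline = history[user]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against zero variance
    z = (todays_count - mean) / stdev
    return abs(z) > Z_THRESHOLD

for user, count in [("alice", 4), ("bob", 27)]:
    status = "ANOMALY" if is_anomalous(user, count) else "normal"
    print(f"{user}: {count} logins today -> {status}")
```

Here alice's 4 logins sit comfortably inside her baseline, while bob's sudden 27 stands far outside his and would raise an alert.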
AI models will predict potential vulnerabilities and attack vectors. By analyzing historical data, they will forecast emerging threats and recommend proactive security measures. These predictive analytics will aid in the prioritization of patch management and vulnerability assessments.
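In practice, that prioritization can be as simple as ranking each vulnerability by predicted exploit likelihood weighted by asset criticality. The sketch below assumes those scores come from a predictive model or an exploit-prediction feed; the CVE IDs and numbers are placeholders, not real assessments.

```python
# Toy prioritization: rank vulnerabilities by predicted exploit likelihood
# times asset criticality. Scores and CVE IDs below are illustrative
# placeholders for what a predictive model or data feed would supply.
vulns = [
    {"cve": "CVE-2024-0001", "exploit_prob": 0.92, "asset_criticality": 0.9},
    {"cve": "CVE-2024-0002", "exploit_prob": 0.10, "asset_criticality": 0.9},
    {"cve": "CVE-2024-0003", "exploit_prob": 0.70, "asset_criticality": 0.3},
]

def priority(v: dict) -> float:
    return v["exploit_prob"] * v["asset_criticality"]

# Patch the riskiest combination first, not just the highest raw severity.
for v in sorted(vulns, key=priority, reverse=True):
    print(f"{v['cve']}: priority {priority(v):.2f}")
```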
NLP-based AI systems that analyze textual data such as emails, chat logs, and social media will help identify phishing attempts, malicious URLs, and suspicious content. This data will then be used to improve email filtering tools, DNS firewall products, and user awareness materials.
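A minimal sketch of that NLP angle: a TF-IDF plus logistic regression classifier over message text, standing in for the much larger models and labeled corpora a production filter would rely on. The handful of training examples below are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy phishing triage: TF-IDF features plus logistic regression. A real
# system would train on large labeled corpora, not six examples.
messages = [
    "Your mailbox is full, click here to verify your password now",
    "Urgent: your account will be suspended, confirm your login",
    "Invoice attached, please remit payment to the new account",
    "Team lunch is moved to Thursday at noon",
    "Here are the meeting notes from this morning's standup",
    "Reminder: quarterly report draft is due Friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = "Please verify your password immediately or lose mailbox access"
prob = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {prob:.2f}")
```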
AI-driven authentication systems will assess user behavior during login attempts. If the behavior deviates from what's considered the norm, this will trigger additional authentication steps, enhancing security without causing inconvenience to legitimate users.
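Here's one way that adaptive logic might look, sketched with illustrative signals and weights rather than any vendor's actual scoring model: risky combinations of signals push a login over a threshold that triggers step-up MFA, while familiar behavior passes through untouched.

```python
# Sketch of risk-based (adaptive) authentication. Signals, weights, and
# the threshold are illustrative assumptions, not a real scoring scheme.
RISK_WEIGHTS = {
    "new_device": 0.4,
    "new_country": 0.35,
    "unusual_hour": 0.15,
    "impossible_travel": 0.6,
}
STEP_UP_THRESHOLD = 0.5

def risk_score(signals: dict) -> float:
    # Sum the weights of every signal that fired, capped at 1.0.
    return min(1.0, sum(RISK_WEIGHTS[k] for k, v in signals.items() if v))

def decide(signals: dict) -> str:
    score = risk_score(signals)
    if score >= STEP_UP_THRESHOLD:
        return f"step-up MFA required (risk {score:.2f})"
    return f"allow (risk {score:.2f})"

# A familiar device at a normal hour sails through; a new device from a
# new country triggers additional verification.
print(decide({"new_device": False, "new_country": False,
              "unusual_hour": False, "impossible_travel": False}))
print(decide({"new_device": True, "new_country": True,
              "unusual_hour": False, "impossible_travel": False}))
```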
AI will help identify zero-day vulnerabilities by analyzing code and system behavior, learning from known vulnerabilities, and predicting potential weaknesses. We expect AI-driven efficiencies in quality assurance testing to help discover and remediate vulnerabilities before software is released.