Legacy EDR. Yes, that’s a thing • The Register
Sponsored Thirty years ago, the industry created networked antivirus (NAV), which later evolved into endpoint protection (EP), managed through Endpoint Protection Platforms (EPPs). More recently, that era has given way to Endpoint Detection and Response (EDR) and Managed Detection and Response (MDR) services, which have become the industry standard.
Now, in 2021, the dial has turned again: enhanced EDR has arrived, followed closely by Extended Detection and Response (XDR), which adds network and cloud monitoring to the mix.
Each generation is bigger and better than the last, supporting more types of endpoints with more integration and visibility between the different layers of the security hierarchy. Having trouble keeping up? No one could accuse the security industry of lacking creativity when it comes to inventing acronyms that promise to stop the endpoint rot, especially now that "endpoint" covers a growing family of connected and mobile devices, not just PCs and servers.
Many organizations find themselves running more than one of these generations of security at the same time, sometimes including multiple versions of EDR alone. But as the evolution of endpoint security has accelerated over the past decade, so has the likelihood of confusion. What is especially baffling is that despite all this verbal and technical ingenuity, successful attacks continue unabated.
It remains an uncomfortable fact that a large percentage of successful cyber attacks start with, or pivot through, weaknesses in endpoint security. Endpoints have always been a favorite target, and this remains true despite successive generations of expensive detection and response. The problem for customers is how to assess the state of their own EDR implementation and where its blind spots and inefficiencies lie.
Driving every change in EDR is the extraordinary evolution of malware, from pieces of code that compromised Windows computers one by one to modular platforms that can undermine entire WANs and businesses, critical infrastructure and even the governments that depend on it. The industry measures this damage by counting threats, grouping them by type (fileless malware, ransomware, keyloggers, etc.), delivery mechanism (phishing attacks, email attachments) and motivation (nation-state espionage, extortion).
An alternative and arguably more relevant measure is to examine the outcome and scope of the attacks. Here, something is clearly wrong, acknowledges Patrick Grillo, senior director, Solutions Marketing at Fortinet, a company that has invested heavily in the idea that the limitations of first-generation EDR systems can be alleviated by migrating to enhanced EDR.
“There is no single technology,” he says, dismissing the silver-bullet theory of defending networks and endpoints. “By itself, an EDR system will do its job but stop at the boundaries of its job.”
The limits of EDR
The fundamental struggle has always been to define what malware is when the possibilities are endless and many of today's threats hijack legitimate apps and credentials to spread further. The NAV industry began over 30 years ago with the concept of finding patterns of code that could be turned into signatures. Polymorphism quickly undermined this, even though companies like Fortinet developed heuristics capable of detecting variations on the same theme. Fairly quickly, however, the complexity and volume of attacks outpaced even that, so EPP platforms emerged to monitor endpoints by correlating behaviors (process injection, modified registry keys, attempts to disable AV) with unusual network communication.
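The correlation idea can be illustrated with a minimal sketch: no single behavior is damning on its own, but weighted together per host they cross an alerting threshold. All names, weights and the threshold below are hypothetical, chosen only to illustrate the principle, not any vendor's actual detection logic.

```python
from dataclasses import dataclass

# Hypothetical weights for the behaviors mentioned above.
BEHAVIOR_WEIGHTS = {
    "process_injection": 40,
    "registry_key_modified": 15,
    "av_tamper_attempt": 35,
    "unusual_network_beacon": 30,
}

ALERT_THRESHOLD = 60  # illustrative cut-off for raising an alert


@dataclass
class EndpointEvent:
    host: str
    behavior: str


def correlate(events):
    """Sum weighted behaviors per host; flag hosts over the threshold.

    A lone modified registry key scores low, but combined with process
    injection and odd network traffic the same host crosses the line.
    """
    scores = {}
    for ev in events:
        scores[ev.host] = scores.get(ev.host, 0) + BEHAVIOR_WEIGHTS.get(ev.behavior, 0)
    return {host: score for host, score in scores.items() if score >= ALERT_THRESHOLD}


events = [
    EndpointEvent("ws-101", "registry_key_modified"),
    EndpointEvent("ws-101", "process_injection"),
    EndpointEvent("ws-101", "unusual_network_beacon"),
    EndpointEvent("ws-202", "registry_key_modified"),
]
print(correlate(events))  # -> {'ws-101': 85}; ws-202 stays quiet at 15
```

The point of the sketch is the shape of the logic, not the numbers: context across events on one host is what turns low-grade noise into a detection.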
Eventually, Next-Generation Antivirus (NGAV) emerged, sometimes as a feature inside firewalls used to enforce lateral control between network segments. Some of these detections worked well enough, but the principles were largely static: app and file allow/block lists, sandboxing, and perhaps behavioral analytics.
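The static nature of those controls is easy to see in a sketch of an allow/block list check. The executable names and the three-way verdict are invented for illustration; real products maintain far larger, signed lists.

```python
# Hypothetical static lists of the kind NGAV-era controls relied on.
ALLOW_LIST = {"chrome.exe", "excel.exe", "svchost.exe"}
BLOCK_LIST = {"mimikatz.exe", "psexec_renamed.exe"}


def static_verdict(executable: str) -> str:
    """Static policy: block known-bad, allow known-good, sandbox the rest."""
    name = executable.lower()
    if name in BLOCK_LIST:
        return "block"
    if name in ALLOW_LIST:
        return "allow"
    return "sandbox"  # unknown binaries fall through to behavioral analysis


print(static_verdict("mimikatz.exe"))  # block
print(static_verdict("dropper.exe"))   # sandbox
```

The weakness the article goes on to describe follows directly: any attacker who renames a binary, or hijacks an allowed one, slips past the two lists entirely.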
An obvious downside was response and investigation: even when threats to endpoints were detected, the next problem was reacting quickly enough and determining whether other endpoints were part of the same attack. EDR stepped into this gap, where it still stands today. It is not always easy to explain the difference between each generation, as many earlier techniques remain in use, but the selling point of EDR systems is that they can place a single detection in a larger context that gives defenders an idea of how far an attack has traveled.
This approach is probably correct. Detections are rarely isolated, and attacks propagate quickly, meaning that multiple endpoints will be involved. The kill chain of any attack involves several stages, each of which will leave traces – as long as defenders have the tools to uncover them. EDR, the argument goes, is a way to give Security Operations Center (SOC) teams this visibility.
And yet, “By itself, any amount of first-generation EDR telemetry will not prevent ransomware from entering your network,” Grillo points out, aware that the depressing statistics relate to successful attacks against seemingly well-resourced companies. Indeed, many earlier EDR systems solve one set of problems by creating a new, more demanding set of alert overhead and complexity.
Some EDR systems not only sound alarms on false positives but drive security officials mad trying to figure out what an alert meant. That is why, explains Grillo, Fortinet uses the User and Entity Behavior Analytics (UEBA) technology it acquired with ZoneFox in 2018 across its platform to cut background noise from detections.
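A core idea behind UEBA is scoring activity against an entity's own historical baseline rather than alerting on every raw event. The sketch below is a deliberately simple z-score version of that idea, assuming a per-user history of daily activity counts; it is purely illustrative and not Fortinet's actual UEBA model.

```python
import statistics


def ueba_score(history, observed):
    """Score how far today's activity deviates from the user's own baseline.

    `history` is a list of past daily counts for some activity (e.g. files
    accessed). A z-score against the user's own mean flags outliers instead
    of treating every event as equally alert-worthy.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against divide-by-zero
    return (observed - mean) / stdev


history = [40, 45, 38, 42, 44, 41, 39]  # typical daily file accesses
print(ueba_score(history, 43))   # near the baseline -> small score, no alert
print(ueba_score(history, 400))  # a massive spike -> obvious outlier
```

Thresholding on a deviation score like this is one way a system can suppress the benign fluctuations that would otherwise flood analysts with false positives.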
Vanilla EDR therefore solves the visibility problem by creating more data. But the more data you can see, the more data you have to analyze, which means not only adding machine learning to make sense of it all, but also more security people to make decisions based on the results. In theory, some of this can be automated with smart rule sets, but a common complaint about first-gen EDR is that it consumes engineering time and skill.
Adjustments to the model emerged, such as adding threat intelligence and better remediation capabilities, but none of these could overcome the fact that even in a well-resourced organization, the complexity of EDR systems often means they are difficult to implement. According to a 2020 Enterprise Strategy Group analyst study cited by Fortinet, 83% of companies surveyed agreed that effective use of EDR requires advanced security skills, while 78% agreed that their EDR projects had been more complex to implement than expected.
Another, bigger problem is that basic EDR is too slow. It can tell defenders what went wrong, provided they have hours or even days to investigate a breach. Of all the failings that catch out victims, this is probably the one that does the most damage. Additional data, threat models, and sophisticated analysis do little good unless they can be applied within minutes of the compromise they were deployed to detect and contain.
EDR marketers often struggle to explain exactly how second-generation EDR improves on systems already in use, but it comes down to the fact that vendors have finally tweaked the architecture to counter the way modern cyber attacks actually unfold, as opposed to a vague, idealized concept of a "threat".
At the heart of this development is the "playbook," a set of procedures organizations can use to define how their EDR should respond, in an automated fashion, when it thinks it has detected something that deserves further investigation. While Security Orchestration, Automation and Response (SOAR) platforms appear to achieve the same thing, they provide higher-level analysis and automation rather than immediate reaction. It is the EDR system that acts as the trigger, indicating to defenders that there is a problem.
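Structurally, a playbook is little more than an ordered list of response steps keyed by the kind of detection. The sketch below shows that shape; the detection types, step names, and default behavior are all invented for illustration and do not reflect any real EDR product's schema.

```python
# Hypothetical playbooks: ordered response steps per detection type.
PLAYBOOKS = {
    "ransomware_behavior": ["kill_process", "isolate_host", "snapshot_disk", "notify_soc"],
    "credential_dumping": ["kill_process", "force_reauth", "notify_soc"],
    "unknown_binary": ["sandbox_detonate", "notify_soc"],
}


def run_playbook(detection_type, executor):
    """Look up the playbook for a detection and run each step in order.

    `executor` is any callable that performs a named action; the EDR acts
    as the trigger, and the playbook decides what happens next without
    waiting for a human.
    """
    steps = PLAYBOOKS.get(detection_type, ["notify_soc"])  # safe default
    return [executor(step) for step in steps]


executed = run_playbook("ransomware_behavior", lambda step: f"ran:{step}")
print(executed)
```

Keeping the steps as data rather than code is the design choice that lets a security team tune the response per threat type without re-engineering the detection pipeline.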
Ironically, years after traditional antivirus became ineffective, EDR still relies on the capabilities of a vendor's endpoint detection client. "Efficient EDR works at the kernel level, is signatureless and uses machine learning. It has the capacity to react on its own," explains Grillo. This allows it to terminate a suspicious process, delete files, or isolate a device within one second of detection, which traditional EPP and even some earlier EDRs wouldn't do without a SOC investigation first.
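The difference between that autonomous containment and the older alert-and-wait model can be sketched in a few lines. The action names and the two modes below are hypothetical, used only to contrast the two behaviors the article describes.

```python
def respond(detection, mode="second_gen"):
    """Contrast second-gen autonomous containment with first-gen alerting.

    Second-gen EDR contains the device itself moments after detection;
    first-gen raises an alert and leaves containment to a SOC analyst.
    """
    if mode == "second_gen":
        return {
            "host": detection["host"],
            "actions": ["terminate_process", "delete_dropped_files", "isolate_device"],
            "requires_human": False,
        }
    # First-gen: raise an alert and queue it for human investigation.
    return {"host": detection["host"], "actions": ["raise_alert"], "requires_human": True}


print(respond({"host": "ws-101", "process": "evil.exe"}))
print(respond({"host": "ws-101", "process": "evil.exe"}, mode="first_gen"))
```

The `requires_human` flag is the crux: in the first-gen path, containment latency is however long the SOC queue happens to be.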
In theory, this runs the risk that false positives inadvertently turn into a denial of service if too many machines are isolated at once. But unlike first-generation EDR, second-generation EDR should also be able to perform a quick cleanup or restore to get a machine back up and running without disconnecting it. Alternatively, this is where third-party MDR services come into their own, allowing remote human investigation without distracting key people in the SOC.
“Fortinet uses machine learning to shorten the decision-making process, but never to the point where machines take over the whole operation,” comments Grillo.
Increasingly, endpoint security is only part of a much larger system, which in Fortinet's case is the company's Fortinet Security Fabric, a larger architecture that connects endpoint security and EDR to other security areas such as cloud, firewalls and switches, authentication, SIEM, and wireless access. This kind of system bears little resemblance to the old proprietary security architectures that overwhelmed customers with a one-size-fits-all approach a decade ago, and should ideally be able to integrate with equipment from other vendors.
The mistake made with EP and EPP was to treat endpoint protection in isolation, as a special case that could be contained. If defenders have learned one thing, it's that the siloed approach to security never worked. Defenders need to see the entire network and every event that occurs within it in context, regardless of the device, service or application that generated it. This will be the test by which enhanced EDR should be judged: does it mark the moment when networks become a single, complete security space rather than a series of leaky domains?
Sponsored by Fortinet