Needed: Breach detection correction

There is no shame in being breached by a cyber attack -- security experts are unanimous about that. Prevention, while a worthy part of a risk management strategy, will never be 100% successful, given the sophistication and overwhelming volume of attacks.

But there is room for improvement -- vast improvement -- in the detection of breaches. A large majority of enterprises fail to detect breaches on their own -- they find out about them from somebody else, as a couple of recent reports show.

The security firm Mandiant, now part of FireEye, reported recently that while the average time it took to detect breaches declined slightly from 2012 to 2013, from 243 to 229 days (more than seven months), the percentage of firms that detected their own breaches actually dropped, from 37% to 33%.

The results in a report from security firm Trustwave were more encouraging, at least for the time between intrusion and detection -- it found the median was 87 days. But only 29% of firms detected malware in their systems on their own, which Karl Sigler, Trustwave's manager of threat intelligence, called "just a horrible statistic in general."

All of which raises a couple of obvious questions: Why are organizations so bad at detecting breaches? And what can and should they be doing to improve?

Mike Parrella, director of operations for managed services at Verdasys, told Dark Reading's Ericka Chickowski almost a year ago that the poor performance was because "businesses and government alike are filled with idiots and ostriches. People are simply not looking for a leak -- they would rather not look, not be bothered, not spend to solve the problem, and so they are not finding. They prefer to outrun their risk."

Scott Koller, an attorney with the Information Law Group, was much less harsh. It is more a matter of not knowing than not caring, he said; most enterprises take steps to guard against the vulnerabilities they know about.

"But when they do suffer a breach, it is the result of a sophisticated attack using a vulnerability they didn't know existed. In light of that, it shouldn't be surprising why enterprises have such a difficult time detecting their own breaches."

Richard Bejtlich, chief security strategist at FireEye and a nonresident senior fellow at the Brookings Institution, is also a bit more forgiving, noting that once adversaries have breached the security perimeter of an organization, "it can be exceptionally difficult to find them. At that point, they're using VPN credentials."

But he agrees that there is a lingering misperception about how to deal with modern threats. "There is still a prevalent attitude that it's a technology problem -- that if you deploy proper equipment, that means it (a breach) won't happen," he said.

"The first thing people ask is: 'What do I do to make this go away? Who do I write a check to?' You have to realize that doesn't work."

Joseph Loomis, founder and CEO of CyberSponse, agrees with Parrella that "denial" is a major reason for the failure to detect. From that denial flow other problems, such as a reluctance to consider "new, cutting-edge technologies on the market" and a failure to admit that "their current solutions are broken or ineffective."

Anton Chuvakin, research director for security and risk management at Gartner for Technical Professionals, echoes Bejtlich's view that there is still a perception that technology alone will make everything secure.

"The failures are not due to lack of buying, but a lack of doing," he said. "The tool may have produced an alert, but no skilled analyst was available to interpret it, which is a people breakdown. Or, the analyst was too busy with other things to care, which is a process breakdown."

Without the right security analysts and workflows, "the boxes won't do it," he said.

What will do it? Bejtlich quotes Bruce Schneier, security guru, author and CTO of Co3 Systems. "He (Schneier) said this about 10 years ago -- it's only two words, but it's still one of my favorite things he's ever said: 'Monitor first.' That summarizes the best advice you could get," Bejtlich said.

There is general agreement among experts about that, but also that effective monitoring requires good tools and good people. Chuvakin noted that the attacker is not a machine but a person. "In many cases, an expensive tool, such as SIEM, DLP or network forensics, has already been purchased and deployed, but no equally expensive, skilled, motivated and passionate security analyst was put in front of the console," he said. "In security, we're not fighting the tools, we are fighting the people on the other end."

Koller agrees that the solution is not simply to "buy the latest software or technical gadget. Instead, enterprises need to dedicate staff and resources to staying up-to-date in the latest security developments and patching the numerous vulnerabilities as they are discovered," he said.

He recommends creating "a baseline metric of your systems and normal network traffic patterns and then reviewing logs and event managers for signs of anomalies or unusual activity. Compare any irregularities against the baseline."
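To make Koller's advice concrete, here is a minimal sketch of that baseline-and-compare approach in Python. The flow-record CSV format, the file names and the three-sigma threshold are all assumptions made for illustration, not anything Koller prescribes:

```python
# Minimal sketch of baseline-and-compare anomaly detection. Assumes flow
# records in CSV files with columns: timestamp, host, bytes_out. The file
# names, column names and 3-sigma threshold are illustrative assumptions.
import csv
import statistics
from collections import defaultdict

def load_bytes_per_host(path):
    """Sum outbound bytes per host from one day's flow-record CSV."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["host"]] += int(row["bytes_out"])
    return totals

def build_baseline(paths):
    """Per-host mean and standard deviation across the historical days."""
    samples = defaultdict(list)
    for path in paths:
        for host, total in load_bytes_per_host(path).items():
            samples[host].append(total)
    return {host: (statistics.mean(v), statistics.pstdev(v))
            for host, v in samples.items()}

def find_anomalies(baseline, today, sigmas=3.0):
    """Flag hosts whose traffic deviates more than N sigmas from baseline."""
    alerts = []
    for host, total in today.items():
        if host not in baseline:
            alerts.append((host, total, None))  # never-seen host: review it
            continue
        mean, stdev = baseline[host]
        if stdev > 0 and abs(total - mean) > sigmas * stdev:
            alerts.append((host, total, mean))
    return alerts

if __name__ == "__main__":
    history = [f"flows_day{i}.csv" for i in range(1, 31)]  # last 30 days
    baseline = build_baseline(history)
    today = load_bytes_per_host("flows_today.csv")
    for host, seen, expected in find_anomalies(baseline, today):
        print(f"ANOMALY {host}: {seen} bytes (baseline mean: {expected})")
```

The essential design choice is that the baseline is per host: a file server that routinely pushes gigabytes should not alarm at volumes that would be wildly abnormal for a receptionist's workstation.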

But people have limits, according to Loomis, who said it is not possible to monitor effectively without automation because of the "sheer number of attacks. If you're not going to automate and accept that you're going to have some false positives in there which might inconvenience people, you will be on the top of the list for being compromised."
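A minimal sketch of the kind of automation Loomis describes might score incoming alerts and escalate automatically above a threshold, knowingly accepting some false positives. The alert fields, scoring weights and threshold below are invented for illustration:

```python
# Sketch of automated alert triage: score every raw alert and auto-escalate
# above a threshold, accepting that some escalations will be false positives.
# All rules, weights and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "ids", "dlp", "auth"
    severity: int      # 1 (low) through 5 (critical), as reported by the tool
    repeat_count: int  # times this signature fired in the current window
    off_hours: bool    # fired outside business hours?

def score(alert: Alert) -> float:
    """Combine simple signals into one triage score (weights are illustrative)."""
    s = alert.severity * 2.0
    s += min(alert.repeat_count, 10) * 0.5  # repeated firing raises suspicion
    if alert.off_hours:
        s += 3.0                            # off-hours activity weighs more
    return s

ESCALATE_AT = 9.0  # set low enough that some false positives get through

def triage(alerts):
    """Escalate automatically instead of queueing everything for a human."""
    for a in alerts:
        if score(a) >= ESCALATE_AT:
            print(f"ESCALATE: {a}")  # e.g. page on-call, open incident ticket
        else:
            print(f"log only: {a}")

triage([
    Alert("auth", severity=3, repeat_count=12, off_hours=True),  # escalates
    Alert("ids", severity=1, repeat_count=1, off_hours=False),   # logged only
])
```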

Bejtlich has demonstrated his admiration for Schneier's advice by providing much more than a summary on monitoring. In his latest book, "The Practice of Network Security Monitoring," he notes that network security monitoring (NSM) is qualitatively different from continuous monitoring (CM), which he said is a "hot topic" in U.S. government circles.

The focus of CM is to find vulnerabilities and patch them. The focus of NSM is to detect and contain adversaries before they can accomplish their mission. The latter, he said, offers the ultimate chance to defeat attackers because "prevention eventually fails."
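As a hedged illustration of one common NSM-style analytic (not Bejtlich's specific method), the sketch below flags outbound connections to destinations never seen during a baseline window, giving an analyst a chance to investigate a possible command-and-control or exfiltration channel before the adversary finishes. The one-record-per-line "src dst bytes" log format and the file names are assumptions:

```python
# Hedged sketch of one NSM-style analytic: flag outbound connections to
# destinations never seen during a baseline window, so an analyst can
# investigate before data leaves. Log format and file names are assumptions.

def known_destinations(log_path):
    """Collect destination addresses from a one-per-line 'src dst bytes' log."""
    dests = set()
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                dests.add(parts[1])
    return dests

def new_destination_alerts(baseline_log, current_log):
    """Yield today's connections whose destination is absent from the baseline."""
    baseline = known_destinations(baseline_log)
    with open(current_log) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[1] not in baseline:
                yield line.strip()

if __name__ == "__main__":
    for conn in new_destination_alerts("conn_30d.log", "conn_today.log"):
        print("REVIEW:", conn)  # candidate command-and-control or exfil channel
```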

Bejtlich agrees with other experts that the human element is key. In his new book, he writes that to provide NSM effectively requires a computer incident response team (CIRT), which can range from one person to dozens. The CIRT must not only collect monitoring data, but also be able to analyze it to find where and how an organization may have been compromised.

The good news, Bejtlich said, is that it is possible to decrease detection and response times drastically -- from the current multiple months down to hours or less. Often there is more time than that before an attacker starts exfiltrating data. "Target had a two-day window," he said.

"If we define success as the bad guys never get in, then we're defeated," he said. "But if I get to you before you accomplish your mission, then I win." And even if the response is longer than a matter of hours or days, any improvement can help. "Three weeks would be order of magnitude better than what we are seeing now," he said.

And if even that fails, Christine Marciano, president of Cyber Data Risk Managers, said companies should buy some financial protection in the form of cyber insurance -- not only for themselves, but also for their service providers and partners.

"Many larger organizations that contract with third-party service providers are also now requiring their providers to purchase cyber or data breach insurance to satisfy contractual indemnification requirements in the event of a data breach, as my agency has seen an uptick in such circumstances," she said.

