Data volumes making security-log centralisation trickier: ManageEngine

Ongoing growth in security breaches has customers demanding better information about their security exposure, but most will struggle to get it without tapping into intelligent analytics platforms capable of scouring many kinds of log data for trends across cloud, mobile, and other enterprise environments, a security analytics expert has warned.

The different security profiles of those varied environments have made centralisation of log data more important than ever, warned ManageEngine director of product management Rajesh Ganesan, since without it companies will struggle to pick out the often subtle and hidden security trends that must be spotted in the age of the advanced persistent threat (APT) and other stealthy attacks.

“We now talk about complex, heterogeneous infrastructure,” Ganesan told CSO Australia. “The rate of data coming in on cloud infrastructure is orders of magnitude higher than what would happen in a normal enterprise setup. Today, in the age of the cloud, a complex system can generate gigabytes of log data every day.”

That volume has put pressure on older, relatively casual approaches to data analysis: “The paradigm of using a homegrown tool to manage log data is gone,” he continued.

“The data is so huge that you need automated tools for delivering any sort of security intelligence. But each system generates its own often proprietary data, and with IT infrastructure spread across public and private clouds you now have so many branches of enterprise IT.”
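Taming that proprietary sprawl usually starts with normalisation: mapping each source's format into one common event schema before any analysis happens. Below is a minimal sketch of the idea, assuming two hypothetical inputs – an Apache-style access-log line and a JSON cloud audit event with assumed field names – folded into a shared (timestamp, source, user, action) record:

```python
import json
import re
from datetime import datetime

# Hypothetical common schema: every event becomes (timestamp, source, user, action).
APACHE_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)"'
)

def from_apache(line):
    """Parse an Apache-style access-log line into the common schema."""
    m = APACHE_RE.match(line)
    if not m:
        return None
    ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
    return {"timestamp": ts.isoformat(), "source": m.group("host"),
            "user": m.group("user"), "action": m.group("req")}

def from_cloud_json(line):
    """Parse a JSON-formatted cloud audit event into the common schema."""
    event = json.loads(line)
    return {"timestamp": event["eventTime"],              # assumed field names
            "source": event.get("sourceIPAddress", "unknown"),
            "user": event.get("userIdentity", "unknown"),
            "action": event.get("eventName", "unknown")}

print(from_apache('10.0.0.5 - alice [10/Oct/2024:13:55:36 +0000] "GET /admin HTTP/1.1"'))
print(from_cloud_json('{"eventTime": "2024-10-10T13:55:36Z", "userIdentity": "bob", "eventName": "DeleteBucket"}'))
```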

That complexity creates new burdens for companies, not only in their analytics capabilities but in meeting statutory requirements for data retention. Each enterprise has its own requirements based on the controls in place for its vertical industry, Ganesan said, but – with the PCI DSS credit-card standard requiring retention of logs for a year and the US Health Insurance Portability and Accountability Act (HIPAA) requiring they be kept for seven years – he believes the burden of not only analysing but keeping such large volumes of data poses significant challenges for every company.
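The retention side of that burden can at least be mechanised. A minimal sketch, using the retention periods cited above and assuming a hypothetical archive directory of daily log files named YYYY-MM-DD.log:

```python
from datetime import date, timedelta
from pathlib import Path

# Retention periods as cited in the article: PCI DSS one year, HIPAA seven years.
RETENTION_DAYS = {"pci_dss": 365, "hipaa": 7 * 365}

def expired_logs(archive_dir, standard, today=None):
    """Yield daily log files (named YYYY-MM-DD.log) older than the retention window."""
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS[standard])
    for path in Path(archive_dir).glob("*.log"):
        try:
            file_date = date.fromisoformat(path.stem)
        except ValueError:
            continue  # skip files that don't follow the naming convention
        if file_date < cutoff:
            yield path

# Example: list PCI DSS-expired files rather than deleting them outright.
for old in expired_logs("/var/log/archive", "pci_dss"):
    print("eligible for archival or deletion:", old)
```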

Many companies are responding to this challenge by deploying multiple instances of their analytics and reporting engines, distributed across the environment and configured to feed log data to a central reporting and analysis console.
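A minimal sketch of that pattern, assuming a hypothetical central console that exposes an HTTP ingest endpoint and agents that batch-forward lines from a local log file:

```python
import json
from urllib import request

CONSOLE_URL = "http://central-console.example.com/ingest"  # hypothetical endpoint
BATCH_SIZE = 500

def send_batch(lines, source_name):
    """POST one batch of raw log lines to the central console."""
    payload = json.dumps({"source": source_name, "lines": lines}).encode("utf-8")
    req = request.Request(CONSOLE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        resp.read()  # console acknowledges the batch

def forward(log_path, source_name):
    """Read a local log file and forward its lines in fixed-size batches."""
    batch = []
    with open(log_path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) >= BATCH_SIZE:
                send_batch(batch, source_name)
                batch = []
    if batch:
        send_batch(batch, source_name)  # flush the final partial batch
```

A production agent would also track its file offset and retry failed batches; the point here is only the shape of the distributed-collection topology.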

This approach allows raw logs to be kept as space-efficient flat files, while the analytics engines built into security information and event management (SIEM) platforms use conventional relational databases for their own indexing and analysis.
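A minimal sketch of that split, keeping the raw log as a flat file while a relational index – SQLite here, standing in for the SIEM's own database – records just the fields needed for analysis, plus a byte offset for retrieving the original line:

```python
import sqlite3

def index_flat_file(log_path, db_path):
    """Keep the raw log as a flat file; store only (offset, ts, user) rows relationally."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS log_index (
                       offset INTEGER PRIMARY KEY,  -- byte offset into the flat file
                       ts TEXT,
                       user TEXT)""")
    offset = 0
    with open(log_path, "rb") as f:
        for raw in f:
            line = raw.decode("utf-8", errors="replace")
            parts = line.split(" ", 2)  # assumed layout: "<iso-timestamp> <user> <message>"
            if len(parts) == 3:
                con.execute("INSERT OR REPLACE INTO log_index VALUES (?, ?, ?)",
                            (offset, parts[0], parts[1]))
            offset += len(raw)
    con.commit()
    con.close()

def fetch_raw_line(log_path, offset):
    """Use an indexed offset to seek straight to the original line in the flat file."""
    with open(log_path, "rb") as f:
        f.seek(offset)
        return f.readline().decode("utf-8", errors="replace")
```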

In this architecture, SIEM agents work together to keep up with the mountain of data produced by increasingly diverse enterprise environments. With the right architecture in place, reporting on the ongoing security profile – whether through a real-time dashboard or retrospectively for business auditors – can be managed smoothly even in the context of data-spewing cloud environments.


Done properly, “it doesn't matter whether it's on cloud infrastructure or not,” Ganesan said. “If you look at the log data, you can tie every event back to a user – and every user action back to an event source. Any audit is much easier if you have all these data points ready to go into a report you can build on. Everybody wants to get an audit done in the most efficient, effective way – because they have other challenges to deal with.”
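With events normalised and indexed along the lines sketched earlier, the audit query itself stays simple. A minimal illustration, assuming a hypothetical events table holding the normalised (timestamp, source, user, action) records:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (ts TEXT, source TEXT, user TEXT, action TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
    ("2024-10-10T09:00:00Z", "vpn-gw",     "alice", "login"),
    ("2024-10-10T09:02:10Z", "fileserver", "alice", "read /finance/q3.xlsx"),
    ("2024-10-10T09:05:42Z", "cloud-api",  "bob",   "DeleteBucket"),
])

# Tie every event back to a user, and every user action back to an event source.
for user, source, action, ts in con.execute(
        "SELECT user, source, action, ts FROM events ORDER BY user, ts"):
    print(f"{user:6} {ts}  {source:10} {action}")
```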

Tags: SIEM, PCI DSS, ManageEngine, security breaches, HIPAA, data analytics, Rajesh Ganesan
