Making the Move Towards Continuous Monitoring

Last week’s Shellshock vulnerability alert sent IT organizations around the world scrambling to scan their environments, patching servers at a fever pitch to keep attackers out. Just days earlier, The Home Depot, the United States’ largest home improvement retailer, disclosed a massive data breach in which over 50 million customers’ credit card details were compromised by a POS malware infestation. Earlier this month Jimmy John’s, a gourmet sandwich chain, reported a large compromise of the payment systems at nearly a third of its stores. Then there is the Target breach, where millions of customers had their payment information compromised and sold on the cyber underground, bringing the real threat of cyber attacks into the public discussion. I could go on and on about breaches and vulnerabilities, but my point in mentioning these high-profile cases is two-fold.

First, as we’re seeing, there is no foolproof way to protect your environment from breaches and compromises. It is cliché at this point but still rings true: “it’s not a matter of if; it’s a matter of when.” We all know it will happen; the key is to be able to react quickly, with the visibility and people you need to make informed decisions and respond swiftly.

Second, and potentially more concerning, especially in the case of data breaches: were there warning signs that were missed? Could these organizations have saved themselves the massive cost, in both money and reputation, associated with these breaches? The answer in most cases is “yes.” The problem is that far too often organizations do not have the budget or staff required to proactively seek out these warning signs, even when the data is fairly easy to access.

At the recent EMEA Gartner Security Summit, Neil MacDonald gave a compelling presentation citing the need for organizations to adopt a Continuous Advanced Threat Protection approach to security. MacDonald presented an approach built, at its core, on continuous monitoring and analytics. This represents a sea change from the security frameworks of the past, which were generally built around an incident response model. The frameworks of the future, according to MacDonald, will provide continuous response. So you might ask: that sounds interesting, but where do I begin?

The first step towards a continuous monitoring and analytics approach to security can be an easy one. While all organizations are in different phases of security maturity, there is one thing they all have in common – log data. From servers to web applications and everything in between, logs are being generated constantly. This data, potentially rich in actionable intelligence, is a great place to look for warning signs and to begin moving away from point-in-time incident response to a more agile continuous response model. When it comes to log management, there are three key components that feed into an effective program: people, process, and technology.

There are many options when it comes to log management technology. For organizations with advanced security engineering capabilities and large budgets, there are tools like Splunk. If your organization has both the resources and the time to invest in building a log management process around a tool like Splunk, you can gain significant insights into your environments.

For most organizations, however, deploying advanced technology that requires specialized in-house expertise isn’t realistic. Fortunately, there are many other options that require little to no log expertise and can be deployed quickly. Regardless of the path you choose to achieve continuous monitoring, it is important to have a clearly defined process, from both a technology and a people perspective, that can scale as your business grows. Here are a few guidelines to consider when weighing your options:

SIX STEPS TO EFFECTIVE LOG ANALYSIS AND CONTINUOUS MONITORING

STEP 1: DATA CAPTURE

Your process will always begin with data capture. It is important that when developing this phase of the process you account for the different applications, operating systems, and environments your data will be coming from. Your technology should allow you to collect, aggregate, and process log data from throughout your IT environment. Ideally, with lightweight deployment options, you should be able to capture log data from operating systems, applications, databases, security products, and a host of other tools in a matter of minutes.
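
To make this concrete, here is a minimal sketch of a lightweight collector that tails a local log file and forwards new lines to a central aggregator. The host, port, and file path are hypothetical placeholders, and a production agent would add buffering, reconnection, and TLS:

```python
import socket
import time

# Hypothetical aggregator endpoint -- replace with your collector's address.
AGGREGATOR_HOST = "logs.example.internal"
AGGREGATOR_PORT = 6514

def tail(path):
    """Yield new lines appended to a log file, polling once per second."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file, like `tail -f`
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(1)

def ship(path):
    """Forward each new log line to the central aggregator over TCP."""
    with socket.create_connection((AGGREGATOR_HOST, AGGREGATOR_PORT)) as sock:
        for line in tail(path):
            # Tag each record with its origin so it can be routed downstream.
            record = f"{socket.gethostname()} {path} {line}\n"
            sock.sendall(record.encode("utf-8"))

if __name__ == "__main__":
    ship("/var/log/auth.log")  # hypothetical path; run one per source file
```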

STEP 2: PROCESSING

Once your data is collected, you need to process it. By processing, I mean your technology should take that disparate log data in its many formats and normalize it to enable easy searching and to feed the rest of the process. Once you begin to collect and process log data, you will quickly see that storage becomes a key component of your framework. Depending on the size of your organization, you could be generating gigabytes of data daily, so make sure you account for the proliferation of data that will occur.
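
As an illustration, the sketch below normalizes two common formats – an sshd syslog line and an Apache access log line – into a shared schema. The patterns and field names are illustrative only; a real deployment needs parsers for every source you capture:

```python
import json
import re

# Example patterns for two common sources; real deployments need many more.
SYSLOG_AUTH = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) sshd\[\d+\]: "
    r"(?P<msg>.*)$"
)
APACHE_ACCESS = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def normalize(raw):
    """Map a raw log line onto a common schema, or None if unrecognized."""
    m = SYSLOG_AUTH.match(raw)
    if m:
        return {"source": "sshd", "host": m.group("host"),
                "message": m.group("msg"), "raw": raw}
    m = APACHE_ACCESS.match(raw)
    if m:
        return {"source": "apache", "client_ip": m.group("ip"),
                "status": int(m.group("status")), "request": m.group("req"),
                "raw": raw}
    return None

lines = [
    "Oct  6 09:14:02 web01 sshd[4321]: Failed password for root "
    "from 203.0.113.9 port 50022 ssh2",
    '203.0.113.9 - - [06/Oct/2014:09:14:05 +0000] "GET /admin HTTP/1.1" 403 199',
]
for line in lines:
    print(json.dumps(normalize(line)))
```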

STEP 3: CORRELATION & INCIDENT IDENTIFICATION

Correlating disparate log events will allow you to home in on any issues that require investigation. To complete this phase effectively, you will need a library of security correlation rules to analyze events and identify security incidents. This library will require constant care and feeding to ensure that the rules and threat intelligence are up to date as the global security environment evolves.
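
Here is a minimal example of what one such correlation rule might look like: flagging a possible SSH brute-force attempt when a single source IP generates five or more failed logins within sixty seconds. The threshold, window, and event shape are assumptions for illustration:

```python
from collections import defaultdict, deque

# Hypothetical rule: 5+ failed logins from one source IP within 60 seconds.
THRESHOLD = 5
WINDOW_SECONDS = 60

failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def check_event(event):
    """Apply one correlation rule to a normalized event.

    Assumes events carry "source", "outcome", "src_ip", and a numeric "ts".
    Returns an incident dict when the rule fires, otherwise None.
    """
    if event.get("source") != "sshd" or event.get("outcome") != "failure":
        return None
    times = failures[event["src_ip"]]
    times.append(event["ts"])
    # Discard failures that have aged out of the sliding window.
    while times and event["ts"] - times[0] > WINDOW_SECONDS:
        times.popleft()
    if len(times) >= THRESHOLD:
        return {"rule": "ssh-brute-force", "src_ip": event["src_ip"],
                "count": len(times), "severity": "high"}
    return None

# Simulated burst of failed logins from a single address.
for i in range(6):
    incident = check_event({"source": "sshd", "outcome": "failure",
                            "src_ip": "203.0.113.9", "ts": 1000 + i * 5})
    if incident:
        print(incident)
```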

STEP 4: SECURITY ANALYST INVESTIGATION

Ideally, as incidents are identified you will have security experts ready to validate them. These experts will need access to additional threat intelligence in order to validate the incidents and set the appropriate priority level. Remember, this phase of the process requires around-the-clock staffing in order to achieve continuous monitoring.
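
As a rough illustration of the triage step, the sketch below enriches a correlated incident against a hypothetical local threat-intelligence feed and assigns a starting priority for an analyst to review. The feed, confidence score, and priority table are all invented for the example:

```python
# Hypothetical local threat-intelligence feed: known-bad IPs and their tags.
THREAT_INTEL = {
    "203.0.113.9": {"tag": "known-scanner", "confidence": 0.9},
}

# Hypothetical default priority for each correlation rule.
PRIORITY_BY_RULE = {"ssh-brute-force": "medium"}

def triage(incident):
    """Enrich a correlated incident with threat intel and set a priority.

    An analyst still reviews the result; this only pre-sorts the queue.
    """
    intel = THREAT_INTEL.get(incident.get("src_ip"))
    priority = PRIORITY_BY_RULE.get(incident["rule"], "low")
    if intel and intel["confidence"] > 0.8:
        # Corroborating intel escalates the default priority for the rule.
        priority = "high"
    return {**incident, "intel": intel, "priority": priority}

print(triage({"rule": "ssh-brute-force", "src_ip": "203.0.113.9", "count": 6}))
```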

STEP 5: ESCALATION & RESPONSE

Now, with validated incidents in hand, the next step in the process should be driven by the severity of the identified incident. A general framework for actions by severity level is shown below, followed by a sketch of how the mapping could be automated:

  • Low Priority: Many times, low priority incidents are nothing more than “Internet noise”. Typically these events should be logged to a data store and made available via reports showing status and trends.
  • Medium Priority: These incidents require closer observation and continued monitoring, but don’t necessarily rise to the level of a real-time response.
  • High Priority: This is where your security experts earn their keep. Your experts should actively investigate these incidents and develop an appropriate response as quickly as possible.
  • Critical Priority: These are your “DEFCON 1” incidents requiring active defense blocking and ongoing activity to secure your environment.
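
A minimal sketch of that mapping, with one hypothetical handler per severity level, might look like this:

```python
def log_and_report(incident):
    print(f"logged for trend reporting: {incident['rule']}")

def watch(incident):
    print(f"added to watch list for continued monitoring: {incident['rule']}")

def investigate(incident):
    print(f"paged on-call analyst for active investigation: {incident['rule']}")

def block_and_investigate(incident):
    print(f"blocking source and escalating to incident response: {incident['rule']}")

# One response path per severity level, mirroring the framework above.
RESPONSE_BY_PRIORITY = {
    "low": log_and_report,
    "medium": watch,
    "high": investigate,
    "critical": block_and_investigate,
}

def escalate(incident):
    RESPONSE_BY_PRIORITY[incident["priority"]](incident)

escalate({"rule": "ssh-brute-force", "priority": "high"})
```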

STEP 6: SECURITY ACTIONS & POLICIES

The final step, driven by the previous phase, is where you improve your security posture by taking action based on the identified incidents. This could include changes to policies, the creation of firewall rules, and new threat signatures added to your security content, among other measures.
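
For example, a validated incident might be translated into a candidate firewall rule and a policy recommendation, as in the sketch below. The rule is rendered rather than executed, since changes like this should flow through change management:

```python
def firewall_block_rule(incident):
    """Render an iptables command to block an attacker's source address.

    Printed rather than executed here; in practice this would feed a
    change-management or orchestration workflow, not run blindly.
    """
    return f"iptables -A INPUT -s {incident['src_ip']} -j DROP"

def tuning_note(incident):
    """Suggest a follow-up policy action based on the rule that fired."""
    if incident["rule"] == "ssh-brute-force":
        return "Consider enforcing key-based auth and rate-limiting sshd."
    return "Review related detection rules for coverage gaps."

incident = {"rule": "ssh-brute-force", "src_ip": "203.0.113.9",
            "priority": "critical"}
print(firewall_block_rule(incident))
print(tuning_note(incident))
```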

THE PATH AHEAD

With today’s release of the Alert Logic ActiveWatch for Log Manager managed service, you can gain insights through continuous monitoring without adding staff and the associated expense of managing a solution in-house. Built on the Alert Logic cloud-based log management solution, Alert Logic Log Manager, ActiveWatch for Log Manager is a dual-purpose 24×7 continuous monitoring managed service that identifies security issues from log data and meets the requirements for daily log review mandated by many regulating bodies, such as PCI DSS. With a purpose-built correlation engine, this managed service uses both threat intelligence and correlation rules to generate security incidents that are then reviewed by a team of security experts in the Alert Logic Security Operations Center (SOC). These security experts validate the details of each incident and provide detailed mitigation recommendations to customers 24×7, all day, every day. By outsourcing this critical function, organizations can keep their internal teams focused on keeping the business running, confident that their environment is under continuous monitoring by a team of experts.

Breaches, vulnerabilities, targeted attacks, and other threats to our environments will never go away. The good news is that as an industry we do not have to sit idly by waiting to be the next victim. To learn more about this new managed service, visit here.