Tuesday 8 December 2020

Analyst's Problems as a Service (APaaS) - Part 3

In this blog post, we will continue our discussion of Analyst's Problems as a Service (APaaS). If you have not read the previous parts, please read them here and here. This time we will look at a few things an analyst can do to overcome, or at least take a different approach to, the problems we covered in the "The usual", "Architectural", "Logs" and "Alerts" sections.


Is there anything we can do?


So what can an analyst do about all these issues? There are problems everywhere, and it can feel like there is no light at the end of the tunnel. Don’t worry: below are some things within an analyst’s power that may help mitigate the problems, or at least improve the process.

1)      The first and foremost thing that helps with short staffing and less-experienced staff is knowledge sharing. Remember the saying “Sharing is Caring”; it holds true in the SOC world. If you have a decent-sized team, it is a great idea to have analysts gather regularly and share what they have learned among themselves. The second thing is documentation. It is an important part of any organization, and in a SOC it shortens the training period and helps analysts follow established policies and procedures.

2)      Automation is something we can leverage to ease the short-staffing issues. Start with simple things: writing or adopting a tool to perform the common, repetitive tasks an analyst does every day, or adding the ability in the SIEM to run external lookups such as an IP scan or reputation check. A small sketch of the first idea is below.
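
As a minimal sketch of that idea (the workflow and file layout are assumptions, not a specific tool), here is a small Python helper that bulk-triages a list of IPs: it flags private/reserved addresses and attempts a reverse DNS lookup for the rest, the kind of repetitive lookup an analyst otherwise does by hand.

```python
#!/usr/bin/env python3
"""Bulk IP triage helper: a minimal sketch of the kind of repetitive
analyst task worth automating. Reads IPs from a file, flags
private/reserved ranges, and attempts a reverse DNS lookup for the rest."""
import ipaddress
import socket
import sys

def triage_ip(raw: str) -> str:
    try:
        ip = ipaddress.ip_address(raw)
    except ValueError:
        return f"{raw}\tinvalid"
    if ip.is_private or ip.is_reserved or ip.is_loopback:
        return f"{raw}\tinternal/reserved (skip external lookups)"
    try:
        hostname = socket.gethostbyaddr(raw)[0]  # reverse DNS; may fail or be slow
    except OSError:
        hostname = "no PTR record"
    return f"{raw}\texternal\t{hostname}"

if __name__ == "__main__":
    # Usage: python ip_triage.py ips.txt  (one IP per line)
    with open(sys.argv[1]) as fh:
        for line in fh:
            line = line.strip()
            if line:
                print(triage_ip(line))
```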

3)      Sometimes we need to go a little beyond our job responsibilities. For example, if the business has a policy that you must collect all the logs and everything else, talk to the business to understand the reason behind that policy or procedure. Often it comes down to a simple misunderstanding of a framework or a requirement that is dictating the policy. Discuss it with them and agree on an approach that works for you; this can improve search speed significantly.

4)      If you work on on-boarding logs, understand the different ways logs can get into the environment and gather all the available methods. Finding which data sources can be correlated is very important but hard. Harder still, and just as important, is how we collect them, because the collection method has big implications for the amount of context we get from a data source and which fields we end up with.
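
To illustrate why the collection method matters, here is a minimal sketch (the field names and log formats are hypothetical): the same DNS event arriving as raw syslog versus as structured JSON from an agent carries different fields, so both are normalised into one schema before any correlation.

```python
"""Minimal sketch: two collection methods for the same DNS event yield
different fields, so we normalise both into one schema (hypothetical)."""
import json
import re

def from_syslog(line: str) -> dict:
    # Raw syslog only gives us what the regex can pull out of the message text.
    m = re.search(r"client (\S+).*query: (\S+)", line)
    return {"src_ip": m.group(1), "query": m.group(2), "sensor": None} if m else {}

def from_agent(raw_json: str) -> dict:
    # An agent ships structured fields, including the sensor host, for free.
    event = json.loads(raw_json)
    return {"src_ip": event.get("client_ip"),
            "query": event.get("question"),
            "sensor": event.get("sensor")}

if __name__ == "__main__":
    print(from_syslog("Dec 08 10:15:01 dns01 named: client 10.0.0.5 query: evil.example IN A"))
    print(from_agent('{"client_ip": "10.0.0.5", "question": "evil.example", "sensor": "dns01"}'))
```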

5)      Data is like a paid subscription you bought but never used. To get the most out of the subscription you have to actually read the content you bought; the same goes for data, you have to understand it and analyze it.

6)      Logs often contain things that are of little value to the organization or the analyst. Do not be afraid to trim that fat. This helps the SIEM because you are removing only the things you know provide little or no value, and it lets you drop into analysis mode immediately after collecting the data. It is better to do this trimming in the pre-deployment or roll-out phase, when the resources required are minimal.
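
A minimal sketch of that pre-ingest trimming, with hypothetical field names; the real list of low-value fields has to come from analysing your own data:

```python
"""Minimal sketch of pre-ingest trimming: drop fields we already know add
little analytical value before the event ever reaches the SIEM."""

# Hypothetical low-value fields identified during a pre-deployment review.
DROP_FIELDS = {"vendor_banner", "rule_uuid", "color_code", "padding"}

def trim(event: dict) -> dict:
    # Keep everything except the known low-value fields.
    return {k: v for k, v in event.items() if k not in DROP_FIELDS}

sample = {
    "timestamp": "2020-12-08T10:15:01Z",
    "src_ip": "10.0.0.5",
    "action": "blocked",
    "vendor_banner": "AcmeFW v9.1 -- licensed",
    "padding": "0000000000",
}
print(trim(sample))  # only timestamp, src_ip and action survive
```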

7)       If the data in the logs or alerts is not correlated, the best place to start is to get familiar with where the product sits in the organization’s architecture and what other data sources its logs can be correlated with.
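
For example, here is a minimal sketch (with hypothetical data structures) of that kind of architecture-driven correlation: proxy logs only carry an internal IP, so they are joined against DHCP lease records to recover the host and user behind that IP.

```python
"""Minimal sketch: enrich proxy events with DHCP lease data so the
internal IP resolves to a host and user (hypothetical records)."""

dhcp_leases = {
    "10.0.0.5": {"hostname": "FINANCE-LAPTOP-07", "user": "a.kumar"},
}

proxy_events = [
    {"src_ip": "10.0.0.5", "url": "http://evil.example/payload.bin", "action": "allowed"},
]

def enrich(event: dict) -> dict:
    lease = dhcp_leases.get(event["src_ip"], {})
    return {**event, **lease}  # merged view: who/what was behind the IP

for e in proxy_events:
    print(enrich(e))
```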

8)      Sometimes too many features and too much data will confuse an analyst. Imagine you are at a buffet: there are too many options to choose from and too many places to start, yet you still go in a sequence by building a mental map of which items you like and which you do not, which of the items you like is an appetizer to start with, and so on. You will not just eat one or two things and call it a day. Similarly, in a SIEM you will probably have many options, and it is up to you to build a mental map of what you will start an investigation with.

9)      Another thing that helps when searching logs is creating some pre-defined questions about what you want to see in your logs, i.e. determining the goal of the search up front. This helps you avoid complex regex searches (although sometimes you still need them to get results) and improves the return time of the searches.
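
A minimal sketch of this "questions first, searches second" idea, using pseudo-SIEM query strings and hypothetical index/field names:

```python
"""Minimal sketch: pair investigative questions with scoped query templates
so analysts do not start from a blank regex (pseudo-SIEM syntax)."""

SAVED_QUESTIONS = {
    "Which hosts talked to this IP in the last 24h?":
        'index=proxy dest_ip="{ip}" earliest=-24h | stats count by src_ip',
    "Did anyone download an executable from this domain?":
        'index=proxy dest_domain="{domain}" url="*.exe" | table _time src_ip url',
}

def build_search(question: str, **params) -> str:
    # Fill the template for the chosen question with concrete indicators.
    return SAVED_QUESTIONS[question].format(**params)

print(build_search("Which hosts talked to this IP in the last 24h?", ip="203.0.113.7"))
```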

10)    When analyzing techniques for detections, do research on:

  • which data sources the technique can be applied to
  • which data sources it will be effective against with a lower false-positive rate
  • how it will affect the overall alert queue, to decide whether the risk is worth taking
  • whether the technique can be stacked or layered. If it can, great: we can apply it alongside the data sources and/or techniques it is related and relevant to. The goal is that if one technique fails to catch the attacker's behavior, the other stacked/layered technique will catch it (see the sketch after this list).
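
As a minimal sketch of stacking/layering (the rules and field names are hypothetical), here are two cheap checks over the same process-creation events; either one alone can miss the behaviour, but evaluating both gives the layered coverage described above.

```python
"""Minimal sketch of layered detections: two hypothetical rules over
process-creation events, evaluated together so one can catch what the
other misses."""

def encoded_powershell(event: dict) -> bool:
    cmd = event.get("command_line", "").lower()
    return "powershell" in cmd and ("-enc" in cmd or "-encodedcommand" in cmd)

def office_spawning_shell(event: dict) -> bool:
    return (event.get("parent_image", "").lower().endswith(("winword.exe", "excel.exe"))
            and event.get("image", "").lower().endswith(("powershell.exe", "cmd.exe")))

LAYERED_RULES = [encoded_powershell, office_spawning_shell]

def evaluate(event: dict) -> list:
    # Return the name of every layered rule that fired for this event.
    return [rule.__name__ for rule in LAYERED_RULES if rule(event)]

sample = {"parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
          "image": r"C:\Windows\System32\powershell.exe",
          "command_line": "powershell.exe -nop -w hidden -enc SQBFAFgA..."}
print(evaluate(sample))  # both rules fire on this sample event
```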


11)   Firewall, intrusion detection system, intrusion prevention system, web server, and proxy logs largely contain the same data collected for different purposes. We can group them to build a dashboard that tells a single story.
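
A minimal sketch of that "one story" view, with hypothetical field names: events from the different network sources are collapsed into a single per-host timeline, which is the same grouping a dashboard panel would present.

```python
"""Minimal sketch: group firewall, IDS and proxy events by source host so
they read as one story per host (hypothetical events and fields)."""
from collections import defaultdict

events = [
    {"source": "firewall", "src_ip": "10.0.0.5", "summary": "outbound 443 to 203.0.113.7 allowed"},
    {"source": "ids",      "src_ip": "10.0.0.5", "summary": "beacon-like signature triggered"},
    {"source": "proxy",    "src_ip": "10.0.0.5", "summary": "GET http://evil.example/payload.bin"},
]

timeline = defaultdict(list)
for e in events:
    timeline[e["src_ip"]].append(f'[{e["source"]}] {e["summary"]}')

for host, story in timeline.items():
    print(host)
    for entry in story:
        print("  " + entry)
```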

 
12)    If threat intel is creating a lot of alerts, the purpose of the threat intel feeds can be changed to suit our organization. We can use the feeds to add additional context to the logs, and depending on how far you want to go, their underlying data can enhance the organization's logging and detection capabilities. For example, we can decide not to alert on the feeds but to ingest them purely for intelligence purposes, looking for trends and then building our detections around the trends we observe.
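
As a minimal sketch of this enrichment-only approach (the feed structure and field names are assumptions), matching events are tagged with the indicator's metadata for later trend analysis instead of raising an alert.

```python
"""Minimal sketch: use a threat-intel feed for context, not alerting.
Matching events get the indicator metadata attached for trend review."""

# Hypothetical indicator feed keyed by IP.
ti_feed = {
    "203.0.113.7": {"source": "osint-feed-A", "tag": "commodity-botnet", "first_seen": "2020-11-30"},
}

def enrich_with_ti(event: dict) -> dict:
    hit = ti_feed.get(event.get("dest_ip", ""))
    if hit:
        # Attach context only; no alert is raised here.
        event = {**event, "ti_match": True, **{f"ti_{k}": v for k, v in hit.items()}}
    return event

print(enrich_with_ti({"src_ip": "10.0.0.5", "dest_ip": "203.0.113.7", "action": "allowed"}))
```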

 

This concludes our three-part series on Analyst's Problems as a Service (APaaS). Thank you for reading. Feedback and thoughts are much appreciated.

 

