Mitre’s ATT&CK Security Framework

Mitre’s ATT&CK security framework was mentioned often at the Techno Security and Digital Forensics Conference. I admit that I’m not well-versed in it yet. However, its purpose makes sense. It’s a knowledge base of Adversarial Tactics, Techniques, and Common Knowledge, which is what the acronym ATT&CK stands for. Mitre created a short video explaining ATT&CK and why it was created.

One example of how ATT&CK serves as a common body of knowledge that folks are striving to keep up-to-date is its catalog of identified threat groups. As of this post, there is information available on 86 groups, mainly nation-state actors.

One of the things I try to do in my security presentations is help folks stop thinking only in terms of what they’re good at. For instance, in my How I Would Hack SQL Server presentation, I point out that as an attacker, going directly against SQL Server is an option of last resort. It’s much easier to find the data I care about on a file share, in an Excel spreadsheet, or in some other less secure spot. Compromising accounts and then using those accounts is the easier and safer road to success. What ATT&CK details is what attackers do. Therefore, if you’re in charge of securing systems or applications, looking over the ATT&CK framework will help you look at your systems more as an attacker would.

Why Security Through Obscurity Is Bad (Alone)

Security through (by) obscurity is where we try to protect an asset by hiding it. Anyone who has ever played the game Capture the Flag knows that a motivated opponent will eventually find the flag. If there are no other deterrents in place, the opponent will scour the playing area and find the flag. If hiding an asset (the flag) doesn’t work for that simple game, it doesn’t work for information security.

However, Capture the Flag doesn’t just involve hiding the flag. In all variations of the game, all teams have attackers. Therefore, part of the deterrent is acting more quickly than your opposition. In a lot of variants, each side also has defenders who have some ability to discourage or thwart attackers. Even if a particular variant doesn’t have the concept of defenders, a team can be sneaky. It can overload one side, trying to trick the opposing team into thinking the flag is hidden over there. Or some of the attackers could feign dismay when an opposing team heads into the wrong part of the playing area, leading that team to think they are close to the flag when they aren’t. In other words, there are always additional countermeasures.

The problem in information security with a strategy of security through obscurity alone is that we assume we are smarter than any adversary with plenty of time and opportunity on his or her hands. We aren’t. Therefore, we need to have the other appropriate countermeasures (controls) in place in order to protect our assets. There’s nothing wrong with making an asset harder to find (obscuring it). However, that can’t be our only mechanism of protection.

Security Controls: CISA vs. CISSP

When looking at the Certified Information Systems Auditor (CISA) exam, we focus on teaching 3 types of controls:

  • Preventative – keeps an incident from occurring
  • Detective – identifies the occurrence of an event and possibly the actor
  • Corrective – fixes things after the incident

However, the Certified Information Systems Security Professional (CISSP) material also indicates there are 3 types of controls, but they are different from the ones listed as “types” by the CISA:

  • Administrative – These are management-type controls. They are also known as soft controls, and sometimes folks call them manual procedures.
  • Technical – Also known as logical controls. These are controls we attribute to software and hardware.
  • Physical – Controls that protect the physical environment such as guards, locks, fences, and cameras.

So what does the CISSP do with the 3 listed by the CISA? Those are called control functionalities. There are 6 of those:

  • Preventative
  • Detective
  • Corrective
  • Deterrent – A control intended to discourage an attacker.
  • Recovery – A control which returns the environment back to normal operations.
  • Compensating – A control that provides an alternative means of protection when another control isn’t or can’t be used.

In the CISA we often talk about compensating controls but we don’t list them as a specific functionality. However, I like the CISSP breakdown a lot better. Basically, we get a matrix between the 3 types and the first 5 functionalities, with compensating controls being understood to protect an asset when the primary control is unavailable or too costly.

The key takeaway is to understand how our controls are implemented and why they work. Classification helps us better understand what protection we have, and it allows us to spot gaps.
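To make the gap-spotting concrete, here is a minimal sketch in Python of the matrix between the 3 CISSP types and the first 5 functionalities. The control inventory is entirely hypothetical; the point is just that any empty cell in the matrix is worth a second look.

```python
# Hypothetical control inventory: each control tagged with its CISSP type
# and functionality.
controls = [
    ("badge reader",           "Physical",       "Preventative"),
    ("security camera",        "Physical",       "Detective"),
    ("account lockout",        "Technical",      "Preventative"),
    ("audit logging",          "Technical",      "Detective"),
    ("incident response plan", "Administrative", "Corrective"),
]

types = ["Administrative", "Technical", "Physical"]
functionalities = ["Preventative", "Detective", "Corrective", "Deterrent", "Recovery"]

# Build the type x functionality matrix; compensating controls sit outside it,
# covering for whichever primary control is unavailable or too costly.
matrix = {(t, f): [] for t in types for f in functionalities}
for name, t, f in controls:
    matrix[(t, f)].append(name)

# Empty cells are potential gaps worth reviewing.
gaps = sorted(cell for cell, names in matrix.items() if not names)
print(f"{len(gaps)} gaps, e.g. {gaps[0]}")  # -> 10 gaps, e.g. ('Administrative', 'Detective')
```

Not every cell must be filled in practice, but an organization should be able to explain each empty one.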

On Software and OS Lifecycle Management

An Important Rule: 

If you’re trying to convince someone of your viewpoint, insulting them generally doesn’t work. If it does anything, it will more likely entrench them against your position. Therefore, insults are something IT professionals should avoid.

If You Write Your Own Software

or work for an organization whose core business runs on the software your organization writes, it is easy to miss the issues faced by folks who typically deploy third-party solutions. Among them is what the vendor will support with regard to operating system, core application components, etc. For instance, an IT pro may be perfectly ready to roll out a Windows Server 2012 VM for a new application deployment. However, the vendor doesn’t support anything beyond Windows Server 2008 R2. Guess what the IT pro is going to deploy? In the vast majority of cases, that server is going to be Windows Server 2008 R2.

As a young IT pro, I often thought, “Hey, I can convince the vendor to support my configuration.” What I quickly learned, however, was quite different. As The Rock says,

“It doesn’t matter what you think!”

If you have to run a particular software package and the vendor has requirements you don’t like, most of the time you have to swallow your dislike and conform to the vendor’s requirements.

When New Features Trump Maintenance

Being a security type, I always want the most streamlined, secure operating system and/or application. However, taking the time to upgrade takes away from time to implement new features, unless the desired new features are in the very operating system or application that needs the upgrade. If the new features come as a result of development or another application, you may not get the option of upgrading when you want. This is especially true when you support a lot of core applications and the business is constantly looking for new features, valuing them over maintaining existing systems (even though replacements, fixes, or upgrades will cost substantially more later).

Trying to fight this in some organizations is fruitless. The organization will perform the upgrade (or migration to a new OS baseline) when it is forced to do so. In this case, a bit more wisdom from The Rock,

“Know your role!”

TL;DR Version

We don’t always get to pick when we upgrade an OS or application version. There are other factors in play. Don’t assume that one professional’s seeming lack of desire to do so is due to any particular reason. The only way to know is to ask, and to ask nicely, without insulting said professional.

On PowerShell

I use PowerShell a lot and I write about using it to solve problems quite frequently. The fact that I can extend PowerShell by interfacing with the .NET Framework or making a COM/COM+ object call means I can do just about anything I need to do in order to manage a Windows system. As a result, I consider PowerShell one of my most powerful tools.

However (you knew there was going to be a however), PowerShell is one tool among many. If you are a smart IT pro, you build your toolbox with the tools that are most appropriate for you. Yes, you take into account where the industry is as well as what your current job(s)/client(s) use. Sometimes that means you choose a tool other than PowerShell. To some, though, that sounds like blasphemy. It shouldn’t be. If you’re a senior IT professional, you should be amenable to finding the right tool for the job, even if it’s not the one you like the most. If you’re at an architect level, you had better be prepared to recommend the technology that is the best fit, not the one best liked (by you).

When I think in these terms, it means I don’t build Windows system administration tools with Perl any longer. Unfortunately, even though ActiveState still has a very functional version, Perl has faded greatly from view on the Windows side. Granted, it was never very bright there, but it had some big-name proponents and it offered a whole lot of functionality not available in VBScript/CScript/JScript. That’s why some enterprise shops turned to it. PowerShell now provides the functionality Perl offered on Windows systems, the functionality missing from those earlier Microsoft scripting languages. So PowerShell will usually make more sense.

I said usually. I don’t automatically select PowerShell just because it is the standard Microsoft recommends. What clients am I running on? What other languages am I using? For instance, if I’m in a heavy Python shop, Python can be used to manage Windows systems, and it may be more cost-effective to write in Python than in PowerShell. If I have Linux and Mac OS X platforms, I’m likely not using PowerShell. It’s all about the right tool for the job. And the right tool has more considerations than what a particular company recommends.

On Automation

I’m a big fan of automation. I’ve been in IT for 27 years now. One unchanging rule during that time is there is always more to do than there is time to do it. Automation helps close that gap. And when I can automate something, I can do more than peers who can’t. That gives me a competitive advantage. So, three cheers for automation. 

However, the reality is that a lot of administration is still manual. It may sound clever to say that if it’s not automatable, it’s not something you want to be a part of, or that you’re not a player in some space if you don’t automate. But that’s not reality.

For instance, people can choose to use the cloud and not automate. One reason the cloud was advertised in the first place was to reduce on-premises costs. You could move to cloud servers, shut down your costly datacenter, and save. You didn’t have to change your day-to-day activities and you would still likely save. That’s not always true, as some startups have shown with the math of switching back to their own servers once they reach a certain capacity point. But that’s not the point. The point is you should be able to use the cloud even if you aren’t going to automate.

It may not be as efficient or as cost-effective, but it still should be doable. There may be other business drivers that prevent IT from embracing automation. In the real world, that happens. It happens a lot. There are a finite number of resources. And if business determines that you as a resource would be better spent building out something new rather than automating something existing, then you are building something new. That’s reality. 

So when I hear about a new technology like Nano Server, I can like it without jumping on the automation bandwagon. Look, you just told me it’s compartmentalized and a lot of surface area has been removed, even compared to Windows Server Core. From a security perspective, I am doing a happy dance. I agree that automation makes it better. But just because your vision is automation, automation, automation doesn’t mean it is everyone’s. And when there are other factors to consider, they may be right for what they are trying to do.

Trust No One Implicitly

At the Charlotte BI Group meeting last night, one of the questions I was asked after I gave my talk on Securing the ETL Pipeline was this:

“So you’re basically saying we should trust our DBAs?”

My response caught several people off guard:

“No, I’m saying to trust no one. Not even your DBAs.”

That received more than a few raised eyebrows. I went on to explain. I have two simple reasons to make this statement:

1) The difference between a trustworthy and untrustworthy employee is one life event.

Your DBA gets hammered in the divorce settlement and is now looking at barely scraping by. He or she has access to data that can be sold, and sold for a lot of money because (a) there is a lot of it and (b) it’s verified. You don’t think temptation is going to change a few folks’ behavior? Instead of divorce, substitute bankruptcy due to medical bills (especially if said person lost a loved one after all those bills) or a drug habit that becomes consuming.

A point I made along these lines is that we often don’t know the personal lives of our co-workers, so it’s not a given that we’d catch such things. After all, a pilot who didn’t want to lose his flying status was able to hide that he was shopping around for doctors, and we know how that ended up.

2) It might be your employee’s ID, just not your employee.

The Anthem hack tells us all we need to know on this topic. A telling quote from the article:

“An engineer discovered the incursion when he saw a database query being run using his credentials”

Trust No One Implicitly

As #2 points out, even if you have trustworthy employees, you still have the case where an attacker can get in and steal data. Even though you trust your employees, you need to have controls in place that perform checks as if you don’t trust them. That was my point last night. It’s no longer a matter of if an intruder is going to get in. It most definitely is now a matter of when and for how long.
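The Anthem example suggests one shape such a check can take: watching for credential use that doesn’t fit an account’s established pattern. Here is a minimal sketch in Python (the log fields, account name, and host names are all hypothetical, not any particular product’s API):

```python
from collections import defaultdict

# Baseline of hosts each account has previously run queries from.
known_hosts = defaultdict(set)

def check_query(user, host):
    """Return True if this user/host pairing is new and worth reviewing."""
    suspicious = host not in known_hosts[user]
    known_hosts[user].add(host)  # record the pairing for future checks
    return suspicious

# The first query from a workstation establishes the baseline (and gets flagged
# for review); later queries from the same host are treated as expected.
check_query("dba_jane", "WKSTN-042")  # True: new pairing, review it
check_query("dba_jane", "WKSTN-042")  # False: matches the baseline
```

A real implementation would baseline more than hosts (time of day, query volume, objects touched), but even a simple check like this would have surfaced a query running under an engineer’s credentials from somewhere the engineer never works.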
