Mitre’s ATT&CK Security Framework

Mitre’s ATT&CK security framework was mentioned often at the Techno Security and Digital Forensics Conference. I admit that I’m not well-versed in it yet. However, its purpose makes sense. It’s a knowledge base of Adversarial Tactics, Techniques, and Common Knowledge, which is what the acronym ATT&CK stands for. Mitre created a short video to explain ATT&CK and why it was created:

One example of how ATT&CK serves as a common, continually updated body of knowledge is its coverage of identified threat groups. As of this post, there is information available on 86 groups, mainly nation-state actors.

One of the things I try to do in my security presentations is help folks stop thinking only in terms of what they’re good at. For instance, in my How I Would Hack SQL Server presentation, I point out that as an attacker, going directly against SQL Server is an option of last resort. It’s much easier to find the data I care about on a file share, in an Excel spreadsheet, or in some other less secure spot. Compromising accounts and then using those accounts is the easier and safer road to success. What ATT&CK details is what attackers actually do. Therefore, if you’re in charge of securing systems or applications, reviewing the ATT&CK framework will help you see your systems more as an attacker would.
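If you want to explore the group data programmatically, Mitre publishes ATT&CK as STIX 2.x JSON (in the mitre/cti GitHub repository), where threat groups appear as "intrusion-set" objects. The sketch below parses a tiny inline sample in that shape; the sample records are illustrative stubs, not the full ATT&CK entries.

```python
import json

# A tiny sample in the shape of MITRE's ATT&CK STIX 2.x export.
# The real data lives in the mitre/cti GitHub repository; these
# records are illustrative stubs, not full ATT&CK objects.
sample_bundle = json.loads("""
{
  "type": "bundle",
  "objects": [
    {"type": "intrusion-set", "name": "APT28", "aliases": ["Fancy Bear", "Sofacy"]},
    {"type": "attack-pattern", "name": "Valid Accounts"},
    {"type": "intrusion-set", "name": "APT29", "aliases": ["Cozy Bear"]}
  ]
}
""")

def list_groups(bundle):
    """Pull the threat groups (STIX 'intrusion-set' objects) out of a bundle."""
    return [obj["name"] for obj in bundle["objects"]
            if obj["type"] == "intrusion-set"]

print(list_groups(sample_bundle))  # ['APT28', 'APT29']
```

Point the same filter at the full enterprise-attack bundle and you get the complete list of tracked groups.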

Why Security Through Obscurity Is Bad (Alone)

Security through (by) obscurity is where we try to protect an asset by hiding it. Anyone who has ever played the game Capture the Flag knows that a motivated opponent will eventually find the flag. If there are no other deterrents in place, the opponent will scour the playing area and find the flag. If hiding an asset (the flag) doesn’t work for that simple game, it doesn’t work for information security.

However, Capture the Flag doesn’t just involve hiding the flag. In all variations of the game, all teams have attackers. Therefore, part of the deterrent is acting quicker than your opposition. In a lot of variants, each side also has defenders who have some ability to discourage or thwart attackers. Even if a particular variant doesn’t have the concept of defenders, a team can be sneaky. It can overload one side, trying to trick the opposing forces into thinking the flag is hidden over there. Or some of the attackers could feign dismay when an opposing team heads into the wrong part of the playing area, leading that team to think they are close to the flag when they aren’t. In other words, there are always additional countermeasures.

The problem in information security with a strategy of security through obscurity alone is that we assume we are smarter than any adversary with plenty of time and opportunity on his or her hands. We aren’t. Therefore, we need other appropriate countermeasures (controls) in place to protect our assets. There’s nothing wrong with making an asset harder to find (obscuring it). However, that can’t be our only mechanism of protection.

Security Controls: CISA vs. CISSP

When looking at the Certified Information Systems Auditor (CISA) exam, we focus on teaching 3 types of controls:

  • Preventative – Keeps an incident from occurring
  • Detective – Identifies the occurrence of an event and possibly the actor
  • Corrective – Fixes things after the incident

However, the Certified Information Systems Security Professional (CISSP) material also indicates there are 3 types of controls, but they are different from the ones listed as “types” by the CISA:

  • Administrative – These are management-oriented controls. They are also known as soft controls, and sometimes folks call them manual procedures.
  • Technical – Also known as logical controls. These are controls we attribute to software and hardware.
  • Physical – Controls that protect the physical environment, such as guards, locks, fences, and cameras.

So what does the CISSP do with the 3 listed by the CISA? Those are called control functionalities. There are 6 of those:

  • Preventative
  • Detective
  • Corrective
  • Deterrent – A control intended to discourage an attacker.
  • Recovery – A control which returns the environment back to normal operations.
  • Compensating – A control that provides an alternative means when another control isn’t/can’t be used.

In the CISA we often talk about compensating controls, but we don’t list them as a specific functionality. However, I like the CISSP breakdown a lot better. Basically, we get a matrix between the 3 types and the first 5 functionalities, with compensating controls understood to protect an asset when the primary control is unavailable or too costly.

The key takeaway is to understand how our controls are implemented and why they work. Classification helps us better understand what protection we have, and it allows us to spot gaps.
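That type-by-functionality matrix lends itself to a simple gap check. Here is a minimal sketch in Python, assuming you keep an inventory of controls keyed by (type, functionality); the example control names are illustrative, not a recommended baseline.

```python
# The CISSP-style matrix: 3 control types crossed with the first
# 5 control functionalities (compensating controls sit outside it).
TYPES = ["Administrative", "Technical", "Physical"]
FUNCTIONALITIES = ["Preventative", "Detective", "Corrective", "Deterrent", "Recovery"]

# Inventory of controls actually in place (illustrative entries only)
controls = {
    ("Administrative", "Preventative"): ["Hiring background checks"],
    ("Technical", "Preventative"): ["Firewall rules"],
    ("Technical", "Detective"): ["Login auditing"],
    ("Physical", "Preventative"): ["Door locks"],
    ("Physical", "Deterrent"): ["Warning signs"],
}

def find_gaps(inventory):
    """Return the (type, functionality) cells with no control mapped to them."""
    return [(t, f) for t in TYPES for f in FUNCTIONALITIES
            if not inventory.get((t, f))]

for cell in find_gaps(controls):
    print("Gap:", cell)
```

An empty cell isn’t automatically a problem, but each one is a question worth asking: is there a control here, or should there be a compensating one?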

On Software and OS Lifecycle Management

An Important Rule: 

If you’re trying to convince someone of your viewpoint, insulting them generally doesn’t work. If anything, it is more likely to entrench them against your position. Therefore, this is something IT professionals should avoid.

If You Write Your Own Software

or work for an organization whose core business runs on the software your organization writes, it is easy to miss the issues faced by folks who typically deploy third-party solutions. Among them is what the vendor will support with regard to the operating system, core application components, etc. For instance, an IT pro may be perfectly ready to roll out a Windows Server 2012 VM for a new application deployment. However, the vendor doesn’t support anything beyond Windows Server 2008 R2. Guess what the IT pro is going to deploy? In the vast majority of cases, that server is going to be Windows Server 2008 R2.

As a young IT pro, I often thought, “Hey, I can convince the vendor to support my configuration.” What I quickly learned, however, was quite different. As The Rock says,

“It doesn’t matter what you think!”

If you have to run a particular software package and the vendor has requirements you don’t like, most of the time you have to swallow your dislike and conform to the vendor’s requirements.

When New Features Trump Maintenance

Being a security type, I always want the most streamlined, secure operating system and/or application. However, taking the time to upgrade takes away from time to implement new features, unless the desired features are in the very operating system or application that needs the upgrade. If the new features come as a result of development or another application, you may not get the option of upgrading when you want. This is especially true when you support a lot of core applications and when the business is constantly looking for new features and values them over maintaining existing systems (even though replacements, fixes, or upgrades will cost substantially more later).

Trying to fight this in some organizations is fruitless. The organization will perform the upgrade (or migration to a new OS baseline) when it is forced to do so. In this case, a bit more wisdom from The Rock:

“Know your role!”

TL;DR Version

We don’t always get to pick when we upgrade an OS or application version. There are other factors in play. Don’t assume you know why another professional seems reluctant to upgrade. The only way to know is to ask, and to ask nicely, without insulting said professional.

On PowerShell

I use PowerShell a lot and I write about using it to solve problems quite frequently. The fact that I can extend PowerShell by interfacing with the .NET Framework or making a COM/COM+ object call means I can do just about anything I need to do in order to manage a Windows system. As a result, I consider PowerShell one of my most powerful tools.

However (you knew there was going to be a however), PowerShell is one tool among many. If you are a smart IT pro, you build your toolbox with the tools that are most appropriate for you. Yes, you take into account where the industry is as well as what your current job(s)/client(s) use. Sometimes that means you choose a tool other than PowerShell. To some, though, that sounds like blasphemy. It shouldn’t be. If you’re a senior IT professional, you should be amenable to finding the right tool for the job, even if it’s not the one you like the most. If you’re at an architect level, you had better be prepared to recommend the technology that is the best fit, not the one best liked (by you).

When I think in these terms, it means I no longer build Windows system administration tools with Perl. Unfortunately, even though ActiveState still offers a very functional version, Perl has faded greatly from view on the Windows side. Granted, it was never very prominent there, but it had some big-name proponents and it provided a whole lot of functionality not available in VBScript/CScript/JScript. That’s why some enterprise shops turned to it. PowerShell now provides the functionality Perl offered on Windows systems, the functionality missing from earlier Microsoft scripting languages. So PowerShell will usually make more sense.

I said usually. I don’t automatically select PowerShell just because Microsoft recommends it as the standard. What clients am I running on? What other languages am I using? For instance, if I’m in a heavy Python shop, Python can be used to manage Windows systems, and it may be more cost-effective to write in Python than in PowerShell. If I have Linux and Mac OS X platforms, I’m likely not using PowerShell. It’s all about the right tool for the job. And the right tool has more considerations than what a particular company recommends.
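To make the “right tool per platform” idea concrete, here is a minimal Python sketch that picks an appropriate service-status command for whatever OS the script lands on. The service name and the exact commands are illustrative; adjust them for your environment.

```python
import platform

def service_status_command(name):
    """Build (but don't run) a service-status command for the current OS.
    A sketch only: the commands shown are the usual defaults per platform,
    not a hardened cross-platform tool."""
    system = platform.system()
    if system == "Windows":
        return ["sc", "query", name]          # Windows service controller
    if system == "Darwin":
        return ["launchctl", "list", name]    # macOS launchd
    return ["systemctl", "status", name]      # most modern Linux distros

# Example with a hypothetical service name
cmd = service_status_command("myservice")
print(cmd)
```

One script, three platforms; that flexibility is exactly what a mixed shop buys by not standardizing on a single-platform tool.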

On Automation

I’m a big fan of automation. I’ve been in IT for 27 years now. One unchanging rule during that time is there is always more to do than there is time to do it. Automation helps close that gap. And when I can automate something, I can do more than peers who can’t. That gives me a competitive advantage. So, three cheers for automation. 

However, the reality is that a lot of administration is still manual. It may sound clever to say that if something isn’t automatable, it’s not something you want to be a part of, or that you’re not a player in some space because you don’t automate. But that’s not reality.

For instance, people can choose to use the cloud and not automate. One reason the cloud was advertised in the first place was to reduce on-premises costs. You could move to cloud servers, shut down your costly datacenter, and save. You didn’t have to change your day-to-day activities and you would still likely save. That’s not always true, as some startups have shown with the math of switching back to their own servers once they reach a certain capacity point. But that’s not the point. The point is you should be able to use the cloud even if you aren’t going to automate.

It may not be as efficient or as cost-effective, but it should still be doable. There may be other business drivers that prevent IT from embracing automation. In the real world, that happens. It happens a lot. There are a finite number of resources. And if the business determines that you as a resource would be better spent building out something new rather than automating something existing, then you are building something new. That’s reality.

So when I hear about a new technology like Nano Server, I can like it without jumping on the automation bandwagon. Look, you just told me it’s compartmentalized and there’s a lot of surface area removed, even when compared to Windows Server Core. From a security perspective, I am doing a happy dance. I agree that automation makes it better. But just because your vision is automation, automation, automation doesn’t mean it’s everyone’s. And when there are other factors to consider, they may be right for what they are trying to do.

Trust No One Implicitly

At the Charlotte BI Group meeting last night, one of the questions I was asked after I gave my talk on Securing the ETL Pipeline was this:

“So you’re basically saying we should trust our DBAs?”

My response caught several people off guard:

“No, I’m saying to trust no one. Not even your DBAs.”

That received more than a few raised eyebrows. I went on to explain. I have two simple reasons for making this statement:

1) The difference between a trustworthy and untrustworthy employee is one life event.

Your DBA gets hammered in the divorce settlement and is now looking at barely scraping by. He or she has access to data that can be sold, and sold for a lot of money because (a) there is a lot of it and (b) it’s verified. You don’t think temptation is going to change a few folks’ behavior? Instead of divorce, substitute bankruptcy due to medical bills (especially if said person lost a loved one after all those bills) or a drug habit that becomes consuming.

A point I made along these lines is that we often don’t know the personal lives of our co-workers, so it’s not a given that we’d catch such things. After all, a pilot who didn’t want to lose his flying status was able to hide that he was shopping around for doctors, and we know how that ended up.

2) It might be your employee’s ID, just not your employee.

The Anthem hack tells us all we need to know on this topic. A telling quote from the article:

“An engineer discovered the incursion when he saw a database query being run using his credentials”

Trust No One Implicitly:

As #2 points out, even if you have trustworthy employees, you still have the case where an attacker gets in and steals data. Even though you trust your employees, you need to have controls in place that perform checks as if you don’t trust them. That was my point last night. It’s no longer a matter of if an intruder is going to get in. It is most definitely now a matter of when and for how long.
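The Anthem quote suggests one such detective control: flag activity where an account is used from a host it has never been seen on before. Here is a minimal sketch, assuming you already collect query audit records; the field names, account names, and baseline data are all illustrative.

```python
# Baseline: hosts each login has historically been seen on (illustrative)
known_hosts = {
    "engineer1": {"workstation-042"},
    "dba1": {"dba-laptop", "jump-box"},
}

# Collected audit records (illustrative shape, not a real audit schema)
audit_log = [
    {"login": "dba1", "host": "jump-box", "query": "SELECT ..."},
    {"login": "engineer1", "host": "unknown-external", "query": "SELECT * FROM members"},
]

def suspicious_entries(log, baseline):
    """Return audit entries where the login came from an unrecognized host."""
    return [entry for entry in log
            if entry["host"] not in baseline.get(entry["login"], set())]

for entry in suspicious_entries(audit_log, known_hosts):
    print("Review:", entry["login"], "from", entry["host"])
```

The control doesn’t assume the employee is malicious; it assumes the credential might not be in the employee’s hands, which is exactly the distinction #2 draws.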

Why You Shouldn’t Skip the Infrastructure/Security Architecture Review

It’s not unusual for development projects to undergo an architecture review, but too many times these reviews consist only of developers. There are no server or network folks, much less any DBAs or security personnel. When I’ve brought this up to some of my developer friends, they wonder what the issue is. After all, developers have a good grasp of how things work, right?

If you’re an infrastructure type, you’re probably chuckling right now (or shaking your head sadly). If you’re a developer, let me ask you a very basic question, “Do you think the admins who support you understand everything about the code you write?” The obvious answer is, “No,” they don’t. They are very smart people but you’ve spent years learning to code, working through design patterns, finding and solving bugs, and a whole lot more about writing code that infrastructure folks don’t even think about. 

Infrastructure folks also spend years specializing in what they know. A lot of what we know doesn’t even enter into the realm of the developer. Most developers just don’t encounter the problems and issues we do, because we solve them before they do. For instance, a recent install comes to mind that was having an issue because of a hardened configuration on the NICs.

If you just thought, “What do you mean, hardened NIC configuration?” you just proved my point. If you do know what I mean but you don’t deal with that sort of thing regularly, you’ve also proved my point. If you do think about those things regularly but are quite aware that most developers don’t, you probably see my point. Infrastructure folks have specialized knowledge to bring to an architecture discussion that is not in the wheelhouse of most developers. This knowledge, if the infrastructure people are absent, is also absent. 

Infrastructure folks aren’t looking to throw up roadblocks on projects out of malevolence. Okay, MOST infrastructure folks aren’t looking to do that. What we are looking to do is our best to ensure that systems going in or being updated are as secure as possible and perform as well as we can make them. We want to minimize risk for the organization. We want to help the organization make the most out of every IT investment. Sometimes that means we push back. We see an issue or an area for improvement, and of course we are going to bring it up.

The later in the project we get a look at what’s going in, the harder and more costly it will be for the organization to make things better. If it’s a showstopper, there may not be time to avoid everything coming to a screeching halt. That’s why it’s important to get the infrastructure folks involved early and keep them involved throughout the whole project. It’s harder, yes, but it’s also what is best for the organization.

More on that Cyberwar

As a follow-up to my post on being at war, cyberwar:

State Department Hacked

If the experts are correct, this trend is only going to continue. Reading the article and others on the same situation, they all note that the unclassified email system was hacked, but not the classified one. That’s a bit of good news, but it’s still not all that great. There’s a lot of useful information in unclassified email, especially for a department like the State Department.

 

SQL Server Security Benchmarks

If you’re not familiar with the Center for Internet Security, here’s the organization’s mission statement:

The mission of the Center for Internet Security is to enhance the security readiness and response of public and private sector entities, with a commitment to excellence through collaboration.

CIS produces consensus-based, best practice secure configuration benchmarks and security automation content, and serves as the key cyber security resource for state, local, territorial and tribal governments, including chief security officers, homeland security advisors and fusion centers. CIS provides products and resources that help partners achieve security goals through expert guidance and cost-effective solutions.

That consensus-based part means it’s mostly community-sourced. That means if you work on a product with a security benchmark, you can contribute. I bring this up because there are security benchmarks for SQL Server available for download and we are always looking for knowledgeable folks to contribute their expertise. This link is to the released version of the benchmark for the relevant SQL Server versions.

Not only are the finalized release versions of the benchmarks available, but we are also actively working on the benchmarks all the time. As a result, the next version of each benchmark is typically available as a draft for comments and proposed changes. The more knowledgeable folks contribute, the better we can make these benchmarks, which hopefully results in more secure SQL Servers around the world.

Also, once a product version has been out long enough, we start a benchmark for it, too. That means we’ve begun the security benchmark for SQL Server 2014. We’d love contributions from the community to make this a solid benchmark with its 1.0 release. If you have the time and experience working with SQL Server 2014 security, please take a look. The current draft is a copy of the 2012 one, so there are definitely changes to be made. Thanks!
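Benchmark items generally boil down to “setting X should have value Y,” which makes them easy to check in bulk. Here is a minimal sketch of that idea in Python; the setting names and recommended values are illustrative stand-ins, not the actual CIS benchmark items, and the collected values would come from your own configuration queries.

```python
# Recommended values, benchmark-style (illustrative, not the CIS items)
recommended = {
    "xp_cmdshell": 0,     # commonly recommended off
    "remote access": 0,
    "clr enabled": 0,
}

# Values collected from a server (illustrative)
collected = {
    "xp_cmdshell": 1,
    "remote access": 0,
    "clr enabled": 0,
}

def failed_checks(actual, baseline):
    """Return the settings whose actual values don't match the recommendations."""
    return {name: actual.get(name) for name, want in baseline.items()
            if actual.get(name) != want}

print(failed_checks(collected, recommended))  # {'xp_cmdshell': 1}
```

Scale the baseline dictionary up to a full benchmark and you have the skeleton of an automated compliance report.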

 
