Dealing with Auditors: Password Settings

Yet again I’ve seen an audit request where the auditor wants the DBA to show what SQL Server’s settings are for this set of information:

  • Account Lockout settings
  • Password Expiration settings
  • Password Complexity settings

If you’re dealing with an auditor who is asking for this on your SQL Server, please refer them to:

 Books Online: Password Policy (this link goes to the SQL Server 2005 BOL, when such support was introduced)

Then show them this quote (emphasis mine):

 When it is running on Windows Server 2003 or later versions, SQL Server 2005 can use Windows password policy mechanisms.

SQL Server 2005 can apply the same complexity and expiration policies used in Windows Server 2003 to passwords used inside SQL Server. This functionality depends on the NetValidatePasswordPolicy API, which is only available in Windows Server 2003 and later versions.

If they aren’t familiar with that function, point them here:

 NetValidatePasswordPolicy function

In other words, SQL Server passes the password check off to the OS, so the settings live at the OS level. This is true of every version since SQL Server 2005; look up the same topic for any later version and you'll see similar wording. The bottom line is that there are no separate settings in SQL Server. Furthermore, if your Windows computer is in an Active Directory domain, the OS is most likely getting these settings from the Default Domain Policy, which is stored in Active Directory. In short, the auditors need to look there, not in SQL Server.

If they’re interested in seeing which SQL-based logins are enforcing password policy and expiration, that’s a different story. Query sys.sql_logins and you’ll get that information (a sample query follows the link below). But as far as the settings themselves are concerned, they aren’t “settable” within SQL Server; they come from the OS. If they still insist on seeing the settings, point them to this article and send them off to your friendly (well, until the auditor shows up at his or her desk) Windows administrator:

 How to configure password enforcement options for standard SQL Server logins
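
If a quick demonstration helps, here’s a minimal sketch (the login name is hypothetical) showing how to see which SQL logins have enforcement turned on and how enforcement gets set per login rather than through any server-wide setting:

-- Which SQL logins enforce the Windows password policy and expiration?
SELECT name, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
ORDER BY name;

-- Enforcement is a per-login option, not a server setting.
ALTER LOGIN [SomeSqlLogin] WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;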

Update Your Audit Queries for SQL Server

I was working today with an auditor who is going through a system with an external audit agency. The external agency handed us scripts to run across SQL Server, Active Directory, etc. I took on the SQL Server scripts. Then I refused to run them. The main reason I pushed back is that the scripts were valid for SQL Server 2000, but not for SQL Server 2005 and above. In other words, we could look fine based on the scripts and not actually be fine, because the scripts don’t adequately check the controls on the system with regard to SQL Server. So what caught my attention?

No sys.server_permissions

CONTROL SERVER has been well documented by now. It grants permissions roughly equivalent to membership in the sysadmin role, with a few exceptions. If your audit script isn’t checking for that, it needs to be updated. How do you check for it? You query sys.server_permissions.
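
For example, a minimal check (just a sketch, not a complete audit query) for anyone who has been granted CONTROL SERVER might look like this:

SELECT p.name AS grantee, sp.permission_name, sp.state_desc
FROM sys.server_permissions AS sp
JOIN sys.server_principals AS p
    ON sp.grantee_principal_id = p.principal_id
WHERE sp.permission_name = 'CONTROL SERVER';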

No sys.database_permissions

The script in question used sp_helprotect. In SQL Server 2000 this was the stored procedure for dumping permissions. It’s included in later versions of SQL Server, but only for backwards compatibility, which means it returns information based on what was available in SQL Server 2000. What does it miss? Among other things:

  • All permissions at the schema level
  • Some permissions at the database level

Those are kind of important. If your audit script isn’t checking against sys.database_permissions, your audit script needs to be updated.
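
As a starting point (again, a sketch rather than a finished audit query), the following dumps everything granted or denied within a database, including the schema-level permissions sp_helprotect misses:

SELECT pr.name AS principal_name,
       dp.class_desc,
       dp.permission_name,
       dp.state_desc
FROM sys.database_permissions AS dp
JOIN sys.database_principals AS pr
    ON dp.grantee_principal_id = pr.principal_id
ORDER BY pr.name, dp.class_desc, dp.permission_name;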

Using sp_configure without turning advanced options on

There are some things that you can disable now using sp_configure. For instance:

  • xp_cmdshell
  • Use of SQL Mail

If you don’t turn advanced options on, you can’t check to see if these things are enabled or disabled. Turning this on and then running sp_configure is simple:


EXEC sp_configure 'show advanced options', 1;
GO

RECONFIGURE;
GO

EXEC sp_configure;
GO

So simple that if it’s not in your audit script, it should be.
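
Once advanced options are visible, checking an individual option is just as simple. Here’s a sketch for xp_cmdshell; note that sys.configurations returns the same information even when ‘show advanced options’ is off, which can be handy in an audit script (and the ‘SQL Mail XPs’ option only exists on older versions):

-- Requires 'show advanced options' to be on, as above.
EXEC sp_configure 'xp_cmdshell';
GO

-- sys.configurations lists advanced options regardless of that setting.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('xp_cmdshell', 'SQL Mail XPs', 'Database Mail XPs');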

Personally Identifiable Information (PII) and Data Encryption

Hitting close to home, SC Governor Nikki Haley noted, after the SC Department of Revenue breach was reported, that the IRS didn’t require the data to be encrypted:

“As I am sure you are aware, an international hacker recently breached the South Carolina Department of Revenue’s computer system exposing the personal information of all electronic tax filers in my state,” she wrote to IRS Acting Commissioner Steven T. Miller. “While this incident was entirely caused by a malicious criminal hacker, the investigation of how this breach occurred has unfortunately revealed that the IRS does not require encryption of stored tax data, only transmitted data.”

The IRS tried to refute this, saying they “had a long list of requirements.” It turned out she was correct, and you can see the evidence in IRS Publication 1075, which states (for data at rest):

While encryption of data at rest is an effective defense-in-depth technique, encryption is not currently required for FTI while it resides on a system (e.g., in files or in a database) that is dedicated to receiving, processing, storing or transmitting FTI, is configured in accordance with the IRS Safeguards Computer Security Evaluation Matrix (SCSEM) recommendations and is physically secure restricted area behind two locked barriers. This type of encryption is being evaluated by the IRS as a potential policy update in the next revision of the Publication 1075.

Then we see that the Department of Homeland Security had a data breach, and once again, PII data was taken. That PII data was unencrypted.

As a result of this vulnerability, information including name, Social Security numbers (SSN) and date of birth (DOB), stored in the vendor’s database of background investigations was potentially accessible by an unauthorized user.

This begs the question, “When are we going to get serious about encrypting personal information?” I know there are challenges to doing so, especially for older systems. Also, when it comes to SQL Server, not everyone can afford Enterprise Edition licenses where you get Transparent Data Encryption starting in SQL Server 2008. I also know there’s concern because encrypted files don’t compress well, so when you take a backup and it’s encrypted and then try to pass it over to a system that dedupes data and attempts to compress, you don’t get such good results. However, at some point we’ve got to accept that this is part of the cost of doing business and look to do a better job of encrypting data.

With respect to SQL Server we have options. Among them:

  • Built-in encryption within the database (at the “field” level)
  • Transparent Data Encryption at the database level (which also ensures native backups are encrypted; a minimal setup sketch follows this list)
  • Third party backup products that support encryption
  • Third party encryption products that offer similar capabilities to TDE
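
As a rough illustration of the second option, here’s a minimal sketch of turning on TDE (SQL Server 2008 or later, Enterprise Edition). The database and certificate names are hypothetical, and real key management deserves more care than this:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';
GO
CREATE CERTIFICATE TDEDemoCert WITH SUBJECT = 'TDE demo certificate';
GO
-- Back up the certificate and private key immediately; without them the
-- database and its backups cannot be restored on another server.
USE YourDatabase;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDEDemoCert;
GO
ALTER DATABASE YourDatabase SET ENCRYPTION ON;
GO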

It’s a matter of identifying what needs to be encrypted, choosing the proper solution, and accepting the cost. To say it’s not required, which was the excuse used by Haley, is just that: an excuse. As IT professionals we should press for the right solution. I realize this is ultimately a business proposition. However, if we’re not bringing up the discussion, we’re part of the problem.

Why Anti-Virus Offers Limited Protection

Sitting in the first Keynote for the 2013 Techno Security and Forensics Investigation Conference, I was not surprised to hear Kevin Mandia say that in their recent investigations, they had found anti-virus installed and working with the latest definitions. Yet these systems were still infected with malware. In short, AV had failed to stop the malware.

So why is there a corporate and personal insistence on having anti-virus installed, especially in the Windows world? It’s probably because we value its protection too heavily. To be blunt, AV stops the easy attacks. It typically stops the attacks that we’ve known about, attacks that will get you if you have no protection at all. So while AV has some worth, it’s not the magical full plate armor that too many folks think it is. Why is that?

It’s Mostly Signature Based

It used to be that we saw AV definitions update about once a month. Then that shortened to about every two weeks. Then it was weekly. Then it was even daily. However, nowadays a lot of organizations are pulling AV definitions down more often than once a day. I know some organizations have hourly checks to pull the latest definitions and get them distributed to their AV clients. Why are so many updates required? Quite simply, because AV is still primarily signature based. What that means is that the AV companies are effectively “fingerprinting” the malware. Those “fingerprints” are what are used to detect the malware. Those are what we mean by AV definitions. Therefore, as you discover new malware, you have to develop new definitions because you’re going to have to include their “fingerprints.” And every once in a while there are false positives.

This alone should give you pause. If the malware isn’t known, will there be definitions? Obviously not. So for brand new or tightly targeted attacks, AV won’t detect the malware based on signatures, and most of the time the malware gets in and runs. That’s why no AV product achieves a 100% detection rate.

What about Behavior Based Detection?

This is a great idea, in theory. The problem on a live system is that what an AV application may flag as potentially malicious behavior is often, in fact, legitimate. For instance, a lot of malware creates an HTTP connection to either grab updates or pull down additional malware. To the OS, this doesn’t look very different from when you click on a link in your favorite Twitter client. As a result, AV has to be careful not to be too aggressive. Because of this, there’s a lot of latitude for malware to operate. And if you don’t think malware writers are aware of the behavior heuristics in AV programs, think again.

Malware Gets Tested, and Tested Well

Unlike some code, malware gets tested before “production.” We’re past the stage of folks writing viruses that are just comical in nature. Now we’re talking about stealing money or stealing secrets. That means you’ve got professional players in the game. When you have professionals, and when you talk about the worth of the data or assets they are stealing, they want to make sure their stuff works. When it comes to folks on the criminal financial side, we’re talking millions of dollars on the line. Therefore, they test.

How do they test? Basically, they run their malware against the known AV engines and see if the malware gets detected. At the very least, something like VirusTotal gets used. Therefore, the attackers already know whether their malware is going to get detected. If it is, they keep working on it. By the way, when they see definitions pop up for one of their malware tools, they know to deploy a new tool. In this regard, AV actually works against the people investigating who is hacking and what they’re up to.

Not Everything Makes a Definition

Let’s say a particular piece of malware gets submitted. Does that mean it automatically gets included in the AV definitions? The short answer is, “No.” Definitions are determined by a whole host of factors. Among them is whether it’s clearly identified as a threat. Why do I say that? Well, take Stuxnet, for instance. It was discovered in 2010 but is believed to have been created in 2009. Only that’s not the whole story: it seems that Symantec found traces of it way back in 2005.

Another thing to remember is the sheer volume of malware submissions in today’s world. In the past, researchers had time to hand-check every submission; it was still practical to do so. Nowadays submissions run through a series of automated checks. Why is that? I have seen estimates in the neighborhood of 250,000 submissions a day. I know in 2009 the number was about 50,000 per day. Now, some of those are already known. Others aren’t malware. So it’s very possible that something with very new behavior, especially if it doesn’t appear to be aggressively malicious, can be missed.

What other factors are there? Part of it is impact and spread. We know this because when we’ve had major outbreaks, we’ve seen AV companies rush to get signatures out the door. With targeted attacks, there aren’t many samples and there aren’t many infections. It may be harder to put a signature together for a definition. And that means the malware stays viable all the longer.

So Should We Trash Anti-Virus?

No, it still serves a purpose. Just realize its limited effectiveness. It’s going to stop most known and common threats. It’s not going to stop a targeted threat. It’s not going to stop a threat from what we’re calling Advanced Persistent Threats (APTs). It’s not going to stop a brand new threat that the AV companies haven’t had time to analyze. Therefore, AV is only a small part of the overall security picture. Too many organizations and people rely on it as a major player in a security response. It shouldn’t be. Those days are gone.

From the 2013 Techno Security Conference – Cloud Computing and Digital Forensics

I’m processing through my notes for the 2013 Techno Security Conference, which is finishing up today with post-cons. Of all the sessions I attended, the best one was Cloud Security and Digital Forensics, presented by Ken Zatyko. This was actually a replacement talk, because the talk I wanted to see the most was canceled. However, that’s what serendipity is all about, right?

When it comes to the physical world, forensics generally relies on Locard’s Exchange Principle. The catch with cyber crime is that there doesn’t have to be physical contact. So are there still traces? Zatyko said yes, he believes there should be, but you can’t bet that they’ll be on the final system, the one we’re most concerned with. But what if we expanded out past that final system?

“Artifacts of electronic activity in digital devices are detectable through forensic examination, although such examination might require access to computer and network resources involving expanded scope that may involve more than one venue and geolocation.” – Zatyko and Dr. John Bay, 2011

This should also apply to cloud computing. Too much focus is on the back-end data or the client piece used to connect to the cloud. This falls in line with traditional digital forensics, which focuses on that single desktop, laptop, or mobile device. As devices and systems become ubiquitous, and since storage is so cheap, digital forensics is already grappling with how to handle all that other data. It’s having to look beyond the single desktop. Digital forensics with respect to cloud computing needs to do so, too. The basics still apply, though:

“The application of computer science and investigative procedures for a legal purpose involving the analysis of digital evidence after proper search authority, chain of custody, validation with mathematics, use of validated tools, repeatability, reporting, and possible expert presentation.” – Ken Zatyko

Which leads to the following list of what you need in order to do credible digital forensics for Cloud Computing. Note that none of this is any different from traditional digital forensics:

  • Search authority
  • Chain of custody
  • Imaging/hashing function
  • Validated tools
  • Analysis
  • Repeatability (QA)
  • Reporting
  • Possible Expert Presentation

With respect to Cloud Computing, here are portions of the architecture that we need to consider further because they probably aren’t being considered enough:

  • Cloud Scheduler/Manager – software that logs and manages usage, etc.
  • Cloud Instance – hypervisor and virtual machines themselves

One of the things that needs to be pointed out is that with multi-tenancy, the possibility of a situation like Moonlight Maze is real. It will be hard to detect where the real attacks are coming from, and an attacker inside the system can probe other tenants in the system.

So where does Zatyko think we can find traces? These are straight from my notes and are in outline form:

  • Cloud Client
    • Traditional forensics
    • ISP records
  • Cloud Scheduler/Manager
    • Logs of inbound connections, cloud instances and physical hardware used to service clients
    • Consumer account information
    • Internal cloud service provider audit logs
    • Authentication and access logs (control granted to customers for use of applications and services)
  • Cloud Instances
    • Traditional forensics
    • May require remote acquisition and credentials
  • Hypervisor
    • Dependent on type of hypervisor (bare metal vs. hosted, etc.)
    • Log files detailing cloud instance behavior
    • Cloud instance memory and disk state
    • VM introspection data (if available)
  • Administrative Domain (Domain 0 – management domain)
    • virtual disk images
    • cloud instance memory
  • Cloud storage
    • Data stored by a cloud instance
  • Physical Systems
    • Traditional acquisition of disks and memory

He also gave some attack vectors to Cloud Computing:

  • traditional attacks against cloud instances
  • supply chain attacks against firmware and hardware of physical systems
  • virtualization break-out attacks
  • traditional insider threats within the consumer’s organization
  • malicious insiders at the cloud provider
  • malicious cloud providers
  • foreign espionage facilitated by offshore hosting and storage

And some challenges with respect to performing digital forensics:

  • low technical and legal expertise
  • location of data
  • proliferation of endpoints (time lining, logs formats, deleted data)
  • evidence segregation (concealment, decryption)
  • data redundancy
  • correlation of chain links
  • SLAs
  • tenant rights, evidence admissibility, and chain of custody

Notes from 2013 Techno Security Conference Tuesday Keynote

There’s enough from this morning’s 2013 Techno Security and Forensics Investigation Conference to split into multiple blog posts. I’ll focus this one on the keynote that was given this morning. The presentation was Protecting the US Financial System from Transnational Criminals and it was given by A.T. Smith, Deputy Director, US Secret Service (USSS).

Some interesting statistics with respect to US currency:

  • Currently over $1T US Federal Reserve Notes (FRN) in circulation worldwide
  • 2/3 of FRN in circulation outside the US
  • 75% of all $100 notes are outside the US
  • Overall, 1/10,000 of those notes are counterfeit

With respect to data, 1,116 TB of data (yes, over 1 petabyte of data) was captured in seized media in 2012. Keep in mind that the US Secret Service strategically focuses on criminal financial violations. Cases that involve national security get turned over to the FBI.

Again, I definitely see this as an area where data professionals can get engaged. Large amounts of data… that’s what we do.

One of the problems we face with regard to criminal financial violations: in Eastern Europe it is “in fashion” to be a young hacker. Many are out of work or make little money. An example of this hacking culture is Dmitry Golubov. He was busted, sentenced to only six months (due to “connections”), and didn’t serve most of it. He then ran for office and won, and he founded a political party. In short, successful hackers in Eastern Europe are the rockstars: magazines follow them, they date models, etc. So it’s easy to understand the motivation behind these young hackers.

This is hard to compete against. It’s like why the lottery is so successful.

In all, 96% of the data targeted was payment card information, PII, and email addresses, and 73% of the victims were in the US. The attacker numbers show that Romania is a hot spot; however, in second place is the United States:

  1. 33.4% Romania
  2. 29% United States
  3. 14.8% unknown
  4. 4.4% Ukraine
  5. 3.9% China

When the USSS looks at financial cyber crime, here’s the hierarchy they see:
Malware developers -> hackers -> major dump vendors

As a result, the US Secret Service targets malware developers first.

Some of these cases are big. For instance: BadB case – Vladislav Horohorin.

He touched on other cases as well.

In short, we’ve got to get better. They’re making money hand-over-fist.

Notes from the First Day of the 2013 Techno Security Conference

The Techno Security & Forensics Security Conference is held in conjunction with the Mobile Forensic Conference each year in Myrtle Beach, SC. Both conferences are primarily geared towards forensics types. Each of the main days (there are pre and post-con classes like most conferences) starts with a keynote speaker. Today’s was Mandiant’s CEO, Kevin Mandia. Some of the things I wrote down from the keynote:

State of the Hack Keynote:

Mandia comes from a military background (USAF officer). When he looked out at the private sector and thought about the types of attacks the military was fending off, he felt corporate America was really a bunch of “sitting ducks.” Another analogy he used was an “Ultimate Fighting Champion mugging your grandmother.” That’s part of what led to the creation of Mandiant. Based on how the rest of the talk played out, there’s some credence to his position.

At this point, he feels that small and medium-sized firms don’t have a chance. They have little or no IT security budget and very vulnerable end users. Small and medium-sized firms aren’t off the radar for attacks by APTs, either. For instance, he kept citing the example of the Chinese funneling some of their attacks through a florist’s business.

In other words, we’re all potential targets. We’re bigger targets if we’re educational institutions due to the difficulty locking things down.

The definition of victory has shifted since my days as primarily a security architect. We believed security should be approached with a view of when someone gets in, not if, but we still tried to “gear up” to prevent the if. Nowadays, forget it. Victory is defined by how quickly you can detect the threat and shut it down. If you can do it in 4 hours, you’re a superstar. If you can accomplish it in a couple of days, you might still be okay. The reason for his view is that attackers typically take a little time between the initial breach and a human operator taking active steps. That isn’t always the case, however: Mandiant has seen a Chinese response go from break-in to a human operative actively attacking in as little as seven minutes. The best he has seen is a particular defense contractor that averages 30 minutes.

This made me think about how many companies are postured to respond in such a way.

Another thing he pointed out is that in most of the breaches they investigate, the firms have pretty good IT security. Most are meeting compliance requirements. For instance, in every recent case they’ve noted that anti-virus was up to date (how many of you have seen that on an audit?). The takeaway is that current endpoint protection is effectively useless.

The reason it’s all useless is that the methods of attack have changed dramatically. It used to be that attacks went after technical exploits; I think of it like sappers tunneling under the castle wall. We were looking for a technical vulnerability to exploit. Nowadays it’s almost all human. Attackers research who works at these companies and then send targeted emails (spear phishing) which contain either malware or links to malware or to sites that take advantage of browser vulnerabilities. All it takes is one. That’s why the security posture has changed.

However, it’s not all malware. They’re grabbing passwords. They’re making use of social engineering. There are a lot of “no tech” mechanisms being employed. For instance, at one technical company, 51 computers were compromised but only 12 had malware.

Once again this shows us that humans are the weakest link in security. Mandia didn’t have good answers that would be acceptable to industry on how to solve this. No one does at this point. Yet another reason for the current security posture.

Convergence of Digital Forensics and eDiscovery:

This is the other talk that struck me as significant while not being so overly technical that it loses value for data and general IT professionals. This was a panel discussion.

Basically, digital forensics was built around the model of thoroughly examining a desktop or a couple of laptops. If you want evidence of that, go look at the more recent computer forensics books: basically, how to sniff out every detail on a particular Windows or Mac OS X system. This isn’t going to work anymore. Just as the amount of data has exploded in industry, it has exploded everywhere. Case in point: one investigation by one of the panelists required examining all the assets of 1,500 data custodians. You can imagine how much data that is. Another panelist cited a case that really rocked him and made him realize the shift: he was asked to examine 11 TB of data and 65 mobile phones. If you go to the “bits and bytes” level typical of old-school computer forensics, you’d literally spend a lifetime pulling out all the useful information.

The solution is to bring eDiscovery tools to bear. This allows us to filter and pare down the amount of data. It also means being able to bring in SMEs to view the data. For instance, a particular email may be meaningless to a forensics investigator, but a keyword or data profile triggered on it. Upon looking at it, the forensics investigator realizes it’s HR-related and can bring in an HR resource to analyze the information. This leads to getting to the right information, just using different methods.

Using eDiscovery doesn’t mean losing chain-of-evidence or anything else that would compromise the ability for evidence to be admissible in court. If proper procedures are followed, data shouldn’t be altered. MD5 and SHA1 hashes should match up. However, when you’re faced with hundreds of TBs of data, you now have a fighting chance to get at what you need in a timely manner. If we don’t go down this road, nothing comes to trial, whether civil or criminal, and the costs are horrendous.

As a data professional, this got me to thinking that this is an area where our expertise, filtering and working with data, would be of great benefit. Unfortunately, I haven’t run across many who have feet in both fields.

I’m definitely looking forward to tomorrow.