Wednesday 29 April 2009

SAM Cracking using Ophcrack and EnCase

EnCase, with the decryption suite, is OK at determining the password hashes for user accounts, but only if you have it, and only for the currently used SAM/SYSTEM file pair. PRTK can be used for any pair you want to give it, but takes a few steps to add them and often takes quite a bit longer to crack the passwords. It's certainly not a bad piece of software, but I've managed to set up a virtual machine of a suspect computer, boot it to an ophcrack live CD and crack the passwords whilst still waiting for PRTK; such is the advantage of rainbow tables. Even so, it's a bit of an effort doing it that way, and ophcrack's Windows-based version is just as capable, so I now use that.

If I were only interested in the users and passwords at the time the computer was imaged then that would be fine: I'd pick my tool, crack the passwords and off I'd go. But that's a bit one-dimensional. Often I want to know whether any other accounts were in use in the past, whether the passwords had always existed, and what they were. The current SAM/SYSTEM pair just doesn't give me that history.

Thankfully, the Restore Points in the System Volume Information folder hold past pairs of these files under the names _REGISTRY_MACHINE_SAM and _REGISTRY_MACHINE_SYSTEM. These can be extracted and cracked just like the current files using your favourite cracking software. If you are blessed with many pairs covering a long period of time, you could have lots of good information, but extracting and adding them all could take a long time, particularly if, as with ophcrack, each pair must be renamed to sam and system. It may even be a fruitless exercise if, as is often the case, there are no changes to accounts or passwords between restore points.
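To give a feel for the staging work involved, here's a minimal Python sketch that walks a mounted copy of a volume, finds the restore point hive pairs and copies each one out renamed to sam/system, ready for ophcrack. The mount point and output folder are hypothetical, and it assumes XP-style restore points (RPnn\snapshot folders) that you have permission to read:

    # Sketch: gather SAM/SYSTEM pairs from XP restore points on a mounted
    # copy of a volume and stage each pair as 'sam'/'system' so that
    # ophcrack will accept them.
    import os
    import shutil

    SVI = r"E:\System Volume Information"   # hypothetical mount point
    OUT = r"C:\cases\hashes"                # hypothetical staging area

    for root, dirs, files in os.walk(SVI):
        names = {f.upper(): f for f in files}
        if "_REGISTRY_MACHINE_SAM" in names and "_REGISTRY_MACHINE_SYSTEM" in names:
            # One staging folder per restore point, named after its RPnn folder
            dest = os.path.join(OUT, os.path.basename(os.path.dirname(root)))
            os.makedirs(dest, exist_ok=True)
            shutil.copy(os.path.join(root, names["_REGISTRY_MACHINE_SAM"]),
                        os.path.join(dest, "sam"))
            shutil.copy(os.path.join(root, names["_REGISTRY_MACHINE_SYSTEM"]),
                        os.path.join(dest, "system"))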

I've written an EnScript that will process selected files for sam/system file pairs, including those from restore points, and send them to ophcrack from within EnCase. It doesn't automatically start the decryption, for a number of reasons:
  1. It spawns a new instance of ophcrack for each sam/system pair (which could be a fair few)
  2. It's always worth removing the accounts you're not interested in, such as HelpAssistant
  3. It's always worth configuring which rainbow tables you'll use (there's no point trying to crack an NT hash when there's an easier LM hash available)


The script requires that the path to the ophcrack exe and the root rainbow tables folder be set, and only the SAM file needs selecting (although you could always blue check everything and see how many windows pop up!).
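For anyone wanting to do something similar outside EnCase, a hedged sketch of spawning one ophcrack instance per staged pair is below. The switch names (-d for the tables base directory, -w to load hashes from a directory containing an encrypted SAM) are as I read them from ophcrack's help text, so check them against your installed version; all paths are hypothetical:

    # Sketch: spawn one ophcrack instance per staged sam/system pair.
    # Verify the -d and -w switches against your ophcrack version's help.
    import os
    import subprocess

    OPHCRACK = r"C:\Program Files\ophcrack\ophcrack.exe"  # hypothetical install path
    TABLES = r"D:\rainbow_tables"                         # hypothetical tables root
    STAGED = r"C:\cases\hashes"

    for pair_dir in os.listdir(STAGED):
        full = os.path.join(STAGED, pair_dir)
        if os.path.isdir(full):
            subprocess.Popen([OPHCRACK, "-d", TABLES, "-w", full])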


Further info on Restore Points in general can be found at Mandiant's website and Stephen Bunting's page.

Trevor Fairchild has an EnScript to reconstruct the Restore Points and, of course, there's plenty of information on Harlan's blog.

Ophcrack EnScript here.

Alternative clock drift calculations

Sometimes knowing the exact time of an event is key to an investigation. Often it isn't (it's all relative, after all), but it's always important to check how reliable the times you see are.

The easiest method is to check the computer clock against a calibrated clock; a GPS or radio-controlled clock is ideal. If this is done reasonably soon after the 'incident' or the seizure of the computer, the difference gives an accurate measure of how much time to add to or subtract from the times seen during the analysis. If you don't have the original machine, can't get to the system clock, or the computer has been left off for an extended period without a check, you may have to work harder.
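The arithmetic is trivial but worth writing down, as the sign of the correction is easy to get backwards. A minimal sketch with made-up times, assuming the drift stayed constant over the period of interest (real clocks drift gradually, so treat this as an approximation):

    # Sketch: record the offset between the suspect clock and a reference
    # clock at seizure, then apply it to times seen in the analysis.
    from datetime import datetime

    suspect_clock = datetime(2009, 4, 29, 14, 3, 52)    # BIOS clock at seizure
    reference_clock = datetime(2009, 4, 29, 14, 1, 10)  # calibrated/GPS clock

    offset = suspect_clock - reference_clock  # positive: the machine runs fast
    print(offset)                             # 0:02:42

    observed = datetime(2009, 4, 12, 9, 30, 0)  # a time stamp from the image
    actual = observed - offset                  # subtract because the clock was fast
    print(actual)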

Firstly, a note about what is seen even when a valid time is obtained.

  • Daylight Saving – Remember that the use of daylight saving time may have changed since the computer was last used. This is normally quite obvious, with a clock being about an hour out (for many of us), but it's worth checking the date of the last use of the computer to be sure, as other factors may also be in play.
  • Type of Operating System – Windows will set the computer clock to local time, so that if DST is applied the time on the computer will be moved forward an hour. Other OSs can maintain the computer clock in local time or UTC. Dual boot systems make things trickier still! The registry sketch below can help pin down which offsets were in play.
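Windows records its time zone and DST settings under the TimeZoneInformation key, which is worth checking alongside the clock itself. A minimal sketch for a live machine (on an image you'd read the same values from the SYSTEM hive with a registry viewer instead). Note the bias values are minutes to add to local time to reach UTC, and negative biases show up as large unsigned DWORDs:

    # Sketch: dump the time zone and DST settings from a live machine.
    import winreg

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation")
    for name in ("StandardName", "DaylightName", "Bias", "ActiveTimeBias"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(name, value)  # biases are minutes; negatives appear unsigned
        except FileNotFoundError:
            print(name, "(not set)")
    winreg.CloseKey(key)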

Event Logs

The Windows event logs can provide clues as to how accurate the clock is, or was. Stephen Bunting has some information on detecting changes in the clock settings, but it's also possible to show when Windows has synchronised with a time server.

In the System Event log, filter for W32Time as the source and look at event IDs 35 and 37 (35 indicates the time service is synchronising with a time source; 37 indicates the NtpClient provider is receiving valid time data). If the computer is set to automatically update the time and is regularly able to contact the time server, then the clock is likely to be reasonably accurate. The registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time contains more details of the settings.
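A minimal sketch of both checks for a live machine or a booted clone, using pywin32 for the event log (on an image you'd filter the .evt file and load the SYSTEM hive with your usual tools instead). The Type and NtpServer values under W32Time\Parameters show whether and where the machine was set to synchronise:

    # Sketch: list W32Time synchronisation events (IDs 35 and 37) from the
    # System log, then dump the time service settings from the registry.
    import win32evtlog  # pywin32
    import winreg

    log = win32evtlog.OpenEventLog(None, "System")
    flags = (win32evtlog.EVENTLOG_BACKWARDS_READ |
             win32evtlog.EVENTLOG_SEQUENTIAL_READ)
    while True:
        records = win32evtlog.ReadEventLog(log, flags, 0)
        if not records:
            break
        for rec in records:
            # EventID carries severity bits in the high word; mask them off
            if rec.SourceName == "W32Time" and (rec.EventID & 0xFFFF) in (35, 37):
                print(rec.TimeGenerated, rec.EventID & 0xFFFF, rec.StringInserts)
    win32evtlog.CloseEventLog(log)

    params = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Services\W32Time\Parameters")
    for name in ("Type", "NtpServer"):
        try:
            print(name, winreg.QueryValueEx(params, name)[0])
        except FileNotFoundError:
            print(name, "(not set)")
    winreg.CloseKey(params)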



Yahoo pages

Web pages from Yahoo contain a really handy line at the end of the page source that shows when the page was served. It looks something like this:

<!-- p18.www.ird.yahoo.com compressed/chunked Wed Apr 17 21:20:10 GMT 2009 -->


Whilst accepting that there may be a slight delay between this stamp and the creation of the associated internet history record, this provides a great way of determining the clock offset at that time. Just compare the internet history record, created from the computer's clock, with the time stamp, created from the web server's clock, and you have your value. If you can compare a number of web pages from around the same time, you can have more confidence by averaging out the slight transmission delays.
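A minimal Python sketch of the comparison, pulling the served-at comment out of a cached page with a regular expression and using the cached file's own file system time stamp to stand in for the internet history record (the file path is hypothetical, and the history record is the better comparison in practice):

    # Sketch: extract the Yahoo served-at stamp and compare it with the
    # cached file's own time stamp to estimate the clock offset.
    import os
    import re
    from datetime import datetime, timezone

    PATTERN = re.compile(r"<!--\s+\S+\s+compressed/chunked\s+(.+?GMT \d{4})\s+-->")

    path = r"C:\cases\cache\page.htm"  # hypothetical cached Yahoo page
    with open(path, encoding="utf-8", errors="replace") as f:
        match = PATTERN.search(f.read())

    if match:
        served = datetime.strptime(match.group(1), "%a %b %d %H:%M:%S GMT %Y")
        served = served.replace(tzinfo=timezone.utc)
        created = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
        print("served:", served, "file:", created, "offset:", created - served)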

Of course this relies on the belief that Yahoo's web servers are accurate (apparently 74% of web servers were within 10 seconds in one study), but if this is combined with time stamps in other web pages (look for unix times, but test their relevance!) and other sources of evidence, such as event logs or even good old-fashioned witness accounts, you can start to have much more confidence in the times you include in your reports, even if the case doesn't hinge on a split-second degree of accuracy.

I've written a basic EnScript that looks in selected files for the Yahoo time stamp and displays it, along with the file's created date, in the console. It doesn't do anything fancy and the code is probably awful, but I'll make it available here (note that it only works in EnCase 6.13 and later).

Thursday 7 August 2008

Web Browser Prefetching

A succinct description can be found in Mozilla's FAQ: "Link prefetching is a browser mechanism, which utilizes browser idle time to download or prefetch documents that the user might visit in the near future. A web page provides a set of prefetching hints to the browser, and after the browser is finished loading the page, it begins silently prefetching specified documents and stores them in its cache. When the user visits one of the prefetched documents, it can be served up quickly out of the browser's cache."

With this in mind, there could be scenarios where URLs are identified in internet history records which the user never chose to visit. For this to happen there are a couple of fundamental requirements:
  1. A web page contains a prefetch link
  2. The web browser is set to act upon a prefetch link
For a quick test it's possible to use gemal's psyched site, but for a more real-world example I used Google and Firefox. Google has, since March 2005, included the ability to prefetch the first result of a Google search, which ruffled a few webmasters' feathers over the fear of false hits skewing their stats (prefetch requests from Firefox clients can be identified by the X-moz: prefetch header). Interestingly, none of the links to Google's pages explaining their prefetching work any more.

Firefox enables prefetching by default and, as far as I know, it can only be turned off by going to about:config and setting network.prefetch-next to false. I've not yet looked at IE or at the additional plugins and tools that could also make use of prefetching.
I used the neat Firefox add-on HTTPFox to view the activity relating to the test.

The Test

I tried a few Google searches to see if the browser (Firefox 3) would then prefetch the first link, but it didn't happen consistently.

Looking at the source of the Google results page showed that a prefetch link wasn't always inserted. A bit more digging, and it appears that Google only inserts a prefetch link when the first result is a simple host name (e.g. www.microsoft.com).

I don't know whether this has always been the case.

A search for microsoft (funnily enough) gives Microsoft's website as the first hit. Shortly after the Google results page had loaded, a GET request appeared for www.microsoft.com, which was then redirected to http://www.microsoft.com/en/us/default.aspx. A few times I saw an aborted request, shown in HTTPFox as text/html (NS_BINDING_ABORTED). I suspect this could be the result of Firefox discarding the prefetch hint.

Just to confirm that this is recorded in internet history records, I ran an internet history search in EnCase, which showed the Google search and the subsequent Microsoft caching with no obvious sign that the Microsoft record was the result of the prefetch rather than of the user selecting the link.
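One crude check is to look inside the recovered pages themselves for prefetch hints: if a cached results page contains a prefetch link pointing at the URL in question, that's at least a plausible alternative explanation for the history record. A minimal Python sketch, with a hypothetical file name for the saved results page:

    # Sketch: scan a cached/saved page for prefetch hints, to judge whether
    # a URL in the history could have been fetched without a click.
    import re

    # Case-insensitive match for link tags carrying a rel=prefetch hint
    LINK = re.compile(r"<link[^>]+rel=[\"']?prefetch[\"']?[^>]*>", re.I)

    # Hypothetical file name for a cached Google results page
    with open("results.html", encoding="utf-8", errors="replace") as f:
        for tag in LINK.findall(f.read()):
            print(tag)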


Friday 1 February 2008

Lab Standards

If you skip briefly back to my first post, you'll see I referred to a document by the European Network of Forensic Science Institutes. Whilst it's slightly dated, their best practice guide for forensic IT labs is still an excellent summary of the requirements for a 'quality focussed' digital forensics lab. It doesn't tell you what tools to use, where to find the 'smoking gun' you've been looking for or how to image a hard drive, but it does describe the processes you'll need to work these out for yourself and to prove to another party that you've been thorough in your preparation, examination and reporting. In fact it does an excellent job of translating the internationally recognised standard for forensic labs, ISO 17025, for our juvenile corner of forensic science (yes, we're not that different from the other forensic disciplines; see HogFly's blog).

But who uses it? Without conducting a survey, I could only reason that, since the best practice guide follows the international standard so closely, anyone who follows the guide would have attained, or would be in the process of attaining, ISO 17025 accreditation. After all, why not pay the few extra pounds to get a certificate if you've done the hard work already? Well, for the UK it's just two organisations, in the whole country, that are accredited (do a search for 'forensic' and look for 'data capture'). One covers just 'Mobile Phone Handsets and SIM cards', and the other also includes 'Computers and Computer Media'. Whilst I know that a few labs have the generic quality certification (ISO 9001), and fewer still also have the information security certification (ISO 27001), both of those seem to skirt around the issue of standards in digital forensic labs. Even ISO 17025, a standard for calibration and testing labs but regularly used in traditional forensics, requires skilful use of the shoehorn to make it fit. Which brings me back to the ENFSI best practice guide as an example of such a shoehorn, and one that looks quite usable.

Unfortunately for the UK and European digital forensic community, ENFSI membership is normally restricted, so wider participation in developing and promoting these standards through that organisation is limited. The American Society of Crime Laboratory Directors / Laboratory Accreditation Board (ASCLD/LAB) isn't so restrictive and has a scheme whereby a lab can be accredited to a standard that includes ISO 17025 and is 'enhanced' for our specialism.

Is this the way forward then? Have I found the lab standard I've been looking for? Maybe not, but it's the best I'm aware of so I think I'll give it a shot.

Monday 28 January 2008

Date & Time stamps for files and folders



This is something I've been meaning to do for a while. Transposing the Microsoft article on what happens to the time stamps into a quick reference table seems to make sense. Of course, as with using a calculator in maths exams, you can get away without doing the reasoning mentally, but I think it's a good exercise to think through the reasons for the results as well.
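As a rough aide-memoire (check the article itself before relying on this): a copy gets a new created date but keeps its modified date, and a move within the same volume preserves both. The cross-volume move cases are exactly the ones worth checking against the Microsoft article, and testing for yourself, rather than trusting to memory.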

Saturday 26 January 2008

Forensic tool testing

Edited 01 Jan 08 to make the ENFSI document link work, but I'm sure everyone could work out the problem anyway!

My first ever blog post, so I might as well dive straight in!

The verification and validation of tools should be one of the most important routine aspects of computer forensics, as it is for the other forensic sciences, but whenever I see it mentioned there's usually, somewhere, a shrug of the shoulders, a half-hearted attempt to convince someone (including themselves) that they do it sufficiently and regularly, and a fall-back position of having to balance efficiency against the (unrealistic) requirements of academics (or am I being too cynical?).

For hardware or software with limited functions, such as write blockers, there really is no excuse. Prior to purchase, as with other critical purchases, checks should be made to ensure the device is fit for purpose. ENFSI have a best practice document that specifically addresses Commercial Off The Shelf hardware and requires the lab to get a maintenance agreement and some kind of assurance that the manufacturer will provide statements, certificates or other proof that it's fit for purpose, should it be questioned in court.

The forensicfocus blog talks about some methodologies for testing write blockers, including the all-singing, all-dancing NIST tests and the more feasible 'Helix test'. The NIST testing is excellent in its detail, but you wouldn't be buying much kit if you limited yourself to devices already tested. Maybe, if there were an international standard for these and organisations applied more commercial pressure to the manufacturers, as described by ENFSI, we could see these critical tools tested sooner.
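The heart of any of these simpler tests is the same before-and-after comparison: hash a control drive, attempt writes through the blocker, then hash it again. A minimal Python sketch of that step, with illustrative device paths (you'd need raw device access, and you would of course document each attempted write):

    # Sketch: before/after hash check for a simple write blocker test.
    import hashlib

    def device_md5(path, chunk=1024 * 1024):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    baseline = device_md5("/dev/sdb")  # control drive, connected directly
    # ... attach the drive through the write blocker, attempt to create,
    # ... modify and delete files, then reconnect it directly ...
    check = device_md5("/dev/sdb")
    print("blocked OK" if baseline == check else "WRITES GOT THROUGH")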

Once purchased, though, they need to be maintained and checked regularly, just as other critical items are. I mean, we get our fire extinguishers checked, why not our write blockers? Tableau, whose products I use regularly, fairly recently released a critical firmware update, which just goes to show that 'hardware' does not mean 'purchase and forget'.

As I see it, the only times a hardware write blocker needs to be checked are before first use and after any firmware upgrade. With software write blockers, where the risk of misconfiguration is greater, a limited test (the Helix methodology) ought to be done every time, unless you can verify a setup script. In that case you'd just need to show that the script followed (via an electronic or even manual checklist?) was identical to the one that was validated.
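Showing the script is identical is just a hash comparison; a minimal sketch with illustrative file names:

    # Sketch: show the setup script actually run is byte-identical to the
    # validated copy, by comparing SHA-1 hashes.
    import hashlib

    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    validated = sha1("validated/blocker_setup.sh")
    used = sha1("casebox/blocker_setup.sh")
    print("identical" if validated == used else "scripts differ - revalidate")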