March 27, 2010

Security Book Review: The IDA PRO Book

The IDA PRO Book
Author: Chris Eagle
Editorial: No Starch Press
Publication date: August 12, 2008
ISBN-10: 1593271786
ISBN-13: 978-1593271787


Summary: Do you really want to master the art of disassembly? Start here!

Score: 5/5

Review:
Honestly, when picking up a book focused on a single tool, as in this case, my main concerns are: how tied (and limited) the content is to the tool and its capabilities, whether the book will soon become obsolete with new versions of the tool, and what else the material offers to the field beyond the tool itself.

In this case, it is fair to say that IDA Pro (http://www.hex-rays.com/idapro/) has been the most popular disassembly tool (and now debugger) on the market over the last decade, so covering it means going deeper into the fields of malware analysis, software reverse engineering, and vulnerability research. Beginners can start playing with the evaluation version, while professionals have been using the Pro version for a long time.

Apart from that, the moment I realized Chris Eagle was the book's author, it added some excitement to the mix. I have known Chris since we released the Scan of the Month 32 challenge on the Honeynet Project (http://old.honeynet.org/scans/scan32/), back in 2004. The challenge focused on analyzing a home-made malware binary called RaDa, and Chris was the winner (http://old.honeynet.org/scans/scan32/sols/1-Chris_Eagle/); he even developed an IDA Pro script to unpack the binary and solve it.

Therefore, the book title does not do justice to its contents :), as this is not only The IDA PRO Book or the unofficial guide, but the modern software disassembly (static binary analysis) masterpiece and The IDA Pro Bible.

The first two chapters are a must for anyone starting in the world of reversing and disassembly. Something I really liked about the introductory chapters is how the author establishes the relationships between the different functionality available in IDA, and other (more traditional) single tools offering similar capabilities.

Then the book goes in depth into IDA: getting started, the interactive interface and navigation capabilities (including both the well-known and the most hidden features), managing data types, structures and projects, the beauty of cross-references and graphs, and how to extend and customize IDA for extra advanced analysis (libraries, IDC scripts, plugins, modules, etc). It gives advanced readers the skills and tools required to move their analysis activities to the next level.

Every chapter is preceded by a great introduction explaining what it is about, and when and why it matters to the analyst. Chapters do not simply walk through the different menus and capabilities of IDA Pro; they describe them in context, based on the author's years of binary analysis experience, going in depth into the essence and goal of a given feature, the way to use it, and the common drawbacks. Chris also uses his experience to highlight the most typical findings and tool output in various scenarios, and why.

The book ends with a few chapters that challenge the reader to put the skills learned throughout the book into action on real-world applications. Finally, it covers the debugging capabilities (dynamic binary analysis) available since IDA version 4.5. For those starting in the field, appendix A points out the differences between the free and the commercial IDA versions, and how these may influence your interest in specific book chapters.

The book is highly recommended to beginners as well as intermediate/advanced users and professionals. It is a dense book (like the tool it covers) but very easy to read, and it becomes a reference on your bookshelf the minute it reaches your hands. Besides that, its contents won't easily become obsolete with new IDA Pro versions. It is not a book to read in a couple of nights; this is the kind of "practical" book that I strongly recommend reading with a computer and a running copy of IDA handy, so that you can test all the tips and tricks and practice the topics being discussed.

UPDATE: Amazon review.


August 21, 2009

Looking for the right event

Not so long ago, during an incident investigation, I needed to reconstruct a series of events from several Windows systems. I needed to do so from the system I was using to conduct the whole investigation, which ran Linux. That didn't make things easier because, as you probably know, Windows event logs are stored in a binary format.

Two Google minutes later, I had downloaded a Perl script written by Christophe Monniez that was able to do the work. The script turned out to be quite useful (thanks, Christophe!), but I needed more. I had lots of interrelated events from several systems that needed to be interpreted in order to understand how the attack had been conducted, and to add only the relevant entries to the timeline. Going back and forth through such a big number of events searching for the right one wasn't an option, so I decided to give myself some search capabilities and write my own Perl script. The concept is trivial: I wanted to be able to search for a string within an event, but have the output show the complete event instead of only the line that matched. You can do this easily with awk, but I'd rather use Perl. Here is my little script, in case it is also helpful to you.

#!/usr/bin/perl
# Treat each event (blocks separated by two blank lines) as one record.
$/ = "\n\n\n";

die "Error: search string missing.\n" if (@ARGV < 1);
die "Error: search string missing.\n" if ($ARGV[0] eq "-v" && @ARGV < 2);

while (my $line = <STDIN>) {
    if ($ARGV[0] eq "-v") {
        # -v inverts the match, like grep -v
        print $line if ($line !~ /$ARGV[1]/i);
    } else {
        print $line if ($line =~ /$ARGV[0]/i);
    }
}
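
The same record-at-a-time idea can be sketched in Python, in case Perl is not your thing (this is just an illustration, not part of the original toolkit; the function and argument names are my own):

```python
import re

def grep_events(text, pattern, invert=False, sep="\n\n\n"):
    """Return whole events (blocks separated by two blank lines) that
    match -- or, with invert=True, do not match -- the pattern,
    case-insensitively.  Mirrors the Perl one-record-per-event trick."""
    rx = re.compile(pattern, re.IGNORECASE)
    events = [e for e in text.split(sep) if e.strip()]
    return [e for e in events if bool(rx.search(e)) != invert]
```

Setting sep to the same three-newline separator used for $/ in the Perl version keeps both scripts equivalent.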


This incident investigation was fairly successful, and we had access to one laptop involved in the attack. The system had been reformatted and reinstalled, but some information could be recovered using the usual forensic tools. The event file was partially corrupted and I needed to recover the events that were still available. I rewrote Christophe's code, which was available under the GPL license, and ended up with the following script that does exactly that.


#!/usr/bin/perl -w

# Process Microsoft event file fragments.
#
# Copyright (c) Jorge D. Ortiz Fuentes, 2009
# Based on Monniez Christophe's code.
# - Added the ability to process fragmented event files.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#

use strict;
use Getopt::Std;

#
# Help information
#
sub usage {
print STDERR "\nUsage:\n\t$0 [-dluh] file\n";
print STDERR "Options:\n";
print STDERR "\t-d\tDebug information.\n";
print STDERR "\t-l\tUse localtime instead of GMT.\n";
print STDERR "\t-u\tPreserve unicode.\n";
print STDERR "\t-h\tPrint this help and exit.\n";
print STDERR "\nfile\tThe evt file to be analyzed.\n\n";
exit 1;
}

#
# Search for the first record inside the file.
# It doesn't require that the signature is DWORD aligned
#
sub next_signature {
(my $debug, my $file) = @_;

my $bytes_read;
my $signature;
my $sig_found = 0;

do {
$bytes_read = read($file, $signature, 1);
die("End of file reached.\n") if ($bytes_read <= 0);
if ($signature eq "L") {
$bytes_read = read($file, $signature, 3);
die("End of file reached. Signature not found.\n")
if ($bytes_read <= 0);
}
$sig_found = 1 if (($signature eq "fLe") && (tell($file) >=8));
} while ($sig_found == 0);
# Move the position in the file 8 bytes backwards, to 4 bytes before
# the signature that is where the length of the record is stored.
seek ($file, -8, 1);
if ($debug) {
print "Record starts in position ", tell($file), "\n";
}
}


#
# Extract record information.
#
sub process_record {
(my $debug, my $file, my $length, my $localtime, my $unicode) = @_;

# Local variables
my $record;
my $t_gen;
my $t_writ;
my $rest;
my $rest_reencoded;

# Process the fixed part of the record (at least 56 bytes and
# it is in position 4):
read($file, $record, 52);
$length -= 56;

# Extract the data from the structure
(my $reserved, my $record_nb, my $time_gen, my $time_writ,
my $event_id, my $event_type, my $nb_strings, my $evt_category,
my $reserved_flag, my $cl_record, my $string_offset,
my $SID_leng, my $SID_offset, my $data_len, my $data_offset) =
unpack "LLLLLSSSSLLLLLL" , $record;
# The reserved field must be 1699505740 otherwise skip this record
if ($reserved == 1699505740) {
# Convert dates into strings
if ($localtime) {
$t_gen = localtime($time_gen) . " localtime";
$t_writ = localtime($time_writ) . " localtime";
} else {
$t_gen = gmtime($time_gen) . " GMT";
$t_writ = gmtime($time_writ) . " GMT";
}
# Print data
print "Record number: $record_nb\n";
print "Time generated: $t_gen\n";
print "Time written: $t_writ\n";
print "Evt ID: $event_id Evt type: $event_type Evt category: $evt_category\n";
if ($debug) {
print "* Reserved: $reserved\n";
print "* $nb_strings strings\n";
print "* String offset: $string_offset\n";
print "* SID Len: $SID_leng SID offset: $SID_offset\n";
print "* Data len: $data_len Data offset: $data_offset\n";
}

# Process the rest of the record: Source program, computer name, SID
# and other strings
if (read($file, $rest, $length) < $length) {
die ("End of file reached while reading strings\n");
}

$rest_reencoded = pack "C*" , unpack "U0C*" , $rest;

# Split into several strings
my @strings = split(/\0\0/, $rest_reencoded);
my $str;
$str = $strings[0];
# hack to suppress unicode
$str =~ s/\0//g unless ($unicode);
print "Program: $str\n";
$str = $strings[1];
# hack to suppress unicode
$str =~ s/\0//g unless ($unicode);
print "Computer: $str\n";
my $i=0;
while ($i < $nb_strings) {
$str = $strings[$i+2];
$str =~ s/\0//g unless ($unicode);
print "String $i: $str\n";
$i++;
}
print "\n\n";
} else {
print "Reserved: $reserved\n" if ($debug);
print STDERR "RECORD REJECTED: reserved value fails to match!\n\n\n";
# Searching continues from where it is since this is a corrupted record
}
}


#
# Main program
#

# Variable declarations
my $evt_file = "";
my $record_sig;
my $record;
my $length;
my $dword;
# Option declarations
our ($opt_d, $opt_h, $opt_l, $opt_u);

# Process the command line parameters
getopts('dhlu');

# Debug option
print "\$opt_d:$opt_d\n" if (defined($opt_d));
# Help option
print "\$opt_h:$opt_h\n" if (defined($opt_d) && defined($opt_h));
# Localtime option
print "\$opt_l:$opt_l\n" if (defined($opt_d) && defined($opt_l));
# Unicode option
print "\$opt_u:$opt_u\n" if (defined($opt_d) && defined($opt_u));

# Obtain the file name
$evt_file = shift(@ARGV);
print "Event file: $evt_file\n" if (defined($opt_d) && defined($evt_file));

if ($opt_h) {
&usage();
}

# Open the selected file in binary mode.
open(FILE, $evt_file) or die "ERR: Couldn't open file $evt_file: $!";
binmode(FILE);

do {
&next_signature($opt_d, *FILE);

# The following condition should never be met, because:
# - A record has been found and the file has been rewound 8 bytes
# - Or EOF was reached and next signature ended the program
die("End of file reached: Incomplete record.\n")
if (read(FILE, $dword, 4) <= 0);
# Obtain the length of this record
$length = unpack "L", $dword;
# A record should be at least 56 bytes long
if ($length >= 56) {
# Read the record and process it
&process_record($opt_d, *FILE, $length, $opt_l, $opt_u);
} else {
# Probably corrupted record
print STDERR "Short record found and discarded! (Corrupted?)\n";
if ($opt_d) {
print "Record length was: $length\n";
}
# skip current signature to avoid infinite loop.
seek(FILE, 4, 1);
}
} while (!eof(FILE));
close FILE;

exit(0);
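
For reference, the fixed record header that the script decodes with unpack "LLLLLSSSSLLLLLL" can be parsed the same way in Python with the struct module (a sketch only; the field names follow the Perl variables, and the magic value 1699505740 is simply the ASCII bytes "LfLe" read as a little-endian DWORD):

```python
import struct
from datetime import datetime, timezone

# 52 bytes following the 4-byte record length: five 32-bit fields,
# four 16-bit fields, six 32-bit fields, all little-endian.
EVT_FIXED = struct.Struct("<5L4H6L")
LFLE_MAGIC = 0x654C664C  # the signature "LfLe" as a little-endian DWORD

def parse_evt_fixed(data):
    """Decode the fixed part of an EVT record (illustration only)."""
    (reserved, record_nb, time_gen, time_writ, event_id,
     event_type, nb_strings, evt_category, _reserved_flag,
     _cl_record, _string_offset, _sid_len, _sid_offset,
     _data_len, _data_offset) = EVT_FIXED.unpack(data[:EVT_FIXED.size])
    if reserved != LFLE_MAGIC:
        raise ValueError("not an event record: bad LfLe magic")
    return {
        "record": record_nb,
        "generated": datetime.fromtimestamp(time_gen, tz=timezone.utc),
        "written": datetime.fromtimestamp(time_writ, tz=timezone.utc),
        "event_id": event_id,
        "type": event_type,
        "strings": nb_strings,
        "category": evt_category,
    }
```

The time_gen and time_writ fields are plain Unix epoch timestamps, which is why the Perl version can feed them straight into gmtime() or localtime().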


Enjoy!


February 15, 2008

Internet Storm Center (ISC) Handler

Last month, January 2008, I became a handler of the Internet Storm Center (ISC). It is an honor for me to be the first Spanish handler in its history. Today (February 14, 2008) was my first shift as the Handler on Duty at the ISC, and it was a lot of fun, always considering that "with great power comes great responsibility" :)


Last week I published my first couple of posts, warning about multiple vulnerabilities in commonly used client software, and about the latest Adobe Reader vulnerability being exploited in the wild, a very serious issue; check that you are running Adobe Reader 8.1.2.

I published a couple of related posts today (plus a VoIP warning), as I strongly think we need to improve and change the way we manage third-party application updates (mainly on Windows, but on other OSes too - Linux & Mac), both at the corporate and the individual/user level. Only by quickly eliminating vulnerabilities through software updates, thus reducing the exposure of clients, are we going to be able to mitigate the impact of the security threats we deal with today, with botnets being one of the most relevant ones.

For your reference and reading, these have been my first ISC diaries. To get a feeling for what is happening on the Internet from a security perspective, the ISC diary should be one of your browser home pages (I had it as such before becoming a member).


January 20, 2008

Investigating File Deletion from Windows File Servers - Part III

This article is the continuation of parts I and II and it concludes this "Investigating File Deletion from Windows File Servers" series.

In Part I I provided a network capture file in pcap format (file_deletion_full_trace.cap) and asked a few questions about the deletion of some files from a Windows file server:

Q1 - How many files were deleted?
Q2 - When?
Q3 - How?
Q4 - Who did it?
Q5 - From where?


In Part II I answered questions Q1 and Q2. In this final article of the series I'll answer the remaining questions: Q3 to Q5.

Q3 - How?

From our previous investigation to answer Q1, we know that the files were deleted using the SMB protocol. OK, but what tool and/or procedure did the attacker use? Here we can only guess, since there are several SMB client applications out there and the attacker could have created and used his own. But we can make an educated guess.

By far the most widespread SMB client application is Windows' own "explorer.exe" (Windows Explorer), so we could ask ourselves: is it possible that the attacker used plain old Windows Explorer and deleted the files by selecting them and pressing the "Del" key (or right-click > delete)? The best way to test this hypothesis is to set up a test lab, follow the procedure just described while capturing the network traffic, and then compare the resulting trace with the capture file under study. If you do that, you will see that the network traces, in terms of SMB messages, are almost identical. That allows us to conclude that this was most probably the tool and procedure the attacker used to delete the files: Windows Explorer, select a file, right-click > delete.

In fact, that's exactly how these particular files were deleted. I know because I did it :-).

Q4 - Who did it?

We know the files were deleted through an SMB session (potentially, each file from a different SMB session) from IP address 10.10.10.11. So we could be tempted to answer Q4 and Q5 right away, saying that the files were deleted by whoever was sitting at 10.10.10.11 at that point in time. But we can do better.

When an SMB session is established the user accessing the server needs to prove his or her identity unless anonymous access is being used. In order to see who (which Windows user) was the owner of the session that deleted the files we need to find the corresponding session establishment and look at the authentication tokens.

Note: From now on (Q4 and Q5) I will be concentrating on the deletion of the first file only (\\SERVER1\PROJECT1\file4.txt, FID 0x8004). The same procedure could be applied to the other file (\\SERVER1\C$\Shared Folders\Project1\file8.txt, FID 0x8001) to obtain the corresponding answers for Q4 and Q5. I'll give the answers for that second file too, but I won't be showing the procedure again. Instead, I'll leave that as an exercise for the reader.

Let's start by selecting the packet where file ID 0x8004 was marked for deletion and let's check the SMB User ID in it:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "smb.disposition.delete_on_close==1 and smb.fid==0x8004"

1707 903.726800 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8004

C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==1707" -V find "User ID:"

User ID: 2049


The User ID (2049) is assigned by the server at session establishment (SMB Session Setup AndX Request/Response, smb.cmd=0x73) so let's look for those packets before frame 1707:

C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==1707 or \
(frame.number<=1707 and smb.uid==2049 and smb.cmd==0x73)"
598 10.10.10.3 -> 10.10.10.11 SMB Session Setup AndX Response
828 10.10.10.3 -> 10.10.10.11 SMB Session Setup AndX Response
1527 10.10.10.3 -> 10.10.10.11 SMB Session Setup AndX Response
1707 10.10.10.11 -> 10.10.10.3 SMB Trans2 Request,
SET_FILE_INFO, FID: 0x8004


We see that the last time that UID is assigned before frame 1707 is on frame 1527, SMB Session Setup AndX Response. Let's find its corresponding request:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==1527" -V find "Response to:"
[Response to: 1525]


We see that the request was in frame 1525. Let's check out the authentication part inside that frame:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==1525" -V more
[...]
Ticket
Tkt-vno: 5
Realm: SANS.ORG
Server Name (Service and Instance): cifs/server1.sans.org
Name-type: Service and Instance (2)
Name: cifs
Name: server1.sans.org
enc-part rc4-hmac
Encryption type: rc4-hmac (23)
Kvno: 5
enc-part: C26F4754CD8AEC79F9A8C095147CC075F6038D85074CC0E3...
[...]


The request contains a Kerberos service ticket for service CIFS (SMB) at server "server1.sans.org". This doesn't identify the user yet, but we are getting closer. What we need to do now is see if we can find the request for this ticket to the Ticket Granting Service (TGS) that must have occurred before, because that request must have contained the Ticket Granting Ticket (TGT) obtained by the user at logon.

Let us find all previous appearances of this service ticket by looking for the encrypted part (enc-part). Using the GUI (Wireshark) we can easily build a filter that will match the whole "enc-part" contents, but to avoid typing too much in the TUI (Text-based User Interface, Tshark) we'll just look for the first four bytes and hope there is no collision:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "kerberos.ticket.data[0:4]==C2:6F:47:54"
824 10.10.10.3 -> 10.10.10.11 KRB5 TGS-REP
826 10.10.10.11 -> 10.10.10.3 SMB Session Setup AndX Request
1525 10.10.10.11 -> 10.10.10.3 SMB Session Setup AndX Request
2813 10.10.10.11 -> 10.10.10.3 SMB Session Setup AndX Request


We see that the service ticket was used in 3 different SMB session establishments and, before that, on frame 824, in a response from the Ticket Granting Service. That's the frame we are interested in right now. Let us see its contents:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==824" -V more
[...]
Source: 10.10.10.3 (10.10.10.3)
Destination: 10.10.10.11 (10.10.10.11)
[...]
User Datagram Protocol, Src Port: 88 (88), Dst Port: 1076 (1076)
[...]
Kerberos TGS-REP
Pvno: 5
MSG Type: TGS-REP (13)
Client Realm: SANS.ORG
Client Name (Principal): david
Name-type: Principal (1)
Name: david
Ticket
Tkt-vno: 5
Realm: SANS.ORG
Server Name (Service and Instance): cifs/server1.sans.org
Name-type: Service and Instance (2)
Name: cifs
Name: server1.sans.org
enc-part rc4-hmac
Encryption type: rc4-hmac (23)
Kvno: 5
enc-part: C26F4754CD8AEC79F9A8C095147CC075F6038D85074CC0E3...
[...]


We see that the response indicates that the ticket was issued to client principal (user name) "david" of realm "SANS.ORG". That is, user "david" belonging to the Windows domain "sans.org" (david@sans.org).

That answers Q4 for file4.txt: it was "david@sans.org" or someone using his authentication credentials who deleted the file. If you apply the same procedure for file8.txt you will discover that it was deleted using a different identity, namely "user1@sans.org".

Q5 - From where?

We already know the deletions were performed from IP address 10.10.10.11, but can we identify the box that had that IP address at the time, for instance by its computer name or NetBIOS name? The answer is yes, we can.

First, let us find the Kerberos request to the TGS corresponding to the response we just saw (frame 824, see above):
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "udp.port==88 and udp.port==1076"
823 568.900098 10.10.10.11 -> 10.10.10.3 KRB5 TGS-REQ
824 568.901698 10.10.10.3 -> 10.10.10.11 KRB5 TGS-REP


That request must contain a Ticket Granting Ticket (TGT) of the user:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==823" -V more
[...]
Ticket
Tkt-vno: 5
Realm: SANS.ORG
Server Name (Service and Instance): krbtgt/SANS.ORG
Name-type: Service and Instance (2)
Name: krbtgt
Name: SANS.ORG
enc-part rc4-hmac
Encryption type: rc4-hmac (23)
Kvno: 2
enc-part: 92EF34EF9024B61AAD506AECE425F632D5958EED812718CD...
[...]


That TGT must have been obtained from the Authentication Service (AS) at some point:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "kerberos.ticket.data[0:4]==92:EF:34:EF"
773 10.10.10.3 -> 10.10.10.11 KRB5 AS-REP
774 10.10.10.11 -> 10.10.10.3 KRB5 TGS-REQ
776 10.10.10.11 -> 10.10.10.3 KRB5 TGS-REQ
796 10.10.10.11 -> 10.10.10.3 KRB5 TGS-REQ
823 10.10.10.11 -> 10.10.10.3 KRB5 TGS-REQ


We see it was obtained on frame 773. Let us find the corresponding request:
C:\>tshark -nn -r file_deletion_full_trace.cap -R "frame.number==773" -V more
[...]
Source: 10.10.10.3 (10.10.10.3)
Destination: 10.10.10.11 (10.10.10.11)
User Datagram Protocol, Src Port: 88 (88), Dst Port: 1069 (1069)
[...]
C:\>

C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "udp.port==88 and udp.port==1069"
772 10.10.10.11 -> 10.10.10.3 KRB5 AS-REQ
773 10.10.10.3 -> 10.10.10.11 KRB5 AS-REP


We see that frame 772 contains the Kerberos request for the TGT. Let us see what this request contains:
C:\>tshark -nn -r file_deletion_full_trace.cap \
-R "frame.number==772" -V more
[...]
Source: 10.10.10.11 (10.10.10.11)
Destination: 10.10.10.3 (10.10.10.3)
User Datagram Protocol, Src Port: 1069 (1069), Dst Port: 88 (88)
[...]
Client Name (Enterprise Name): david@sans.org
Name-type: Enterprise Name (10)
Name: david@sans.org
Realm: SANS.ORG
[...]
HostAddress CLIENTXP1<20>
Addr-type: NETBIOS (20)
NetBIOS Name: CLIENTXP1<20> (Server service)


Finally, we see that the request for a TGT contains not only the principal name (david@sans.org), which we already knew, but also the NetBIOS name of the computer from which the request was sent: "CLIENTXP1".

This answers our last question, Q5, for file file4.txt: the file was deleted from the computer CLIENTXP1, which happened to have IP address 10.10.10.11 at that time.

If you care to follow the same procedure for file8.txt you will be able to verify that the hostname from which user1@sans.org authenticated was the same, "CLIENTXP1". Actually, what I did to delete the second file was this: while still logged in as david@sans.org onto clientxp1 I used the "Connect using a different user name" option in Windows Explorer to map the second shared folder and provided the authentication credentials (username and password) of user1@sans.org. Then I proceeded like the first time: I selected the file and pressed Delete.

A final note: I have used the domain name "sans.org" all across the article, and you will also find it in the network trace. This doesn't mean that any systems belonging to The SANS Institute (http://www.sans.org/) were actually involved in the test lab I set up, nor in the real incidents this analysis is based on. It just happens that the machines I used in the test lab were two of the virtual machines I use when I teach SECURITY 505: Securing Windows.

All the best,
David


November 20, 2007

Anti-rootkit Windows Tools: Searching for the Hidden

Yesterday George Bakos, SANS ISC handler, posted an entry asking for tools for malware analysis and removal, something we are involved in professionally, or personally with the family ;) Especially, we need to be ready for the holidays and have the incident handling jump bag (USB drive or CD) ready to go to clean up all the computers around us. If you are interested, check the follow-up by Kevin Liston on the SANS ISC handler's diary.

I was involved in some malware cleanup tasks this weekend, so I reviewed my toolkit. One set of tools that should be included in any jump bag is the anti-rootkit tools, given the number of malware specimens that include rootkit capabilities today. The following list (alphabetically ordered) includes different FREE Windows tools provided by AV vendors or individuals for this specific purpose (we leave other OSes (Linux, FreeBSD, etc) aside this time). The list contains the direct tool download link, the main tool web page and author, the current version (as of 20/11/2007), and some other details:
The beauty of most of them (unless otherwise noted) is that they do not require any installation. They are single executable files that can be run, with Administrator privileges, from a USB dongle or CD to identify anomalies in the system, such as hidden processes, network connections, files and directories, registry entries, kernel hooks, drivers, etc. Most of these tools are integrated on the respective vendor commercial AV tool.

Rootkits are among the most complex and advanced malicious software components today, so the tools mainly focus on the identification phase. Successfully removing a (kernel) rootkit from a system is often a really complex task. For the same reason, you also need to familiarize yourself with the tools' output, as it is common to get a few false positives from legitimate artifacts running inside Windows.

Get ready for the holidays! Download all (or a few) of them now, and include these tools in your jump bag. It is highly recommended to run at least 2-3 of these tools and compare the results, trying to find glitches in The Matrix. More information and tools about anti-rootkit technologies are available at antirootkit.com.

I've always been a great fan of rootkit and anti-rootkit technologies, having published documents about Linux kernel rootkits and rootkits from a defensive perspective. If anyone (magazine, company, vendor, etc) is interested in getting me involved in an in-depth analysis and comparison of all (or several) of the above anti-rootkit products/technologies, let me know (raul DOT siles AT gmail DOT com).


October 28, 2007

Enjoy your IDS (Part 1)

Although one would guess that you install an IDS (or more) within your infrastructure to get alerts and react to them, that is usually far from reality. Many of those systems, even quite a few in big organizations, produce logs that are only reviewed when a serious incident happens. I know that is hard to imagine, but let me guess that it can happen to you too.

Unfortunately, the number of alerts generated by an IDS under normal conditions is usually huge, especially when it hasn't been tuned for the environment it lives in. In those cases, reading the alerts one after the other is not an option for the average security professional, who must dedicate his time to many other things. As computer geeks we believe that computers can do these kinds of tasks better than we can, so scripting should be your friend here.

In today's post I will not only give you my opinion on how to do things, but also do some extra work ( :-) ) and contribute a simple script to extract some information from your alerts file. So let me first define what the script can do for you.

If you don't read your alerts very often, chances are that you have to check a lot of false positives or, at least, not very relevant alerts. So it would be nice if you could separate your data into groups of different alerts and decide whether they are relevant to you (maybe for later investigation) or can be safely ignored.

Let's assume that:

  • your IDS is snort,

  • the alerts are generated in the default format: one line per alert, with timestamp, interface, alert ID, alert message, priority, protocol, source IP address, source port, destination IP address, and destination port.

Your first move could be to group the alerts by the alert message (i.e. the supposed attack that has been detected). Each of the groups resulting from that classification can contain alerts that can be safely ignored together with others that must be treated as an incident, and the difference could be either the source IP address or the destination IP address (maybe even the timestamp). So what a useful tool could do is classify the alerts into groups using one of the fields, showing the number of alerts per group. To go deeper in your investigation, you also want the tool to be able to fix that field to the value of the group you want to investigate further (e.g. alert equals portscan) and show the data grouped by another field (for example, source IP address). If you use the criteria I have just mentioned and find several portscanning attempts from one internal address, you can investigate further whether:

  • This system is being used by some evil hacker.

  • The system is compromised and some kind of malware is running in it.

This is what this program does. To be able to get the groups of alerts you would use:
alertgrp.py /var/log/snort/alert alert
Then, after the portscan alerts catching your eye, you can try to identify the source IP address by using:
alertgrp.py /var/log/snort/alert srcip alert='.*portscan.*'
You can verify which IP addresses were scanned by the allegedly guilty system using:
alertgrp.py /var/log/snort/alert dstip alert='.*portscan.*',srcip=10.0.0.1

I hope that these examples give you some glimpse of the way the tool can be used. Anyhow, the source is there and you can read it and modify it at your will.
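
For the curious, the grouping logic behind those three invocations can be sketched in a few lines of Python. Note that this is only an illustration, not the actual alertgrp.py; the alert-line regex and the field names (alert, srcip, dstip, ...) are my own simplified take on the default one-line Snort format:

```python
import re
from collections import Counter

# Simplified one-line Snort alert layout assumed here (illustrative only).
ALERT_RE = re.compile(
    r"(?P<timestamp>\S+)\s+\[\*\*\]\s+(?P<alert>.+?)\s+\[\*\*\].*?"
    r"(?P<srcip>\d+\.\d+\.\d+\.\d+):?(?P<srcport>\d*)\s+->\s+"
    r"(?P<dstip>\d+\.\d+\.\d+\.\d+):?(?P<dstport>\d*)"
)

def group_alerts(lines, field, **filters):
    """Count alerts grouped by `field`, keeping only lines whose named
    fields match the given regular-expression filters."""
    counts = Counter()
    for line in lines:
        m = ALERT_RE.search(line)
        if not m:
            continue
        fields = m.groupdict()
        if all(re.search(rx, fields[k]) for k, rx in filters.items()):
            counts[fields[field]] += 1
    return counts
```

Grouping by "alert" with no filters corresponds to the first invocation above, and passing alert='.*portscan.*' while grouping by "srcip" or "dstip" corresponds to the other two.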

I hope you agree with me that the language you use to write this kind of script is not very relevant as long as it can work with regular expressions and you are familiar enough with it to make it do what you want (please, make my day with a programming language religious war ;-P). In this case I chose Python (instead of Perl or shell script + awk or grep) because I want to use this script in my next post and integrate its results with a visualization tool, to enhance the experience and get more information from a glance at the data. More on that in the best blog near you really soon.

Please keep in mind that I am not claiming to be a Python guru (which in fact I am not), so the script might (and probably will) contain errors. If you find any, let me know and I'll do my best to fix them and update the version. However, using memory to store all the field values instead of just counting them while reading is intentional, and you will see why in my next post.

Hope you like to put some regular expressions on your life!


October 08, 2007

Investigating File Deletion from Windows File Servers - Part II

This article is the continuation of Part I, where I provided a network capture file in pcap format (file_deletion_full_trace.cap) and asked a few questions about the deletion of some files from a Windows file server:

Q1 - How many files were deleted?
Q2 - When?
Q3 - How?
Q4 - Who did it?
Q5 - From where?


First of all, let me thank those of you who submitted responses even though there was no prize. Thank you so much for sharing! Most of you, though, only included straight answers without much information about how you obtained them. That's OK, but if you did things differently from what I'll be showing, please leave a comment with your method so we can all learn from it! There's usually more than one way to do... about everything!

OK, let's go for it. In this article I'll be addressing questions Q1 and Q2 and I'll leave the rest for a future article.

Q1 - How many files were deleted?

A quick method to find out which files were deleted through SMB is to use Wireshark's display filter "smb.disposition.delete_on_close==1", either in the GUI or with "tshark", Wireshark's command-line version. Note that I'll be showing tshark (text) output to keep the post as light as possible, but you are welcome to use the same filters in the GUI if you want to follow along.

Again, using that filter:
C:\> tshark -nn -r file_deletion_full_trace.cap \
-R smb.disposition.delete_on_close==1 -t ad

1707 2007-03-23 09:45:55.436826 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8004
2694 2007-03-23 09:51:29.713222 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8001


We see that two files were marked for deletion. File deletion in SMB is a two-step process: first, the file is marked to be deleted on close; then, when the client closes the file, the server proceeds to delete it. If we want to see the actual moment of the deletion, we can show the "close" commands too:



C:\> tshark -nn -r file_deletion_full_trace.cap \
-t ad -R "smb.disposition.delete_on_close==1 or \
(smb.cmd==0x04 and (smb.fid==0x8004 or smb.fid==0x8001))"

1707 2007-03-23 09:45:55.436826 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8004
1709 2007-03-23 09:45:55.444123 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8004
1722 2007-03-23 09:45:58.063573 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8001
2694 2007-03-23 09:51:29.713222 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8001
2696 2007-03-23 09:51:29.717148 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8001


Frame 1707 marks file 0x8004 for deletion and frame 1709 closes that file, finally triggering its deletion. The same goes for file 0x8001 in frames 2694 and 2696. Frame 1722 shows another close command for a file also identified as 0x8001, but that corresponds to a different file, since file identifiers get reused after the previous file has been closed.

Now we need to map those file identifiers to file names. That's easy, since Wireshark can tell us if we just ask for full payload decoding (option -V in tshark; output trimmed for clarity):

Frame 1707
[...]
Tree ID: 2051
[Path: \\SERVER1\PROJECT1]
[Mapped in: 1659]
Process ID: 1440
User ID: 2049
Multiplex ID: 5504
Trans2 Request (0x32)
[...]
SET_FILE_INFO Parameters
FID: 0x8004
[File Name: \file4.txt]
[Opened in: 1704]
Level of Interest: Set Disposition Information (1013)
[...]
.... ...1 = Delete on close: DELETE this file when closed
[...]
Frame 2694
[...]
Tree ID: 2052
[Path: \\SERVER1\C$]
[Mapped in: 1767]
Process ID: 348
User ID: 2051
Multiplex ID: 28674
Trans2 Request (0x32)
[...]
SET_FILE_INFO Parameters
FID: 0x8001
[File Name: \Shared Folders\Project1\file8.txt]
[Opened in: 2691]
Level of Interest: Set Disposition Information (1013)
[...]
.... ...1 = Delete on close: DELETE this file when closed


Combining the tree paths and the file names we obtain the full network paths of the files being deleted:

FID 0x8004:  \\SERVER1\PROJECT1\file4.txt
FID 0x8001: \\SERVER1\C$\Shared Folders\Project1\file8.txt


Note that although both files may have lived in the same directory on the server (as was the case here), they were accessed and deleted through different network shares (\\SERVER1\PROJECT1 and \\SERVER1\C$).

A question remains, though. How does Wireshark know which file name corresponds to which file identifier (FID) and network share? Hint: it's using the ID fields (Tree ID, Process ID, User ID, Multiplex ID). Yes, but how? Would you be able to verify whether Wireshark is correct by checking those fields yourself with appropriate filters, without looking at Wireshark's automatic decoding shown between square brackets? My recommendation: try it! It's fun and you'll learn a lot about SMB!
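A toy model in Python of the state Wireshark must keep may help to see the FID reuse at work. The frame numbers and file names come from the trace above; note that, for brevity, this sketch keys on the FID alone and ignores the session scoping (User ID, Tree ID) that a real decoder also needs:

```python
# A FID maps to a file name from the open (NT Create) until the close,
# after which the same FID value may be handed out again for a
# different file.
open_files = {}   # FID -> file name (current state only)
resolved = []     # (frame, FID, file name) for the frames of interest

def nt_create(fid, name):
    open_files[fid] = name

def set_delete_on_close(frame, fid):
    # Resolve the FID against the files open at this moment.
    resolved.append((frame, fid, open_files.get(fid)))

def close(fid):
    open_files.pop(fid, None)   # the FID becomes reusable

# Replay the relevant events from the trace:
nt_create(0x8004, r"\file4.txt")                          # frame 1704
set_delete_on_close(1707, 0x8004)                         # frame 1707
close(0x8004)                                             # frame 1709
close(0x8001)                                             # frame 1722: an earlier, different file
nt_create(0x8001, r"\Shared Folders\Project1\file8.txt")  # frame 2691
set_delete_on_close(2694, 0x8001)                         # frame 2694
close(0x8001)                                             # frame 2696

print(resolved)
```

After replaying the events, `resolved` holds each delete-on-close request next to the name the FID pointed to at that moment, which is the mapping Wireshark shows between square brackets.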

OK, Q1 solved. On to Q2.

Q2 - When?

This would seem trivial after having found all the information above. However, I don't think it is quite that simple. Let me explain why.

It might seem obvious that the files were deleted at the following dates and times:

\\SERVER1\PROJECT1\file4.txt:
2007-03-23 09:45:55.436826 (frame 1707)

\\SERVER1\C$\Shared Folders\Project1\file8.txt:
2007-03-23 09:51:29.713222 (frame 2694)


However, I have two problems with that statement.

For one thing, it can be argued that since these frames only contained the request for the files to be marked for deletion, the files were actually deleted a little later, when the files were closed. Strictly speaking, it would be a little after even that, when the server received the close command and proceeded to delete them; but how long would that be, if we want to be exact?

Let me add an additional piece of information to consider. Right after each deletion, we can see in the trace a message sent by the server to the client notifying it that the deletion has been completed (filter "smb.nt.notify.action==2", frames 1711 and 2699). Combining this filter with the previous one to show the mark-for-deletion, close and notify commands for each file, we obtain this:

C:\>tshark -nn -r file_deletion_full_trace.cap -t ad \
-R "smb.disposition.delete_on_close==1 or (smb.cmd==0x04 and \
(smb.fid==0x8004 or smb.fid==0x8001)) or smb.nt.notify.action==2 \
and not frame.number==1722"

1707 2007-03-23 09:45:55.436826 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8004
1709 2007-03-23 09:45:55.444123 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8004
1711 2007-03-23 09:45:55.457336 10.10.10.3 -> 10.10.10.11
SMB NT Trans Response, NT NOTIFY
2694 2007-03-23 09:51:29.713222 10.10.10.11 -> 10.10.10.3
SMB Trans2 Request, SET_FILE_INFO, FID: 0x8001
2696 2007-03-23 09:51:29.717148 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8001
2699 2007-03-23 09:51:29.718439 10.10.10.3 -> 10.10.10.11
SMB NT Trans Response, NT NOTIFY


Now we can be more precise and say that the files were deleted at some point in the following time intervals:

\\SERVER1\PROJECT1\file4.txt, between
2007-03-23 09:45:55.444123 (frame 1709), and
2007-03-23 09:45:55.457336 (frame 1711)

\\SERVER1\C$\Shared Folders\Project1\file8.txt, between:
2007-03-23 09:51:29.717148 (frame 2696), and
2007-03-23 09:51:29.718439 (frame 2699)


However, I still have a problem with those timestamps. Let us concentrate on the first of those frames (1709), for example. It was seen on the network at 09:45:55.444123 on 2007-03-23, right? OK, but 09:45 in Madrid, in New York, or where? Actually, those of you living in a time zone different from mine and trying the above commands will already be seeing different timestamps for the same frame numbers! Stating a date and time without stating the time zone is ambiguous.

To make a long story short, pcap files contain timestamps in UTC and Wireshark translates them to local time at display time. You can find more information on Wireshark's website.

You can see the difference in the displayed time by changing the time zone on your computer. Try this:

C:\>systeminfo | find "Time Zone"
Time Zone: (GMT+01:00) Brussels, Copenhagen, Madrid, Paris

C:\>tshark -n -r file_deletion_full_trace.cap -t ad \
-R frame.number==1709

1709 2007-03-23 09:45:55.444123 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8004


Then, change your time zone to something different (e.g. GMT) and try again:
(tip: you can invoke the Date and Time control panel applet directly with "control timedate.cpl")

C:\>systeminfo | find "Time Zone"
Time Zone: (GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London

C:\>tshark -n -r file_deletion_full_trace.cap -t ad \
-R frame.number==1709

1709 2007-03-23 08:45:55.444123 10.10.10.11 -> 10.10.10.3
SMB Close Request, FID: 0x8004


Obviously, the above output is from a Windows system. Unix folks will need to play with the environment variable "TZ".
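The same store-in-UTC, convert-at-display behavior can be reproduced in a few lines of Python. This is just a sketch using the standard zoneinfo module (available since Python 3.9); the zone names stand in for the Windows time zone entries shown above:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The pcap stores the timestamp of frame 1709 as seconds (and
# microseconds) since the Unix epoch, i.e. in UTC, no matter where
# the capture was taken.
frame_1709 = datetime(2007, 3, 23, 8, 45, 55, 444123, tzinfo=timezone.utc)

# Display-time conversion, which is what Wireshark does with the
# local time zone of the analysis machine:
madrid = frame_1709.astimezone(ZoneInfo("Europe/Madrid"))  # GMT+01:00
london = frame_1709.astimezone(ZoneInfo("Europe/London"))  # GMT

print(madrid.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2007-03-23 09:45:55.444123
print(london.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2007-03-23 08:45:55.444123
```

Same instant, two different wall-clock renderings, exactly like the two tshark runs above.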

Finally, bear in mind that the option to "Automatically adjust clock for daylight saving changes" also affects the output, although in this particular case it doesn't, because on March 23, 2007 the clocks had not yet been changed for the summer.

With all this, we can conclude that the files were deleted in the time frames stated above (*around* 09:45:55 and 09:51:29 on 2007-03-23), those times being expressed in the timezone "(GMT+01:00) Brussels, Copenhagen, Madrid, Paris" with daylight saving changes enabled.

Answers to Q3 through Q5 to come in a future article...

Labels: ,

August 29, 2007

Investigating File Deletion from Windows File Servers - Part I

I have found myself in this situation a few times now: some critical files disappear from a file server and I am tasked to find out how it happened.

Sometimes I was able to solve the mystery, but other times I couldn't. The most important factor is the information available for the investigation. Give me a full network trace of the server's traffic and lots of auditing information in the system's logs and I'll tell you that the chances of success are pretty high. Take away any of these elements and things become much more difficult.

Possible causes for important files 'magically' disappearing from a Windows file server are almost infinite. Just to name a few, it could be a bug in the operating system (I haven't seen this kind of bug in many years, but it's certainly possible), malicious software running on the server (this I've seen much more often), a malicious system administrator, or plain user error (even more often).

Yet another possibility is that someone with valid authentication credentials (e.g. username and password) accessed the folder containing the files through the network using the normal Windows file sharing protocol (SMB/CIFS) and simply deleted them, intentionally or unintentionally. This is the case that I'll be analyzing in detail in this series of articles.

So, how far could you get in finding out who removed the files, when, how, and from where, if all you had was a network trace? And if you didn't have a network trace but you had system logs? Do you want to try?

Let us start with the network trace. Here you can find a network capture file in pcap format (tcpdump, Wireshark, etc.) obtained in a lab environment simulating the deletion of some files from a file server. The lab network was just a single Ethernet segment with two systems: a Windows XP client and a Windows Server 2003 server.

If you want to play around with it, (just for fun and the learning experience, no prizes this time, sorry) you can try to find the answers to the following questions:

Q1 - How many files were deleted?
Q2 - When?
Q3 - How?
Q4 - Who did it?
Q5 - From where?

In the next article in this series I'll be showing how to obtain the answers to these questions from the network capture file provided. So, stay tuned!

Labels: ,

January 15, 2007

Windows Command-Line Kung Fu (3): solutions

Please, find below the official solutions for the two RaDaJo WMIC challenges published at the end of 2006. Sorry but there are no winners because we didn't get any response at all :-( We will offer prizes next time! ;-)

Answer to Challenge 1:
First of all, you need to identify the flaw the challenge question referred to. You can use the NIST NVD advanced search capabilities, querying for "DHCP" in July 2006. There is only one high-severity match ;-)

This critical DHCP flaw (CVE-2006-2372) was announced in the MS06-036 security bulletin on July 11, 2006, affecting all main Microsoft operating systems: 2000, XP and 2003. An attacker could exploit this buffer overflow vulnerability by sending properly crafted DHCP responses to a client's DHCP requests and remotely execute code, 0wn1ng the client. The flaw was originally reported by CybSec and public exploit(s) are available. It especially affects clients with networking capabilities in untrusted environments (for this to work, the client does not even need to have an IP address yet), such as WiFi hotspots. How long until we see this exploit in a Karma module? It is time to design layer-2 (wired and wireless) firewalls!

The simplest solution is to patch. The update required to fix this vulnerability is KB914388. Therefore, in order to find exposed systems you can run the following WMIC command locally:
C:\> wmic qfe | find "KB914388"
If the system is vulnerable, you will get no output. If the patch has been installed, you will get a response similar to the following (local execution on host "SULLY"):
C:\>wmic qfe | find "KB914388"
SULLY File 1 KB914388
SULLY Security Update for Windows XP (KB914388) Update KB914388
SYSTEM 7/31/2006 SP3
C:\>
To test the patch installation remotely, you can use the following command, just changing the "/node:" argument to check all your Windows boxes:
C:\> wmic /user:Administrator /password:SECRET /node:10.10.10.10 qfe | find "KB914388"
What about the password in the WMIC command? It belongs to the (local or domain) administrator and it will be recorded by any command-line history tool. Oh my! Consider this risk in your incident handling processes! From the network perspective, you need to worry about WMIC traffic as much as you do about the other standard Windows NetBIOS/RPC protocols, since WMIC runs on top of them.

Besides, there is a more effective way of running WMIC commands remotely. Using the WMIC built-in filtering capabilities you can reduce the amount of network traffic generated by the command above, because the filter is applied by the remote WMI agent:
C:\>wmic qfe where HotFixID="KB914388"
Caption CSName Description FixComments HotFixID
InstallDate InstalledBy InstalledOn Name ServicePackInEffect Status
SULLY Security Update for Windows XP (KB914388) Update KB914388
SYSTEM 7/31/2006 SP3
C:\>

Answer to Challenge 2:
The only information the incident handler has is the destination port number captured by the sniffer or NIDS. The goal is to find the service bound to it, following a two-step process:

1) Check if TCP port 17503 is active and find the process associated with it:
C:\> netstat -nao | find "17503"
TCP 0.0.0.0:17503 0.0.0.0:0 LISTENING 1493
It seems there is no way to get network connection information using WMIC. For this task, netstat is the tool. If you know some WMIC tips and tricks for this, please let me know. This may be one of the reasons why Windows has no equivalent of Linux's "lsof -i".
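Since WMIC won't do it, the netstat step can at least be automated. This is my own small helper, not part of the challenge; the regex assumes `netstat -nao` style output:

```python
import re

def pid_listening_on(netstat_output, port):
    """Return the PID of the process listening on the given TCP port,
    parsing `netstat -nao` style output, or None if not found."""
    pattern = re.compile(
        r"TCP\s+\S+:" + str(port) + r"\s+\S+\s+LISTENING\s+(\d+)")
    for line in netstat_output.splitlines():
        m = pattern.search(line)
        if m:
            return int(m.group(1))
    return None

# The sample line mirrors the netstat output shown above.
sample = "  TCP    0.0.0.0:17503    0.0.0.0:0    LISTENING    1493"
print(pid_listening_on(sample, 17503))  # 1493
```

The returned PID can then be fed straight into the WMIC queries of steps 1.5 and 2.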

1.5) [Optional] Get basic information about the process (binary name, options...):
C:\> wmic process where processid=1493 get processid, name, commandline
CommandLine Name ProcessId
nc -l -p 17503 nc.exe 1493
2) Map the process to the service:
There are two main ways of doing this: using the process ID (PID) or the process name. The former is more accurate (the PID is unique), and the latter helps to identify all the services using the same binary.

- By process ID:
C:\> wmic service where processid=1493 get name, pathname, processid
- By process name:
C:\> wmic service where (PathName like "%nc%") get name, pathname, processid

Although the challenges suggested using WMIC only, it is very good incident handling practice to double-check the status of a system using multiple tools, especially when dealing with advanced malware such as rootkits. Outside WMIC, step 2 can be accomplished more effectively by running:
C:\> tasklist /svc | find "1493"
nc.exe 1493 Backdoor Service

How can attackers create a backdoor netcat service in Windows? They can use the Instsrv.exe and Srvany.exe tools from the Windows Resource Kit, or do it through the built-in sc tool.

<plug> Interested in advanced incident handling training? I will be teaching SANS SEC504, Hacker Techniques, Exploits & Incident Handling, in Dubai next March, 10-15.</plug>

Labels: