Scan Providers

Version 2.5.0 is close to being released and comes with the last type of extension exposed to Python: scan providers. Scan provider extensions are not only the most complex type of extension, but also the most powerful one, as they allow adding support for new file formats entirely from Python!

This feature required exposing a lot more of the SDK to Python and can’t be covered completely in one post. This post introduces the topic, while future posts will show real-life examples.

Let’s start from the list of Python scan providers under Extensions -> Scan providers:

Scan provider extensions

This list is retrieved from the configuration file ‘scanp.cfg’. Here’s an example entry:

The name of the section has two purposes: it specifies the name of the format being supported (in this case ‘TEST’) and also the extension which is automatically associated with that format (in this case ‘.test’, case insensitive). The hard limit for format names is currently 9 characters; this may change in the future if more are needed. The label is the description. The ext parameter is optional and specifies additional extensions to be associated with the format. group specifies the type of file being supported; available groups are: img, video, audio, doc, font, exe, manexe, arch, db, sys, cert, script. file specifies the Python source file and allocator the function which returns a new instance of the scan provider class.
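Putting those fields together, an entry might look roughly like this (all values are purely illustrative):

    [TEST]
    label = Test format files
    ext = tst
    group = db
    file = test_scan_provider.py
    allocator = allocator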

Let’s start with the allocator:

It just returns a new instance of TestScanProvider, which is a class derived from ScanProvider:
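In its simplest form that allocator is a one-liner; a minimal sketch:

    def allocator():
        return TestScanProvider()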

Every scan provider has some mandatory methods it must override, let’s begin with the first ones:

_clear gives a chance to free internal resources when they’re no longer used. In Python this is not usually important as member objects will automatically be freed when their reference count reaches zero.

_getObject must return the internal instance of the object being parsed. This must return an instance of a CFFObject derived class.

_initObject creates the object instance and loads the data stream into it. In the sample above we assume it is successful; otherwise, we would have to return SCAN_RESULT_ERROR. This method is not called from the main thread, so it doesn’t block the UI during long parse operations.
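A rough sketch of how these three overrides fit together (the method signatures and the way the data stream is obtained are assumptions, not the SDK’s exact API):

    class TestScanProvider(ScanProvider):

        def _clear(self):
            # free internal resources; usually nothing to do in Python
            self.obj = None

        def _getObject(self):
            # return the CFFObject-derived instance being parsed
            return self.obj

        def _initObject(self):
            # create the object and load the data stream into it;
            # on failure we would return SCAN_RESULT_ERROR instead
            self.obj = TestObject()
            self.obj.Load(self.getStream())  # getStream() is a placeholder for however the stream is obtained
            return SCAN_RESULT_OK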

Let’s take a look at the TestObject class:

This is a minimalistic implementation of a CFFObject-derived class. Usually it should contain at least an override of the CustomLoad method, which gives the opportunity to fail when the data stream is first loaded through the Load method. SetDefaultEndianness wouldn’t even be necessary here, as every object defaults to little endian. SetObjectFormatName, on the other hand, is very important, as it sets the internal format name of the object.
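A minimal sketch of such a class, using the method names mentioned above (the endianness constant name is a placeholder):

    class TestObject(CFFObject):

        def __init__(self):
            CFFObject.__init__(self)
            # not strictly necessary: little endian is already the default
            self.SetDefaultEndianness(ENDIANNESS_LITTLE)  # placeholder constant name
            # very important: sets the internal format name of the object
            self.SetObjectFormatName("TEST")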

Let’s now take a look at how we scan a file:

The code above will issue a single warning concerning native code. When _startScan returns SCAN_RESULT_OK, _threadScan will be called from a thread other than the main UI one. The logic behind this is that _startScan is actually called from the main thread; if the scan of the file doesn’t require complex operations, as in the case above, the method can return SCAN_RESULT_FINISHED and _threadScan won’t be called at all. During a threaded scan, an abort by the user can be detected via the isAborted method.
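Conceptually the flow looks roughly like this (a sketch built around the method names just described; how warnings and entries are actually added is omitted):

    def _startScan(self):
        # called from the main thread: a trivial scan could finish here
        # by returning SCAN_RESULT_FINISHED, skipping _threadScan entirely
        return SCAN_RESULT_OK

    def _threadScan(self):
        # called from a worker thread when _startScan returned SCAN_RESULT_OK
        while not self.isAborted():
            # ... perform the long-running scan work and add entries here ...
            break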

From the UI point of view, when a scan entry is clicked in the summary, the scan provider is supposed to return UI information.

This will display a text field with predefined content when the user clicks the scan entry in the summary. This is fairly easy, but what happens when we have several entries of the same type and need to differentiate between them? That’s where the data member of ScanEntryData plays a role: this string will be included in the report XML and passed back to _scanViewData as an XML node.

For instance:

Becomes this in the final XML report:

The dnode argument of _scanViewData points to the ‘d’ node and its first child will be the ‘o’ node we passed. The xml argument represents an instance of the NTXml class, which can be used to retrieve the children of the dnode.

But this is only half of the story: some of the scan entries may represent embedded files (category SEC_File), in which case the _scanViewData method must return the data representing the file.

Apart from scan entries, we may also want the user to explore the format of the file. To do that we must return a tree representing the structure of our file:

The enableIDs method must be called right after creating a new FormatTree class. The code above creates a format item with id 1 with a child item with id 2, which results in the following:

Format tree

But of course, we have specified neither labels nor different icons in the function above. This information is retrieved for each item, when required, through the following method:

The various items are identified by their id, which was specified during the creation of the tree.

The UI data for each item is retrieved through the _formatViewData method:

This will display a custom view with a table and a hex view separated by a splitter:

Custom view

Of course, we have also specified the callback for our custom view:

It is good to remember that format item IDs and IDs used in custom views are used to encode bookmark jumps. So if they change, saved bookmark jumps become invalid.

And here again the whole code for a better overview:

As you may have noticed from the screenshot above, the analysed file is called ‘a.t’ and as such isn’t automatically associated with our ‘test’ format. So how does it get associated anyway?

Clearly Profiler doesn’t rely on extensions alone to identify the format of a file. For external scan providers a signature mechanism based on YARA has been introduced. In the config directory of the user, you can create a file named ‘yara.plain’ and insert your identification rules in it, e.g.:

This rule will identify the format as ‘test’ if the first 4 bytes of the file match the string ‘test’: the name of the rule identifies the format.
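For example, a rule with that behavior could be written like this (standard YARA syntax; the rule name determines the format):

    rule test
    {
        strings:
            $magic = "test"
        condition:
            $magic at 0
    }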

The file ‘yara.plain’ will be compiled to the binary ‘yara.rules’ file at the first run. In order to refresh ‘yara.rules’, you must delete it.

One important thing to remember is that a rule isn’t matched against an entire file, but only against the first 512 bytes.

Of course, our provider behaves 100% like all other providers and can be used to load embedded files:

Embedded files

Our new provider is used automatically when an embedded file is identified as matching our format.

PDB support (including export of types)

The main feature of the upcoming 2.4 version of Profiler is the initial support for the PDB format. Our code doesn’t rely on the Microsoft DIA SDK and thus works also on OS X and Linux.

Since the PDB format is undocumented, this task would’ve been extremely difficult without the fantastic work on PDBs of the never-too-much-revered Sven B. Schreiber.

Let’s open a PDB file.

As you can see the streams in the PDB can be explored. The TPI stream (the one describing types) offers further inspection.

All the types contained in the PDB can be exported to a Profiler header by pressing Ctrl+R and executing the ‘Dump types to header’ action.

Now the types can be used from both the hex editor and the Python SDK.

We can explore the dumped header by using, as usual, the Header Manager tool.

The type shown above in the hex editor is simple, so let’s see what a more complex PDB type may look like.

The PDB code is also exposed to the SDK. This is a small snippet of code, which dumps all the types to a text buffer and then displays them in a text view.

In order to dump all types to a single header, you can use the DumpAllToHeader method.

YARA 3.2.0 support

The upcoming 2.3 version of Profiler includes support for the latest YARA engine. This new release is scheduled for the first week of January and it will include YARA on all supported platforms.

One inherent technical advantage of having YARA support in Profiler is that it will be possible to match YARA rules against embedded files/objects, like files in a Zip archive, in a CHM file, in an OLEStream, streams in a PDF, etc.

The YARA engine itself has been compiled with all standard modules (except for cuckoo). Even the magic module is available, since libmagic is also supported by Profiler.

The initial YARA integration comes as a hook extension, an action and Python SDK support. The YARA Python support is the same as the official one and differs from it only in the import statement. You can run existing YARA Python code without modification by using the following import syntax:

So let’s start a YARA scan. To do that, we need to enable the YARA hook extension. On Windows remember to configure Python in case you haven’t yet, since all extensions have been written in it.

When a scan is started, a YARA settings dialog will show up.

This dialog lets us choose various settings including the type of rules to load.

There are four possibilities: a simple text field containing YARA rules, a plain text rules file, a compiled rules file, or a custom expression which must evaluate to a valid Rules object.
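For the last case, any expression producing a Rules object works; with the official yara-python API that would be something along these lines (shown only to illustrate what ‘a valid Rules object’ means):

    yara.compile(source='rule demo { strings: $a = "test" condition: $a at 0 }')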

The report settings specify how we will be alerted of matches. The ‘only matches’ option makes sure that only files (or their sub-files) with a match will be included in the final report. The ‘add to meta-data’ option causes the matches to be visible as meta-data strings of a file. The ‘as threats’ option reports every match as a 100% risk threat. The ‘print to output’ option prints the matches to the output view.

Since we had the ‘only matches’ option enabled, we will find only matching files in our final report.

And since we also had the ‘add to meta-data’ option enabled, we will see the matches when opening a file in the workspace.

The YARA scan functionality comes also as an action when we find ourselves in a hex view. You can either scan the whole hex data or select a range. Then press Ctrl+R to run an action and select ‘YARA scan’.

In this case we won’t be given report options, since the only thing which can be performed is to print out matches in the output view.

Like this:

Of course, all supported platforms come also with the official YARA command line utility.

Since this has been a customer request for quite some time, I think it will be appreciated by some of our users.

Stripping symbols from an ELF

Just like the previous post about stripping symbols from a Mach-O binary, here’s one about stripping them from an ELF binary.

The syntax to execute the script is the same as in the previous post, only the called function changes:

Here’s the code:

That’s it!

And here again the complete script which you can use to strip both a Mach-O and an ELF binary.

The Linux x64 version of Profiler should be released soon. So stay tuned!

Stripping symbols from a Mach-O

A common mistake many developers make is to leave the names of local symbols inside applications built on OS X. Using the strip utility combined with the compiler visibility flags is, unfortunately, not enough.

So I wrote a small script for Profiler to be run from the command line and I integrated it in the build process.

The syntax to execute the script is:

Here’s the whole code:

The code zeroes the names of symbols which are not associated with any section in the binary.

Easy. 🙂

Command-line scripting

The upcoming 2.1 version of Profiler adds support for command-line scripting. This is extremely useful as it enables users to create small (or big) utilities using the SDK and also to integrate those utilities in their existing tool-chain.

The syntax to run a script from command-line is the following:

If we want to run a specific function inside a script, the syntax is:

This calls the function ‘bar’ inside the script ‘foo.py’.

Everything following the script/function is passed on as argument to the script/function itself.

When no function is specified, the arguments can be retrieved from sys.argv.

For the command-line above the output would be:

When a function is specified, the arguments are passed on to the function directly:
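To illustrate the two cases, a tiny hypothetical script might look like this:

    # foo.py (hypothetical example script)
    import sys

    def bar(a, b):
        # called directly when the command line specifies foo.py:bar with two arguments
        print("bar called with:", a, b)

    # with no function specified, the arguments are available in sys.argv
    print("script arguments:", sys.argv[1:])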

If you actually try one of these examples, you’ll notice that Profiler will open its main window, focus the output view and you’ll see the output of the Python print functions in there. The reason for this behaviour is that the command-line support also allows us to instrument the UI from command-line. If we want to have a real console behaviour, we must also specify the ‘-c’ parameter. Remember we must specify it before the ‘-r’ one as otherwise it will be consumed as an argument for the script.

However, on Windows you won’t get any output in the console. The reason is that on Windows applications are either console or graphical ones: the type is specified statically in the PE format and the system acts accordingly. There are some dirty tricks which allow attaching to the parent’s console or allocating a new one, but neither of them is free of glitches. We might come up with a solution for this in the future, but for now, if you need output in console mode on Windows, you’ll have to use another way (e.g. write to a file). Of course, you can also use a message box.

But a message box in a console application usually defies the purpose of a console application. Again, this issue affects only Windows.

So, let’s write a small sample utility. Let’s say we want to print out all the import descriptor module names in a PE. Here’s the code:

Running the code above with the following command line:

Produces the following output:

Of course, this is not a very useful utility. But it’s just an example. It’s also possible to create utilities which modify files. There are countless utilities which can be easily written.

Another important part of the command-line support is the capability to register logic providers on the fly, which means we can force custom scan logic from the command line.

This script scans a single file given to it as argument. All callbacks, aside from init, are optional (they default to None).

That’s all! Expect the new release soon along with an additional supported platform. 🙂

Raw File System Analysis (FAT32 File Recovery)

This post isn’t about upcoming features, it’s about things you can already do with Profiler. What we’ll see is how to import structures used for file system analysis from C/C++ sources, use them to analyze raw hex data, create a script to do the layout work for us in the future and at the end we’ll see how to create a little utility to recover deleted files. The file system used for this demonstration is FAT32, which is simple enough to avoid making the post too long.

Note: Before starting you might want to update. The 1.0.1 version is out and contains a few small fixes. Among them: the ‘signed char’ type wasn’t recognized by the CFFStruct internal engine, and the FAT32 structures I imported do use it. While ‘signed char’ may seem redundant, it does make sense, since C compilers can be instructed to treat char types as unsigned.

Import file system structures

Importing file system structures from C/C++ sources is easy thanks to the Header Manager tool. In fact, it took me less than 30 minutes to import the structures for the most common file systems from different code bases. Click here to download the archive with all the headers.

Here’s the list of headers I have created:

  • ext – ext2/3/4 imported from FreeBSD
  • ext2 – imported from Linux
  • ext3 – imported from Linux
  • ext4 – imported from Linux
  • fat – imported from FreeBSD
  • hfs – imported from Darwin
  • iso9660 – imported from FreeBSD
  • ntfs – imported from Linux
  • reiserfs – imported from Linux
  • squashfs – imported from Linux
  • udf – imported from FreeBSD

Copy the files to your user headers directory (e.g. “AppData\Roaming\CProfiler\headers”). It’s better not to put them in a sub-directory. Please note that apart from the FAT structures, none of the others have been tried out.

Note: Headers created from Linux sources contain many additional structures; this is due to the includes in the parsed source code. This is a bit ugly: in the future it would be a good idea to add an option to import only structures belonging to files in a certain path hierarchy and those referenced by them.

Since this post is about FAT, we’ll see how to import the structures for this particular file system. But the same steps apply for other file systems as well and not only for them. If you’ve never imported structures before, you might want to take a look at this previous post about dissecting an ELF and read the documentation about C++ types.

We open the Header Manager and configure some basic options like ‘OS’, ‘Language’ and ‘Standard’. In this particular case I imported the structures from FreeBSD, so I just set ‘freebsd’, ‘c’ and ‘c11’. Then we need to add the header paths, which in my case were the following:

Then in the import edit we insert the following code:

Now we can click on ‘Import’.

Import FAT structures

That’s it! We now have all the FAT structures we need in the ‘fat’ header file.

It should also be mentioned that I modified some fields of the direntry structure from the Header Manager, because they were declared as byte arrays, but should actually be shown as short and int values.

Parse the Master Boot Record

Before going on with the FAT analysis, we need to briefly talk about the MBR. FAT partitions are usually found in a larger container, like a partitioned device.

To perform my tests I created a virtual hard-disk in Windows 7 and formatted it with FAT32.

VHD MBR

As you might be able to spot, the VHD file begins with an MBR. In order to locate the partitions it is necessary to parse the MBR first. The format of the MBR is very simple and you can look it up on Wikipedia. In this case we’re only interested in the start and size of each partition.

Profiler doesn’t yet support the MBR format, although it might be added in the future. In any case, it’s easy to add the missing feature: I wrote a small hook which parses the MBR and adds the partitions as embedded objects.
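For reference, the partition table itself is trivial to walk; a small Profiler-independent sketch of the parsing logic (offsets follow the standard MBR layout):

    import struct

    def parse_mbr_partitions(mbr, sector_size=512):
        # mbr: the first 512 bytes of the disk; the partition table starts at offset 0x1BE
        partitions = []
        for i in range(4):
            entry_off = 0x1BE + i * 16
            # bytes 8-11 of each entry: starting LBA; bytes 12-15: number of sectors
            start_lba, num_sectors = struct.unpack_from("<II", mbr, entry_off + 8)
            if num_sectors:
                partitions.append((start_lba * sector_size, num_sectors * sector_size))
        return partitions  # list of (offset, size) pairs in bytes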

Here’s the cfg data:

And here’s the Python script:

And now we can inspect the partitions directly (do not forget to enable the hook from the extensions).

VHD Partitions

Easy.

Analyze raw file system data

The basics of the FAT format are quite simple to describe. The data begins with the boot sector header, plus some additional fields which FAT32 adds over FAT16 and FAT16 over FAT12. We’re only interested in FAT32, so to simplify the description I will only describe this particular variant. The boot sector header specifies essential information such as the sector size, sectors per cluster, number of FATs, size of each FAT, etc. It also specifies the number of reserved sectors. These reserved sectors start with the boot sector; where they end, the FAT begins.

The ‘FAT’ in this case is not just the name of the file system, but the File Allocation Table itself. The size of the FAT, as already mentioned, is specified in the boot sector header. Usually, for data-loss prevention, more than one FAT is present. Normally there are two FATs: the number is specified in the boot sector header. The backup FAT follows the first one and has the same size. The data after the FAT(s) and right until the end of the partition includes directory entries and file data. The cluster right after the FAT(s) usually starts with the Root Directory entry, but even this is specified in the boot sector header.

The FAT itself is just an array of 32-bit indexes pointing to clusters. The first 2 indexes are special: they specify the range of EOF values for indexes. It works like this: a directory entry for a file (directories and files share the same structure) specifies the first cluster of said file; if the file is bigger than one cluster, the FAT is looked up at the index representing the current cluster, and that index specifies the next cluster belonging to the file. If the index contains one of the values in the EOF range, the file has no more clusters or perhaps contains a damaged cluster (0xFFFFFFF7). Indexes with a value of zero mark free clusters. Cluster indexes are 2-based: cluster 2 is actually cluster 0 in the data region. This means that if the Root Directory is specified to be located at cluster 2, it is located right after the FATs.

Hence, the size of the FAT depends on the size of the partition: it must be big enough to accommodate an array which represents every cluster in the data area.
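To make the layout concrete, here is a small sketch of the offset math and of following a cluster chain (the parameter names simply mirror the boot sector fields discussed above):

    def fat32_layout(bytes_per_sector, sectors_per_cluster, reserved_sectors,
                     num_fats, sectors_per_fat):
        fat_offset = reserved_sectors * bytes_per_sector
        fat_size = sectors_per_fat * bytes_per_sector
        data_offset = fat_offset + num_fats * fat_size
        cluster_size = sectors_per_cluster * bytes_per_sector

        def cluster_offset(cluster):
            # cluster indexes are 2-based: cluster 2 is the first cluster of the data region
            return data_offset + (cluster - 2) * cluster_size

        return fat_offset, data_offset, cluster_offset

    def follow_chain(fat, first_cluster):
        # fat: the FAT as an array of 32-bit entries; only the low 28 bits are meaningful in FAT32
        chain = []
        cluster = first_cluster & 0x0FFFFFFF
        while 2 <= cluster < 0x0FFFFFF7:  # stops on free (0), damaged and EOF entries
            chain.append(cluster)
            cluster = fat[cluster] & 0x0FFFFFFF
        return chain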

So, let’s perform our raw analysis by adding the boot sector header and the additional FAT32 fields:

Add struct

Note: When adding a structure make sure that it’s packed to 1, otherwise field alignment will be wrong.

Boot sector

Then we highlight the FATs.

FATs

And the Root Directory entry.

Root Directory

This last step was just for demonstration, as we’re currently not interested in the Root Directory. Anyway, now we have a basic layout of the FAT to inspect and this is useful.

Let’s now make our analysis applicable to future cases.

Automatically create an analysis layout

Manually analyzing a file is very useful and it’s the first step everyone of us has to do when studying an unfamiliar file format. However, chances are that we have to analyze files with the same format in the future.

That’s why we could write a small Python script to create the analysis layout for us. We’ve already seen how to do this in the post about dissecting an ELF.

Here’s the code:

We can create an action with this code or just run it on the fly with Ctrl+Alt+R.

Recover deleted files

Now that we know where the FAT is located and where the data region begins, we can try to recover deleted files. There’s more than one possible approach to this task (more on that later). What I chose to do is to scan the entire data region for file directory entries and to perform integrity checks on them, in order to establish that they really are what they seem to be.

Let’s take a look at the original direntry structure:

Every directory entry has to be aligned to 0x20. If the file has been deleted the first byte of the deName field will be set to SLOT_DELETED (0xE5). That’s the first thing to check. The directory name should also not contain certain values like 0x00. According to Wikipedia, the following values aren’t allowed:

  • " * / : < > ? \ |
    Windows/MS-DOS has no shell escape character
  • + , . ; = [ ]
    They are allowed in long file names only.
  • Lower case letters a–z
    Stored as A–Z. Allowed in long file names.
  • Control characters 0–31
  • Value 127 (DEL)

We can use these rules to validate the short file name. Moreover, certain directory entries are used only to store long file names:

We can exclude these entries by making sure that the deAttributes/weAttributes isn’t ATTR_WIN95 (0xF).

Once we have confirmed the integrity of the file name and made sure it’s not a long file name entry, we can validate the deAttributes. It should definitely not contain the flags ATTR_DIRECTORY (0x10) and ATTR_VOLUME (8).

Finally we can make sure that deFileSize isn’t 0 and that deHighClust combined with deStartCluster contains a valid cluster index.
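Put together, the checks might look roughly like this (a sketch: the field and constant names follow the direntry structure described above, the entry object is illustrative and total_clusters would come from the boot sector):

    SLOT_DELETED   = 0xE5
    ATTR_VOLUME    = 0x08
    ATTR_DIRECTORY = 0x10
    ATTR_WIN95     = 0x0F

    BAD_CHARS = frozenset(b'"*/:<>?\\|' + b'+,.;=[]')

    def looks_like_deleted_file(e, total_clusters, ascii_only=False):
        name = bytes(e.deName)
        if name[0] != SLOT_DELETED:
            return False
        # long-file-name entries have all four lower attribute bits set
        if (e.deAttributes & ATTR_WIN95) == ATTR_WIN95:
            return False
        # a file entry should be neither a directory nor a volume label
        if e.deAttributes & (ATTR_DIRECTORY | ATTR_VOLUME):
            return False
        # validate the remaining characters of the short name
        for c in name[1:]:
            if c < 0x20 or c == 0x7F or c in BAD_CHARS:
                return False
            if 0x61 <= c <= 0x7A:       # lower case letters are stored as upper case
                return False
            if ascii_only and c > 0x7F:
                return False
        if e.deFileSize == 0:
            return False
        cluster = (e.deHighClust << 16) | e.deStartCluster
        return 2 <= cluster < total_clusters + 2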

It’s easier to write the code than to talk about it. Here’s a small snippet which looks for deleted files and prints them to the output view:

This script is to be run on the fly with Ctrl+Alt+R. It’s not complete, otherwise I would have added a wait box: as it is now, the script just blocks the UI for the entire execution. We’ll see later how to put everything together in a meaningful way.

The output of the script is the following:

We can see many false positives in the list. The results would be cleaner if we allowed only ASCII characters in the name, but this wouldn’t be correct, because short names do allow values above 127. We could make this an extra option; generally speaking, it’s probably better to have some false positives than to miss valid entries. Among the false positives we can spot four real entries. What I did on the test disk was to copy many files from the System32 directory of Windows and then delete four of them, exactly the four found by the script.

The next step is recovering the content of the deleted files. The theory here is that we retrieve the first cluster of the file from the directory entry and then use the FAT to retrieve more entries until the file size is satisfied. For a deleted file the FAT entries of its clusters no longer contain the next-cluster values: they have been set to 0, so we look for adjacent 0 indexes to find free clusters which may have belonged to the file. Another approach would be to dump the entire file size starting from the first cluster, but that approach is worse, because it doesn’t tolerate even a little bit of fragmentation in the FAT. Of course, heavy fragmentation drastically reduces the chances of a successful recovery.
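One way to express that idea in code (a sketch; ‘adjacent’ here means taking the free entries that follow the first cluster until the size is covered):

    def collect_candidate_clusters(fat, first_cluster, file_size, cluster_size):
        # the FAT entries of a deleted file are zeroed, so starting from the first
        # cluster we take the following free (0) entries until the size is covered
        needed = (file_size + cluster_size - 1) // cluster_size
        clusters = [first_cluster]
        cur = first_cluster + 1
        while len(clusters) < needed and cur < len(fat):
            if fat[cur] == 0:
                clusters.append(cur)
            cur += 1      # non-zero entries belong to existing files and are skipped
        return clusters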

However, there’s a gotcha which I wasn’t aware of and which isn’t mentioned in my references. Let’s take a look at the deleted directory entry of ‘notepad.exe’.

Notepad directory entry

In FAT32 the index of the first cluster is obtained by combining the high-word deHighClust with the low-word deStartCluster in order to obtain a 32-bit index.
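In code, with entry being the parsed direntry, that’s simply:

    first_cluster = (entry.deHighClust << 16) | entry.deStartCluster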

The problem is that the high-word has been zeroed. The actual value should be 0x0013. It seems this behavior is common on Microsoft operating systems, as mentioned in this thread on Forensic Focus.

This means that only files with a cluster index equal to or lower than 0xFFFF will be correctly pointed at. This makes another approach to FAT32 file recovery more appealing: instead of looking for deleted directory entries, one could directly look for cluster indexes with a value of 0 in the FAT and recognize the start of a file by matching signatures. Profiler offers an API to identify file signatures (although limited to the file formats it supports), so we could easily implement this logic. Another advantage of this approach is that it doesn’t require a deleted file directory entry to work, increasing the chances of recovering deleted files. However, even this approach has certain disadvantages:

  1. Files which have no signature (like text files) or are not identified won’t be recovered.
  2. The name of the files won’t be recovered at all, unless they contain it themselves, but that’s unlikely.

Disadvantages notwithstanding, I think that if one had to choose between the two approaches, the second one holds higher chances of success. So why then did I opt to do otherwise? Because I thought it would be nice to recover file names, even if only partially, and to delve a bit more into the format of FAT32. The blunt approach can be generalized more and requires less FAT knowledge.

However, the best approach is surely to combine both systems in order to maximize the chances of recovery at the cost of duplicates. But this is just a demonstration, so let’s keep it relatively simple and go back to the problem at hand: the incomplete start cluster index.

Recovering files only from lower parts of the disk isn’t really good enough. We could try to recover the high-word of the index from adjacent directory entries of existing files. For instance, let’s take a look at the deleted directory entry:

Deleted entry

As you can see, the directory entry above the deleted one represents a valid file entry and contains an intact high-word we could use to repair our index. Please remember that this technique is just something I came up with and offers no guarantee whatsoever. In fact, it only works under certain conditions:

  1. The cluster containing the deleted entry must also contain a valid file directory entry.
  2. The FAT can’t be heavily fragmented, otherwise the retrieved high-word might not be correct.

Still I think it’s interesting and while it might not always be successful in automatic mode, it can be helpful when trying a manual recovery.

This is how the code to recover partial cluster indexes might look:

It tries to find a valid file directory entry before and after the deleted entry, remaining in the same cluster. Now we can write a small function to recover the file content.
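Such a recovery function might look roughly like this (a sketch reusing the helpers sketched earlier; disk_read stands for whatever gives access to the raw disk data and is purely illustrative):

    def recover_file(disk_read, fat, entry, cluster_size, cluster_offset, out_path):
        # disk_read(offset, size) is a stand-in for the raw data access layer
        first_cluster = (entry.deHighClust << 16) | entry.deStartCluster
        clusters = collect_candidate_clusters(fat, first_cluster,
                                              entry.deFileSize, cluster_size)
        remaining = entry.deFileSize
        with open(out_path, "wb") as f:
            for c in clusters:
                data = disk_read(cluster_offset(c), min(cluster_size, remaining))
                f.write(data)
                remaining -= len(data)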

All the pieces are there, it’s time to bring them together.

Create a recovery tool

With the recently introduced logic provider extensions, it’s possible to create every kind of easy-to-use custom utility. Until now we have seen useful pieces of code, but using them as provided is neither user-friendly nor practical. Wrapping them up in a nice graphical utility is much better.

Home view

What follows is the source code or at least part of it: I have omitted those parts which haven’t significantly changed. You can download the full source code from here.

Here’s the cfg entry:

And the Python code:

When the tool is activated it will ask for the disk file to be selected, then it will show an options dialog.

Options

In our case we can select the option ‘Ascii only names’ to exclude false positives.

The options dialog asks for a directory to save the recovered files to. In the future it will be possible to save volatile files in the temporary directory created for the report, but since that’s not yet possible, it’s the responsibility of the user to delete the recovered files if he wants to.

The end results of the recovery operation:

Results

All four deleted files have been successfully recovered.

Three executables are marked as risky because intrinsic risk is enabled and only ‘ntoskrnl.exe’ contains a valid digital certificate.

Conclusions

I’d like to remind you that this utility hasn’t been tested on disks other than the one I created for this post and, as already mentioned, it doesn’t even implement the best method to recover files from FAT32, which is to use a signature-based approach. It’s possible that in the future we’ll improve the script and include it in an update.

The purpose of this post was to show some of the many things which can be done with Profiler. I used only Profiler for the entire job: from the analysis to the code development (I even wrote the entire Python code with it). And, finally, it was meant to demonstrate how a utility with commercial value like the one presented can be written in under 300 lines of Python code (counting comments and new-lines).

The advantages of using the Profiler SDK are many. Among them:

  • It hugely simplifies the analysis of files. In fact, I used only two external Python functions: one to check the existence of a directory and one to normalize the path string.
  • It helps build a fast and robust product.
  • It offers a graphical analysis experience to the user with little or no effort.
  • It gives the user the benefit of all the other features and extensions offered by Profiler.

To better explain what is meant by the last point, let’s take the current example. Thanks to the huge number of formats supported by Profiler, it will be easy for the user to validate the recovered files.

Validate recovered files

In the case of Portable Executables it’s extremely easy because of the presence of digital certificates, checksums and data structures. But even with other files it’s easy, because Profiler may detect errors in the format or unused ranges.

I hope you enjoyed this post!

P.S. You can download the complete source code and related files from here.

References

  1. File System Forensic Analysis – Brian Carrier
  2. Understanding FAT32 Filesystems – Paul Stoffregen
  3. Official documentation – Microsoft
  4. File Allocation Table – Wikipedia
  5. Master boot record – Wikipedia

Logic Providers

The main feature of the 1.0.0 version of the Profiler is ready and thus it won’t take long for the new version to be released. This post serves as an introduction to the topic of logic providers and can in no way cover all the ground.

Logic providers are a new type of extension quite similar to hooks: the callbacks are named the same. Their purpose however is different. Hooks are very powerful, but their purpose is to modify the behavior of generic scans. Logic providers, on the other hand, tell the scan engine which folders to scan, which files, etc.

Let’s take a look at a small logic provider. The ‘logicp.cfg’ entry:
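Its layout mirrors the other extension configuration files; a hypothetical entry might look like this (the section name is the identifying name mentioned later, the icon field is optional, and all values here are illustrative):

    [MissingSecFlags]
    label = Missing security flags
    file = missingsecflags.py
    icon = shield.png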

And the ‘missingsecflags.py’ file:

Let’s now take a look at the home view in the Profiler.

Logic provider scan button

As you can see, there’s an additional scan button which belongs to the logic provider we’ve just added. The icon can be customized from the cfg file by specifying an ‘icon’ field (the path is relative to the media folder).

So let’s take a closer look at the code above. When the user clicks the scan button, the init function of our logic provider will be called.

The init function calls getSystem (which returns a CommonSystem base class). This class can be used to initialize the scan engine. By default the system will be initialized to LocalSystem. A logic provider can even create its own system class and then set it with setSystem. As an introduction it’s not useful to inspect all the methods in CommonSystem and every possible use; we leave that for future posts. In this simple case it’s not necessary to implement anything complex: we are performing a scan on the local system, so it’s enough to call the addPath method on the default class returned by getSystem.

The function then returns True. It can also return False and abort the scan operation. Any other value will be passed as a user argument to the other callbacks: scanning, scanned and end.
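A minimal init based on what’s described above might look like this (the path is just an example):

    def init():
        system = getSystem()                     # returns the default LocalSystem
        system.addPath("C:\\Windows\\System32")  # example path to scan
        return True                              # returning False would abort the scan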

That’s a small difference between hooks and logic providers: the init function in hooks can’t abort a scan operation. Another difference is that, while hooks don’t have mandatory callbacks, the init function is mandatory for logic providers, since without it nothing gets done. Logic providers start their own scanning operations, while hooks just attach to existing operations (even those created by logic providers).

The scanned function has the same syntax as in hooks and doesn’t require an additional explanation. The only thing worth mentioning is that hooks can be selectively called based on the file format (see formats field). This isn’t true for logic providers: their scanning/scanned callbacks will be called for every file. The logic providers API is recent and not written in stone, so it might very well be that in the future a filtering mechanism for formats will be provided for them as well.

As for the content of the scanned function, it just checks two security-related flags inside a Portable Executable and includes in the final report only those files which miss one or both flags.

Logic provider results

The scan could be made more useful by checking for specific things, like COM modules which miss the ASLR flag, and so on.

Also, the extension doesn’t really take full advantage of what logic providers offer: it could just as well be implemented as a hook, and perhaps it would even be better that way. In this case the only advantage it provides is a shortcut in the home view for the user.

An important aspect of logic providers is that the Profiler remembers which logic providers have been used to create a report and calls their rload callback when loading that report. The rload callback exists for hooks as well, but for them it’s called in any case, provided the hook is enabled. It’s important to remember that the identifying name of a logic provider is the value contained between brackets in the cfg file: if it’s changed, the Profiler won’t be able to identify the logic provider and will print an error message in the output view.

Since this version of the Profiler also exposes its internal SQLite implementation, it’s now possible to access the internal database (main.db):

Another useful method exposed by the Report class is retrieveFile:

The retrieveFile method retrieves a file based on its name either from the project or from the disk.

Using all these features in conjunction a typical scenario for a logic provider would be:

  • The init callback is called and initializes the scan engine.
  • Information is collected either from the scanning or scanned callback, depending on the needs.
  • The end callback stores the collected data in the main database via the provided SQLite API.
  • The rload callback retrieves the collected data from the main database and creates one or more views to display it to the user.

As already mentioned this post covers only the basics and we’ll try to provide more useful examples in the future.

EML attachment detection and inspection

The upcoming 0.9.9 version of the Profiler includes some very useful SDK additions. Among these, the addEmbeddedObject method (to add embedded objects) and a new hook notification called ‘scanning’. The scanning notification should be used for long operations and/or to add embedded objects. In this post we’ll demonstrate these new features with a little script to detect attachments in EML files.

EML attachments

One of the advantages of using the Profiler is that we are able to inspect the sub-files of the attachments as well. The screenshot above shows a PNG contained in an ODT attachment. Nice, isn’t it?

But the nicest part is how little code is necessary to extend the functionality of the Profiler. These are the lines to add to the user hook configuration file:

And this is the Python code:

That’s it. Of course, this is just a demonstration, to improve it we could add support for more encodings apart from ‘base64’ like ‘Quoted-Printable’ for instance.

Some email programs like Thunderbird store EML files by appending them in one single file. In fact, as you can see, the screenshot above displays the attachments of an entire Inbox database. 😉

EML attachment types

Also notice that in the code the addEmbeddedObject method is called specifying a base64 decode filter to load the file. We can, of course, specify multiple filters, and Lua ones as well. This makes it extremely easy to load files without having to write code to decode/decrypt/decompress them. The ‘?’ parameter lets the Profiler identify the format of the attachment.
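The call in question is along these lines (the parameter order shown is an assumption; only the method name, the filter and the ‘?’ format specifier come from the description above):

    # hypothetical sketch: offset/size locate the attachment's raw data,
    # 'base64' is the load filter and '?' lets Profiler detect the format
    self.addEmbeddedObject(offset, size, "base64", "?")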

Format quota calculator

In the upcoming 0.9.9 version of the Profiler it will be possible to create docked views even in the context of the main window. This feature combined with custom views is extremely useful if we want to create custom reports at the end of a scan.

Some time ago I needed a little script to calculate the format quotas of files in a specific directory and of their sub-files: we’ll use this sample to demonstrate the new features. For example, we could use it to determine what kind of files, and in what percentage, the System32 directory on Windows contains. Or we could use it to determine the quotas of files in a Zip archive. To make it even more useful, the script now asks the user, before the scan, to enter the nesting range to consider. For example, the value ‘0’ means all levels (starting from 0). If we want to calculate the quotas of top-level files only, we must insert ‘0-0’ (start-end). The files contained in a Zip archive can be calculated with the value ‘1-1’, and if we want to include their sub-files we must insert ‘1’.
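Parsing that nesting-range input is straightforward; a possible helper (purely illustrative, not the script’s actual code):

    def parse_nesting_range(text):
        # "0"   -> every level starting from 0
        # "0-0" -> top-level files only
        # "1-1" -> files directly contained in other files (e.g. a Zip archive)
        # "1"   -> level 1 and everything below it
        parts = text.strip().split("-")
        start = int(parts[0])
        end = int(parts[1]) if len(parts) > 1 else None   # None = no upper bound
        return start, end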

System32 quotas

We’re probably going to include the script in the upcoming release. But in case we don’t, in order to try it out, add the following lines to the hooks configuration file:

And create a ‘quotas.py’ file in your ‘plugins/python’ user directory with the following content:

Remember to activate the hook from the UI before running a scan.

Of course, the view will be displayed even after an individual file scan in the workspace.

PDF quotas

In order to improve the script, we could use an external signature database for those file formats not recognized automatically.

This is a perfect example of how the functionality of the Profiler can be extended. While there’s no estimated release date for the upcoming version yet, stay tuned as we hope to publish very interesting stuff soon.