JBIG2 Encoded Malware in PDFs

The upcoming 2.7 version of Profiler adds support for JBIG2 encoding inside PDFs. Although JBIG2 isn’t intended to encode data other than images, it can be used to do so. Quoting the PDF documentation:

The JBIG2Decode filter (PDF 1.4) decodes monochrome (1 bit per pixel) image data that has been encoded using JBIG2 encoding. JBIG stands for the Joint Bi-Level Image Experts Group, a group within the International Organization for Standardization (ISO) that developed the format. JBIG2 is the second version of a standard originally released as JBIG1.

JBIG2 encoding, which provides for both lossy and lossless compression, is useful only for monochrome images, not for color images, grayscale images, or general data. The algorithms used by the encoder, and the details of the format, are not described here. A working draft of the JBIG2 specification can be found through the Web site for the JBIG and JPEG (Joint Photographic Experts Group) committees at http://www.jpeg.org.

Here’s a piece of PDF malware trying to conceal its XFA form by encoding it via JBIG2:

And the decoded content:

While this is in no way common in PDF malware, it’s an effective trick to prevent automatic and manual analysis, since JBIG2 is seldom supported by security tools.

Yet another PDF/XDP Malware

Today we’re going to analyze yet another sample of PDF containing an XDP form. The difference between this sample and the one in my previous post is that this one will be less about JavaScript deobfuscation and more about anti-analysis tricks.

If you want to follow the analysis hands-on, this is the link to the malware sample (password: infected29A). Also, make sure to update Profiler to the current 2.6.2 version!

MD5: 4D686BCEE50538C969647CF8BB6601F6
SHA-256: 01F13FE4E597F832E8EDA90451B189CDAFFF80F8F26DEE31F6677D894688B370

Let’s open the Zip archive. The first thing we notice is that the file has been incorrectly identified as CFBF.

That’s because the beginning of the file contains a CFBF signature:

If we were to open the file directly from the file-system, we would be prompted to choose the correct file format:

But since that is not the case, we simply go to the decompressed stream in the Zip archive (or to the CFBF document, it doesn’t matter), position the cursor at the start of the file and press Ctrl+E.

We select the PDF format and then open the newly created embedded file in the hierarchy.

What we’ll notice by looking at the summary is that a stream failed to decompress because it hit the memory limit. A tool-tip informs us that we can tweak this limit from the settings. So let’s click on “Go to report” in the tool-bar.

This will bring us to the main window. From there we can go to the settings and increase the limit.

In our case, 100 MB is enough, since the stream which failed to decompress is approximately 90 MB. Let’s click on “Save settings”, click on “Computer Scan” and then go back to our file.

Let’s now repeat the procedure to load the embedded file as PDF and this time we won’t get the warning:

Just for the sake of cleanliness, we can also select the mistakenly identified CFBF embedded file and press “Delete” to remove it from the analysis.

We are informed by the summary that the PDF contains an interactive form and, in fact, we can already see the XDP as a child of the PDF.

We could directly proceed with the analysis of the XFA, but let’s just step back a second to analyze a trick this malware uses to break automatic analysis. The XFA is contained in object 1.0 of the PDF.

Let’s place the cursor on the stream part of the object (the one in turquoise), then open the context menu and click on “Ranges->Select continuous range” (alternatively Ctrl+Alt+A). This will select the stream data of the object. Let’s now press Ctrl+T to invoke the filters and apply the unpack/zlib filter. If we now click on “Preview”, we’ll notice that an error is reported.

The stream is still decompressed, but an error is reported as well. This is one of the tricks this malware uses to break automatic analysis: the zlib stream is corrupted at the very end.

Let’s now open the XFA. Immediately we can see another simple trick to fool identification of the XDP: a newline byte at the start.

Given the huge size of the XDP, it’s not wise to open it in the text editor, but we can look at the extracted JavaScript from the summary.

Here are the various parts which make up the JavaScript code:

The first part contains the information needed to construct the ROP chain for the various versions of Adobe Reader. In the last part we can see that the JavaScript code sprays the heap. So they probably rely on a huge image embedded in the XDP (which is actually why the XDP is so big) to trigger the exploit.

The field is aptly named “ImageCrash”.

Let’s go back to the shellcode part and analyze it. I’m talking about the part of code which starts with:

We could of course copy that part to a text view, remove the \u escapes, convert the hex to bytes and then apply a filter to reorder them, since in JavaScript the words are expressed in big-endian. But we can do it even more elegantly and make our shellcode appear as an embedded file. So let’s select the byte array in the hex editor:

Let’s now press Ctrl+E and click on the “Filters” button.

What we want to do first is remove the “\u” escapes. So we add the misc/replace filter and specify “\u” as in and nothing as out (we leave ASCII mode as default). Now that we have stripped the escape characters from the data, we need to convert it from ASCII hex to bytes, so we add the convert/from_hex filter. The last step, as already mentioned, is to switch the byte order in the words. To do that, we’ll use the lua/custom filter; I only slightly modified the default script:
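
The Lua script itself isn’t reproduced here; purely as an illustration, the whole three-step chain is equivalent to something like this Python sketch (the function name is made up):

```python
# Illustrative Python equivalent of the filter chain described above
# (in Profiler the work is done by the misc/replace, convert/from_hex
# and lua/custom filters).
def decode_js_shellcode(escaped: str) -> bytes:
    hex_str = escaped.replace("\\u", "")      # 1. strip the "\u" escapes
    data = bytes.fromhex(hex_str)             # 2. ASCII hex -> raw bytes
    swapped = bytearray(data)
    for i in range(0, len(swapped) - 1, 2):   # 3. swap the bytes of every 16-bit word
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
    return bytes(swapped)

# e.g. decode_js_shellcode(r"\u9090\ue8fc") == b"\x90\x90\xfc\xe8"
```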

If you want to avoid this part, you can simply import the filters I created:

By opening the embedded shellcode file, we can see that Profiler has automatically detected the shellcode:

By looking at the hex view we can already guess where the shellcode is going to download its payload from:

But let’s analyze it anyway. Let’s press Ctrl+A and then Ctrl+R, and execute the action “Debug->Shellcode to executable” to debug the shellcode with a debugger like OllyDbg.

Here’s the (very simple) analysis:

You can also download the Profiler project with the complete analysis already performed (same password: infected29A). Please note that you’ll be prompted for the password twice: once for the project and once for the Zip archive.

I hope you enjoyed the read!

PDF/XDP Malware Reversing

Version 2.6 of Profiler has recently been released and, among the improvements, support for XDP has been introduced. For those of you who are unfamiliar with XDP, here’s the Wikipedia description:

“XML Data Package (XDP) is an XML file format created by Adobe Systems in 2003. It is intended to be an XML-based companion to PDF. It allows PDF content and/or Adobe XML Forms Architecture (XFA) resources to be packaged within an XML container.

XDP is XML 1.0 compliant. The XDP may be a standalone document or it may in turn be carried inside a PDF document.

XDP provides a mechanism for packaging form components within a surrounding XML container. An XDP can also package a PDF file, along with XML form and template data. When the XFA (XML Forms Architecture) grammars used for an XFA form are moved from one application to another, they must be packaged as an XML Data Package.”

So I’ll use the occasion to show the reversing of a nice PDF with all the goodies. Let’s open the suspicious PDF.

The PDF is already heavily flagged by Profiler, as it contains many suspicious features.

If we take a look, just out of curiosity, at object 8 of the PDF, we will notice that the XDP data contains a bogus endstream keyword to fool the parsers of security solutions.

Profiler handles this correctly, so we don’t have to do anything; it’s just worth mentioning.

Let’s take a look at the raw XDP data.

As you can see, it is completely unreadable because of the XML-escaped characters. This, too, is not really important for us, since Profiler’s XML parser handles it automatically; again, just worth mentioning.

So let’s directly open the embedded XDP child: we can see readable and nicely indented XML.

We can see that the XML contains JavaScript code, but Profiler already warns us of this. So let’s just click on the warning.

The code isn’t readable. So let’s select the JavaScript portion and then press Ctrl+R->Beautify JavaScript.

Much better, isn’t it?

The code is quite easy to understand although it’s obfuscated. It takes a value straight from the XDP, processes it and then calls eval on it.

This is the value it takes:

What we want is the result of the processing, before eval is called. So what I did was to slightly modify the JavaScript code like this:

I haven’t pasted the entire value here as it was way too big, but I did so in the code editor:

At this point, we can just press Ctrl+R->Debug/Execute JavaScript and get the result of the execution.

We will get the following code:

What it does is basically spray the heap using an array. It changes the payload based on the version of Adobe Reader. The version is retrieved by calling the _l5 function.

Now we could just examine the _l1 or _l2 payloads directly, but just to make sure I let the code generate a spray portion. So I changed the code accordingly and avoided actually spraying a lot of data.

We can run this script in the JavaScript debugger (Ctrl+R->Debug JavaScript).

The final print will give us the payload in memory. We can copy just the initial part, avoiding the padding. Let’s paste the string into a text editor in Profiler and then press Ctrl+R->Hex string to bytes.
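
Outside of Profiler, the same conversion is trivial; as a quick Python sketch (the string below is just a placeholder, not the real payload):

```python
# Quick equivalent of the "Hex string to bytes" action
hex_string = "0c0c0c0c 41414141"   # placeholder for the copied payload start
payload = bytes.fromhex("".join(hex_string.split()))
```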

If we look at the payload, we can see that the beginning (the marked portion) looks like ROP code. So, in order to avoid looking for the gadgets in memory, let’s skip the ROP, as it is most likely only going to jump to the actual shellcode. Let’s assume that is the case and focus on the data which follows.

We can see a web address at the end of the data. So we could just assume that the shellcode downloads an executable and runs it. But just for the sake of completeness, let’s analyze it.

We can of course disassemble the shellcode by applying a filter to it (Ctrl+T->x86 disasm). But what we’ll do instead is use a debugger via Ctrl+R->Shellcode to executable. This way we can quickly step through what it does.

Here’s the commented code:

So yes, in the end it just downloads the file from the address we’ve seen and tries to execute it, then tries to register it as a COM object. Some AV-evasion techniques are also present.

Cheers!

Profiler 2.6

Profiler 2.6 is out with the following news:

– added initial support for XML files
– added support for XDP files (extraction of embedded PDFs)
– exposed the ABC format
– improved the parsing of malformed PDF streams
– fixed the code signing on OS X to meet El Capitan requirements
– fixed the JS debugger on Linux
– various bug fixes and improvements

Enjoy!

Windows Memory Forensics

Let’s begin with an image:

Yep. That’s an icon. In an executable. In a process address space. In a raw memory dump.

And here is the video demonstration:

This is just a proof-of-concept. We still haven’t decided whether to develop this further. It really depends on whether the forensic community is interested in having such a product. So, even a re-tweet will have an impact on our decision. 🙂 We wanted to show what is currently possible and, of course, it’s not the end of the cool things which are possible.

In case we decide to go ahead with the development, we will probably create a beta-test group of potential customers and decide with them on a roadmap for a 1.0 version, taking into consideration all those features which are essential to them. We already support the Windows versions from XP to 10 on the following architectures: x86, x86-PAE and x64. And, of course, the software itself, just like Profiler, runs on Windows, OS X and Linux.

And now to the more technical side, if you’re interested. What we have shown in this demonstration is just a Python extension for Profiler. To be more specific, it’s only about 1000 lines of Python code, and this includes all the UI views. The bulk of the work went into exposing all the necessary capabilities of our SDK to Python. Of course, all this work also benefits other extensions, not just the memory forensics ones. So, if this project ends with this post, it’s really not a tragedy, as we haven’t lost any significant time developing specific stuff for it.

So why did we choose to write our memory forensics support in Python rather than in C++, which would’ve taken us a lot less time? There are several reasons. The memory forensics field changes rapidly, and setting code in stone by compiling it wouldn’t be a good idea. Also, we wanted to give our customers the possibility to inspect the code and to modify it. Just by looking at existing code, it’s extremely easy to write new utilities. On the other hand, having our core engine and UI written in C++ makes our tool very fast. We think this is the perfect combination.

If you’re wondering why we didn’t use Volatility as a backbone, the answer is that it would’ve been incompatible on a licensing level and way too difficult to fit nicely into our existing framework to accomplish what we wanted to do.

We hope you enjoyed the demo and we would be happy to receive your feedback!

Profiler 2.5

Profiler 2.5 is out with the following news:

– introduced scan provider extensions
– added support for Torrent files
– added the capability to display views as dialogs
– exposed official Python bindings for capstone
– added new controls to custom views
– updated capstone to 3.0.3
– fixed failed allocation security issue
– various bug fixes and improvements

Dialogs from views

In this new edition it’s possible to create dialogs out of views. Just like this:

Of course, it doesn’t have to be a custom view; it can even be a simple text view or a hex view, although usually custom views make more sense for a dialog, as you’ll probably want to show some standard buttons like “Ok” and “Cancel” at the bottom.

Capstone bindings

While Capstone has been part of Profiler for quite some time, it’s now possible to directly call its official Python bindings. The module can be found under ‘Pro.capstone’ and can easily be imported so that it works with existing code:
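
The snippet from the post isn’t reproduced here; a rough sketch of how that might look (the sys.modules aliasing is an assumption about one way to reuse code that expects the standard ‘capstone’ module name):

```python
import sys
import Pro.capstone as capstone

# optional: let existing code that does "import capstone" find the bundled module
sys.modules["capstone"] = capstone

md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_32)
for insn in md.disasm(b"\x55\x8b\xec\x5d\xc3", 0x1000):
    print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))
```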

Failed allocation security issue

In the Qt framework, memory allocations fail silently, at least in release builds. We didn’t notice it, because in debug builds they at least throw an exception that prevents further execution. Since in release builds execution isn’t stopped, it was in some cases possible to trigger a failed allocation and then make the program use memory it didn’t own (so basically a buffer overflow). This problem has now been fixed.

Credit goes to the Insid3Code Team for having found and reported the issue.

Enjoy!

Torrent Support

Following our recent introduction to Scan Providers, here’s a first implementation example. In this post we’ll see how to add support for Torrent files to Profiler. Of course, the implementation shown here will be available in the upcoming 2.5.0 release.

Let’s start by creating an entry in the configuration file:

For the automatic signature recognition we may rely on a simple one:

Torrent files are encoded dictionaries and they usually start with the announce item. There’s no guarantee of that, but for now this simple matching should be good enough.
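
Expressed in plain Python, the heuristic boils down to something like this sketch (the real signature lives in Profiler’s configuration, not in Python code):

```python
def looks_like_torrent(data: bytes) -> bool:
    # a Bencoded torrent normally opens its top-level dictionary with the
    # "announce" key, which serializes to the bytes "d8:announce"
    return data.startswith(b"d8:announce")
```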

The encoded dictionary is in the Bencode format. Fortunately, someone already wrote the Python code to decode it:
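
That decoder isn’t reproduced here; for reference, a minimal Bencode decoder can be sketched roughly like this:

```python
def bdecode(data: bytes, i: int = 0):
    """Decode the Bencode value starting at offset i; returns (value, next_offset)."""
    c = data[i:i + 1]
    if c == b"i":                              # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                              # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                              # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            d[key] = value
        return d, i + 1
    colon = data.index(b":", i)                # byte string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length
```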

We can now load the file and decode its dictionary:

We call the GetDictionary method for the first time in the _initObject method, so that the parsing occurs while we’re in another thread and doesn’t stall the UI.

Let’s display the parsed dictionary to the user:

Dictionary

This is the description of some of the keys, extracted from Wikipedia:

  • announce—the URL of the tracker
  • info—this maps to a dictionary whose keys are dependent on whether one or more files are being shared:
    • name—suggested filename where the file is to be saved (if one file)/suggested directory name where the files are to be saved (if multiple files)
    • piece length—number of bytes per piece. This is commonly 2^8 KiB = 256 KiB = 262,144 B.
    • pieces—a hash list, i.e., a concatenation of each piece’s SHA-1 hash. As SHA-1 returns a 160-bit hash, pieces will be a string whose length is a multiple of 160 bits (see the sketch after this list).
    • length—size of the file in bytes (only when one file is being shared)
    • files—a list of dictionaries each corresponding to a file (only when multiple files are being shared). Each dictionary has the following keys:
      • path—a list of strings corresponding to subdirectory names, the last of which is the actual file name
      • length—size of the file in bytes.
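
As a small worked example for the pieces key described above (these helpers are purely illustrative and not part of the extension), the hash list can be split into 20-byte SHA-1 digests and used to verify a piece:

```python
import hashlib

def piece_hashes(info: dict) -> list:
    # "pieces" is a flat concatenation of 20-byte SHA-1 digests, one per piece
    blob = info[b"pieces"]
    return [blob[i:i + 20] for i in range(0, len(blob), 20)]

def verify_piece(info: dict, index: int, piece_data: bytes) -> bool:
    # recompute the SHA-1 of the piece and compare it with the stored digest
    return hashlib.sha1(piece_data).digest() == piece_hashes(info)[index]
```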

While the dictionary alone could suffice to extract all the information the user needs, we may want to present parts of it in a way that is easier to read.

First, we’d like to show the user some meta-data information which may be contained in the dictionary. To do that, we add a meta-data scan entry:

We also warn the user if the file exceeds the allowed maximum. We perform the whole scan logic in the UI thread, since we’re not doing any CPU-intensive operation, and thus we return SCAN_RESULT_FINISHED, which causes the _threadScan method not to be called.
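
The actual entry-reporting code isn’t shown here; schematically, and assuming the provider keeps the parsed object in self.obj, the control flow is along these lines:

```python
    def _startScan(self):
        # nothing CPU-intensive happens here, so the whole scan runs in the UI thread
        d = self.obj.GetDictionary()
        if d is None:
            return SCAN_RESULT_ERROR
        # ... collect meta-data entries and warn if the file exceeds the allowed maximum ...
        return SCAN_RESULT_FINISHED   # _threadScan will not be called
```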

Here we return the meta-data to the UI:

MetaData

It would also be convenient to see the list of trackers and files. Let’s start with the trackers:

Trackers

When retrieving data from the dictionary, we also make sure that it is of the correct type, so that the code which handles this data won’t end up raising an exception when trying to process an unexpected type.
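
As an illustration of that kind of defensive extraction (a standalone sketch, not the code of the extension itself):

```python
def get_trackers(torrent: dict) -> list:
    # collect tracker URLs, checking every type before using it
    trackers = []
    announce = torrent.get(b"announce")
    if isinstance(announce, bytes):
        trackers.append(announce.decode("utf-8", "replace"))
    announce_list = torrent.get(b"announce-list")
    if isinstance(announce_list, list):
        for tier in announce_list:
            if not isinstance(tier, list):
                continue
            for url in tier:
                if isinstance(url, bytes):
                    trackers.append(url.decode("utf-8", "replace"))
    return trackers
```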

And now the files:

Files

And that’s it. Here again is the whole code for a better overview:

We could still extract more information from the torrent file. For instance, we could show the list of hashes and which portion of which file each of them belongs to. If that’s interesting for forensic purposes, we can easily add this view in the future.

Scan Providers

Version 2.5.0 is close to being released and comes with the last type of extension exposed to Python: scan providers. Scan provider extensions are not only the most complex type of extension, but also the most powerful, as they allow adding support for new file formats entirely from Python!

This feature required exposing a lot more of the SDK to Python and can’t be completely covered in one post. This post introduces the topic, while future posts will show real-life examples.

Let’s start with the list of Python scan providers under Extensions -> Scan providers:

Scan provider extensions

This list is retrieved from the configuration file ‘scanp.cfg’. Here’s an example entry:

The name of the section has two purposes: it specifies the name of the format being supported (in this case ‘TEST’) and also the name of the extension which is automatically associated with that format (in this case ‘.test’, case insensitive). The hard limit for format names is 9 characters for now; this may change in the future if more are needed. The label is the description. The ext parameter is optional and specifies additional extensions to be associated with the format. group specifies the type of file which is being supported; available groups are: img, video, audio, doc, font, exe, manexe, arch, db, sys, cert, script. file specifies the Python source file and allocator the function which returns a new instance of the scan provider class.
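
Putting those fields together, an entry could look roughly like this (file name, allocator name and extra extension are invented for the example, and the exact syntax may differ from the real scanp.cfg):

```
[TEST]
label = Test file format
ext = tst
group = db
file = test_scan_provider.py
allocator = allocTestScanProvider
```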

Let’s start with the allocator:

It just returns a new instance of TestScanProvider, which is a class derived from ScanProvider:
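
Neither the allocator nor the class body is reproduced here; assuming the SDK classes are importable from Pro.Core, a bare-bones sketch would be:

```python
from Pro.Core import *   # assumed: ScanProvider and the SCAN_RESULT_* constants live here

def allocTestScanProvider():
    # the allocator referenced from scanp.cfg just returns a new provider instance
    return TestScanProvider()

class TestScanProvider(ScanProvider):

    def __init__(self):
        ScanProvider.__init__(self)
        self.obj = None   # will hold the CFFObject-derived instance
```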

Every scan provider has some mandatory methods it must override; let’s begin with the first ones:

_clear gives a chance to free internal resources when they’re no longer used. In Python this is not usually important as member objects will automatically be freed when their reference count reaches zero.

_getObject must return the internal instance of the object being parsed. This must return an instance of a CFFObject derived class.

_initObject creates the object instance and loads the data stream into it. In the sample above we assume it is successful; otherwise, we would have to return SCAN_RESULT_ERROR. This method is not called by the main thread, so that it doesn’t block the UI during long parse operations.
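
A schematic sketch of these three methods, based only on the description above (how the provider obtains its input data stream is SDK-specific, so the getStream accessor used here is an assumption):

```python
    def _clear(self):
        # drop references to parsed data; Python's reference counting frees the rest
        self.obj = None

    def _getObject(self):
        # hand the internal CFFObject-derived instance back to the framework
        return self.obj

    def _initObject(self):
        # runs off the main UI thread: create the object and load the data stream
        self.obj = TestObject()
        self.obj.Load(self.getStream())   # 'getStream' is an assumed accessor name
        return SCAN_RESULT_OK             # or SCAN_RESULT_ERROR on failure
```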

Let’s take a look at the TestObject class:

This is a minimalistic implementation of a CFFObject-derived class. Usually it should contain at least an override of the CustomLoad method, which gives the object a chance to fail when the data stream is first loaded through the Load method. SetDefaultEndianness wouldn’t even be necessary, as every object defaults to little endian. SetObjectFormatName, on the other hand, is very important, as it sets the internal format name of the object.
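
The class from the post isn’t reproduced; a minimal CFFObject-derived class along the lines described above might look like:

```python
class TestObject(CFFObject):

    def __init__(self):
        CFFObject.__init__(self)
        # SetDefaultEndianness could be called here, but little endian is already the default
        self.SetObjectFormatName("TEST")   # internal format name of the object

    # a CustomLoad override would go here to validate the data stream when
    # Load is first called (its signature is omitted, as it isn't shown in the post)
```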

Let’s now take a look at how we scan a file:

The code above will issue a single warning concerning native code. When _startScan returns SCAN_RESULT_OK, _threadScan will be called from a thread other than the main UI one. The logic behind this is that _startScan is actually called from the main thread and, if the scan of the file doesn’t require complex operations (as in the case above), the method could return SCAN_RESULT_FINISHED, in which case _threadScan won’t be called at all. During a threaded scan, an abort by the user can be detected via the isAborted method.
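
Stripped of the entry-reporting details, the two possible control flows described above can be sketched as:

```python
    def _startScan(self):
        # called from the main thread: either do the whole (cheap) scan here and
        # return SCAN_RESULT_FINISHED, or defer the heavy work to _threadScan
        return SCAN_RESULT_OK

    def _threadScan(self):
        # only called when _startScan returned SCAN_RESULT_OK; runs off the UI thread
        if self.isAborted():   # the user may abort a threaded scan
            return
        # ... inspect self.obj and report the scan entries (e.g. the native-code warning) ...
```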

From the UI point of view, when a scan entry is clicked in the summary, the scan provider is supposed to return UI information.

This will display a text field with predefined content when the user clicks the scan entry in the summary. This is fairly easy, but what happens when we have several entries of the same type and need to differentiate between them? That’s where the data member of ScanEntryData plays a role: this is a string which will be included in the report XML and passed back to _scanViewData as an XML node.

For instance:

Becomes this in the final XML report:

The dnode argument of _scanViewData points to the ‘d’ node, and its first child will be the ‘o’ node we passed. The xml argument represents an instance of the NTXml class, which can be used to retrieve the children of dnode.

But this is only half of the story: some of the scan entries may represent embedded files (category SEC_File), in which case the _scanViewData method must return the data representing the file.

Apart from scan entries, we may also want the user to explore the format of the file. To do that we must return a tree representing the structure of our file:

The enableIDs method must be called right after creating a new FormatTree instance. The code above creates a format item with id 1 and a child item with id 2, which results in the following:

Format tree

But of course, we have specified neither labels nor different icons in the function above. This information is retrieved for each item, when required, through the following method:

The various items are identified by their id, which was specified during the creation of the tree.

The UI data for each item is retrieved through the _formatViewData method:

This will display a custom view with a table and a hex view separated by a splitter:

Custom view

Of course, we have also specified the callback for our custom view:

It is good to remember that format item IDs and IDs used in custom views are used to encode bookmark jumps. So if they change, saved bookmark jumps become invalid.

And here, again, is the whole code for a better overview: