Several weeks ago we received a bug report from a customer saying that their game was crashing when using the IL2CPP scripting backend. QA verified the bug and assigned it to me to fix. The project was quite big (although far from the largest ones); it took 40 minutes to build on my machine. The instructions in the bug report said: “Play the game for 5-10 minutes until it crashes”. Sure enough, after following the instructions, I observed a crash. I fired up WinDbg, ready to nail it down. Unfortunately, the stack trace was bogus:
Clearly, it had tried to execute an invalid memory address. Although the stack trace had been corrupted, I was hoping that only part of the stack had been affected, and that I might be able to reconstruct it by looking at the memory contents past the stack pointer register. Sure enough, that gave me an idea of where to look next:
Here’s a roughly reconstructed stack trace:
Alright, so now I knew which thread was crashing: it was the IL2CPP runtime’s socket polling thread. Its responsibility is to tell other threads when their sockets are ready to send or receive data. It works like this: other threads put socket poll requests into a FIFO queue; the socket polling thread dequeues these requests one by one, calls the select() function, and when select() returns a result, queues the callback from the original request onto the thread pool.
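Conceptually, the polling thread looks something like the sketch below. This is illustrative only, not the actual IL2CPP source; the real queue, request layout, and thread pool plumbing differ:

```cpp
#include <winsock2.h>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#pragma comment(lib, "ws2_32.lib")

struct PollRequest
{
    SOCKET socket;                   // socket to wait on
    std::function<void()> callback;  // queued to the thread pool when ready
};

static std::queue<PollRequest> g_Requests;
static std::mutex g_Mutex;
static std::condition_variable g_NewRequest;

static void SocketPollingThreadLoop()
{
    for (;;)
    {
        PollRequest request;
        {
            // Block until another thread pushes a request into the FIFO queue.
            std::unique_lock<std::mutex> lock(g_Mutex);
            g_NewRequest.wait(lock, [] { return !g_Requests.empty(); });
            request = std::move(g_Requests.front());
            g_Requests.pop();
        }

        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(request.socket, &readSet);

        // Block until the socket is ready: this is the select() call
        // the rest of this post revolves around.
        select(0, &readSet, nullptr, nullptr, nullptr);

        // The real runtime queues the callback to the thread pool;
        // calling it inline keeps the sketch short.
        request.callback();
    }
}
```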
So somebody was corrupting the stack badly. In order to narrow the search, I decided to put “stack sentinels” on most stack frames in that thread.
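Here’s roughly how my stack sentinel was defined (a reconstruction; the identifiers and the assert plumbing are illustrative, but the behavior matches the description below):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

class StackSentinel
{
public:
    static const size_t kSentinelSize = 16384;

    StackSentinel()
    {
        // Fill the buffer with a known magic value on construction.
        memset(m_Buffer, 0xDD, sizeof(m_Buffer));
    }

    ~StackSentinel()
    {
        // On destruction, assert that nobody touched the buffer.
        for (size_t i = 0; i < sizeof(m_Buffer); i++)
        {
            if (m_Buffer[i] != 0xDD)
            {
                fprintf(stderr,
                        "Stack was corrupted at %p: expected 0xDD, found 0x%02X\n",
                        static_cast<void*>(&m_Buffer[i]), m_Buffer[i]);
                abort();
            }
        }
    }

private:
    uint8_t m_Buffer[kSentinelSize];
};

// Usage: declare one at the top of a suspect function.
// void SuspectFunction() { StackSentinel sentinel; /* ... */ }
```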
When constructed, it would fill the buffer with 0xDD. When destructed, it would check that those values had not changed. This worked incredibly well: the game was no longer crashing! It was asserting instead:
Somebody had been touching my sentinel’s privates - and it definitely wasn’t a friend. I ran this a couple more times, and the result was the same: every time, a value of “2” was written to the buffer first. Looking at the memory view, I noticed something familiar:
These were the exact same values we’d seen in the very first corrupted stack trace. I realized that whatever had caused the crash earlier was also responsible for corrupting the stack sentinel. At first, I thought it was some kind of buffer overflow, with somebody writing outside their local variable bounds. So I started placing the stack sentinels much more aggressively: before almost every function call the thread made. However, the corruption seemed to happen at random times, and I wasn’t able to find its cause using this method.
I knew that the memory was always getting corrupted while one of my sentinels was in scope. I somehow needed to catch the thing that was corrupting it red-handed. I figured I’d make the stack sentinel memory read-only for the duration of the sentinel’s lifetime: I would call VirtualProtect() in the constructor to mark the pages read-only, and call it again in the destructor to make them writable:
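A sketch of that version (illustrative; it assumes 4 KB pages, where real code would query GetSystemInfo()). Since VirtualProtect() works at page granularity and the buffer’s edges may share pages with live stack data, only the whole pages that fit entirely inside the buffer get protected:

```cpp
#include <windows.h>
#include <cstdint>
#include <cstring>

class ProtectedStackSentinel
{
public:
    ProtectedStackSentinel()
    {
        memset(m_Buffer, 0xDD, sizeof(m_Buffer));

        // Round the start up and the end down to page boundaries, so we
        // only protect pages that lie entirely within the buffer.
        uintptr_t bufferStart = reinterpret_cast<uintptr_t>(m_Buffer);
        uintptr_t start = (bufferStart + kPageSize - 1) & ~(kPageSize - 1);
        uintptr_t end = (bufferStart + sizeof(m_Buffer)) & ~(kPageSize - 1);

        m_ProtectedStart = reinterpret_cast<void*>(start);
        m_ProtectedSize = end - start;
        VirtualProtect(m_ProtectedStart, m_ProtectedSize, PAGE_READONLY, &m_OldProtect);
    }

    ~ProtectedStackSentinel()
    {
        // Make the pages writable again before the frame is torn down...
        VirtualProtect(m_ProtectedStart, m_ProtectedSize, m_OldProtect, &m_OldProtect);
        // ...then verify the 0xDD pattern, exactly as before.
    }

private:
    static const uintptr_t kPageSize = 4096;
    uint8_t m_Buffer[16384 + 4096];  // extra room so whole pages fit inside
    void* m_ProtectedStart;
    SIZE_T m_ProtectedSize;
    DWORD m_OldProtect;
};
```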
To my surprise, it was still being corrupted! And the message in the debug log was:
This was a red flag to me. Somebody had been corrupting the memory either while it was read-only, or just before I set it to read-only. Since I got no access violations, I assumed it was the latter, so I changed the code to check whether the memory contents had changed right after setting my magic values:
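In other words, the constructor re-read the buffer immediately after filling it (a sketch of the change; ReportCorruption() is a hypothetical helper standing in for the real assert):

```cpp
StackSentinel::StackSentinel()
{
    memset(m_Buffer, 0xDD, sizeof(m_Buffer));

    // If any byte differs already, the corruption raced in between the
    // memset above and this loop, i.e. before the pages went read-only.
    for (size_t i = 0; i < sizeof(m_Buffer); i++)
        if (m_Buffer[i] != 0xDD)
            ReportCorruption(&m_Buffer[i], m_Buffer[i]);  // hypothetical helper

    // ... VirtualProtect(..., PAGE_READONLY, ...) as before ...
}
```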
My theory checked out:
At this point I was thinking: “Well, it must be another thread corrupting my stack. It MUST be. Right? RIGHT?”. The only way I knew to proceed was to use data (memory) breakpoints to catch the offender. Unfortunately, on x86 you can watch only four memory locations at a time, which meant I could monitor at most 32 bytes, while the area that kept getting corrupted was 16 KB. I somehow needed to figure out where to set the breakpoints, so I started observing corruption patterns. At first they seemed random, but that was merely an illusion created by ASLR: every time I restarted the game, the stack would be placed at a random memory address, so the location of the corruption naturally differed. Once I realized this, I stopped restarting the game after each corruption and just continued execution. This led me to discover that the corrupted memory address was constant for a given debugging session. In other words, once the memory had been corrupted, it would always get corrupted at the exact same address for as long as I didn’t terminate the program:
I set a data breakpoint on that memory address and watched it keep breaking whenever I set the magic value 0xDD. I figured this was going to take a while, but Visual Studio actually allows setting a condition on a data breakpoint: break only if the value at that memory address is 2:
A minute later, the breakpoint finally hit. By then I was three days into debugging this thing. This was going to be my triumph. “I finally pinned you down!”, I proclaimed. Or so I optimistically thought:
I stared at the debugger in disbelief as my mind filled with more questions than answers: “What? How is this even possible? Am I going crazy?”. I decided to look at the disassembly:
Sure enough, that instruction was modifying the memory location in question. But it was writing 0xDD to it, not 0x02! And looking at the memory window, the whole region was already corrupted:
Ready to bang my head against the wall, I called my coworker over and asked him to check whether I had missed something obvious. We reviewed the debugging code together and couldn’t find anything that could even remotely cause such weirdness. I then took a step back and tried to imagine what could possibly make the debugger believe that this code had set the value to “2”. I came up with the following hypothetical chain of events:
1. mov byte ptr [rax], 0DDh modifies the memory location, and the CPU breaks execution to let the debugger inspect the program state
2. The memory gets corrupted by something
3. The debugger inspects the memory address, finds “2” there, and assumes that’s the value the instruction wrote.
So… what can change memory contents while the program is frozen by a debugger? As far as I know, that’s possible in two scenarios: either another process is doing it, or it’s the OS kernel. A conventional debugger won’t help investigate either of those. Enter kernel debugging land.
Surprisingly, setting up kernel debugging is extremely easy on Windows. You need two machines: one to run the debugger, and one to be debugged. Open an elevated command prompt on the machine you’re going to debug, and type this:
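These are the standard bcdedit commands for network kernel debugging (with placeholders in place of the actual values used):

```
bcdedit /debug on
bcdedit /dbgsettings net hostip:<HostIP> port:<Port>
```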
Host IP is the IP address of the machine the debugger runs on. The specified port will be used for the debugger connection; it can be anything between 49152 and 65535. After you hit Enter on the second command, it will print a secret key (truncated in the picture) which acts as a password when you connect the debugger. After completing these steps, reboot.
On the other computer, open WinDbg, click File -> Kernel Debug and enter the port and key.
If everything goes well, you’ll be able to break execution by pressing Debug -> Break. When you do, the debuggee computer will freeze. Enter “g” to continue execution.
I started up the game and waited for it to break once so I could find out the address at which the memory got corrupted:
Alright, now that I knew the address at which to set a data breakpoint, I had to configure my kernel debugger to actually set it:
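Because a kernel debugger sees every process on the machine, you first have to switch into the game’s process context before setting the hardware write breakpoint. The session went along these lines (the process name and both addresses here are placeholders, not the real values):

```
kd> !process 0 0 Game.exe          $$ find the game's EPROCESS address
kd> .process /i ffffe000`a1b2c3d4  $$ switch into that process context (invasive)
kd> g                              $$ run until the debugger lands inside the process
kd> ba w1 000000d7`c1cfc7c0        $$ break on any 1-byte write to that address
kd> g
```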
After some time, the breakpoint actually hit...
Alright, so what’s going on here?! The sentinel is happily setting its magic values, then a hardware interrupt comes in, which calls some completion routine, and that routine writes “2” into my stack. Wow. Okay, so for some reason the Windows kernel was corrupting my memory. But why?
At first, I thought it had to be us calling some Windows API and passing it invalid arguments. So I went through all the socket polling thread code again and found that the only system call we were making there was the select() function. I went to MSDN and spent an hour rereading the docs on select(), rechecking whether we were doing everything correctly. As far as I could tell, there isn’t much you can do wrong with it, and there definitely wasn’t a place in the docs that said “if you pass this parameter, we’ll write 2 into your stack”. It seemed like we were doing everything right.
After running out of things to try, I decided to step into the select() function with a debugger, walk through its disassembly, and figure out how it works. It took me a few hours, but I managed to do it. It turns out that select() is a wrapper for WSPSelect(), which roughly looks like this:
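This is a simplified reconstruction from the disassembly, not Microsoft’s source: the AFD device handle, control code, and buffers are placeholders for whatever ws2_32.dll actually passes (linking requires ntdll.lib):

```cpp
#include <windows.h>
#include <winternl.h>

static int WSPSelectSketch(HANDLE afdHandle, ULONG controlCode,
                           void* inputBuffer, ULONG inputBufferSize)
{
    IO_STATUS_BLOCK statusBlock;  // note: a LOCAL variable, on this thread's stack
    HANDLE event = CreateEventW(NULL, TRUE, FALSE, NULL);

    NTSTATUS status = NtDeviceIoControlFile(
        afdHandle,        // handle to the AFD device that backs sockets
        event,            // signalled by the kernel when the request completes
        NULL, NULL,
        &statusBlock,     // the kernel writes the result HERE when it's done
        controlCode, inputBuffer, inputBufferSize, NULL, 0);

    if (status == STATUS_PENDING)
    {
        // Alertable wait: user-mode APCs queued to this thread run here.
        WaitForSingleObjectEx(event, INFINITE, TRUE);
    }

    CloseHandle(event);
    return statusBlock.Status >= 0 ? 0 : -1;  // NTSTATUS success is >= 0
}
```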
The important parts here are the call to NtDeviceIoControlFile(), the fact that it passes its local variable statusBlock as an out parameter, and the fact that it waits for the event to be signalled using an alertable wait. So far so good: it calls a kernel function, which returns STATUS_PENDING if it cannot complete the request immediately. In that case, WSPSelect() waits until the event is set. Once NtDeviceIoControlFile() is done, the kernel writes the result to the statusBlock variable and then sets the event. The wait completes, and WSPSelect() returns.
The IO_STATUS_BLOCK struct looks like this:
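This is its definition from the Windows SDK headers (winternl.h):

```cpp
typedef struct _IO_STATUS_BLOCK {
    union {
        NTSTATUS Status;   // completion status of the I/O request
        PVOID Pointer;     // reserved
    };
    ULONG_PTR Information; // request-dependent value (e.g. bytes transferred)
} IO_STATUS_BLOCK, *PIO_STATUS_BLOCK;
```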
On 64-bit, that struct is 16 bytes long. It caught my attention that this layout matched my memory corruption pattern: the first 4 bytes get corrupted (NTSTATUS is 4 bytes long), then 4 bytes get skipped (padding, since the 8-byte PVOID shares a union with Status), and finally 8 more get corrupted (Information). If that was indeed what was being written to my memory, then the first four bytes would contain the result status. The first 4 corrupted bytes were always 0x00000102, and that happens to be the code for… STATUS_TIMEOUT! That would have been a sound theory, if only WSPSelect() didn’t wait for NtDeviceIoControlFile() to complete. But it did.
After figuring out how the select() function worked, I decided to look at the bigger picture of how the socket polling thread worked. And then it hit me like a ton of bricks.
When another thread pushes a socket to be processed by the socket polling thread, the polling thread calls select() on that socket. Since select() is a blocking call, when yet another socket is pushed onto the queue, the polling thread has to somehow interrupt select() so the new socket gets processed ASAP. How does one interrupt the select() function? Apparently, we used QueueUserAPC() to execute an asynchronous procedure while select() was blocked… and threw an exception out of it! That unwound the stack, we executed some more code, and then at some point in the future the kernel completed the work and wrote the result to the statusBlock local variable, which no longer existed by then. If the write happened to hit a return address on the stack, we’d crash.
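Here is a minimal illustration of the problematic pattern (simplified; the real code lived inside the IL2CPP runtime and these names are made up):

```cpp
#include <windows.h>

struct SelectInterruptedException {};

static void CALLBACK InterruptApc(ULONG_PTR /*context*/)
{
    // DON'T do this: the exception unwinds right out of the alertable wait
    // inside WSPSelect(), destroying the frame that holds statusBlock while
    // the kernel still has a pointer to it.
    throw SelectInterruptedException();
}

// On some other thread, to interrupt the polling thread:
//     QueueUserAPC(InterruptApc, pollingThreadHandle, 0);
//
// On the polling thread:
//     try
//     {
//         select(...);  // blocked in an alertable wait deep inside WSPSelect()
//     }
//     catch (const SelectInterruptedException&)
//     {
//         // select() never returned normally; its IO_STATUS_BLOCK frame is
//         // gone, but the kernel will still complete the request and write
//         // STATUS_TIMEOUT (0x00000102) to wherever that frame used to be.
//     }
```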
The fix was pretty straightforward: instead of using QueueUserAPC(), we now create a loopback socket and send a byte to it any time we need to interrupt select(). This path had been used on POSIX platforms for quite a while, and is now used on Windows too. The fix for this bug shipped in Unity 5.3.4p1.
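The idea is the classic “self-pipe” trick; here is a sketch of it (illustrative, not the exact Unity implementation; it assumes WSAStartup() has already been called). The polling thread always adds the wake-up socket to the read set it passes to select(), so writing a byte to it makes select() return without any APCs or exceptions:

```cpp
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Create a UDP socket bound to the loopback interface on an OS-chosen port.
static SOCKET CreateWakeupSocket()
{
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // let the OS pick a free port
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    return s;
}

// Wake up a select() call that includes the wake-up socket in its read set
// by sending a single byte to that socket's own address.
static void InterruptSelect(SOCKET wakeupSocket)
{
    sockaddr_in addr = {};
    int addrLen = sizeof(addr);
    getsockname(wakeupSocket, reinterpret_cast<sockaddr*>(&addr), &addrLen);

    char byte = 1;
    sendto(wakeupSocket, &byte, 1, 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
}
```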
This is one of those bugs that keep you up at night. It took me five days to solve, and it’s probably one of the hardest bugs I’ve ever had to investigate and fix. Lesson learnt, folks: do not throw exceptions out of asynchronous procedures if you’re inside a system call!