Anatomy of an Exploit: Stack Smashing
So, stack smashing. Buffer overflows. These exploits have been around forever, and though they're not that common today, they're a great starting point for looking at exploits, how they work, and why defensive programming techniques are important. Plus, there are plenty of other write-ups out there that you can look at if what I'm writing is just too confusing. So anyway, what is stack smashing, what are buffer overflows, and why are they important?
Why they matter. Stack smashing has been around for a really, really long time. The technique saw its first widespread application in the late '80s and early '90s (the 1988 Morris worm exploited a buffer overflow in fingerd), though it had been understood for years before that. It's led to the development of mitigations you've probably heard of but may not really know, like DEP/W^X (Data Execution Prevention / Write XOR Execute) and ASLR (Address Space Layout Randomization).
Buffer overflows are also interesting because they're the fallout of a design decision made 60 or so years ago: storing control data, like return addresses, in the same memory as ordinary program data.
Calling conventions. If you don't regularly work at the interface of C and assembler, you likely have no idea what a calling convention is. Basically, it's how functions and procedures are called, and where the data those functions and procedures need is stored. Calling conventions describe which registers contain what, which ones can be used for general data storage, and how the stack is configured. There's a variety of calling conventions out there, but they all configure the stack in basically the same way.
Above, we have the stack in a typical program; original image By R. S. Shaw - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=1956587.
So let's look at how the stack works. When I call a function in C on x86, for example, the stack is set up in a very precise way. Remember, the stack is just a chunk of memory in the program; there's really nothing special about it except the way it's used. The stack is an artificial organizing construct superimposed on that chunk of memory. Here, I'm calling the function DrawLine() from DrawSquare(). When I do that, a few things need to happen. First, I copy the parameters for the DrawLine() call onto the stack, followed by the return pointer, which holds the address execution returns to when the function finishes. Then, I reserve space for DrawLine()'s local variables. If you're programming in C, the compiler does all of this for you. If you're programming in assembler, well, you get to do it all yourself.
So, do you see the problem?
Oh noes! So, here's the problem: imagine you've reserved an array as a local in DrawLine(). The space for that array is just a block of memory reserved on the stack, above the return pointer. Also important: the stack grows toward lower addresses, but writes into an array run toward higher addresses, back toward the return pointer. So, say I've allocated space for 10 ints, resulting in 40 bytes of reserved space. What happens if I write 44 bytes into that array? You guessed it: I overwrite the return pointer.
What happens if I overwrite the return pointer? So what, right? Well, at that point, all your base are belong to me: I have taken control of the flow of execution.
Now, when you do this in testing, you'll usually get a segmentation fault as the program tries to jump into bogus memory. But what if I carefully craft that pointer so it points somewhere valid? And what if some of the data I stuffed into the buffer is actually executable code? Well, then I can effectively make that program do whatever I want.
This is why buffer overflows are a big deal.
Opinions expressed by DZone contributors are their own.