In the first installment of this series, we discussed why, every few years, various folks try to circumvent strong encryption controls, and why that just isn’t going to happen. Basically, we can’t circumvent the algorithms themselves or their implementations: everybody knows (or can find) strong encryption algorithms, and governments don’t get to keep the keys to themselves anymore. Last time, we looked specifically at why we can’t use our technological voodoo to magically create special backdoors in the algorithms themselves. Today we’re going to show why you can’t put a backdoor in the implementations either.
So, first, let’s consider what a backdoored implementation might look like. There are a few options: we can put backdoors in applications, in libraries, in the device operating system, or directly into hardware. Let’s start with the software-centric approaches, where we add backdoors to applications or libraries.
In either case, the backdoor exists only in the library or the targeted executable. A sufficiently trained engineer (say, one who works at some terrorist help desk) can open the library or executable, disassemble it, and remove the backdoor. But honestly, why go to all that trouble? It’s not that difficult to build your own encryption application using your own implementations of well-known, strong encryption algorithms. Granted, most of us prefer to use pre-packaged encryption algorithms from libraries, but that’s because we’re lazy. With a little effort, motivation, and time, we could write our own implementations if we really wanted to. And I guarantee that terrorist and criminal organizations will want to.
So we can’t insert backdoors into libraries or executables; they’re just too easy to circumvent. Let’s move lower, then, into the operating system or the system hardware. After all, the unencrypted information is going to be stored in system memory, right? So it certainly exists on the system, in unencrypted form. Maybe we can get to it there?
Well, we can. You could require manufacturers to insert software into either the operating system or the system hardware that monitors and reports on memory contents. This could sit on the system watching main memory, caching or sending off information of interest. You know what, though? We already have this kind of software - they’re called rootkits. Considering how well it went over when a certain large media company distributed rootkits on CD-ROMs in 2005, I don’t think that approach will be very popular. Plus, we do have this thing called a constitution, in the US at least. I might not be a lawyer, but I don’t think this kind of mandated surveillance will hold up in court.
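To show how little magic is involved, here’s a sketch of the memory-scanning half of such a tool. It assumes a Linux system, and to stay legal and self-contained it scans its own process through `/proc/self/mem` rather than someone else’s; a real monitoring agent would do the same thing against a target PID (given the privileges to do so). The “secret” string is, of course, a made-up stand-in for plaintext awaiting encryption.

```python
# Sketch of memory scanning on Linux: find a plaintext "secret" in RAM
# by walking /proc/self/maps and reading /proc/self/mem. A monitoring
# agent or rootkit would aim this at another process's PID instead.
import os
import re

# Hypothetical plaintext an application would hold just before encrypting it.
SECRET = b"hunter2 is not a good passphrase"

def scan_own_memory(needle):
    """Return True if `needle` is found in this process's readable memory."""
    mem = os.open("/proc/self/mem", os.O_RDONLY)
    try:
        with open("/proc/self/maps") as maps:
            for line in maps:
                m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+(\S{4})", line)
                if not m or not m.group(3).startswith("r"):
                    continue  # skip unreadable mappings
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                pos, tail = start, b""
                while pos < end:
                    size = min(1 << 20, end - pos)  # 1 MiB at a time
                    try:
                        data = os.pread(mem, size, pos)
                    except OSError:
                        break  # some special regions (e.g. [vvar]) fail
                    if needle in tail + data:
                        return True
                    tail = data[-(len(needle) - 1):]  # span chunk borders
                    pos += size
    finally:
        os.close(mem)
    return False

if __name__ == "__main__":
    print(scan_own_memory(SECRET))  # the plaintext is sitting right there
```

The point isn’t that this is hard to build - it plainly isn’t - but that shipping it on every device means shipping a rootkit on every device, with all the abuse potential that implies.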
But hey, maybe we don’t need to do all this. Maybe we can require encryption designs that use some kind of man-in-the-middle proxy with escrowed keys. Well, maybe we can, but probably we can’t, and we shouldn’t. More on this next time.