Death by Fat
Well, a lot of different loose ends have been simmering in a pot together, and some new inputs have come in, so this is sure to be a mishmash, but the basic question is: we know our technology ventures have continued to get fatter and fatter, but are we not at an inflection point where we might be setting off a series of irreversible breakdowns? Recent research into fat has shown that we all meander into it, like most addictions, pretty sure that we can just hit the brake when needed, and perhaps pop the rig into reverse. But in fact, studies have shown that is for sure not the case. In one, they let two groups go crazy eating and not exercising for as little as three weeks, and nine months later almost no one had made it back to where they were before the study. This is one of my biggest themes on here: the myth of reversibility, the hubris of thinking we can unscramble the egg whenever we want.
Drew Crawford's post Why Webapps Are Slow shows that, like the typical hermit crabs of the last decade's real estate boom, we have been just crouching in flimsy little lean-tos waiting for the ever-rising stream to carry us, and that's just not happening anymore. Mobile blew this whole strategy up anyway: there aren't enough cores or enough memory on a phone to let the horrible garbage collector do its bidding. It's probably the case that the arc of mobile prior to the Jobs Intervention (the unexpected one) in 2007 would have ended up somewhere not too far from where we are, but it probably would have taken another decade.
But here is the big question: would it have? Do arcs that are built on sliding windows of weight gain and sloth eventually get where they are going, or do they just break down after a while?
For some interesting additional reading, it turns out that Herb Sutter linked to Drew's post, and the comments are fascinating. Those poor C++ guys thought they would escape the Java deluge by staying with Mickey, and then Mickey stabbed them in the back by making C#, which is essentially just another version of Java.
Then last night I was fiddling around with Play 2 and decided to get a license for IntelliJ Ultimate Edition because the Play support looked pretty good. OMG, what a nightmare. I had stuck with Eclipse, despite my reams of well-documented frustrations with it, because it was fast and IntelliJ was so abysmally slow; but then, using Android Studio these last few weeks, I thought, wow, the performance is OK? Well, the performance of Ultimate Edition so far is horrible, and the parts of the interface and docs I've touched have been really bad. For instance, I read all over the place that IntelliJ 12 Ultimate Edition supported Play "out of the box." When the product installed, it made me choose from huge stupid lists of crap to either support or not support. Turns out saying yes to Play in there is not saying yes to Play 2! Furthermore, you need the Scala plugin, too. Oh, and if you thought pointing IntelliJ at a project that had already been made for it by Play would trigger the consciousness that it needs something else: sorry. (I wonder if any of the people working on this product have been in the chorus of small-ball whiners about Siri and the like, 'cause guys, I don't think I'm going to be able to issue you a single complaint card based on your performance here.) How should this have gone? Easy: just install the product, then either recognize what is in the projects you are shown and bring in the appropriate plugins at that point, or ask, but ask in a simpler way, and if you are only supporting an old (now unused) version, be super clear about that.
The UI for adding the Play 2 and Scala plugins was horrible. It hung on me a couple of times. I finally did manage to get the two of them installed.
Then the app was still not recognizing the index reference in the hello world. Just to be super clear: I was literally just trying to get it to compile and run the test project Play generates; none of my own code was in there up to this point. I eventually figured it out. I found a Stack Overflow thread that urged me to regenerate the IDEA project files from Play, this time with with-sources=yes. How could that make a difference? Who knows. I did that, went back in, and it finally did recognize index, after a LONG time and after beating my quad-core i7 to a bloody pulp. Folks, the amount of panting this tool causes just completely infuriates me.
But then I went to run the test and it kept failing, complaining again about index, even though the compilation had succeeded. So I ran the test from the Play console. That was pretty slow too, but it passed. Then I went back to IDEA and it passed from in there.
I was remembering back the other day to that article the architect from Facebook wrote some years ago about how they had hundreds of memcached instances supporting their "architecture." OK, first, I am of another mind this time around. Not to sound like too much of a dick, but yeah, that's not an architecture. That's a way of saying, basically, "state and all those other really sucky, hard problems are just not worth trying to solve; let's boil the oceans instead" (to bring people at most 50 or so updates from their friends' streams). And, folks, mashing up streams like that is not even a remotely difficult problem. Doing it for millions of people? I still say no. It's pretty much static data, and the fact that new things might appear while a session is going on is not that big a deal. Wow, these are 1960s problems. Anyway, the thing I was thinking about in capping this thread the most recent time was that every problem can be pursued by boiling the oceans, and yes, some will still not be solvable, but at some point you have to say that favoring throwing more resources at something over actually figuring out how to make it more efficient is complete madness, and it encourages the growth of systems with lots of structural infirmities that could be life-threatening.
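To put a little weight behind the claim that mashing up streams is not hard, here is a minimal sketch of merging a few friends' streams into one feed. The names (Update, mergeFeeds) are mine and purely illustrative, nothing to do with Facebook's actual code; the point is that at a window of ~50 updates per friend, a naive flatten-and-sort is already more than adequate.

```scala
// Purely illustrative: merge several friends' update streams (each already
// sorted newest-first) into one reverse-chronological feed of at most n items.
case class Update(author: String, timestamp: Long, text: String)

def mergeFeeds(streams: Seq[Seq[Update]], n: Int): Seq[Update] =
  streams.flatten          // a few hundred items total at ~50 per friend
    .sortBy(-_.timestamp)  // newest first
    .take(n)
```

For per-friend windows this small, the sort is negligible; a proper priority-queue k-way merge only starts to matter when the streams are long or unbounded.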
I read an article not long ago that harbored thoughts similar to ones I have had about the last two decades, which I will pose in the form of a riddle. If the VM was created to make it impossible for errant programs to crash a machine, and most of the new languages of the last 20 years are VM languages (Java, C#, Scala), and virtualization was created to divvy up machines so that various things can run on a single box without crashing each other, aren't these solutions to the same problem? The big bogeyman of the C++/Java interregnum was the errant pointer: the fear that it could crash the whole machine, which in an enterprise, or wherever the web is being served, would spell immediate doom. But really, if you write a C++ program to handle HTTP requests and run it on an AWS or OpenShift instance, there is no real fear of this problem. Anyway, the article I was reading argued that the language VMs eventually lose on this front because of cost, which was a kind of interesting argument: the cost of dragging around an extra diaper just ends up being too much. This dovetails nicely with another theory of mine: Amazon's hegemony in the cloud will of course be challenged, and it will be VERY hard for the competitors not to end up in a brutal price war. Which would seem to support this hypothesis.
Which brings me to my last point about fat: it doesn't work as ballast. It's not so easy to strip it off and throw it overboard when speed is needed. You could argue it has performed its initial purpose, stability, but surely PaaS is going to make it evident that this fear need not be harbored in every node of the graph. Frankly, imagine a future where a company like TypeSafe comes along and does an Actor model, but in C++, with all the plumbing already in place and the ability to dial up instances in the cloud. You don't have to worry about freeing or allocating memory, or about handling messages: just configure routing and dispatching and start writing your code. In some ways the success of MongoDB is a harbinger of this future: it's written in C++, can be talked to from any language, can get instances on a bunch of cloud providers, and, as this article shows, it's the only database really moving right now. If you think about it, it really is time to start using the web to push up the stack and grant developers leverage! Consider two possible paths for the newbie, the little green bill: 1) go pick your platform and your tools and get it all up and running, including builds, tests, database drivers, and all that crap, THEN start writing your code in a language that wipes your butt for you; or 2) log on to a service and create a new app (say, a Play/Akka app) that generates the whole app stub, deploys it to your new instance on OpenShift, and drops you into your first unit test. Start writing your code immediately. Then, when you want to add new services or interconnects (e.g., after the first phase of your Play development has been against the in-memory H2 db, you want to go to Mongo), you would go to a services panel, order up a Mongo instance, and a wizard would let you say "connect this instance to this app on this instance." Done. The good news is, the TypeSafe stack is already doing most of this, though in Java and Scala rather than C++.
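To make the appeal concrete, here is a tiny sketch of the developer-facing shape such a platform would hand you: one mailbox, one worker, and the only thing you write is the receive function. Everything here (MiniActor, the ! send operator) is hypothetical and stripped down; a real offering, Akka-style, would add supervision, routing, dispatchers, and the cloud provisioning described above, whether the guts were JVM or C++.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical, minimal actor: one mailbox drained by one worker thread.
// User code supplies only the receive function; queuing, threading, and
// memory are the platform's problem, not yours.
class MiniActor[M](receive: M => Unit) {
  private val mailbox = new LinkedBlockingQueue[M]()
  private val worker = new Thread(() => {
    var running = true
    while (running) {
      try receive(mailbox.take())
      catch { case _: InterruptedException => running = false }
    }
  })
  worker.setDaemon(true)
  worker.start()

  def !(msg: M): Unit = mailbox.put(msg) // fire-and-forget send
  def stop(): Unit = worker.interrupt()
}
```

The surface area is the whole point: a C++ version with pooled messages and remote dispatch underneath could keep the programmer's view exactly this small.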
Another article last week, about how programming has focused solely on making it so a dope can do it, did not really take up a fundamental question: were we forced into this position because we let the config/tool/interconnect aspects grow to something like four-fifths of the tank? Because I guarantee you, if you took five developers and said "you can either do this next project with this pile of poorly documented open source stuff, on machines you will largely have to set up yourself," or "we have chosen this stack, it's installed on this PaaS provider, but we are going to use Objective-C and C++ and you will attend five days of training on C++," the vast majority would opt for the latter. Are you going to teach someone C++ in five days? Probably not, but enough to start them down the road of seeing that maybe they don't have to boil the ocean to show people a couple of recent events.
Looks like a bunch of the 8GB of RAM on my laptop was lunched on by Safari. After freeing some up, IntelliJ has come out of its stupor ... a bit.
Published at DZone with permission of Rob Williams, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.