The Glue that Binds Us
Over the past twenty years, programmers have evolved from integrating system-level components to writing interfaces for applications that can be accessed immediately by billions of people around the world. Despite huge changes in infrastructure, the availability of high-level languages, and the abundance of computing resources, little has changed about the integration of systems, and at the heart of the matter lies the same issue: software is only as good as the glue holding it together. In this article I'd like to share my opinion on how to build strong bonds between systems and avoid common coding hazards.
In 1996, the Reliable Software Technologies Corporation (now defunct) and the University of Illinois came together to write a whitepaper on assessing technologies in order to make good choices about software one may want to integrate into an existing project. It tackles hard questions about evaluating the robustness of a system with little or no access to its underlying code, about proper testing methodology, and about how interfaces are supposed to be used. And, believe it or not, it describes exactly what we continue to do to this day. So why does software stink? Why are there so, so many problems with the patchwork of heterogeneous frameworks that we today call enterprise software?
Several years ago I heard a recording from the 2008 Black Hat conference (slides can be found here) by pen-testers who, across their assortment of freelance work, had stumbled upon multiple serious faults in major systems. In a local bank's online banking software they found that inputs were never validated, which let them deposit negative dollars into a test account and switch accounts while bypassing authorization. They also presented the case of a woman who ordered 1,800 items online, canceled, but received them anyway; this time the system lacked atomicity between a valid order and a shipment. What did they pinpoint as the root cause behind these two very different failures? You guessed it. The disparate components of each system, created by many developers from various enterprises, were glued together poorly.
[Figure omitted: a bug that should have been caught in regression testing.]
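To make the banking flaw concrete, here is a minimal sketch of the kind of server-side check whose absence enabled the negative-dollar deposit. This is hypothetical illustration, not the bank's actual code; all names and the in-memory authorization set are invented:

```python
def validate_deposit(amount_cents: int, account_id: str, authorized_accounts: set) -> None:
    """Reject a bad deposit request before it ever reaches the ledger."""
    # Authorization check: the caller must own the target account.
    if account_id not in authorized_accounts:
        raise PermissionError(f"account {account_id} is not authorized")
    # Input validation: a deposit of zero or negative dollars is nonsense,
    # and accepting it is effectively an unauthorized withdrawal.
    if amount_cents <= 0:
        raise ValueError("deposit amount must be positive")


validate_deposit(5000, "acct-1", {"acct-1"})  # a normal $50.00 deposit passes

try:
    validate_deposit(-5000, "acct-1", {"acct-1"})  # the negative-dollar exploit
except ValueError as e:
    print(e)  # prints "deposit amount must be positive"
```

Rejecting invalid input at the boundary, before any state changes, is exactly the glue the pen-testers found missing.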
One of the things I commonly see today (in my work at DataXu, and on Jenni) is the integration of third parties and the delivery of custom reporting to clients through SFTP, S3, and similar mechanisms. I also see the integration and creation of APIs and SDKs, and in reality the stability of all these solutions rests on the same principles.
- Make a contract and stick to it. Whether you're using an API or sending a vendor a template to follow, ensure that the contract is upheld. If you're developing an API never change a contract; you may deprecate it and eventually phase it out, but changing a contract mid-stream is a violation of the agreement you've made with developers (see my article regarding good API design).
- Trust no one, not even yourself. No matter where your data comes from, validate it to the best of your ability. Validation can be done in layers; a typical example is the third-party integration stack, though the general premise below applies universally:
- Data is downloaded from a vendor along with a checksum, both files are verified to exist and have contents.
- The checksum is used to verify the bit-completeness of the file.
- The file is decompressed (if compressed, which is typical), and verified using the standards of the contract to which the vendor has agreed.
- The file is ingested into the system that likewise abides by this contract.
- Test everything all the time! This isn't about the ideal of 100% code coverage, but it does mean running component/unit, acceptance, and integration tests. Testing this way is like peeling an onion: you strip away the layers until you hit code that isn't your own. Thinking like a programmer is less about the success case and more about the nearly innumerable ways in which things can fail.
- Monitor for failures. One day someone will change one of the many pieces in the chain of mechanisms that feeds your software the data driving your product, and on that day your software will fail. It's up to you to program it to fail gracefully.
- Keep it simple. The most likely answer is the one that requires the fewest assumptions, and likewise the best software is the simplest software that fully accomplishes the task at hand. Writing clear, maintainable code is always preferred. I once read that it is wise to write “code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.” This also includes writing reusable code, never repeating yourself, obeying the Law of Demeter, and so on.
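The layered validation steps described above (files exist and have contents, checksum verifies the download, decompress, then check the contract) can be sketched as follows. This is a minimal illustration, not a production ingestion system; the SHA-256 sidecar file and the CSV-header contract are assumptions chosen for the example:

```python
import gzip
import hashlib
import os


def verify_and_load(data_path: str, checksum_path: str) -> bytes:
    """Validate a vendor delivery in layers before ingesting it."""
    # Layer 1: both the data file and its checksum exist and are non-empty.
    for path in (data_path, checksum_path):
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            raise RuntimeError(f"missing or empty delivery file: {path}")

    # Layer 2: the checksum verifies the bit-completeness of the download.
    with open(checksum_path) as f:
        expected = f.read().split()[0].strip().lower()
    with open(data_path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if actual != expected:
        raise RuntimeError("checksum mismatch: file corrupted in transit")

    # Layer 3: decompress (vendor deliveries are typically gzipped).
    with gzip.open(data_path, "rb") as f:
        payload = f.read()

    # Layer 4: verify the payload against the agreed contract before
    # ingestion; here the (assumed) contract mandates a CSV header.
    if not payload.startswith(b"id,"):
        raise RuntimeError("payload violates the agreed contract")
    return payload
```

Each layer fails loudly and independently, so when a vendor silently changes a piece of the chain, the error points at the exact layer that broke rather than corrupting data downstream.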
Opinions expressed by DZone contributors are their own.