
The Fallacies of Enterprise Computing (Part 2)


In part two of this series, see what other problems crop up with enterprise computing, ranging from the assumption that all systems are monoliths to the idea that a system is ever "finished."


And welcome back to the conclusion of our series, which details some poor thought processes when it comes to enterprise computing. In part one, we covered the first four items on this list:

  1. New technology is always better than old technology.
  2. Enterprise systems are not “distributed systems.”
  3. Business logic can and should be centralized.
  4. Data, object or any other kind of model can be centralized.
  5. The system is monolithic.
  6. The system is finished.
  7. Vendors can make problems go away.
  8. Enterprise architecture is the same everywhere.
  9. Developers need only worry about development problems.

Now, we're taking on the remainder. Let's take a look at how not to think about enterprise computing.

The System Is Monolithic

While this may have been true in older systems (around the mainframe era, say), often the whole point of an enterprise system is to integrate with other systems in some way, even if just by accessing the same database. Particularly today, with different parts of the system being revised at different times (presentation changes but business logic remains the same, or vice versa), it's more important than ever to recognize that the different parts of the system will need to be deployed, versioned, and in many cases developed independently of one another.

This fallacy is often what drives the logic behind building microservice-based systems, so that each microservice can be managed (deployed, versioned, developed, and so on) independently. However, despite the fact that many enterprise IT departments are building microservices, they then undo all that good work by implicitly creating dependencies between the microservices with no mitigating strategy for one or more of those microservices being down or unreachable. This means that instead of explicit dependencies (which might force the department or developers to deal with the problem explicitly), developers lose track of this possibility until it actually happens in Production, which usually doesn't end well for anybody.
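To make that concrete, here is a minimal sketch of one mitigating strategy, assuming a hypothetical downstream pricing-service endpoint (the URL, SKU, and fallback price are all illustrative): the call to the other microservice is bounded by an explicit timeout and degrades to a last-known value when the dependency is down, rather than silently assuming the other service is always available.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch only: one way to make a microservice dependency explicit and survivable.
// The pricing service is a hypothetical downstream dependency; when it is down
// or slow, the caller degrades to a last-known value instead of failing outright.
public class PricingClient {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))   // fail fast instead of hanging
            .build();

    // Hypothetical endpoint; in a real system this would come from configuration.
    private static final String PRICING_URL = "http://pricing-service/api/price/";

    // Last known price, used as a fallback when the dependency is unavailable.
    private static volatile double lastKnownPrice = 9.99;

    public static double priceFor(String sku) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(PRICING_URL + sku))
                .timeout(Duration.ofSeconds(2))       // bound the call explicitly
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                lastKnownPrice = Double.parseDouble(response.body().trim());
            }
        } catch (Exception e) {
            // The dependency is down or slow: note it and fall through to the fallback.
            System.err.println("pricing-service unavailable, using last known price: " + e);
        }
        return lastKnownPrice;
    }

    public static void main(String[] args) {
        System.out.println("Price for SKU-42: " + priceFor("SKU-42"));
    }
}
```

The specific tactic matters less than the fact that the failure mode is handled somewhere on purpose, instead of being discovered in Production.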

The System Is Finished

The enterprise is a constantly shifting, constantly changing environment. Just when you think you’ve finished something, the business experts come back with some new requirements or some changes to what you’ve done already. It’s the driving reason behind a lot of the fallacies of both distributed systems and enterprise systems, but more importantly, it’s the underlying impetus behind most, if not all, enterprise software development. Enterprise developers can either embrace this, and recognize that systems need to be able to evolve effectively over time—or look for work in other industries.

This means, then, that anything that gets built here should (dare I say "must") be built with an eye towards constant modification and incessant updates. This is partly why agile methodologies have taken the enterprise space with such gusto: because agile embraces the idea that everything is constantly in flux, it deals far more easily with the idea that the system is never finished.

Vendors Can Make Problems Go Away

Alternatively, we can phrase this as "Vendors can make problem 'X' a vendor problem," where 'X' is one of scalability, security, maintainability, flexibility, and just about any other "ility" you care to name. As much as vendors have been trying to make this their problem for the better part of two or three decades, they've never been able to do so except in some very narrow vertical circumstances. Even in today's cloud-crazed environment, companies that try to take their existing enterprise systems and move them to the cloud as-is (the classic "lift and shift" strategy) are finding that the cloud has nothing magical in it that makes things scale automagically, secures them, or even makes them vastly more manageable than they were before. You can derive great benefits from the cloud, but in most cases you have to meet the cloud halfway, which then means that the vendor didn't make the problem go away; they just re-cast the problem in terms that make it easier for them to sell you things. (And even then, they can only make a few of those problems go away, often at the expense of making other problems more difficult. For an example of how deployments and dependency management got burned, see "npm-Gate.")

Enterprise Architecture Is the Same Everywhere

Somehow, there seems to be this pervasive belief that if you've done enterprise architecture at company X, you can take those exact same lessons and apply them to your experience at company Y. This might be true if every company had exactly the same requirements, but ask any consultant who's been engaged with clients for more than a few years, and you'll find out that the Venn diagram of requirements between any two companies overlaps by about 80% or so. But here's the ugly truth hiding in that figure: if we look at the Venn diagram of all the companies, they aren't overlapping on the same 80%; it's always a different 80% between any given company and any other. Which means, collectively, that the sum total of all companies overlaps across maybe 5%. (All accounting systems agree on what credits and debits are, but from there, the business rules tend to diverge.)

Given that enterprise architecture is highly context-sensitive to the enterprises for which it is being developed, it would stand to reason that enterprise architecture will differ from one company to the next. No matter what the vendor/influencer tries to tell you, no matter how desirable it is to believe, there is no such thing as a “universal enterprise architecture”; not MVC, not n-tier, not client-server, not microservices, not REST, not containers, and not whatever-comes-next.

Developers Need Only Worry About Development Problems

Enterprise systems come with much higher criticality concerns than the average consumer software product. Consider, for a moment, the average iOS or Android application: if it crashes mid-use, the user is obviously annoyed, and if it happens too often, they might uninstall the application entirely, but no significant monetary loss is incurred by the company. If, on the other hand, the company's e-commerce system crashes, literally thousands of dollars are potentially being lost per minute (or per second, if the scale is that of an Amazon or other large-scale e-tailer) until that system gets back on its feet and can start processing transactions again. And that's not counting potential customer service costs or even lawsuits if an order is lost because the system went down mid-transaction and left the data in a corrupted or unrecoverable state. Nor does that consider the intangible costs that come into play when Ars Technica or Forbes or, worst of all, the Wall Street Journal covers the outage in their latest report.

Enterprise systems, by definition, have much higher reliability and recoverability concerns. That means, practically speaking, that any enterprise system must pay much greater attention to how the system is administered, deployed, monitored, managed, and so on. Thanks to the emphasis on the whole “DevOps” thing, this is becoming less of an argument with most developers, but even within companies that don’t subscribe to all of the “DevOps” philosophy, developers will need to spend time thinking (and coding) about how operations staff will do all of the things they need to do to the system after its deployment.

For example, one such concern is that of error management and handling. But first, please repeat after me: "It is never acceptable to find out about an enterprise system outage from your users." Why this is not an accepted truth is well beyond me, but countless enterprise systems seem to find it perfectly acceptable to show their users stack traces when things go wrong, or to only worry about restarting the system when a user complaint informs Operations that it's down.
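As a minimal sketch of what that boundary might look like (the order-processing logic and the alerting call below are hypothetical placeholders), the idea is simply that failures are caught at the edge of the system, Operations is told proactively, and the user sees a reference ID rather than a stack trace:

```java
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch only: a top-level request boundary. Exceptions are caught at the edge,
// Operations is alerted without waiting for a user complaint, and the user gets
// a generic message with a correlation ID instead of a stack trace.
public class RequestBoundary {

    private static final Logger LOG = Logger.getLogger(RequestBoundary.class.getName());

    public static String handle(String orderId) {
        try {
            return processOrder(orderId);                    // real business logic lives here
        } catch (Exception e) {
            String correlationId = UUID.randomUUID().toString();
            LOG.log(Level.SEVERE, "Order processing failed, correlationId=" + correlationId, e);
            alertOperations(correlationId, e);               // proactive, not reactive
            return "Something went wrong on our end. Reference: " + correlationId;
        }
    }

    private static String processOrder(String orderId) {
        if (orderId == null || orderId.isBlank()) {
            throw new IllegalArgumentException("missing order id");
        }
        return "Order " + orderId + " accepted";
    }

    private static void alertOperations(String correlationId, Exception e) {
        // Placeholder: push to a pager, chat channel, or monitoring tool that Operations actually watches.
        System.err.println("ALERT ops: failure " + correlationId + " (" + e.getMessage() + ")");
    }

    public static void main(String[] args) {
        System.out.println(handle("A-1001")); // succeeds
        System.out.println(handle(""));       // fails: user gets a reference, Operations gets the alert
    }
}
```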

Yes, vendors can often provide certain kinds of management software to look at the system from the outside — keeping track of processes to make sure they’re still running and such — but on the whole, it’s going to be up to developers building the enterprise system to make sure that Operations staff can peer inside the system to make sure everything is running, running smoothly, and can make the changes necessary (such as adding users, changing users’ authorized capabilities, adding new types of things into the system, and so on) without requiring a restart or editing cryptic text files. Management, monitoring, deployment, restarting the system after a failure — these, and more, are all developer responsibilities until the developers provide those capabilities to Operations staff to actually use.
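Along the same lines, here is a minimal sketch of giving Operations that window into a running system, using the JDK's built-in HttpServer to expose a /health endpoint; the database and queue checks are stand-in placeholders for whatever actually matters in a given system:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch only: a health endpoint that Operations (or a monitoring tool) can poll.
// The checks below are illustrative placeholders; the point is that developers
// expose the system's internal state deliberately, rather than leaving Operations blind.
public class HealthEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        server.createContext("/health", exchange -> {
            boolean databaseUp = checkDatabase();   // placeholder check
            int queueDepth = currentQueueDepth();   // placeholder metric

            String body = "{ \"database\": " + databaseUp
                        + ", \"queueDepth\": " + queueDepth + " }";
            int status = databaseUp ? 200 : 503;    // monitoring tools key off the status code

            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(status, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });

        server.start();
        System.out.println("Health endpoint listening on http://localhost:8081/health");
    }

    private static boolean checkDatabase() { return true; } // replace with a real ping
    private static int currentQueueDepth() { return 0; }    // replace with a real gauge
}
```

A monitoring tool can then poll the status code around the clock, so that Operations learns about an outage from the system itself rather than from a user.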



Published at DZone with permission of Ted Neward, DZone MVB. See the original article here.

