For a developer, the possibility of embarking upon a "green field" project is both a blessing and a curse. The blessing, of course, is that before you lies a blank canvas and a chance to build the perfect solution. You have a chance to avoid all of the mistakes that you've made before. The curse is not as evident. The curse is that you now have the opportunity to make all new mistakes.
I think that the term "green field" is a little misleading. While you may, in fact, have a green field, there exists the possibility that an ancient Indian burial ground or abandoned chemical dumping site lurks inches beneath the surface. Like our questionable field, very few systems exist in a vacuum. The reality is that, except for the simplest of systems, there are other existing components that your new system depends on or will need to communicate with. Sometimes these other components are, to put it politely, questionable.
As an architect, my first inclination when faced with this dilemma has always been to lobby to replace the offending component. Unfortunately, this isn't always possible. Sometimes the reason is time. Sometimes the reason is knowledge. Sometimes the reason is much more sinister; sometimes the reason is political.
The sad truth is that corporate politics all too often override common sense and logic. Most of us, unfortunately, have had to learn this lesson the hard way. If politics, especially at a level above you in the organizational chart, are a factor, the reality is that whatever argument you make, no matter how cogent and well thought out, will fall upon deaf ears. While your first inclination may be to actively defend your pristine solution, it's also important to recognize that pushing the issue too hard may result in your unceremonious termination. It happens. I've seen it before. While you feel a responsibility for maintaining the purity of your as-yet-unimplemented solution, you have to accept the fact that, as Scrum pioneer Ken Schwaber so eloquently put it, "a dead sheepdog is a useless sheepdog."
At this point, you have a decision to make. You're at a fork in the road. On one hand, you can choose to "fight the good fight" and potentially put your future on the line; on the other, you can concede and accept the questionable component as is. If you pick the first option, I bid you godspeed in your arguably futile endeavor. If you feel that the battle is more important than retaining your current job, you are either not being honest with yourself or you are unhappy with your job and should consider a change regardless. However, if you choose the second option, you now have yet another dilemma. How can you design your system to limit the impact of the offending component? Enter the Political Isolation Pattern.
While most software development patterns are driven by the desire to build quality software, this pattern is unique in that it is driven by political necessity. When faced with this situation, your best option is to quarantine the offending component like a diseased rhesus monkey and interact with it through an anti-corruption layer that you define. By creating this layer between your system and the offending component, you are both maintaining the integrity of your system and creating an interface to a system that, ideally, will be replaced at some point in the future. When building this layer, you should first ask yourself one simple question: in an ideal world, what would the interface for this component look like? The layer that you construct should take that interface and map it to the offending component. By doing this, you are effectively drawing a line in the sand and insulating your system against the potential risks of the other system. This layer also symbolizes a hope that the component will be replaced at some point in the future. If and when it is replaced, then, since the "ideal" interface has already been defined, the impact on the rest of your system should theoretically be relatively minimal.
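To make the idea concrete, here is a minimal sketch in Python. The `LegacyTaxService` stands in for a hypothetical offending component with an awkward API; the names and behavior are purely illustrative, not taken from any real system. The sketch shows the two steps described above: define the interface you wish the component had, then write a layer that maps that interface onto the component.

```python
from abc import ABC, abstractmethod

# Stand-in for the offending component: cents-based amounts,
# cryptic names, two-letter state codes. (Hypothetical, for illustration.)
class LegacyTaxService:
    def CALC_TX(self, amt_cents, st_code):
        rates = {"CA": 725, "NY": 400}  # tax rates in basis points
        return amt_cents * rates.get(st_code, 0) // 10000

# Step 1: the interface this component would have in an ideal world.
class TaxCalculator(ABC):
    @abstractmethod
    def tax_for(self, amount_dollars: float, state: str) -> float:
        ...

# Step 2: the isolation layer maps the ideal interface onto the
# offending component. It is the only code that touches LegacyTaxService,
# so the rest of the system never sees the legacy API.
class LegacyTaxAdapter(TaxCalculator):
    def __init__(self, legacy: LegacyTaxService):
        self._legacy = legacy

    def tax_for(self, amount_dollars: float, state: str) -> float:
        cents = int(round(amount_dollars * 100))
        return self._legacy.CALC_TX(cents, state.upper()) / 100.0

# The rest of the system depends only on the ideal interface.
calc: TaxCalculator = LegacyTaxAdapter(LegacyTaxService())
```

If the legacy service is eventually replaced, only `LegacyTaxAdapter` changes; every caller of `TaxCalculator` is untouched.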
There are two core tenets that define the Political Isolation Pattern.
The first tenet is isolation. The layer that you define should completely insulate your system from the offending component. Partially insulating your system from the component still leaves a surface that can potentially cause contamination. Beyond the risk of contamination, partial isolation also makes it more difficult, and therefore less likely, for the component to be replaced in the future. Cancer is much easier to excise before it spreads to other parts of the body.
The second tenet is accountability. Just because you've conceded to using a subpar component does not mean that you can't make it glaringly obvious that the component may not be ideal. Is it a passive-aggressive approach? Maybe. But cold, hard data will be a better justification for replacing the component than one person's unsolicited opinion. From a software development perspective, the best way to enforce accountability is to include comprehensive logging and performance monitoring in the isolation layer. Take care to make sure that this monitoring is as close to the offending component as possible. It should be very clear that your monitoring is focused on the offending component, and that the isolation layer itself is not inadvertently skewing the metrics it is attempting to measure. By adhering to this rule, you can preempt any argument that the layer that you have created is causing problems that you are falsely attributing to the offending component.
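One way to keep the monitoring tight against the component is a timing wrapper inside the isolation layer. The sketch below, again with a hypothetical `LegacyInventory` component, puts the instrumented boundary immediately around the legacy call, while the adapter's own translation work (trimming and upper-casing the SKU) happens outside the measured region, so the collected numbers describe the offending component rather than your layer.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("isolation.legacy")

# Stand-in for the offending component. (Hypothetical, for illustration.)
class LegacyInventory:
    def lookup(self, sku):
        return {"sku": sku, "qty": 7}

# Timing/logging decorator: records duration and failures of
# whatever it wraps, and nothing else.
def monitored(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.exception("legacy call %s failed", fn.__name__)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("legacy call %s took %.2f ms", fn.__name__, elapsed_ms)
    return wrapper

class InventoryAdapter:
    def __init__(self, legacy: LegacyInventory):
        self._legacy = legacy

    def quantity_on_hand(self, sku: str) -> int:
        # The adapter's cheap translation work runs *outside* the
        # monitored region, so it never pollutes the metrics.
        raw = self._monitored_lookup(sku.strip().upper())
        return raw["qty"]

    @monitored
    def _monitored_lookup(self, sku):
        # Only the legacy call itself is measured.
        return self._legacy.lookup(sku)

adapter = InventoryAdapter(LegacyInventory())
```

Every slow response or exception now shows up in the logs attributed, by name, to the legacy call, which is exactly the cold, hard data that justifies an eventual replacement.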
Politics are, unfortunately, an inescapable reality. When it comes to software development, however, you have the opportunity to be smart about how you deal with politically motivated design decisions and effectively "control the bleeding." This simple pattern lets you do so tactfully, and the data it yields can, in and of itself, act as motivation for future design decisions.