Maintainable Systems: More on Upgrades
A little while ago I wrote a piece about maintainable systems and upgrades. My own upgrade project has been progressing slowly, so I'm going to share a few more thoughts.
A successful system might (hopefully) be around for many years, and it's highly likely that such a system will need upgrading periodically. Users' requirements might change over time, different users might want to use the system, external interfaces may change and, of course, software salespeople will want a reason to charge upgrade fees (I'm such a cynic).
If you intend your system to be successful then your design should allow for upgrades. I'm currently involved in an upgrade project and it's painful. I also have to admit that when designing systems from scratch I've made many of the same mistakes and probably made users' lives difficult. So here are some of my recent problems that I should remember when I'm back on the other side.
It's not a clean install, so there is going to be data. Can you migrate all the data in the system? This seems obvious, but be careful with numeric precision, text in other languages, and empty or null fields. Consider what could be there, not just what you think should be there. Don't be surprised if a free text field, where you were expecting a number, contains the word “five” - or, in my case, “3M” for 3,000,000.
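A minimal sketch of what defensive parsing for such a field might look like. The function name, the word list and the suffix table are all invented for illustration - the point is to return a sentinel for anything that can't be migrated automatically, so those rows can be queued for manual review rather than silently corrupted.

```python
# Hypothetical sketch: defensively parse legacy "numeric" fields that may
# contain free text such as "3M" or "five" instead of plain numbers.
SUFFIXES = {"k": 1_000, "m": 1_000_000}
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_legacy_number(raw):
    """Return a number, or None if the value needs manual review."""
    if raw is None:
        return None
    text = raw.strip().replace(",", "").lower()
    if not text:
        return None                      # empty field, treat like null
    if text in WORDS:
        return WORDS[text]               # e.g. "five" -> 5
    if text[-1] in SUFFIXES and text[:-1]:
        try:
            return float(text[:-1]) * SUFFIXES[text[-1]]  # "3M" -> 3000000.0
        except ValueError:
            return None
    try:
        return float(text)
    except ValueError:
        return None                      # flag for manual review

print(parse_legacy_number("3M"))    # 3000000.0
print(parse_legacy_number("five"))  # 5
```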
How long will the upgrade take to perform - in particular, per data item? Your users may have been running the system for years and have a lot of data. If they are using it in a way that differs from your expectations, they may have a large number of items where you expected a few. In particular, make sure there are no manual steps per data item, e.g. having to edit something in a GUI. If each item takes 5 minutes but I have 5,000 items, the upgrade would take weeks.
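A quick back-of-the-envelope check on those figures (5 minutes per item, 5,000 items), assuming a 40-hour working week:

```python
# Back-of-the-envelope cost of a per-item manual upgrade step.
items = 5_000
minutes_per_item = 5
total_hours = items * minutes_per_item / 60
working_weeks = total_hours / 40        # assuming a 40-hour week
print(f"{total_hours:.0f} hours ~= {working_weeks:.1f} working weeks")
# prints "417 hours ~= 10.4 working weeks"
```

Over ten working weeks of pure clicking - which is why "no manual steps per data item" needs to be a hard rule, not a preference.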
Have you changed an implicit API? Common examples include log files and database schemas. I like to use a log monitoring tool to find exceptions, but it is also common to trigger external processes when a log line indicates that a certain point has been reached. Your end users may also have written SQL scripts to interrogate your back-end database and produce reports. Don't blame your users/customers for doing this, as it probably indicates a gap in your product!
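One way to avoid breaking those customer-written SQL scripts is a compatibility view that presents the old schema on top of the new one. A minimal sketch using SQLite - the table and column names (`sales_order`, `order_total`, the old `orders` table) are invented for illustration:

```python
import sqlite3

# Hypothetical sketch: preserving an implicit database API after a schema change.
db = sqlite3.connect(":memory:")

# New schema: the old "orders" table was renamed and its "total" column changed.
db.execute("CREATE TABLE sales_order (id INTEGER PRIMARY KEY, order_total REAL)")
db.execute("INSERT INTO sales_order (order_total) VALUES (19.99)")

# Customers' reporting scripts still query the old table and column names,
# so expose a compatibility view instead of silently breaking them.
db.execute("CREATE VIEW orders AS SELECT id, order_total AS total FROM sales_order")

# An old customer query keeps working unchanged.
print(db.execute("SELECT total FROM orders").fetchone()[0])  # 19.99
```

The same idea applies to log files: if an external process greps for a particular line, keep emitting that line (or document its removal loudly) when you rework your logging.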
Be careful when you 'improve' a GUI. It may be much better, but the end users will now need to be re-trained. Like the data migration problem, this can be a problem of scale. Telling a system administrator where to enter new user data is not hard, but if it's the interface to a Point of Sale terminal used by 20,000 shop assistants then the costs are high. Having said this, I do believe that if you make a change then you should stick to it – leaving old, deprecated screens alongside the new ones leads to a support and training nightmare. The current product I'm upgrading has four different GUI screens for one type of data element because different customers have either demanded a new screen or refused to stop using an old one. They all work differently and have different bugs in...
Does a change in the software imply a change in the business process? Your users/customers may have adapted the way they work around your software, so any improvements you make could force other changes. This could meet with resistance.
Lastly, please consider the final stage of the upgrade process – testing! You've probably (hopefully) performed enough tests to convince yourself the system works the way you expect, but your customers may have different expectations, and they will almost certainly want to perform their own tests. My current project includes a third party finance application. What I want to do is produce reports for the same point in time in the old and the new systems, then compare the totals and individual line items between the reports. If I can't generate comparable reports then I'd have to do the comparison by checking individual lines manually, which gives a much lower level of coverage and risks missing the edge cases. And if I can't compare at a fine-grained level, I can't track down where any problems are occurring.
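The comparison itself can be mechanised. A minimal sketch, modelling each report as a dict of line item to amount (the item names and the tolerance are invented for illustration):

```python
# Hypothetical sketch: compare old-system and new-system reports for the
# same point in time, at line-item granularity.
def diff_reports(old, new, tolerance=0.01):
    """Return per-line discrepancies so problems can be tracked to their source."""
    problems = []
    for key in sorted(set(old) | set(new)):
        a, b = old.get(key), new.get(key)
        if a is None or b is None:
            problems.append((key, a, b, "missing line"))
        elif abs(a - b) > tolerance:
            problems.append((key, a, b, "amount differs"))
    return problems

old_report = {"rent": 1200.00, "fees": 80.00, "interest": 12.34}
new_report = {"rent": 1200.00, "fees": 85.00}

for line in diff_reports(old_report, new_report):
    print(line)
```

Running the diff per line item, rather than only comparing grand totals, is exactly what makes it possible to see *where* a migration went wrong instead of just *that* it did.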
My summary is below but I'd love you to send me some additions.
- Minimise manual upgrade actions
- Talk to your users/customers
  - How do they actually use the product?
  - Can you get a copy of their data to do some test runs?
- The customer is always right. Any deviation from your expectations is free market research.
- Make it easy for your users/customers to do their own testing