Security Concerns for Legacy Systems

An Ongoing Process

Information security is a quality attribute that can’t easily be retrofitted. Concerns such as authorisation, authentication, access and data protection need to be defined early so they can influence the solution's design.

However, many aspects of information security aren’t static. External security threats are constantly evolving, and the maintainers of a system need to stay up to date and analyse their impact. This may force change on an otherwise stable system.

Functional changes to a legacy system also need to be analysed from a security standpoint. The initial design may have taken the security requirements into consideration (a quality attribute workshop is a good way to capture these) but are they re-considered when features are added or changed? What if a sub-component is replaced or services moved to a remote location? Is the analysis re-performed?

It can be tempting to view information security as a macho battle between evil, overseas hackers (people always think they come from another country) and your own underpaid heroes, but many issues have simpler roots. Many data breaches are not hacks but basic errors - I once worked at a company where an accounting intern accidentally emailed a spreadsheet with everyone’s salary to the whole company.

Let’s have a quick look at some of the issues that a long running, line-of-business application might face:

Lack of Patching

Have you applied all the vendors’ patches? Not just to the application but to the software stack beneath it? Has the vendor applied patches to the third-party libraries they rely upon? What about the version of Java/.NET that the application is running, or the OS beneath that? When an application is initially developed it will use the latest versions, but unless a full dependency tree is recorded the required upgrades can be difficult to track. It is easy to forget these dependent upgrades even on an actively developed system.

Even if you do have a record of all components and subcomponents, there is no guarantee that, when upgraded, they will be compatible or work as before. The amount of re-testing required can be high, and this acts as a deterrent to change - yet you only need a single broken component for the entire system to be at risk.
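As a starting point, it helps to know what a running system is actually using. Here is a minimal Java sketch (illustrative only, not from the original article) that records the JVM version, OS and the versions of any loaded packages that carry manifest information - a partial inventory, not a full dependency tree:

```java
// A minimal sketch of recording what a running JVM is actually using, so pending
// upgrades can be tracked. Package.getPackages() only reports packages that have
// been loaded and whose jars carry manifest version info, so treat the output as
// a starting point rather than a complete dependency tree.
public class RuntimeInventory {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("os.name      = " + System.getProperty("os.name"));
        System.out.println("os.version   = " + System.getProperty("os.version"));

        for (Package p : Package.getPackages()) {
            String version = p.getImplementationVersion();
            if (version != null) {
                System.out.println(p.getName() + " -> " + version);
            }
        }
    }
}
```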

Passwords

Passwords are every operations team’s nightmare. Over the last 20 years, best-practice advice for generating and storing passwords has changed dramatically. Users used to be advised to think of an unusual password and not write it down. However, it turns out that ‘unusual’ is surprisingly common, with many people picking the same ‘unusual’ word. Leaked password lists from large websites have demonstrated how many users pick the same password. Therefore the advice, and the passwords modern systems will accept, have changed (often favouring multi-word passphrases). Does your legacy system enforce this, or is it filled with passwords from a brute-force list?
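By way of illustration, here is a minimal sketch of what a modern check could look like in Java: reject anything that appears in a published breached-password list and insist on length rather than forced ‘complexity’. The file name and length threshold are placeholders, not a recommendation from the original article:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

// A minimal password-policy sketch: reject short passwords and anything found in
// a known brute-force list. Assumes the list file contains one lower-case
// password per line; common-passwords.txt is a placeholder name.
public class PasswordPolicy {

    private final Set<String> breached;

    public PasswordPolicy(Set<String> breached) {
        this.breached = breached;
    }

    public boolean isAcceptable(String candidate) {
        if (candidate == null || candidate.length() < 12) {
            return false;                                    // too short, even as a passphrase
        }
        return !breached.contains(candidate.toLowerCase());  // known brute-force entry
    }

    public static void main(String[] args) throws IOException {
        Set<String> list = new HashSet<>(Files.readAllLines(Paths.get("common-passwords.txt")));
        PasswordPolicy policy = new PasswordPolicy(list);
        System.out.println(policy.isAcceptable("Password123"));                   // rejected if listed
        System.out.println(policy.isAcceptable("correct horse battery staple"));  // long passphrase
    }
}
```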

Passwords also tend to get shared over time. What happens when someone goes on holiday, a weekly report needs to be run, but the template exists only within that user’s account? Often they are phoned up and asked for their password. This may indicate a feature flaw in the product, but it is very common. There are many ways to improve this, from regular password rotation to two-factor authentication, but they increase the burden on the operations team.

Does your organisation have an employee leaver’s process? Do you suspend account access? If you have shared accounts (“everyone knows the admin password”) this may be difficult or disruptive. Having a simple list (or preferably an automated script) to execute for each employee that leaves is important.
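As one hedged example of what a step in such a script could look like, here is a Java sketch that disables a directory account over LDAP. The host name, environment variables and user DN are placeholders; the userAccountControl value 514 is the Active Directory flag combination for a disabled, normal account - adapt it to whatever directory you actually run:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.ModificationItem;

// A minimal sketch of one step of an automated leaver's script: disabling a
// directory account over LDAP. Connection details and the DN are placeholders.
public class DisableLeaverAccount {
    public static void main(String[] args) throws NamingException {
        String userDn = args[0]; // e.g. "CN=J Smith,OU=Staff,DC=example,DC=com" (placeholder)

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://directory.example.com:636"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, System.getenv("LDAP_ADMIN_DN"));
        env.put(Context.SECURITY_CREDENTIALS, System.getenv("LDAP_ADMIN_PASSWORD"));

        DirContext ctx = new InitialDirContext(env);
        try {
            // 514 = NORMAL_ACCOUNT (0x0200) + ACCOUNTDISABLE (0x0002) in Active Directory
            ModificationItem[] mods = {
                new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                        new BasicAttribute("userAccountControl", "514"))
            };
            ctx.modifyAttributes(userDn, mods);
            System.out.println("Disabled account: " + userDn);
        } finally {
            ctx.close();
        }
    }
}
```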

There are similar problems with cryptographic keys. Are they long enough to comply with the latest advice? Do they use a best practice algorithm or one with a known issue? It is amazing how many websites use old certificates that should be replaced or have even expired. How secure is your storage of these keys?
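Checking the basics doesn’t have to be manual. Here is a minimal Java sketch (the file name and 2048-bit threshold are illustrative assumptions) that reports a stored certificate’s expiry, signature algorithm and RSA key size:

```java
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.security.interfaces.RSAPublicKey;

// A minimal sketch of checking a stored certificate: is it still valid, and is
// the key long enough? The file name is a placeholder; the 2048-bit threshold
// reflects common current guidance for RSA keys.
public class CertificateCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("server-cert.pem")) {
            X509Certificate cert = (X509Certificate) factory.generateCertificate(in);

            System.out.println("Subject:   " + cert.getSubjectX500Principal());
            System.out.println("Not after: " + cert.getNotAfter());
            System.out.println("Algorithm: " + cert.getSigAlgName());

            cert.checkValidity(); // throws CertificateExpiredException if out of date

            if (cert.getPublicKey() instanceof RSAPublicKey) {
                int bits = ((RSAPublicKey) cert.getPublicKey()).getModulus().bitLength();
                System.out.println("RSA key size: " + bits + (bits < 2048 ? " (too small)" : ""));
            }
        }
    }
}
```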

Are any of your passwords or keys embedded in system files? This may have seemed safe when the entire system was on a single machine in a secure location but if the system has been restructured this may no longer be the case. For example, if some of the files have been moved to a shared or remote location, it may be possible for a non-authorised party to scan them.
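A crude scan can at least tell you where to look. The following Java sketch walks a directory tree and flags lines that look like embedded credentials; the regular expression is deliberately simple and illustrative - real secret scanners use much larger rule sets:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// A minimal sketch that walks a directory tree looking for likely embedded
// credentials in configuration and script files.
public class SecretScan {
    private static final Pattern SUSPECT =
            Pattern.compile("(?i)(password|passwd|secret|api[_-]?key)\\s*[=:]\\s*\\S+");

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(SecretScan::scan);
        }
    }

    private static void scan(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            lines.filter(line -> SUSPECT.matcher(line).find())
                 .forEach(line -> System.out.println(file + ": " + line.trim()));
        } catch (IOException | UncheckedIOException e) {
            // binary or unreadable file - skip it
        }
    }
}
```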

Moving from Closed to Open Networks

A legacy system might have used a private, closed network for reasons of speed and reliability, but it may now be possible to meet those quality attributes on an open network and vastly reduce costs. However, if you move services from a closed network to an open one, you have to reconsider the use of encryption on the connection. Protection against eavesdropping and network sniffing was a fortunate side-effect of the network being private, so the requirement may never have been captured - it was a given. This is dangerous if the original requirements are used to drive the restructuring. These implicit quality attributes are important, and you should consider whether a change creates new, now-explicit quality attributes. You might find these cost-saving changes dropped on you by an excited accountant (who thinks their brilliance has just halved communications charges) with little warning!
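The mechanical change can be small even if the analysis isn’t. As a minimal sketch (host name and port are placeholders), a plain socket connection can be replaced with a TLS one using the standard Java library - provided the server presents a certificate the client can validate:

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// A minimal sketch of adding transport encryption to a connection that used to
// rely on a private network. The host and port are placeholders; the server
// also needs a certificate that this client trusts.
public class TlsClient {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket =
                 (SSLSocket) factory.createSocket("legacy-service.example.com", 8443)) {
            socket.startHandshake(); // fails if the certificate chain cannot be validated
            OutputStream out = socket.getOutputStream();
            out.write("PING\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}
```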

Moving to an open network will make services reachable by unknown clients. This raises issues ranging from denial-of-service attacks to malicious clients attempting to compromise the system with bad messages (such as SQL injection). There are various techniques that can be applied at the network level to help here (VPNs, blocking unknown IPs, deep packet inspection, etc.) but ultimately the code running in the services needs to be security-aware - and this is very, very hard to retrofit onto an entire system after it is written.
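For the SQL-injection case specifically, the standard defence is to bind untrusted input as a parameter rather than concatenating it into the query string. A minimal Java/JDBC sketch (the JDBC URL, credentials and table names are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// A minimal sketch of handling untrusted input with a parameterised query rather
// than string concatenation, the classic SQL-injection mistake.
public class CustomerLookup {
    public static void main(String[] args) throws Exception {
        String untrustedInput = args[0]; // e.g. "smith' OR '1'='1"

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db.example.com/app",   // placeholder URL
                "app_user", System.getenv("DB_PASSWORD"))) {

            // Never: "SELECT ... WHERE surname = '" + untrustedInput + "'"
            try (PreparedStatement stmt = conn.prepareStatement(
                    "SELECT id, surname FROM customers WHERE surname = ?")) {
                stmt.setString(1, untrustedInput); // bound as a value, not interpreted as SQL
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("surname"));
                    }
                }
            }
        }
    }
}
```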

Migrating to an SOA or microservice architecture magnifies these effects, as a much larger number of connections and endpoints now needs to be secured. A well-modularised system may be easy to distribute, but intra-process communication is much easier to secure than inter-process or inter-machine communication.

Modernising Data Formats

Migrating from a closed, binary data format to an open one (e.g. XML) for messaging or storage makes navigating the data easier - but this applies to casual scanning by an attacker as well. Relying on security by obscurity isn’t a good idea (and this is not an excuse to avoid improving the readability of data), but many systems do. When improving data formats you should re-consider where the data is being stored, who has access and whether encryption is required.
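If encryption is the answer, the Java platform already provides it. A minimal sketch of encrypting a readable payload with AES-GCM before storage - noting that key management (where the key lives and who can read it) is the hard part and is not shown here:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// A minimal sketch of encrypting a readable (e.g. XML) payload before it is
// stored somewhere open to casual scanning. Key management is deliberately omitted.
public class PayloadEncryption {
    public static void main(String[] args) throws Exception {
        String payload = "<customer><name>J Smith</name></customer>";

        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];                 // 96-bit nonce, as recommended for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));

        System.out.println("Encrypted " + ciphertext.length + " bytes");
    }
}
```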

Similar concerns should be addressed when making source code open source. Badly written code is now available for inspection and attack vectors can be examined. In particular, you should be careful to avoid leaking configuration into the source code if you intend to make it open.
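One common way to keep configuration out of soon-to-be-public source is to pull it from the environment (or a vault) at runtime and fail fast if it is missing. A minimal Java sketch; the variable names are placeholders:

```java
// A minimal sketch of externalised configuration: secrets come from the
// environment at runtime rather than living as constants in the source tree.
public class ExternalisedConfig {
    public static void main(String[] args) {
        String dbUrl = requireEnv("APP_DB_URL");
        String dbPassword = requireEnv("APP_DB_PASSWORD");

        // Before open-sourcing, the equivalent values were often hard-coded here -
        // exactly the kind of leak a secret scan (see the earlier sketch) should catch.
        System.out.println("Connecting to " + dbUrl);
    }

    private static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```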

New Development and Copied Data

If new features are developed for a system that has been static for a while, it is likely that new developer, test, QA and pre-production environments will be created (the originals will either be out of date or will not have been kept, due to cost). The quickest and most accurate way to create test environments is to clone production. This works well, but the copied data is just as important as the original. Do you treat this copied data with the same security measures as production? If it contains proprietary or confidential customer information then you should. Note that the definition of ‘confidential’ varies, but you might be surprised at how broadly some regulators define it. You may also be restricted in the information that you can move out of the country - is your development or QA team located overseas?
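A common compromise is to mask the sensitive fields as production data is copied, so the test copy keeps its shape and volume but loses the confidential values. A minimal illustrative sketch (field names and formats are placeholders):

```java
import java.util.UUID;

// A minimal sketch of masking confidential fields while cloning production data
// into a test environment: the copy stays realistic but the sensitive values go.
public class DataMasking {

    static String maskEmail(String email) {
        return "user-" + UUID.randomUUID().toString().substring(0, 8) + "@example.com";
    }

    static String maskCardNumber(String pan) {
        // keep only the last four digits so the data still "looks right" in the UI
        String lastFour = pan.substring(pan.length() - 4);
        return "**** **** **** " + lastFour;
    }

    public static void main(String[] args) {
        System.out.println(maskEmail("jane.smith@bigcorp.example"));
        System.out.println(maskCardNumber("4929123456781234"));
    }
}
```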

Remember, you are not just restricting access to your system but your data as well.

Server Consolidation

Systems that pushed the boundaries of computing power 15 years ago can now run on a cheap commodity server. Many organisations consolidate their systems on a regular basis, replacing multiple old servers with a single powerful one, and an organisation may have been through this process many times. If so, how was it done, and has it increased the visibility of these processes and services to others? If done correctly, with virtualisation tools, the virtual machines should still be isolated, but this is worth checking. A more subtle problem is caused by the removal of the infrastructure between services: there may no longer be routers or firewalls between them (or the virtual equivalents may have a different setup) as they now sit on the same physical device. This means that a vulnerable, insecure server is less restricted - and therefore a more dangerous staging point if compromised.

A server consolidation exercise should instead be used as an opportunity to increase the security and isolation of services, as virtual firewalls are easy to create and monitoring can be improved.

Improved Infrastructure Processes

Modifications to support processes can create security holes. For example, consider the daily backup of an application’s data. The architect of a legacy system may have originally expected backups to be placed onto magnetic tapes and stored in a fire-safe near to the server itself (with periodic backups taken securely offsite).

A more modern process would use offsite, real-time replication. Many legacy systems have had their backup-to-tape process replaced with a backup-to-SAN that is replicated offsite. This is simple to implement, faster, more reliable and allows quicker restoration. However, who now has access to these backups? When a tape was placed in a fire-safe, the only people with access to the copied data were those with physical access to the safe. Now it can be accessed by anyone with read permission in any location the data is copied to. Is this the same group of people as before? It is likely to be a much larger group (spread over a wide physical area) and could include people with borrowed passwords or people who have left the organisation.

Any modifications to the backup processes need to be analysed from an information security perspective. This is not just for the initial backup location but anywhere else the data is copied to.
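Reviewing who can actually read a backup location is a good, concrete first step. A minimal Java sketch (the path is a placeholder, and ACL support depends on the underlying file system) that lists the access entries on a backup directory:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.AclEntry;
import java.nio.file.attribute.AclFileAttributeView;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

// A minimal sketch of reviewing who can read a backup location after the process
// has moved from tapes in a safe to a replicated share.
public class BackupPermissionsAudit {
    public static void main(String[] args) throws Exception {
        Path backups = Paths.get(args.length > 0 ? args[0] : "/mnt/backups"); // placeholder path

        AclFileAttributeView aclView =
                Files.getFileAttributeView(backups, AclFileAttributeView.class);
        if (aclView != null) {
            for (AclEntry entry : aclView.getAcl()) {
                System.out.println(entry.principal() + " -> " + entry.permissions());
            }
        } else {
            // fall back to POSIX permissions where ACLs are not available
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(backups);
            System.out.println(backups + " " + perms);
        }
    }
}
```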

Conclusion

Information security is an ongoing process with multiple drivers, both internal and external to your system. The actions required will vary greatly between systems and depend on the system’s architecture, its business function and the environment it exists within. Any of these can change and affect its security. Architectural thinking and awareness are central to managing this, and a good place to start is a diagram and a risk-storming session (with a taxonomy).

Published at DZone with permission of Robert Annett, DZone MVB.
