Patching fast or testing vastly?

Andreas Dannert

President of ISACA Melbourne Chapter

When reading about Windows 10 and its rolling release model, I started thinking ‘how is this going to impact larger organisations?’

Having worked in IT consulting for almost 20 years, I have seen how a lot of different organisations and their IT departments manage software upgrades.

None of the organisations I observed had what I would call a sophisticated and fast-paced way of applying relevant updates to their environment. While their existing approaches might have been appropriate five to 10 years ago, I would consider them liabilities these days.

I strongly believe that organisations now need to weigh the value of testing a patch extensively against the time that testing adds before the patch can be implemented and a vulnerability in a system potentially closed.

The days when testing for several months was the most sensible way of ensuring a patch had no impact on an organisation’s existing systems and services are over. This does not mean that an organisation can do away with sociability testing. What I mean is that patch cycles need to be sped up, so that patches closing major vulnerabilities can be applied as soon as they become available.

Organisations need to review the risk of patching and breaking something versus the risk of not patching for extended periods. While there are no hard and fast rules that apply to every organisation, the ideal scenario is to patch quickly with no impact on existing services. Well-architected systems should be able to cope with rolling updates. Organisations need to start adapting their processes to software vendors’ initiatives to make patches available faster and more often.

Automating patching processes as much as possible is also vital to minimise costs and reduce lead times for applying patches. To achieve this, organisations need to meet a number of requirements. First of all, a proper enterprise architecture is essential. This helps organisations to understand how services fit together, ensures that redundant services are eliminated and positions patching and software deployment as a core service.
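As a toy illustration of the idea (not any real patch-management product), the fast-track principle can be sketched as a pipeline that deploys critical patches immediately while routine patches still go through a soak period. The class and severity labels here are my own assumptions for the sketch:

```python
from dataclasses import dataclass, field


@dataclass
class Patch:
    name: str
    severity: str  # assumed labels: "critical", "important", "moderate"


@dataclass
class PatchPipeline:
    """Toy model of an automated patch workflow: critical patches skip
    the extended soak period and are deployed straight away, while
    everything else waits for sociability testing."""
    applied: list = field(default_factory=list)
    soaking: list = field(default_factory=list)

    def submit(self, patch: Patch) -> str:
        if patch.severity == "critical":
            self.applied.append(patch.name)  # deploy immediately
            return "deployed"
        self.soaking.append(patch.name)      # hold for the normal test cycle
        return "soaking"
```

The point of the sketch is the policy split, not the mechanics: the decision of which patches bypass extended testing is made once, in code, rather than case by case in a change-advisory meeting.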

Streamlining an organisation’s services and taking out complexity while maintaining essential capabilities is important as well, and can only be achieved by having a planned architecture. The next step is to get an organisation’s IT governance right. By this, I mean everything from proper documentation of systems and services to well documented, optimised and established processes. Too often I have observed large organisations with inefficient processes. In the past this might have been acceptable, but in times when known vulnerabilities are exploited almost immediately, it is not.

While being able to roll out patches quickly is important in fixing vulnerabilities, so is the ability to roll back patches. With shortened refresh cycles and shortened testing timeframes, mistakes are bound to happen no matter how much planning occurs. Providing an automated process of undoing changes at any point in time is a valuable safety net. In addition, test automation can reduce the time needed to test a patch before it can safely be applied to production environments.
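The rollback safety net described above amounts to recording the prior state before every change so that any patch can be undone. A minimal sketch, with class and method names of my own invention:

```python
class PatchedSystem:
    """Minimal sketch of an automated rollback safety net: every
    applied change pushes the prior state onto a history stack, so
    a bad patch can be undone at any point."""

    def __init__(self, version: str):
        self.version = version
        self._history: list[str] = []  # stack of previous versions

    def apply(self, new_version: str) -> None:
        self._history.append(self.version)  # record state before changing it
        self.version = new_version

    def rollback(self) -> str:
        if not self._history:
            raise RuntimeError("nothing to roll back")
        self.version = self._history.pop()
        return self.version
```

In practice this role is played by filesystem or virtual-machine snapshots rather than a version string, but the invariant is the same: no change is applied without a recorded way back.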

Finally, one has to understand that while reducing the time it takes to roll out a critical patch is vital, it is equally important not to create vulnerabilities through patching. If a patch weakens the overall system then it might as well not be applied, or the organisation involved should evaluate the risk that each of the weaknesses presents to its systems. In these cases it is essential to have a good understanding of the threat landscape to identify which vulnerability is less likely to be exploited, given potential attackers and their preferred ways of taking advantage of organisations. Alternatively, being able to sandbox applications that have known vulnerabilities or to which fixes cannot be applied easily should be considered.
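The trade-off in that last paragraph can be made concrete with a simple scoring sketch. The function and weighting below are purely illustrative assumptions of mine, not an established formula: a vulnerability's severity and its likelihood of exploitation argue for patching, while the risk that the patch itself weakens the system argues against.

```python
def patch_priority(severity: float, exploit_likelihood: float,
                   patch_risk: float) -> float:
    """Illustrative scoring only: weigh how bad a vulnerability is and
    how likely it is to be exploited against the risk that the patch
    itself breaks or weakens the system. All inputs on a 0-1 scale."""
    return severity * exploit_likelihood - patch_risk


def rank_patches(candidates: dict[str, tuple[float, float, float]]) -> list[str]:
    """Order candidate patches from most to least urgent."""
    return sorted(candidates,
                  key=lambda name: patch_priority(*candidates[name]),
                  reverse=True)
```

A scheme like this forces the threat-landscape judgement the paragraph calls for into explicit, comparable numbers, even if the numbers themselves are rough estimates.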

Organisations that are already using virtualisation techniques might be better off than ones that are not. While sandboxing is not necessarily the answer to everything, given that we have seen too many examples of exploits breaking out of a sandbox, it does provide another layer of control.

One thing in all of this is clear to me though: Organisations need to ensure that they can apply critical patches in a timely manner, even if the process is not fully automated. A human patching manually is more likely to make a mistake than a well-run, automated process is to break something, as long as the overall architecture supports this approach. Complacency is definitely not the answer, and it will be interesting to see how more monolithic applications like SAP will fit into a landscape demanding faster-paced change, as in the case of Windows 10 and others.

This article was brought to you by Enex TestLab, content directors for CSO Australia.

