The Great Patching Debate

For as long as I can remember, there has been a vicious debate about balancing the speed of deploying security patches against the patience needed to test them properly. That tension grew out of a number of historical reasons.

Firstly, there were instances where patches caused critical servers and applications to fail. Vendors had rushed out their fixes and perhaps not tested them properly. Or, in some cases, they had no idea how other people's applications would react to them, and even now that fact remains unchanged.

Secondly, a number of best practices (or 'better' practices, such as COBIT and ITIL) call for tightly controlling the operating environment, ensuring changes are appropriately tracked, tested and formally signed off prior to release into production.

Thirdly, back in the day, malware exploiting a new vulnerability appeared on highly variable schedules, anywhere from days to weeks after an advisory, because the underground markets and practices were immature and nowhere near as monetised. Botnets were relatively new, and back then the sheer size of the botnet was what mattered.

Fast forward to today. Microsoft has redefined itself, setting the bar by writing its Security Development Lifecycle (SDL) from scratch, based on trial and error (a lot of error). Its patches go through a rigorous testing process, and with each new generation of its operating systems it raises the bar further on capability and security. Apple is on a similar journey, playing catch-up.

The threat environment has also changed. We've gone from malware being released a week after an advisory down to 30 minutes or less! Botnets are now adaptive and use highly sophisticated, encrypted P2P command-and-control networks. Underground markets are highly mature, with brokers and incentive programs, all reputation based. Our threat landscape is getting significantly worse, not better.

And yet, we are still arguing over patches. How many businesses out there are still not patching at all? How many run for months or years without patches? And as for mitigating controls, how many still run entirely flat networks, begging for another outbreak?

Simply put, we need to be more intelligent about how we manage our patches. If you haven't identified the most vital services within your organisation, you are in trouble: those assets must be identified and your network appropriately segregated. If you haven't figured out a way to deploy your patches automatically, or at the very least to stagger them in short order, you are in trouble again; a rough sketch of what staggering can look like follows below. Desktops and notebooks are the first and most likely points of infection, and yet they are some of the easiest assets to patch. If you are in an environment where BYOD is the norm, then those devices must be segregated from the rest of the corporate network, with strict controls to ensure a minimum level of compliance and a clear picture of which services they should be able to access.
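To make 'stagger them in short order' a little more concrete, here is a minimal Python sketch of a ring-based rollout, assuming you already have host groups and some existing patch tooling to call into. The ring definitions, the `deploy_patch` and `health_check` helpers, the soak time and the patch identifier are all hypothetical placeholders, not any particular vendor's API.

```python
import time

# Hypothetical rollout rings, ordered from lowest to highest risk.
# In practice these would come from your CMDB or asset inventory.
RINGS = {
    "canary":   ["test-desktop-01", "test-desktop-02"],
    "desktops": ["desk-0101", "desk-0102", "desk-0103"],
    "servers":  ["app-01", "app-02", "db-01"],
}

# Illustrative soak time between rings; real values would be hours or days.
SOAK_SECONDS = 5


def deploy_patch(host: str, patch_id: str) -> None:
    """Placeholder: invoke your patch tooling (WSUS, SCCM, Ansible, etc.) here."""
    print(f"deploying {patch_id} to {host}")


def health_check(host: str) -> bool:
    """Placeholder: confirm the host's critical services still respond after patching."""
    print(f"checking {host}")
    return True


def staged_rollout(patch_id: str) -> None:
    for ring_name, hosts in RINGS.items():
        print(f"--- ring: {ring_name} ---")
        for host in hosts:
            deploy_patch(host, patch_id)

        failed = [h for h in hosts if not health_check(h)]
        if failed:
            # Stop widening the rollout; the damage stays contained to this ring.
            raise RuntimeError(f"halting rollout, failed hosts in {ring_name}: {failed}")

        # Let the ring soak so problems surface on low-risk assets
        # before the rollout reaches critical servers.
        time.sleep(SOAK_SECONDS)


if __name__ == "__main__":
    staged_rollout("PATCH-PLACEHOLDER")
```

The point of the design is simply that low-risk assets (test machines, standard desktops) absorb a bad patch first, and the rollout halts before it ever reaches the critical servers.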

It may not be possible to deploy patches automatically across your entire environment, but accepting the status quo should not be an option. Challenge existing paradigms and ask tough questions. When someone tells you it can't be done, ask why. Understand the reasons, determine whether they hold any validity and look for ways to address them. There are always ways to reduce the patch cycle in any organisation, irrespective of its size or complexity. Finally, never accept that 'best practice' means 'only practice'. Standards are a guideline, not law, and they were never intended to replace people using their own heads to come up with something better.
