Firstly, this isn’t a blog about Heartbleed, POODLE, celebrity photo leaks or the like. This is about something else we saw recently, something that we think provides important lessons for those involved in security monitoring, protective monitoring (PM) and incident response.
We are talking about the reports that ‘Russian hackers used Windows bug to target Nato’.
Let’s ignore the politics and look at the lessons. (We will assume the reports are accurate for now and focus on the methods used, not the actual bugs.)
The hackers exploited a bug in Windows and used it to spy on computers. They used the same bug to do the same thing to third-party suppliers to NATO.
Lesson 1 – they targeted the third-party suppliers because they knew that was a way in. Are you monitoring your supply chain well enough? Are you monitoring every connection to and from it?
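To make that question concrete, here is a minimal sketch of flagging traffic to and from supplier networks. The netblocks and the connection-log format are hypothetical; in practice you would feed this from a firewall or NetFlow export.

```python
import ipaddress

# Hypothetical netblocks belonging to third-party suppliers.
SUPPLIER_NETS = [ipaddress.ip_network(n) for n in ("198.51.100.0/24", "203.0.113.0/24")]

def is_supplier(addr: str) -> bool:
    """Return True if addr falls inside a known supplier netblock."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in SUPPLIER_NETS)

def supplier_connections(conn_log):
    """Yield records where either endpoint is a supplier.

    conn_log is an iterable of (src_ip, dst_ip, dst_port) tuples,
    e.g. exported from a firewall or NetFlow collector.
    """
    for src, dst, port in conn_log:
        if is_supplier(src) or is_supplier(dst):
            yield (src, dst, port)

log = [
    ("10.0.0.5", "198.51.100.20", 443),   # outbound to a supplier
    ("10.0.0.5", "8.8.8.8", 53),          # unrelated traffic
    ("203.0.113.9", "10.0.0.7", 3389),    # inbound from a supplier
]
flagged = list(supplier_connections(log))
```

Even a simple filter like this gives you a dedicated view of supplier traffic that an analyst can review, rather than letting it blend into the general noise.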
The hackers had spent five years trying to get in, but they only achieved success when they found a zero-day bug in Windows (August 2014).
Lesson 2 – the hackers knew about the exploit before Microsoft did, or at least before a patch was ready (or before IT admins could apply it). Relying on patching alone is just not the answer. (You may wish to read a previous blog, Homogenisation in the cloud, which discusses what would happen to the public cloud in this scenario.)
Lesson 3 – they spent five years having a go! Do you have the ability to store the last five years of data and to analyse it for patterns and the other tell-tale signs of a long, slow campaign?
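The kind of analysis that matters here is not spotting a burst of activity but spotting a source that keeps coming back over months or years. A minimal sketch, assuming you hold failed-login events as (ISO timestamp, source) pairs; the thresholds are illustrative, not recommendations:

```python
from collections import defaultdict
from datetime import datetime

def slow_campaign_sources(events, min_span_days=365, min_attempts=10):
    """Flag sources whose failed attempts span a long period.

    events: iterable of (timestamp_iso, source) pairs, e.g. failed
    logins. A source that returns over months or years, even at a
    very low rate, is a candidate for a long, slow campaign that a
    short retention window would never reveal.
    """
    seen = defaultdict(list)
    for ts, src in events:
        seen[src].append(datetime.fromisoformat(ts))
    flagged = []
    for src, times in seen.items():
        span_days = (max(times) - min(times)).days
        if span_days >= min_span_days and len(times) >= min_attempts:
            flagged.append(src)
    return flagged

# Twelve attempts from one source spread over more than two years...
patient = [(f"20{10 + i // 4}-0{1 + i % 4}-01T00:00:00", "203.0.113.50")
           for i in range(12)]
# ...versus a short, noisy burst from another.
burst = [("2014-05-01T09:00:00", "192.0.2.1"),
         ("2014-05-01T09:05:00", "192.0.2.1")]
flagged = slow_campaign_sources(patient + burst)
```

The point is that this query is only possible if the five years of data still exist and are searchable; the detection logic itself is trivial.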
Lesson 4 – and this may seem to conflict with Lesson 2 – patch, patch, patch. If you can’t patch quickly due to software complexities, then design your PM system and security controls to accommodate that fact. Consider what else you could do: is IPS an option in some cases? Are there other PM countermeasures you can deploy, or policies you can change?
It is likely that the attack was targeted and that a document purporting to be about European diplomacy carried a malicious payload.
Lesson 5 – are your users aware of such threats? Do they know that such documents could be threats? Do you provide advice to your users?
Lesson 6 – does your threat model include this type of attack, and are your PM measures in place to detect or prevent it? Are your threat models advanced enough to consider an attack like this? This one may be tricky: how could your PM system know, and is this more of a policy problem or a bigger one? How do you control incoming and outgoing documents and files? Do you offer a method to identify or verify the senders of documents and emails? Can you scan such files for malware before they reach the user?
Lesson 7 – can your anti-malware defences detect malware in files, or the likely presence of malware, and quarantine them? Does your PM system alert you when it suspects such a thing, and can it do this at a central control point such as the email/Internet gateway?
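As an illustration of the gateway idea, here is a sketch of an attachment check that quarantines suspicious files and raises an alert for the monitoring team. The heuristics (risky extensions, double extensions, a macro marker) are deliberately simple examples, not a substitute for a real anti-malware engine.

```python
# Illustrative heuristics only.
RISKY_EXTENSIONS = (".exe", ".scr", ".js", ".vbs")
MACRO_MARKER = b"vbaProject"  # appears in macro-enabled Office files

def assess_attachment(filename: str, content: bytes) -> str:
    """Return 'quarantine' or 'deliver' for an incoming attachment."""
    name = filename.lower()
    if name.endswith(RISKY_EXTENSIONS):
        return "quarantine"
    if name.count(".") > 1 and name.endswith((".pdf", ".docx")):
        # Double extension, e.g. something.exe.pdf: worth a closer look.
        return "quarantine"
    if MACRO_MARKER in content:
        return "quarantine"
    return "deliver"

def gateway_filter(attachments, alert):
    """Filter (filename, content) pairs; call alert() on each quarantine.

    The alert callback is where the PM system gets told, so the
    analyst sees the event even though the user never does.
    """
    delivered = []
    for filename, content in attachments:
        if assess_attachment(filename, content) == "quarantine":
            alert(f"quarantined attachment: {filename}")
        else:
            delivered.append(filename)
    return delivered

alerts = []
delivered = gateway_filter(
    [("invoice.exe", b""),
     ("notes.docx", b"plain text"),
     ("deck.pptm", b"...vbaProject...")],
    alerts.append,
)
```

The design point is the central choke point: one place to scan, one place to quarantine, and one place that feeds alerts into the PM system.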
Some reports state that the malware used was a modified version of an old piece of malware, changed so that it could not be detected. Other reports state that the malware was detectable but appeared to be an ancient, benign exploit that most system admins and security analysts would not worry about too much.
Lesson 8 – the same methods and technology will be used over and over again. They may mutate, but they will need the same environment to grow; there are common things they will all need (e.g. a method of getting out to the Internet, a place to hide on the host). Ensure your PM system is looking for the tell-tale signs of malware, not just specific malware.
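One tell-tale sign that survives mutation is the "method of getting out to the Internet": many families phone home at near-regular intervals. A minimal sketch of a beaconing heuristic, assuming you can extract per host-destination connection times from your logs; the jitter threshold is an assumption for illustration:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=0.1, min_connections=5):
    """Heuristic: near-regular outbound connections suggest beaconing.

    timestamps: sorted connection times in seconds from one host to
    one destination. If there are enough connections and the gaps
    between them are very regular (low spread relative to the mean
    gap), flag the pair, regardless of which malware family, old or
    new, produced the traffic.
    """
    if len(timestamps) < min_connections:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg <= max_jitter

regular = [0, 300, 601, 899, 1200, 1502]   # roughly every five minutes
bursty = [0, 2, 3, 950, 951, 1700]         # human-like browsing pattern
```

A signature update cannot break this kind of detection, because it keys on behaviour the malware needs rather than on what the malware is.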
Lesson 9 – old, seemingly unimportant malware will be used to disguise the real problem. The hackers probably realised that any new malware might be detected by its tell-tale signs, so they accepted this and decided to go with it: disguise it as something the analyst may interpret as a false positive, or as not worth the bother. Lesson 9 is a big one for me, as it exploits the only part of the PM system where subjectivity comes into play – the analyst.