Our cyber security products span from our next gen SIEM used in the most secure government and critical infrastructure environments, to automated cyber risk reporting applications for commercial and government organisations of all sizes.
One of the challenges in cyber security is how to measure the status of security controls to quantify cyber risk – even controls that should be ubiquitous, baseline and foundational.
This problem has a number of dimensions. For example, when assessing maturity it is often necessary to ensure that a technical control (which might be perfectly robust) is governed by a policy and actually generates the audit information that makes it verifiable.
More common, however, is the need to ensure that a technical configuration (a) is correct (i.e. matches policy, intent or compliance requirements) and (b) has actually been implemented, and to what degree.
Measuring this can be difficult in highly distributed environments, which leads to readings based on assumptions – and it is typically these assumptions that prove flawed when problems later emerge.
A common example is the configuration or version of endpoint software on the network. An enterprise-wide Windows rollout or browser update might have been implemented, but did it reach all the systems it was supposed to cover?
Old laptops bought for specific purposes, systems that control physical access and are never directly logged into, contractors using their own specialised equipment, and other corporate guests can all be skipped when changes are applied. These are the very vulnerable systems that attackers aim to locate and target, not the several thousand well-managed and well-patched workstations.
To quantify this problem, several service providers now try to gauge security performance using external sources of information and assumptive benchmarks. However, this is in itself counter-intuitive: how can you derive internal network configuration information without looking inside the network itself?
The answer (in the example we are discussing) is that browsers reveal information about themselves and their host systems when they connect to web pages hosted on a server – see https://www.whatismybrowser.com for an example of the details exposed.
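As an illustration of the kind of detail a browser gives away, here is a minimal Python sketch that pulls a rough OS and browser reading out of a User-Agent header. The `parse_user_agent` helper and its regexes are simplified assumptions for illustration; production parsers handle far more vendor quirks.

```python
import re

def parse_user_agent(ua: str) -> dict:
    """Extract a rough OS and browser version from a User-Agent header.
    A simplified sketch -- real parsers cover many more vendor formats."""
    os_match = re.search(r"Windows NT ([\d.]+)", ua)
    browser_match = re.search(r"(Chrome|Firefox)/([\d.]+)", ua)
    return {
        "os": f"Windows NT {os_match.group(1)}" if os_match else "unknown",
        "browser": (f"{browser_match.group(1)} {browser_match.group(2)}"
                    if browser_match else "unknown"),
    }

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
print(parse_user_agent(ua))  # {'os': 'Windows NT 10.0', 'browser': 'Chrome 120.0.0.0'}
```

Every page that serves this browser an advert can record exactly this kind of reading.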
That information wouldn't be much use if you had to trawl every company the users have ever connected to in order to harvest their browser details. But the adverts displayed on a web page often come from a small set of web advertising companies – and so those companies hold a vast number of end-user browser details, across all users and irrespective of the actual web sites visited.
Secondly, information is publicly available about which network addresses are owned and used by which companies, and this can be used to identify an organisation and its geographic location. It's often used in security to convert an IP address to a location, but it can also be used to map the browser information above to an organisation – and even to a particular office.
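The lookup itself is simple once you have the published registration data. Below is a minimal Python sketch using the standard `ipaddress` module; the `ALLOCATIONS` table, organisation name and office labels are entirely hypothetical (the ranges are documentation-reserved addresses), standing in for real registry data from RIPE, ARIN, APNIC and the like.

```python
import ipaddress

# Hypothetical extract of public address-registration (WHOIS-style) data.
ALLOCATIONS = [
    ("198.51.100.0/24", "Example Corp", "London office"),
    ("203.0.113.0/24",  "Example Corp", "Sydney office"),
]

def ip_to_org(ip: str):
    """Map a source IP to the organisation and office that registered its range."""
    addr = ipaddress.ip_address(ip)
    for cidr, org, site in ALLOCATIONS:
        if addr in ipaddress.ip_network(cidr):
            return org, site
    return None  # address not in any known allocation

print(ip_to_org("203.0.113.42"))  # ('Example Corp', 'Sydney office')
```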
On the surface this seems to answer the question: we have one dataset linking browsers and OS versions to IP addresses, and another linking IP addresses to companies and offices. But does this provide a reliable way to externally connect browser/OS versions and patching status to a specific organisation?
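The join between the two datasets can be sketched as follows; every record shown is hypothetical, but the shape of the operation is exactly what such external assessments perform.

```python
# Hypothetical ad-network browser sightings, keyed by source IP...
sightings = [
    {"ip": "198.51.100.7", "browser": "Chrome 98"},   # well out of date
    {"ip": "198.51.100.9", "browser": "Chrome 120"},
]
# ...joined against a hypothetical IP-to-organisation mapping
# derived from public address-registration data.
ip_owner = {"198.51.100.7": "Example Corp", "198.51.100.9": "Example Corp"}

by_org = {}
for s in sightings:
    org = ip_owner.get(s["ip"], "unknown")
    by_org.setdefault(org, []).append(s["browser"])

print(by_org)  # {'Example Corp': ['Chrome 98', 'Chrome 120']}
```

Note that the join silently attributes *every* browser seen behind those addresses to the organisation – which is precisely where the flawed assumptions creep in.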
The reality is that it does give an answer, but one that relies on assumptions – and quite often flawed ones. That means any decisions about the state of security controls are similarly flawed.
Some organisations allow visitors to connect to their network in a permitted way, which means that any Internet access from those visitors' browsers will cloud the organisation's browser/OS version results: the connections to the outside world (and hence to the advertising browser databases) originate from the organisation's external network address, just not from its own managed systems.
There are many such scenarios: guest WiFi networks, external users connecting devices for meetings, contractors using their own systems, employees with mobile devices permitted on corporate networks, guests in hotels. An organisation can easily appear to be bad at patching and software configuration simply because it hosted a major careers event for hundreds of students.
In addition, users who are part of the organisation might be out of the office, working from home or at customer sites, or based in smaller offices where IP/network provision comes from the building's telecoms provider. These systems, being away from the corporate network, might be the very ones that are not regularly updated or patched – and hence the riskiest – but because they connect to the Internet from hotels, client sites, Starbucks branches or restaurants, they are never associated with the enterprise risk scores that a limited external assessment produces.
In essence, we are making decisions based on results from an incomplete and polluted sample.
The solution is to look within the network itself, where the systems we want to assess or measure can be directly examined. In the example we've been using, the devices on the network are easily visible, and their role or ownership can be discerned much more easily.
It would be very simple to distinguish between systems on a guest WiFi network (the student conference, the visiting business partners, etc.) and those on the corporate network (where employees use IT-issued kit), and consequently include or exclude them from an assessment as appropriate.
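That inclusion/exclusion step can be sketched as below, assuming a hypothetical internal addressing plan in which the corporate LAN and the guest WiFi occupy known, separate subnets (the ranges here are invented for illustration).

```python
import ipaddress

# Hypothetical addressing plan: corporate LAN and guest WiFi on known subnets.
CORPORATE = ipaddress.ip_network("10.10.0.0/16")
GUEST = ipaddress.ip_network("10.99.0.0/16")

def in_scope(ip: str) -> bool:
    """Include only corporate-network devices in the assessment sample."""
    addr = ipaddress.ip_address(ip)
    return addr in CORPORATE and addr not in GUEST

devices = ["10.10.4.21", "10.99.0.5", "10.10.8.1"]  # two corporate, one guest
sample = [d for d in devices if in_scope(d)]
print(sample)  # ['10.10.4.21', '10.10.8.1']
```

This is exactly the visibility an external, advert-derived dataset cannot provide: from outside, all three devices would appear behind the same public address.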
If, as part of an audit activity or a third-party supplier assessment, you are trying to validate operating system versions, patches, the browsers in use and their update schedules, the ability to collect metrics on security controls from within the network is crucial.
One perceived problem with this approach is the level of intrusion the data gathering involves (this is the rationale for using externally visible information). The concern is that if internal systems are scanned, probed, connected to and interrogated directly, this could put a load on the network and cause other problems – perhaps disrupting activities or triggering security controls designed to detect vulnerability probes and scans.
However, it is only by interrogating the central management systems that control security that you get an easy, single point of security control data covering (in this case) patching and version information, or backup schedules, malware defences and application usage.
The only exceptions are those systems that fall outside that umbrella because they have been deliberately excluded for operational reasons.
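As a sketch of that single-point-of-reference approach, the fragment below aggregates a hypothetical export from a central patch-management console rather than scanning each endpoint; the record format, field names and dates are all assumptions for illustration.

```python
# Hypothetical export from a central patch-management console:
# one record per managed endpoint, no endpoint scanning required.
export = [
    {"host": "ws-001", "os": "Windows 11 22H2", "last_patch": "2024-05-14"},
    {"host": "ws-002", "os": "Windows 11 22H2", "last_patch": "2024-03-12"},
    {"host": "ws-003", "os": "Windows 10 21H2", "last_patch": "2024-05-14"},
]

def patch_coverage(records, current_cycle="2024-05-14"):
    """Share of managed endpoints that received the latest patch cycle."""
    patched = sum(1 for r in records if r["last_patch"] == current_cycle)
    return patched / len(records)

print(f"{patch_coverage(export):.0%}")  # 67%
```

One query against one console yields a control metric for the whole managed estate – quietly, repeatably, and without the network load of a scan.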
To measure cyber risk on a continuous (rather than one-off) basis, you need accurate information that is complete and trustworthy, and you need to collect it from single points of reference rather than interrogating every individual device with a noisy scanning solution. You need to work across network boundaries – enabling complex business units to police themselves and large organisations to monitor their external third-party supply chains – and you must focus on the issues that provide the highest value in security risk terms (such as patching and software/OS versions).
Relying on external repositories derived from datasets based on assumptions might seem easy, but it is a sure way to get flawed data. It's not quite guesswork, but it risks providing a view – upon which decisions about risk are made – that is not valid and hence unsafe.
If the choice is a questionable external view or an internal control assessment, then data gathered from within the network itself will always be closer to the truth and the better source of information to use.
By objectively measuring cyber risk from the 'inside-out', operations and management teams can reliably verify their security posture and even manage it as part of an enterprise-wide improvement programme; so if you are interested in taking control of your security posture, there is more information here.