(Photo source: Depositphotos.com)
Doom and Gloom
Cybersecurity has always had its share of doom and gloom. Like any industry focused on threats, risks, losses, and damage, cybersecurity doctrines have propagated a wide variety of imminent threats, malicious actors, and unknown blind spots. Many of these claims are true, and digital businesses have experienced no shortage of incidents to corroborate them. However, the multitude of threats and the infinite technical vectors they exploit have rendered such fire-and-brimstone proclamations somewhat toothless. At some point, fatigue overtakes concern, and future threats receive less respect than they would if there were fewer problems in maintaining a digital enterprise.
Critical infrastructure, energy, and transportation have long been at the heart of these potential threats. As these crucial services began to digitize, they were exposed to new vectors of risk, and when it comes to services that can mean the difference between life and death, risk is taken seriously. For example, the NERC standards were developed to protect energy companies, specifically their industrial control systems (ICS), which act as gateways between cyberspace and machines operating in material reality. Nearly all of the systems that govern airlines and state infrastructure such as traffic and power grids have been digitized, and the potential for a cyber attack on these resources has never been higher.
But one thing poses more risk to critical infrastructure than all of those attacks and threats put together. Gartner estimates that 80-99% of all exploits will take advantage of it through 2020. It doesn’t require a skilled hacker, and every digitized company is subject to it: misconfigurations.
What Are Misconfigurations?
Unlike other cybersecurity threats, misconfigurations are a structural part of digitized business. They are inevitable because they result from complex IT processes carried out by teams of human beings, all of which are subject to occasional error. This should be nothing new: doing organized work with a group of people is difficult. The more complex work becomes, and the more it relies on specialized knowledge and must support an ever-growing surface area of capabilities and risks, the more likely a mistake will be made and an asset delivered without the proper configuration. A sysadmin forgetting to change a password or lock down a cloud storage bucket is far more likely than a hacker breaking through defenses to steal data.
Sometimes misconfigurations are relatively harmless. There are a lot of settings in enterprise software, and not all of them are equal. But when misconfigurations affect important vectors like which ports are open, which patches are installed, and which versions of an application are running, those misconfigurations become latent threats, capable of doing massive damage to the business if they are found and exploited.
In the case of critical infrastructure, the potential damage of these misconfigurations extends beyond the private enterprise to the people who rely on the services these companies provide, even to the state itself, which regulates these spheres for this exact reason. Yet, despite drastic cybersecurity measures, infrastructure is still vulnerable to the processes which build and maintain it.
When Critical Infrastructure Leaks
In early August, UpGuard discovered an unprotected rsync service handing out data to any anonymous client who connected. Contained within this data were blueprints, reports, and other information about Texas-based Power Quality Engineering (PQE) and their clients. Not only was this infrastructure data available, including information about top secret communications facilities, but also exposed was a list of administrative usernames and passwords for PQE’s other digital services, stored in a plain text file and downloadable by any anonymous internet user.
The mechanism for this leak, rsync, is an extremely common utility used by IT teams to synchronize data between two or more systems. The problem is that by default, rsync does not offer any access control; it has to be specifically enabled and configured by an administrator. But because IT shops, like all business departments, are focused on functional goals and have limited time and resources, corners are often cut, or steps simply overlooked, to get work done on time.
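To illustrate, an rsync daemon’s access controls live in its rsyncd.conf file. The sketch below shows the kind of restrictions an administrator would need to set explicitly; the module name, paths, and network range are hypothetical, not PQE’s actual configuration:

```ini
# /etc/rsyncd.conf -- hypothetical hardened module definition
[backups]                      # module name is an assumption, for illustration
    path = /srv/backups
    read only = true
    list = false               # do not advertise the module to anonymous listers
    hosts allow = 10.0.0.0/24  # restrict access to an internal network
    hosts deny = *
    auth users = backupuser    # require a username...
    secrets file = /etc/rsyncd.secrets  # ...and a password (file must not be world-readable)
```

Without settings like `auth users` and `hosts allow`, anyone who can reach the port can list the daemon’s modules and pull files anonymously. That anonymous-by-default behavior is exactly what exposed PQE’s data.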
Without process controls in place to catch these oversights, production assets are shipped with improper configurations, or drift into them over time. All of the functionality those assets provide, all the services they support, and all the people who rely on those services are then at risk. A leak like PQE’s could easily give any number of groups an opportunity to disrupt infrastructure and the day-to-day lives of the millions who use it.
A Different Threat, A Different Solution
The increase in cybersecurity spending across all sectors has done little to mitigate the problem of misconfigurations, precisely because misconfigurations are not a cybersecurity problem. Trying to layer a security solution on top of faulty, error-prone infrastructure is futile. Misconfigurations aren’t about stopping external actors from manipulating digital infrastructure; they are about understanding and validating the work processes that take place every day in an IT department.
Human error can’t be fixed or avoided; it can only be compensated for by external controls that ensure the desired result at the end of a process. The risks that technology poses to businesses must be factored into every step of how that technology is constructed and used. Assets must be built to be resilient, and that resilience must be maintained over time and through changes to protect the data and services of the business for as long as an asset remains in production.
For critical infrastructure, avoiding misconfigurations means better understanding how the processes around its digital environment determine its health and security. It means validating those processes: testing results against expectations to verify the outcome. It means automating processes as much as possible, to move at speed and scale while minimizing human error. This is what can protect critical services from their number one threat; any other approach will miss the root of the problem.
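To make "testing results against expectations" concrete, here is a minimal Python sketch of automated configuration validation. The baseline keys and values are hypothetical, and a real tool would gather the observed state by scanning live systems rather than reading a hard-coded dictionary:

```python
# Minimal configuration-validation sketch: compare an asset's actual
# state against an expected baseline and report every deviation.

def find_drift(expected: dict, actual: dict) -> list[str]:
    """Return human-readable descriptions of settings that deviate from the baseline."""
    drift = []
    for setting, want in expected.items():
        have = actual.get(setting, "<missing>")
        if have != want:
            drift.append(f"{setting}: expected {want!r}, found {have!r}")
    return drift

# Hypothetical baseline for an rsync host (illustrative names and values only).
baseline = {
    "rsync.auth_users_set": True,
    "rsync.hosts_allow": "10.0.0.0/24",
    "passwords.plaintext_files": 0,
}

# State as scanned from a (hypothetical) production server.
observed = {
    "rsync.auth_users_set": False,   # anonymous access left enabled
    "rsync.hosts_allow": "*",
    "passwords.plaintext_files": 1,  # the kind of file exposed in the PQE leak
}

for issue in find_drift(baseline, observed):
    print("DRIFT:", issue)
```

Run on a schedule or in a deployment pipeline, a check like this turns "did someone remember to lock this down?" from a hope into a verified outcome, which is the essence of compensating for human error with external controls.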
About the Author
Mike Baukes is a leader in the emerging cyber resilience and cyber risk field, empowering companies in regulated industries to make informed decisions regarding how to manage business risks created by technology. Mike is the Founder and Co-CEO of UpGuard, a Cyber Risk company based in Silicon Valley. In addition to his current role, Mike contributes to the Forbes Thought Leadership Council as a thought leader in Cyber Risk and Cybersecurity. He has also appeared in Dark Reading, Recode, TechCo, and Insurance Business Magazine. Prior to founding UpGuard (previously known as ScriptRock) in 2012, Mike worked in senior technology and M&A roles at financial institutions such as ANZ, Lloyds Banking Group, and Commonwealth Bank of Australia, leading teams in technology and strategy. Mike is also an active startup investor. Since 2013, he has been an investor and mentor for Startmate, Australia’s premier accelerator program. His most notable investments are Spaceship, SpaceX, and Tesla; he is also an advisor to LIFX.