Each time a new breach hits the news, two common talking points emerge. First, security vendors boast that their tools would have caught or prevented the incident. Second, security researchers analyze the anatomy of the attack and conclude that the appropriate security controls were not in place, were misconfigured, or that no one in the victim organization paid attention to their output. These talking points have always intrigued me, and they raise the following questions:
- Did the victims have sloppy practices?
- If not, is it humanly possible to properly deploy, configure and monitor security controls all the time?
If the answer is the former, then security is relatively straightforward to solve: create or enforce the controls and processes these organizations already have in place. In addition, a review process would likely uncover additional controls that are required but not yet employed.
However, if this were true, far fewer breaches would occur, since this avenue is a low-cost, easy-to-implement option. Thus, I would submit that it’s the latter: as environments grow, they become much more complex. Add a cloud environment (public or private) where automation and orchestration drive the changes, and the chances of secure deployments erode further, to the point that security misconfigurations become a certainty. Under this premise, how would security need to behave to counter those effects?
Security must become HYPERAUTOMATED.
Hyperautomation, as described in Gartner’s Top 10 Strategic Technology Trends, refers to the “application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation (i.e., discover, analyze, design, automate, measure, monitor, reassess).”
If we apply those principles to security, Hyperautomation should augment human processes by placing security controls where and when they are needed, applying the right security policies, and drawing correct conclusions from signals about the state of an attack on one’s multi-cloud environments.
- Discover: within minutes, API-based discovery creates a map of all assets and their interactions. This discovery runs continuously and forms the basis for realizing customer security intents at cloud speed, without manual intervention.
- Automate: the human intent about what to secure, and how, is captured via simple rules and is consistently and continuously realized as changes occur. To augment whatever knowledge human operators have about their environment, proposed groups and policies are generated using ML. Unattended learning happens at user-configurable intervals of, for example, 15 minutes, and any changes in scale, size, location, etc. are automatically accounted for. Another aspect of automation is that the system updates, upgrades, and heals itself without disruption, which removes the need for maintenance windows and follows the continuous-deployment paradigm.
- Secure: turning proposals into effective policies is also automated, and the results can optionally be reviewed and edited manually. A set of six cloud-native ShieldX ESP security engines is automatically placed at ideal positions – optimized for the Goldilocks zone – and continuously receives the intended policies without human intervention. The engines work in concert to block known bad signatures, detect anomalies using ML, and surface “Indicators of Attack” that identify the needles in the haystack, so human operators can focus on the most relevant signals first and spend their time fighting back.
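The discover, automate, and secure steps above can be sketched as a single loop. This is a minimal, hypothetical illustration: none of the function names or data shapes below come from ShieldX or any real product, the assets are hard-coded stand-ins for what a cloud API would return, and grouping is done by a static label where a Hyperautomated system would infer groups with ML.

```python
from collections import defaultdict

def discover_assets():
    """Stand-in for continuous, API-based asset discovery.
    A real system would query cloud-provider APIs; here we return static data."""
    return [
        {"id": "vm-1", "tier": "web", "port": 443},
        {"id": "vm-2", "tier": "web", "port": 443},
        {"id": "vm-3", "tier": "db",  "port": 5432},
    ]

def propose_groups(assets):
    """Propose workload groups by a shared trait (here: the 'tier' label).
    A Hyperautomated system would infer these groups with ML instead."""
    groups = defaultdict(list)
    for asset in assets:
        groups[asset["tier"]].append(asset["id"])
    return dict(groups)

def build_policies(groups, assets):
    """Turn each proposed group into an allow-list policy on the ports
    its members actually expose; everything else is implicitly denied."""
    ports = {a["id"]: a["port"] for a in assets}
    policies = []
    for group, members in groups.items():
        allowed = sorted({ports[m] for m in members})
        policies.append({"group": group, "allow_ports": allowed, "default": "deny"})
    return policies

def run_cycle():
    """One pass of the loop; a real system would repeat this on every
    change event or at a fixed interval (e.g., every 15 minutes)."""
    assets = discover_assets()
    groups = propose_groups(assets)
    return build_policies(groups, assets)
```

The point of the sketch is the shape of the loop, not the implementation: because discovery feeds grouping, which feeds policy generation, a change in the environment (a new instance, a moved workload) flows through to an updated policy on the next cycle with no human in the path.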
Let’s contrast a system that supports Hyperautomation with traditional controls in a multi-cloud environment. With traditional controls, every infrastructure change requires a security review to assess whether policies need to be adjusted. In some organizations, operationalizing a new firewall rule takes days to weeks; if an additional instance of a security control is required, that roll-out can take even longer. However, in typical orchestrated infrastructures, we see hundreds of changes happening daily. Waiting for security to react would create discontinuities. As a result, one of the following scenarios happens:
- Security teams are bypassed and not even informed about changes
- Security teams are informed but have no authority to hold off changes, so they must struggle to bolt on visibility as an afterthought
- Security teams are involved in the change process but can only put controls in place at certain coarse perimeters that misalign with application boundaries. Those controls’ policies are then left wide open so as not to interfere with traffic
Recently, security vendors with agent-based solutions have promised to solve this problem. Let us take a quick look at how agent-based approaches fare compared to a Hyperautomated system. Interestingly, those vendors advocate slight variations of the discover, automate, and secure methodology to highlight specific features and hide the incomplete nature of their approaches.
- Agent-based Discover: anything an agent-based system discovers must be running the agent, so everything without the agent installed goes undiscovered, leaving potential blind spots wide open. This results in a longer lead time to basic visibility and requires integration with DevOps processes to achieve continuity. Anything that cannot run agents, or is simply overlooked, stays invisible. Pervasively rolling out agents takes several weeks at least.
- Agent-based Automate: automation is mostly limited to ACL security policy and does not include grouping. This leaves the problem of properly classifying workloads for security purposes with the security team. Can you think of an individual in your organization who would have comprehensive knowledge of which label to put on each and every workload? Most likely this task would keep many individuals busy for weeks to months.
- Agent-based Secure: most agent-based solutions supply only basic security controls; some are limited to ACLs, which certainly help reduce the attack surface. However, the security researchers mentioned at the outset would have no chance of catching an attack or breach over ports that must stay open for business reasons.
To recap, the traditional security approach carries the highest risk, while agent-based solutions offer better segmentation. However, they still suffer major limitations in security richness and demand substantial manual effort to roll out.
Each of the compared alternatives to a Hyperautomated system leads to the state mentioned at the outset, wherein security controls are either not in place or not configured properly, leaving breaches undetected. No matter how perfect an organization’s security practice is, and no matter how much is spent on security controls, that gap cannot be closed with an inappropriate approach.