5 Steps to Success
Let Machine Learning Simplify Your Journey Toward Micro-segmentation
Working from home has fueled a new streak of attacks on data centers, as exemplified here and here. Initial breaches often involve VPN, RDP, or web servers, which provide entry points for subsequent lateral movement deeper into IT infrastructure. It has been proven time and time again that even the most formidable perimeter defenses will eventually succumb to ever more sophisticated adversaries with seemingly unlimited time and resources.
An exciting trend I have observed is that an increasing number of IT leaders understand the need to supplement their perimeter defenses with additional security controls capable of containing breaches from within. The widely discussed method of micro-segmentation has proven effective at drastically reducing attack surfaces by allowing only required connections. This hampers the ability of attackers and their automated tools to run reconnaissance and subsequently spread laterally. But two misconceptions keep hindering the adoption of micro-segmentation products:
- It is thought to require a long process to operationalize
- It is perceived as complex to manage
Both lead to the assumption that micro-segmentation is hard, which can be true depending on the technology behind the product. Imagine a solution requiring IT teams to place agents on each asset, only to then require the same assets to be classified manually. Faced with such hurdles, enterprises sometimes choose to live with the risk of being the next victim. Resolving this paradox required a mix of capable engineering minds, machine learning, and a slick GUI workflow. I am happy to introduce the ShieldX methodology, which turns micro-segmentation into a breeze.
Step 1: Discover connections – Your decision about which infrastructure needs to be discovered prompts ShieldX to automatically ingest pre-recorded flow logs or roll out traffic TAPs. Passive visibility provides connectivity information as well as potential risks. The ShieldX Policy Generation ML engine uses that data to visualize discovered business applications, common services, and their interactions. It continuously adapts to changes and proposes which assets belong to which application tier based on behavior. Observed connections turn into proposed ACL allow rules, supplemented by comprehensive layer 7 security controls. Thanks to automation, all of this happens in minutes without the need to touch or classify assets.
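To make the idea concrete, here is a minimal Python sketch of the kind of logic behind turning observed connections into proposed allow rules. It is not ShieldX's actual engine or API; the flow records, field names, and grouping heuristic are all invented for illustration, and a real ML engine would do far more (behavioral tier clustering, continuous adaptation):

```python
from collections import defaultdict

# Hypothetical flow records (src_ip, dst_ip, dst_port, proto); invented data.
flows = [
    ("10.0.1.5", "10.0.2.10", 443, "tcp"),
    ("10.0.1.6", "10.0.2.10", 443, "tcp"),
    ("10.0.2.10", "10.0.3.20", 5432, "tcp"),
]

def propose_allow_rules(flows):
    """Collapse observed flows into proposed ACL allow rules: sources that
    talk to the same (destination, port, protocol) are grouped together,
    mimicking how observed behavior can suggest application tiers."""
    grouped = defaultdict(set)
    for src, dst, port, proto in flows:
        grouped[(dst, port, proto)].add(src)
    return [
        {"sources": sorted(srcs), "dest": dst, "port": port,
         "proto": proto, "action": "allow"}
        for (dst, port, proto), srcs in sorted(grouped.items())
    ]

rules = propose_allow_rules(flows)
```

The point of the sketch: nobody had to classify an asset by hand; the grouping falls out of what the assets were already observed doing.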
Step 2: Implement applications – In this step, the business applications proposed in step 1 are turned into implemented applications: someone from the security or application team reviews an application proposal and approves it with a single click. All applications can be implemented at once with a single click, or the team can decide to review them one by one. The system offers an intuitive workflow that generates groups and ACLs that would otherwise have to be requested from application owners. Even though this step creates actual policy, traffic is not (yet) enforced.
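Conceptually, approval just promotes a machine-generated proposal into concrete policy objects while leaving enforcement off. The following sketch illustrates that idea with invented names and data structures (again, not ShieldX's API):

```python
# Hypothetical application proposal from the learning phase; all names invented.
proposal = {
    "app": "billing",
    "tiers": {"web": ["10.0.1.5", "10.0.1.6"], "db": ["10.0.3.20"]},
    "rules": [{"src_tier": "web", "dst_tier": "db",
               "port": 5432, "action": "allow"}],
}

def implement(proposal, approved_by):
    """One-click approval: the proposal becomes concrete policy objects
    (asset groups plus ACLs), but enforcement remains off at this step."""
    return {
        "app": proposal["app"],
        "groups": proposal["tiers"],
        "acls": proposal["rules"],
        "approved_by": approved_by,
        "enforced": False,  # step 2 creates policy without enforcing it
    }

policy = implement(proposal, approved_by="secops-team")
```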
Step 3: Forward testing – This forward-testing step is important for rarely occurring connections. The system runs a simulation of what would happen if blocking were enabled. Any deviations from previously learned behavior are recorded as violations and can be triaged: each violation is either added to the set of allowed ACLs or remembered as something never to allow in the future.
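The essence of forward testing can be sketched in a few lines: compare live flows against the learned allow rules, report what *would* have been blocked, and let an operator triage each violation. All data and names below are hypothetical; the operator's verdict is hard-coded for the sketch:

```python
def forward_test(observed_flows, allow_rules):
    """Simulate enforcement: return the flows that would be blocked under a
    default-deny rule, without actually blocking anything."""
    allowed = {(r["src"], r["dst"], r["port"]) for r in allow_rules}
    return [f for f in observed_flows
            if (f["src"], f["dst"], f["port"]) not in allowed]

allow_rules = [{"src": "10.0.1.5", "dst": "10.0.2.10", "port": 443}]
observed = [
    {"src": "10.0.1.5", "dst": "10.0.2.10", "port": 443},  # already allowed
    {"src": "10.0.9.9", "dst": "10.0.2.10", "port": 22},   # rare, unlearned flow
]

violations = forward_test(observed, allow_rules)

# Triage: each violation is either promoted to an allow rule or
# recorded as explicitly denied for future enforcement.
deny_list = []
for v in violations:
    verdict = "deny"  # an operator's decision; hard-coded here for the sketch
    (allow_rules if verdict == "allow" else deny_list).append(v)
```

Because nothing is actually dropped, a rare quarterly batch job that suddenly appears shows up as a violation to review rather than as an outage.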
Step 4: Micro-segmentation – Application components are put into their own segments with a single click. This allows the single-pass chain of ShieldX microservices to start inspecting network traffic and to enable layer 7 security controls. If desired, users can start blocking malware, malicious URLs or certain URL categories, exploit attempts, and more. ACLs are still set to allow all traffic.
Step 5: Enforcement – With this final click to enable enforcement, the default ACL rule is switched from allow to deny. All communication is locked down to an effective, tested, zero-trust policy, and any connection not explicitly allowed will be blocked.
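The entire five-step journey boils down to one parameter change in a first-match ACL evaluator. The sketch below (hypothetical names and data, not ShieldX code) shows why flipping the default is safe after forward testing: explicit, tested rules keep matching, and only unvetted traffic changes behavior:

```python
def evaluate(flow, acls, default_action):
    """First-match ACL evaluation; the default rule decides any flow
    that matches no explicit entry."""
    for rule in acls:
        if (flow["src"], flow["dst"], flow["port"]) == \
                (rule["src"], rule["dst"], rule["port"]):
            return rule["action"]
    return default_action

acls = [{"src": "10.0.1.5", "dst": "10.0.2.10", "port": 443, "action": "allow"}]
known = {"src": "10.0.1.5", "dst": "10.0.2.10", "port": 443}
unknown = {"src": "10.0.9.9", "dst": "10.0.2.10", "port": 22}

# Steps 1-4 run with a default of allow: nothing breaks while policy is learned.
before = evaluate(unknown, acls, default_action="allow")    # "allow"
# Step 5 flips the default to deny: only explicit, tested rules still pass.
after = evaluate(unknown, acls, default_action="deny")      # "deny"
still_works = evaluate(known, acls, default_action="deny")  # "allow"
```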
With this methodology, the effort spent on operationalizing micro-segmentation is minimal. The learning time in steps 1 and 3 can be tailored to the customer environment and can run autonomously. Step 2 can be a single click or a comprehensive review; even if the review option is chosen, the system provides information that would otherwise take months to discover manually. Steps 4 and 5 are single clicks as well, so the whole sequence requires only minimal hands-on time, while the system can stay in learning mode for days, weeks, or months, fully unattended. Importantly, the steps can be repeated, which allows for selective micro-segmentation of the most critical assets and avoids over-segmentation. And did I mention that applications can span multiple clouds without demanding changes to the applications?