03Feb
CISO’s Guide to Microsegmentation
Uncategorized

ShieldX has assembled a set of guides to help CISOs understand and deal with today’s security challenges. They are designed to be read quickly, with a checklist approach, to help CISOs—and their teams—become more effective. Next up? The CISO’s Guide to Microsegmentation. (No registration required.)

In this guide, we explore how today’s data-driven, multicloud environments are an increasing target for hackers, and why microsegmentation is increasingly regarded as a key defense against stealthy attacks and data breaches. Microsegmentation is the software-based extension of network segmentation, but in a microsegmented network, perimeters are fine-grained and applied at the workload level. Microsegmentation is also based on the Principle of Least Privilege, which establishes that every module in the environment (a process, a user, or a program, depending on the subject) should be able to access only the information and resources necessary for legitimate purposes. It is this fine-grained control, combined with the Principle of Least Privilege, that makes microsegmentation far more effective than traditional network segmentation. In a multicloud environment, this translates into each workload being permitted to make only the connections necessary to accomplish its tasks, typically implemented through basic ACLs (access control lists).
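As a minimal illustration of least privilege at the workload level, here is a hypothetical default-deny allowlist; the workload names and flows are invented for the example:

```python
# Hypothetical sketch: least privilege enforced at the workload level.
# Each workload may open only the connections its task requires;
# everything else is denied by default.

ALLOWED_FLOWS = {
    # (source workload, destination workload, port)
    ("web-frontend", "app-server", 8080),
    ("app-server", "orders-db", 5432),
}

def is_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src, dst, port) in ALLOWED_FLOWS

print(is_permitted("web-frontend", "app-server", 8080))  # True
print(is_permitted("web-frontend", "orders-db", 5432))   # False: no direct DB access
```

In a traditional coarse segment, the web tier could often reach the database directly; here that path simply does not exist in the policy.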

Read More
27Jan
CISO’s Guide to DevOps: Learning to Cooperate with DevOps and Living to Tell the Tale
Business

ShieldX has assembled a set of guides to help CISOs understand and deal with today’s security challenges. They are designed to be read quickly, with a checklist approach, to help CISOs—and their teams—become more effective. Next up? The CISO’s Guide to DevOps: Learning to Cooperate with DevOps and Living to Tell the Tale. (No registration required.)

In this guide, we explore how the DevOps paradigm presents a major dilemma to Chief Information Security Officers (CISOs) and their security teams. DevOps requires agility and, in fact, most areas of IT have become agile by automating in areas like service orchestration and continuous deployment. The problem? The rate of change in security is slow and many IT security processes are still manual. For example, before deploying a new application, a security team may require weeks to analyze new architectures and create, test and deploy new security controls. This inhibits technical and business innovation. 

Read More
20Jan
CISO’s Guide to Multicloud Security
Business

ShieldX has assembled a set of guides to help CISOs understand and deal with today’s security challenges. They are designed to be read quickly, with a checklist approach, to help CISOs—and their teams—become more effective. Although they can be read in any order, we recommend starting with the CISO’s Guide to Multicloud Security.

With this guide, we explore the central choice in securing multicloud environments: either adapt security to today’s business needs or try to retrofit existing security processes and toolsets. Many CISOs want to maintain the practices and toolsets they have built over the years, but unfortunately traditional agent and network tools are not suited to the scale, automation, or architectures of multicloud. Failure to automate and streamline provisioning across multiple clouds complicates IT’s ability to deliver secure, agile services at the scale organizations are demanding. As security teams struggle to keep up with threat containment across multicloud, initial compromises that go undetected in east-west application traffic lead to outages and more severe incidents. And, most importantly, security teams are hindered by the lack of a single tool that can provide both visibility and the enforcement of uniform security policies across multiple, cloud-specific architectures.
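To illustrate why a single uniform policy view matters, here is a hedged sketch that normalizes rules from two cloud-specific formats into one record shape. The field names are modeled loosely on AWS security group and Azure NSG rules, but this is not any vendor’s actual API:

```python
# Illustrative sketch: normalizing rules from two cloud-specific formats
# into one uniform policy record, so a single tool can reason about both.
# Field names approximate AWS security group and Azure NSG rule shapes.

def normalize_aws_sg_rule(rule: dict) -> dict:
    # AWS security group ingress rules are allow-only.
    return {"src": rule["CidrIp"], "port": rule["FromPort"], "action": "allow"}

def normalize_azure_nsg_rule(rule: dict) -> dict:
    return {"src": rule["sourceAddressPrefix"],
            "port": int(rule["destinationPortRange"]),
            "action": rule["access"].lower()}

aws_rule = {"CidrIp": "10.0.0.0/24", "FromPort": 443}
azure_rule = {"sourceAddressPrefix": "10.0.1.0/24",
              "destinationPortRange": "443", "access": "Allow"}

uniform = [normalize_aws_sg_rule(aws_rule), normalize_azure_nsg_rule(azure_rule)]
print(uniform)
```

Once rules share one shape, a single policy engine can audit and enforce them uniformly, regardless of which cloud they came from.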

To see the list of recommendations, just click here—no registration required. 

Read More
15Jan
Hyperautomation of AI-based Security in the Distributed Cloud
Technology

Each time a new breach hits the news, two interesting but common points are noted. First, security vendors boast that their tools would have caught or prevented the incident. Second, security researchers analyze the attack’s anatomy and suggest that appropriate security controls were not in place, were misconfigured, or that no one in the victim organization paid attention to their output. These two talking points have always intrigued me, and they raise the following questions: 

  • Did the victims have sloppy practices?  
  • If not, is it humanly possible to properly deploy, configure and monitor security controls all the time? 

If the answer is the former, then security is relatively straightforward to solve by creating or enforcing the controls and processes these organizations already have in place. In addition, a review process would likely uncover additional controls that are required but not yet employed. 

However, if that were true, far fewer breaches would occur, since this avenue is low cost and easy to implement. I would therefore submit that it’s the latter: as environments grow, they become much more complex. Add a cloud environment (public or private) where automation and orchestration drive the changes, and the chances of secure deployments erode further, making security misconfigurations all but a certainty. Under this premise, how would security need to behave to counter those effects?

Security must become HYPERAUTOMATED. 

Hyperautomation, as mentioned in Gartner’s Top 10 Strategic Technology Trends, refers to the “application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation (i.e., discover, analyze, design, automate, measure, monitor, reassess).” 

If we apply those principles to security, Hyperautomation should augment human processes by placing security controls where and when they are needed, applying the right security policies, and drawing correct conclusions from signals about the state of an attack on one’s multicloud environment. 

  • Discover: within minutes, API-based discovery creates a map of all assets and their interactions. This discovery runs continuously and forms the basis of further realizations of customer security intents at cloud speeds, without manual intervention
     
  • Automate: the human intent about what to secure, and how, is captured via simple rules and again is consistently and continuously realized as changes occur. To augment the knowledge human operators may or may not have about their environment, proposed groups and policies are generated using ML. Unattended learning happens at user-configurable intervals of, for example, 15 minutes. Any changes in scale, size, location etc., are automatically accounted for. Another aspect of automation is that the system updates, upgrades and heals itself without disruption. That removes the need for maintenance windows and follows the continuous deployment paradigm
     
  • Secure: turning proposals into effective policies is also automated, and policies can optionally be reviewed and edited manually. A set of six cloud-native ShieldX ESP security engines is automatically placed at ideal positions – optimized for the goldilocks zone – and continuously receives the intended policies without human intervention. The engines work in concert to block known bad signatures, detect anomalies using ML, and surface “Indicators of Attack” that separate the needles from the haystack, so human operators can focus on the most relevant signals first and spend their time fighting back. 
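The discover → automate → secure cycle above can be sketched, in a highly simplified form, as a pipeline. Every class and function here is an illustrative stand-in, not ShieldX’s implementation:

```python
# Highly simplified sketch of the discover -> automate -> secure loop.
# All names are illustrative stand-ins for real discovery APIs and engines.

def discover(cloud_api) -> dict:
    """API-based inventory: map each asset to its observed flows."""
    return cloud_api.list_assets_and_flows()

def propose_policies(asset_map: dict) -> list:
    """In a real system, ML would group assets; here we simply allow observed flows."""
    return [{"allow": flow} for flows in asset_map.values() for flow in flows]

def enforce(policies: list, engines) -> list:
    """Push intended policies to the distributed security engines."""
    return [engines.apply(p) for p in policies]

class DemoCloud:
    def list_assets_and_flows(self):
        return {"web-1": [("web-1", "app-1", 8080)]}

class DemoEngines:
    def apply(self, policy):
        return f"applied {policy}"

# One pass of the loop; in practice this would re-run on an interval
# (for example, every 15 minutes) to track changes automatically.
results = enforce(propose_policies(discover(DemoCloud())), DemoEngines())
print(results)
```

The point of the structure is that no step depends on a human: discovery feeds proposal, proposal feeds enforcement, and the cycle repeats as the environment changes.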

Let’s contrast a system that supports Hyperautomation with traditional controls in a multicloud environment. Every infrastructure change requires a security review and an assessment of whether policies need to be adjusted. In some organizations, operationalizing a new firewall rule takes days to weeks. If an additional instance of a security control is required, that roll-out can take even longer. However, in typical orchestrated infrastructures, we see hundreds of changes happening daily. Waiting for security to react would create discontinuities. As a result, one of the following scenarios happens: 

  • Security teams are bypassed and not even informed about changes 
  • Security teams are informed but have no control to hold off changes, then they must struggle to bolt visibility on as an afterthought 
  • Security teams are involved in the change process but can only put controls in place at certain coarse perimeters that misalign with application boundaries. Those controls’ policies are then left wide open so as not to interfere with traffic 

Recently, promises for a solution to this problem originated from security vendors with agent-based solutions. Let us take a quick look at how agent-based approaches fare compared to a Hyperautomated system. Interestingly, those vendors are advocating slight variations of the discover, automate and secure methodology to highlight specific features and hide the incomplete nature of their approaches. 

  • Agent-based Discover: anything an agent-based system discovers requires installing and running an agent. That means everything not running the agent will not be discovered, leaving potential blind spots wide open. This results in a longer lead time to basic visibility and requires integration with DevOps processes to achieve continuity. Anything that cannot run agents, or is overlooked, stays invisible. Pervasively rolling out agents takes several weeks at least. 
  • Agent-based Automate: automation is mostly limited to ACL security policy and does not include grouping. This leaves the problem of properly classifying workloads for security purposes with the security team. Can you think of an individual in your organization who would have comprehensive knowledge about which label to put on each and every workload? Most likely this task keeps many individuals busy for weeks to months. 
  • Agent-based Secure: most agent-based solutions are supplying only basic security controls; some are limited to ACLs, which certainly help reduce the attack surface. However, the initially mentioned security researcher will have no chance of catching an attack or breach over ports that need to stay open for business reasons. 
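The discovery blind spot above is easy to quantify in a toy example; the inventory below is invented:

```python
# Sketch of the agent blind-spot problem: an agent-based system only
# "sees" workloads that run its agent. Inventory is illustrative.

all_workloads = {"web-1", "web-2", "db-1", "legacy-appliance", "vendor-vm"}
agent_installed = {"web-1", "web-2", "db-1"}  # agents can't run everywhere

blind_spots = all_workloads - agent_installed
coverage = len(agent_installed) / len(all_workloads)

print(sorted(blind_spots))   # ['legacy-appliance', 'vendor-vm']
print(f"{coverage:.0%}")     # 60%
```

An agentless, API-based discovery would instead enumerate everything the cloud provider knows about, independent of what software each workload can run.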

To recap, the traditional security approach carries the highest risk, while agent-based solutions offer better segmentation. However, they still suffer major limitations in security richness and demand enormous manual effort to roll out. 

Each of the compared alternatives to a Hyperautomated system leads to the state mentioned at the outset: security controls are either not in place or not configured properly, so breaches go undetected. No matter how good an organization’s security practice is, and no matter how much is spent on security controls, that gap cannot be closed with an inappropriate approach. 

Read More
05Dec
SWIFT Customer Security Program for 2019
Uncategorized

The Customer Security Program (CSP) is a framework launched by the Society for Worldwide Interbank Financial Telecommunication (SWIFT), originally in 2016. The “programme” can be broken down into three key objectives:

  • Secure your environment
  • Know and limit access
  • Detect and respond

Obviously, these are fairly high-level bullets and therefore leave a lot to interpretation, but SWIFT built into the CSP a couple dozen controls (27 of them, to be exact), some of them mandatory, some of them merely advised. Originally, the arrangement called for member organizations to self-attest to their use of these controls as of the end of last year. 94% of organizations met this deadline and, impressively, this meant that 99% of SWIFT network traffic fell under the controls.

An update from earlier this year means that organizations are again asked to self-attest their compliance by the end of the year. Because some of the controls were updated, this may mean rethinking how it is that your organization, if it is a SWIFT member, achieves its compliance.

At ShieldX, we think the way to protect a modern data center is to have the security architecture be specifically designed for the attributes of such a data center: containerized workloads, elastic and dynamic allocation of workloads, and controls to prevent attacks from pivoting along the axis of east-west traffic within the center. This may sound obvious, but we meet a lot of organizations who are trying to create a static perimeter in the cloud with a stack of virtualized next-gen firewalls. Maybe this works within limits, but it definitely doesn’t scale well and it also comes with all the security risks that accompany the (nearly always) resultant flat network.

As we’ve noted elsewhere, ShieldX takes an approach based on microsegmentation and the application of deep packet inspection. ShieldX Elastic Cloud Security uses microsegmentation and a container-based, microservices architecture to replace the tiered zones and monolithic firewalls that organizations have traditionally used with mixed success. With ShieldX, you still have zones, but they are automatically generated and maintained, individually defined for separate business applications, and scaled dynamically on a per-zone basis. Within these elastic zones, ShieldX offers the equivalent of full packet inspection.

When it comes to the SWIFT requirement for “detecting and responding,” virtual patching is a critical part of any current defense posture. You can use a vulnerability scanner to find problems in your network and then, in theory, you could take the scanner report, assemble a team of experts, and manually generate the policies needed to provide virtual patches to your highest-priority vulnerabilities. But the expense and time intensity of this process run aground on the difficulties of too many patches and too many (dynamic) workloads.
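A hedged sketch of what automating that scanner-to-policy pipeline might look like; the finding format, CVSS threshold, and policy shape are all invented for illustration:

```python
# Illustrative sketch: turning vulnerability-scanner findings into
# virtual-patch policies automatically, instead of via manual expert review.
# The finding/policy formats and severity threshold are invented.

def virtual_patch_policies(findings: list, min_severity: float = 7.0) -> list:
    """Emit an inspect-and-block rule for each high-severity finding."""
    policies = []
    for f in findings:
        if f["cvss"] >= min_severity:
            policies.append({
                "workload": f["workload"],
                "port": f["port"],
                "action": "inspect-and-block-exploit",
                "signature": f["cve"],
            })
    return policies

findings = [
    {"workload": "orders-db", "port": 5432, "cve": "CVE-2019-0001", "cvss": 9.8},
    {"workload": "web-1", "port": 80, "cve": "CVE-2019-0002", "cvss": 4.3},
]
print(virtual_patch_policies(findings))  # only the CVSS 9.8 finding is patched
```

The manual equivalent of this loop is exactly the expensive process described above: a team reading scanner reports and hand-writing policies, one vulnerability at a time.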

We think ShieldX makes an enormous amount of sense when tackling the SWIFT CSP. Learn more about this in our data sheet.

Read More
04Dec
PCI DSS 3.2.1 updates and ShieldX
Uncategorized

While the May 2019 changes made to take the PCI DSS standard from version 3.2 to version 3.2.1 were largely clarifications of the existing standard, any change in the standard is an occasion to take another look at how your organization is achieving compliance.

PCI sets out a number of requirements, not all of which are addressed by capabilities ShieldX provides (we don’t address physical access to servers, to take one obvious example). Here are a handful of requirements where ShieldX and the microsegmentation it provides are particularly relevant (or check the more complete listing here):

Requirement 1: Install and maintain a firewall configuration to protect cardholder data

Last but not least, there’s this item about intrusion detection, which is something ShieldX builds in at per-container granularity (unique in the industry, as far as we’re aware):

11.4 Use intrusion-detection and/or intrusion-prevention techniques to detect and/or prevent intrusions into the network. Monitor all traffic at the perimeter of the cardholder data environment as well as at critical points in the cardholder data environment, and alert personnel to suspected compromises.

Cloud security

How is ShieldX relevant to the above requirements? Almost across the board, it has to do with building a data center architecture that uses containers not only for server workloads but also to implement security services (making them more granular and faster to deploy). ShieldX creates tiers within a network that are elastic with the dynamic allocation of workloads.

So when it comes to Requirement 1, maintaining a firewall, ShieldX gives you next-gen-firewall-like capabilities, performing deep packet inspection on a per-workload basis. When it comes to the requirements for protecting data in storage and restricting access to a need-to-know basis, access to tiers is limited by business policy, not by IP address—which is how some solutions do it, using server Access Control Lists (ACLs).
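The difference between IP-centric ACLs and business-policy tiers can be sketched as follows; the tiers, addresses, and policy are illustrative only:

```python
# Sketch contrasting IP-based ACLs with label/business-policy-based access.
# Tiers, IPs, and the policy itself are invented for illustration.

workloads = {
    "10.0.1.5": {"tier": "web"},
    "10.0.2.9": {"tier": "app"},
    "10.0.3.7": {"tier": "cardholder-db"},
}

# Business policy: only the app tier may reach the cardholder-data tier.
policy = {("app", "cardholder-db"), ("web", "app")}

def allowed(src_ip: str, dst_ip: str) -> bool:
    src = workloads[src_ip]["tier"]
    dst = workloads[dst_ip]["tier"]
    return (src, dst) in policy  # IPs can change; the tier policy does not

print(allowed("10.0.2.9", "10.0.3.7"))  # True: app -> cardholder-db
print(allowed("10.0.1.5", "10.0.3.7"))  # False: web may not reach the DB
```

When a workload is rescheduled and gets a new IP, only the `workloads` mapping changes; the business policy itself never has to be rewritten.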

One other point that has to be made has to do with scoping. The PCI Security Standards Council information supplement “Guidance for PCI DSS Scoping and Network Segmentation” makes it clear that organizations should give serious consideration to which elements of their business systems are within scope and which properly should not fall under PCI DSS compliance requirements. As the guidance notes, “when properly implemented, network segmentation is one method that can help reduce the number of system components in scope for PCI DSS.”

To state the obvious, reducing the segments of the network where PCI compliance is required inherently reduces the scope and complexity of PCI assessments.
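A back-of-envelope sketch of that scope reduction: with a segmented topology, only segments that handle cardholder data, plus segments directly connected to them, stay in scope for assessment. The topology below is invented:

```python
# Toy model of PCI scoping under segmentation: a segment stays in scope
# if it handles cardholder data (CHD) or connects directly to the CDE.
# Topology is invented; real scoping rules are more nuanced.

segments = {
    "cde": {"handles_chd": True,  "connected_to": {"app"}},
    "app": {"handles_chd": False, "connected_to": {"cde", "web"}},
    "web": {"handles_chd": False, "connected_to": {"app"}},
    "hr":  {"handles_chd": False, "connected_to": set()},  # fully segmented
}

in_scope = {name for name, s in segments.items()
            if s["handles_chd"] or "cde" in s["connected_to"]}

print(sorted(in_scope))  # 'hr' and 'web' drop out of the assessment
```

Without segmentation, every segment would connect to everything, and all four would be in scope; the flatter the network, the larger the assessment.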

Bottom line, we think ShieldX makes an enormous amount of sense for PCI DSS compliance regimes. Learn more about this in our recent writeup.

Read More
03Dec
ShieldX Partners with AWS
Business

New Amazon VPC Ingress Routing—What Does it Mean for Security?

We welcome the introduction of Amazon Virtual Private Cloud (Amazon VPC) Ingress Routing, a new capability from Amazon Web Services (AWS) that allows companies like ShieldX to simplify the integration of security appliances designed to monitor and block network traffic, without the need to apply special routes or forgo details such as public IP address routing between subnets. (For more, Amazon’s blog is here.)

One of the biggest questions facing every senior security professional is figuring out how to secure enterprise networks as they fundamentally change over time. This requires a level of flexibility and scale heretofore unknown in the security industry. Traditional appliance-based solutions were built monolithically and are not well suited to cloud architectures. And new cloud friendly products do not provide the depth of security required to protect environments from the variety of attacks typically deployed.

As noted recently in CSO Online:

Contrary to what many might think, the main responsibility for protecting corporate data in the cloud lies not with the service provider but with the cloud customer. “We are in a cloud security transition period in which focus is shifting from the provider to the customer,” Heiser says. “Enterprises are learning that huge amounts of time spent trying to figure out if any particular cloud service provider is ‘secure’ or not has virtually no payback.”

Eventually, security professionals will find themselves asking:

  • How did we become totally marginalized as the businesses just went around us and built whatever they wanted directly in the cloud?
  • What does security entail in this new cloud architecture and can I secure critical assets as they move to the cloud?
  • Can I achieve the agility promised by the cloud, while ensuring proper visibility and control over the digital assets?
  • How do I automate enforcement of security policy as apps change, without human intervention?
  • Do any of my traditional security tools provide value in the new cloud environment?
  • How can I enforce scalable and flexible access control in virtualized and cloud deployments (microsegmentation)?

Amazon VPC Ingress Routing:  What does it mean?
With AWS’s new announcement, the CISO’s job just got a whole lot easier. Moving to the cloud means you can easily cover the two major traffic concerns that inhibit public cloud adoption for data centers. How?

Amazon VPC Ingress Routing is a service that helps customers simplify the integration of network and security appliances within their network topology. With Amazon VPC Ingress Routing, customers can define routing rules at the Internet Gateway (IGW) and Virtual Private Gateway (VGW) to redirect ingress traffic to third-party appliances, before it reaches the final destination. This makes it easier for customers to deploy production-grade applications with the networking and security services they require within their Amazon VPC.
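Conceptually, an ingress route table steers inbound traffic through the appliance before it reaches the destination subnet. The following longest-prefix lookup is a plain-Python model of that behavior, not the AWS API:

```python
# Plain-data model (not the AWS API) of an ingress route table associated
# with the Internet Gateway: inbound traffic destined for a protected
# subnet is steered to a security appliance's network interface first.

import ipaddress

ingress_routes = [
    (ipaddress.ip_network("10.0.1.0/24"), "eni-appliance"),  # inspect app subnet traffic
    (ipaddress.ip_network("0.0.0.0/0"), "local"),
]

def next_hop(dst: str) -> str:
    """Pick the most specific matching route (longest prefix wins)."""
    matches = [(net, tgt) for net, tgt in ingress_routes
               if ipaddress.ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.42"))  # eni-appliance: inbound traffic is inspected first
print(next_hop("10.0.9.9"))   # local
```

Before this capability, getting inbound traffic to pass through an appliance required workarounds like NAT tricks or proxying; with edge routing, it is just another route table entry.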

With ShieldX, enterprises can protect East/West traffic flows. As you move to the cloud, most enterprise traffic becomes East/West traffic. Analysts report that East/West traffic (traffic within a data center and between data centers) represents nearly 85 percent of total traffic flows. This is a gigantic blind spot in which basic visibility, compliance, and enforcement become impossible. 
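Classifying flows by direction is straightforward once internal address space is defined; the ranges and flows below are illustrative:

```python
# Rough sketch: classifying flows as east-west (both endpoints inside the
# data center) or north-south (one endpoint external). Ranges illustrative.

import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def direction(src: str, dst: str) -> str:
    def inside(ip):
        return ipaddress.ip_address(ip) in INTERNAL
    return "east-west" if inside(src) and inside(dst) else "north-south"

flows = [("10.0.1.5", "10.0.2.9"), ("203.0.113.7", "10.0.1.5"),
         ("10.0.2.9", "10.0.3.7")]
ew = sum(direction(s, d) == "east-west" for s, d in flows)
print(f"{ew / len(flows):.0%} east-west")  # 67% east-west in this toy sample
```

A perimeter firewall only ever sees the north-south flows; everything classified east-west here passes it by entirely, which is exactly the blind spot described above.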

With ShieldX, users can overcome significant management and security challenges by adopting a full range of security controls that provide the ability to view traffic, identify anomalies, and block attacks traversing both north/south and east/west, all from a single management console. 

Here’s a video overview illustrating how ShieldX works to secure AWS.

So, today’s news from AWS should be widely welcomed by the broad security community.  We can finally embrace cloud security and economics at once.

Read More
28Aug
Beyond Native Cloud Security Controls
Uncategorized

One thing you tend to get with a move to the cloud is a flat network. You have a virtual network perimeter, but inside the network, you’ve got no points of control unless you put them there by hand. If you logically group your workloads along the lines of an old-school tiered architecture, you can put in virtual appliances such as next-gen firewalls, but you have to do this manually and it’s not a setup that really delivers on your need to scale workloads dynamically. At the end of the day, this means security remains a drag on the business and no one wants to be “the guy” who slows things down.

This was all spelled out in a great article that recently appeared on SearchSecurity.com. In the article, Dave Shackleford spelled out his laundry list of what’s wrong with a non-cloud approach to securing cloud infrastructure:

  1. Flat networks abound
  2. No native monitoring of east-west traffic
  3. Limited routing control
  4. Network access control is often primitive
  5. Inline intrusion detection is difficult to implement
  6. Content-based inspection capabilities are scarce

He goes on to point out that it’s possible to remedy some of these ills using native capabilities in cloud environments, such as security groups in AWS and network security groups in Azure. While I agree that it’s possible to tighten up a network this way, there are important ways in which this approach falls short when even mildly stressed. The primary issue is complexity—lots of workloads and lots of interconnections among them—and this has to be countered with automation. You simply must have automation to handle the process of configuring the microsegments that connect all the workloads on your network.

Bottom line: you need to get the logic of your security controls expressed directly in the interconnections of your network architecture. Again, you could in theory do this by hand using the tools I’ve mentioned, but if your infrastructure is of any size or complexity at all, you really need the next level of tools to automate this. Not only that, but you need these tools to dynamically follow the changes in server workloads on your network on an ongoing basis and readjust policies and microsegments on the fly.
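One way to picture that ongoing readjustment: derive the allowlist from observed flows and regenerate it whenever the environment changes, rather than hand-editing rules. Groups, ports, and flows here are invented:

```python
# Sketch: deriving microsegmentation policy from observed flows, then
# re-deriving it when workloads change, instead of hand-editing rules.
# Groups, ports, and flows are invented for illustration.

from collections import defaultdict

def derive_policy(observed_flows):
    """Allow exactly what was observed between workload groups, nothing more."""
    policy = defaultdict(set)
    for src_group, dst_group, port in observed_flows:
        policy[(src_group, dst_group)].add(port)
    return dict(policy)

day1 = [("web", "app", 8080), ("app", "db", 5432)]
print(derive_policy(day1))

# A scale-out event adds a new flow; the policy is regenerated, not hand-edited.
day2 = day1 + [("app", "cache", 6379)]
print(derive_policy(day2))
```

The regeneration step is the whole argument: a human maintaining these rules falls behind after the first few daily changes, while a derivation like this stays current by construction.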

As ShieldX is deployed, it automatically creates a summary of your workload assets and then uses a machine learning algorithm to discern what kinds of processes are running on each workload. If your organization uses containers and has developed a discipline of tagging your workloads, these tags are used to directly and automatically deploy policies to govern the microsegmentation of your network. Otherwise, the grouped workloads are presented to you in a user interface that makes it easy to express policies for kinds of workloads.
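A minimal sketch of tag-driven policy assignment, assuming a tagging discipline like the one described; the tags and policy names are hypothetical:

```python
# Hypothetical sketch: if workloads carry discipline-applied tags,
# policies attach automatically by tag. Tags and policy names invented.

TAG_POLICIES = {
    "pci": ["restrict-to-cde", "log-all-flows"],
    "web": ["allow-inbound-443"],
}

def policies_for(workload: dict) -> list:
    """Collect every policy implied by the workload's tags."""
    applied = []
    for tag in workload.get("tags", []):
        applied.extend(TAG_POLICIES.get(tag, []))
    return applied

print(policies_for({"name": "payments-api", "tags": ["pci", "web"]}))
# ['restrict-to-cde', 'log-all-flows', 'allow-inbound-443']
```

Untagged workloads return an empty list, which is where the grouping user interface described above takes over.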

From all of this, logical tiers are created and dynamically updated so that the tiers continue to govern communications among workloads as workloads scale up or down within various tiers. This elastic tiering is unlike anything offered by any other vendor and—another unique characteristic—this is done without the need to deploy agent software onto each workload.

Why does it matter whether software agents are used? For one thing, in legacy situations deploying them sometimes just isn’t possible. Perhaps more importantly, agents run counter to the very idea of containerization, where you want one service or function encapsulated per container (and deploying agents there often isn’t possible even if you don’t mind the aesthetics of it). Either way, you wind up with a microsegmentation capability that leaves critical workloads out of the equation.

ShieldX doesn’t use agents. It also doesn’t rely on the manipulation of ACLs, the problem being that ACLs are an inherently IP-address-centric approach. More agile microsegmentation is possible using approaches such as Cisco Underlay Networks and Azure User-Defined Routes.

To conclude, the ideal multi-cloud solution would:

  • Automated and continuous discovery of assets.
  • Auto-generated security policy.
  • Auto-deployed controls to fulfill dictated policies.

Without this level of automation, security teams remain on a perpetual hamster wheel. The cloud—along with cloud-native solutions—brings the promise of automation and economics that traditional vendors have failed to leverage. In the old days, IT teams managed networks manually but eventually migrated to SDN. Now, with ShieldX, security can enjoy the same level of agility.

Read More
16Aug
Why I Joined ShieldX
Business

We recently announced that I joined ShieldX Networks as CEO.

Like many job seekers, I relied on friends and trusted colleagues to inform my decision. Mike Fey first turned me onto ShieldX (he recently outlined his reasons for investing in ShieldX and it is a must read). Mike encouraged me to invest alongside him. Consequently, when the ShieldX team started to look for a CEO to partner with the founding team, a much more direct level of involvement began to surface. It did not take long for me to recognize that ShieldX is where I wanted to be, if the founders and board would have me. While there were several reasons this opportunity was so compelling, it came down to four main drivers.

Market opportunity.
The ShieldX Elastic Security Platform could well be THE enabling security offering, with the ability both to enable the migration to the cloud and to deliver on the security promises of microsegmentation in the era of cloud computing. The move to the cloud is fundamentally changing how IT and networking are done, how applications are developed, where security risks need to be mitigated, and how security needs to be inherently and elastically applied. To date, many enterprises have begun their data center transition to the cloud, but during this transition, hackers and malicious insiders have uncovered and exploited blind spots—particularly along the emerging East-West axis of the data center. We, as an industry, have spent a few decades focused on securing North-South network traffic boundaries; but as networks became flatter, larger, and more dynamic, a growing attack surface arose within them, leading to an ever-growing number of security breaches as attacks spread along the East-West axis.

What if we could offer improved network security by using elastic security software, which offers visibility, policy generation for microsegmentation and a rich set of dynamic security controls to enable fully automated security in this new world? What if we could simplify achieving and reporting on compliance in the cloud? For CISOs, the current set of market options means buying multiple point products, some of which are being shoehorned into solving a problem they were never designed to solve.

Further, what if we could offer security without increasing customer overhead? ShieldX does this with technology that was designed from the ground up to serve in this elastic environment where it once was impossible to define your network security posture. As Mike Fey stated in his blog, “East west security is more important than north south.”  ShieldX can—and will—protect all data center workloads in the future. As anyone interested in deploying a Zero Trust effort will understand—ShieldX is in the thick of an important market.

Technology
Today, the majority of approaches to microsegmentation require agents. Not ShieldX. Instead, ShieldX pioneered an application-layer security approach that brings visibility to traffic patterns enterprises have not been able to see since the arrival of multicloud. And it is not just visibility: ShieldX’s approach also brings application-layer threat prevention. Remember what IPS brought to on-prem network perimeters? ShieldX does this in your cloud. Being agentless allows for robust functionality like virtual patching, for example, virtually patching cloud workloads to combat the new trend in ransomware, whose target is unpatched workloads and VMs in your data center and cloud. And then there’s automation. One of ShieldX’s customers used to have a firewall analyst spend four hours (!) daily updating policies. Our automated policy enforcement dynamically assigns policies based on predefined criteria aligned to your business process, enabling this valuable resource to be redeployed into more strategic activities.

Team
I knew Ratinder and Manuel professionally and by reputation from McAfee. Both are innovators, and famous within the industry for good reason. When the stars aligned and Ratinder and the ShieldX board were looking for a CEO partner, it was hard not to get excited. The team that built ShieldX is hard to duplicate. Few innovators could build a platform that promises to upend security as enterprises move to the cloud. And if one looks at the other people associated with the company, be it investors, board members, or advisors, they would have to say this is truly a “hall of fame” caliber lineup.

Competitive landscape
Today, if you want security in the cloud, you have to choose between virtual firewalls and agent-based technology, or go with cloud-native capabilities. Virtual firewalls suffer two fundamental problems: they don’t scale elastically in the cloud, and they create far too much administrative overhead in an ever-changing cloud environment. If you require more TLS decryption in an environment for inspection, for instance, you need to buy more firewall licenses to achieve it, even if you don’t need the other features included in those licenses. Worse, because of the extra traffic incurred by virtual firewalls, you’ll end up paying excessive CPU overhead costs, and you’ll have to hire additional network security staff to administer ACLs in your ever-changing cloud environment. Cloud-native providers supply basic security capabilities, but they are hardly best of breed; they too require far too much overhead to constantly reconfigure, they only support their own platform, and they lack the advanced application-layer security capabilities security teams require. And many new entrants in this space require agents. The drawbacks of agents are pretty well known, but you can always ask one self-answering question: is it OK in a production environment to deploy agents to workloads without extensive QA and compatibility testing? Perhaps the biggest deficiency of all the above approaches is their lack of automation. By contrast, ShieldX installs quickly and brings fast time to value. More importantly, our software is architected to provide elastic scalability and makes policy and control management dramatically simpler. At the end of the day, ShieldX lowers your operational costs to enable microsegmentation and lateral movement protection. Bottom line: ShieldX brings an unfair advantage to the market.

I encourage you to try ShieldX. Three of our customers not only influenced my decision to join, but also echoed my sentiments in these detailed reviews, including this compelling testimonial from Alaska Air:

We switched to ShieldX because traditional firewalls are more expensive, and they require you to take all of your traffic outside of your virtual environment to inspect it and then return it back to the virtual environment. ShieldX also enables us to migrate to cloud environments faster.

Read More
30Jul
Capital One Breach—It's Cloudier than You Think
Business

Looks like another breach, but this one continues a trend we've been seeing on the rise: the attacker took advantage of a misconfigured firewall to access cloud-based data. Some claim it was a web application firewall; other reports aren't clear. Regardless, as we move into multicloud, this problem is becoming more and more pervasive.

Capital One, like many companies, is stuck in a time warp. Historically, security was done mostly by fortifying the perimeter of the network, assuming that adversaries could be kept out by locking a single gate or chokepoint. More and more, we learn that this architecture is no longer effective, as there is an incongruity between the physical data center boundary and virtual perimeters. Those new perimeters can take any size and shape and change at cloud speed, making it impossible for traditional security to follow, especially traditional firewalls. Worse, the security controls offered by cloud vendors are weaker than traditional options and are often no match for sophisticated attacks. In this case, the attacker was a former AWS employee who likely knew the ins and outs of the fragmented, cloud-based network.

What are the lessons?

  1. Without auto-generation of policies, dynamic environments will always have suboptimal firewall configurations. Today, many enterprises employ people whose sole function is to update firewall policies, often as a full-time role. When you move to the cloud, this isn't scalable; it's impossible for humans to keep up.
  2. It's not just automated security policy generation; you also need automated control deployment. Policies are only as good as the controls that enforce them. Even if you get policies under control, the dynamic nature of the cloud still means the controls must adapt at the same, instant speed.
  3. Intention, intention and intention. Automation isn't enough if you can't tell your system what you want it to do. When you input a destination into Waze and hiccups happen, does Waze say, "Sorry, you can't go there anymore"? No, it adjusts. The same flexibility is required in security: continuous, automated transformation of security intent into security controls, eliminating configuration errors over time.
  4. East West is the new North South.  Tracking lateral movement in a fragmented cloud environment is more critical than ever.
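To make lesson 1 concrete, here is a minimal, purely illustrative sketch (not ShieldX's implementation) of what auto-generating least-privilege policy can look like: observed workload-to-workload connections are collapsed into per-source allow rules, and anything never observed is implicitly denied. The workload names and the `generate_policy` helper are hypothetical.

```python
from collections import defaultdict

def generate_policy(observed_flows):
    """Collapse observed (src, dst, port) tuples into per-source allow
    lists. Duplicates merge into a single rule; any flow not observed
    is implicitly denied (least privilege)."""
    policy = defaultdict(set)
    for src, dst, port in observed_flows:
        policy[src].add((dst, port))
    # Sort each allow list for stable, reviewable output.
    return {src: sorted(rules) for src, rules in policy.items()}

# Hypothetical observed traffic between three workloads:
flows = [
    ("web", "app", 8443),
    ("web", "app", 8443),   # duplicate flow collapses into one rule
    ("app", "db", 5432),
]
print(generate_policy(flows))
# → {'web': [('app', 8443)], 'app': [('db', 5432)]}
```

The point of the sketch is the direction of derivation: policy follows the workloads' actual behavior instead of being hand-maintained, which is what keeps it current as the environment changes.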

You’ve moved to the multicloud; welcome to the new reality. One of the biggest questions facing every senior security professional is figuring out how to secure enterprise networks as they fundamentally and constantly change over time. This requires a level of flexibility and scale heretofore unknown in the security industry. Traditional appliance-based solutions were built monolithically and are not well suited to cloud architectures. And new cloud-friendly products do not provide the depth of security needed to protect environments from the variety of attacks they typically face.

So what can you do?  Check out our CISO’s Guide to Multi Cloud Security which provides more than a few clues.

Read More

About Author

Ratinder Ahuja
Founder & CEO

Ratinder leads ShieldX and its mission, drawing from a career as a successful serial entrepreneur and corporate leader and bringing with him his unique blend of business acumen, industry network and deep technical knowledge.
