CISO’s Guide to DevOps: Learning to Cooperate with DevOps and Living to Tell the Tale

ShieldX has assembled a set of guides for CISOs to help understand and deal with today’s security challenges. They are designed to be read quickly, with a checklist approach, to help CISOs—and their teams—become more effective. Next up? CISO’s Guide to DevOps: Learning to Cooperate with DevOps and Living to Tell the Tale. (No registration required.)

In this guide, we explore how the DevOps paradigm presents a major dilemma to Chief Information Security Officers (CISOs) and their security teams. DevOps requires agility and, in fact, most areas of IT have become agile by automating in areas like service orchestration and continuous deployment. The problem? The rate of change in security is slow and many IT security processes are still manual. For example, before deploying a new application, a security team may require weeks to analyze new architectures and create, test and deploy new security controls. This inhibits technical and business innovation. 

CISO’s Guide to Multicloud Security

ShieldX has assembled a set of guides for CISOs to help understand and deal with today’s security challenges. They are designed to be read quickly, with a checklist approach, to help CISOs—and their teams—become more effective. Although they can be read in any order, we do recommend starting with the CISO’s Guide to Multicloud Security.

With this guide, we explore the central choice of securing multicloud environments: either adapt security to today’s business needs or try to retrofit existing security processes and toolsets. Many CISOs want to maintain the practices and toolsets that they have built over the years, but unfortunately traditional agent and network tools are not suited for the scale, automation, or the architectures of multicloud. Failure to automate and streamline provisioning across multiple clouds complicates IT’s ability to deliver secure, agile services at the scale that organizations are demanding. As security teams struggle to keep up with threat containment across multicloud, initial compromises that go undetected in east-west application traffic lead to outages and more severe incidents. And, most importantly, security teams are hindered by the lack of a single tool that can provide both visibility and the enforcement of uniform security policies across multiple, cloud-specific architectures.

To see the list of recommendations, just click here—no registration required. 

Hyperautomation of AI-based Security in the Distributed Cloud

Each time a new breach hits the news, two interesting, but common points are noted. First, security vendors boast that their tools would have caught or prevented that incident. And second, security researchers analyze attack anatomies, then suggest that appropriate security controls were not in place, were misconfigured or no one in the victim organization paid attention to their output. These two talking points have always been intriguing for me and they beg the following questions: 

  • Did the victims have sloppy practices?  
  • If not, is it humanly possible to properly deploy, configure and monitor security controls all the time? 

If the answer is the former, then security is relatively straightforward to solve by creating or enforcing the controls and processes these organizations already have in place. In addition, a review process would likely uncover additional controls that are required but not employed.

However, if this were true, far fewer breaches would occur, as this avenue is a low-cost, easy-to-implement option. Thus, I would submit that it’s the latter point: as environments grow, they become much more complex. Add a cloud environment (public or private) where automation and orchestration drive the changes, and the chances of secure deployments erode further, to the point where security misconfigurations become a near certainty. Under this premise, how would security need to behave to counter those effects?

Security must become HYPERAUTOMATED. 

Hyperautomation, as mentioned in Gartner’s Top 10 Strategic Technology Trends refers to the “application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation (i.e., discover, analyze, design, automate, measure, monitor, reassess.)”. 

If we apply those principles to security, Hyperautomation should augment human processes by placing security controls where and when they are needed, applying the right security policies, and drawing correct conclusions from signals about the state of attacks on one’s multi-cloud environments.

  • Discover: within minutes, API-based discovery creates a map of all assets and their interactions. This discovery runs continuously and forms the basis of further realizations of customer security intents at cloud speeds, without manual intervention
  • Automate: the human intent about what to secure, and how, is captured via simple rules and again is consistently and continuously realized as changes occur. To augment the knowledge human operators may or may not have about their environment, proposed groups and policies are generated using ML. Unattended learning happens at user-configurable intervals of, for example, 15 minutes. Any changes in scale, size, location etc., are automatically accounted for. Another aspect of automation is that the system updates, upgrades and heals itself without disruption. That removes the need for maintenance windows and follows the continuous deployment paradigm
  • Secure: turning proposals into effective policies is also automated and can optionally be reviewed and edited manually. A set of six cloud-native ShieldX ESP security engines are automatically placed at ideal positions—optimized for the goldilocks zone—and continuously receive the intended policies without human intervention. They work in concert to block known bad signatures, detect anomalies using ML, and surface “Indicators of Attack” that identify needles in the haystack, so human operators can focus on the most relevant signals first and spend their time fighting back.
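As a toy illustration of the Discover and Automate steps, the sketch below normalizes an inventory snapshot into an asset map and proposes security groups from observed attributes. Everything here is hypothetical: the function names, the data, and the simple tag lookup that stands in for the ML-driven grouping described above.

```python
# Illustrative sketch only: models the discover -> propose-groups loop with
# in-memory data. A real system would pull workloads continuously from cloud
# provider APIs; all names and fields below are invented for the example.
from collections import defaultdict

def discover_workloads(api_snapshot):
    """'Discover': normalize a raw inventory snapshot into an asset map."""
    return [
        {"id": w["id"], "ip": w["ip"], "tags": w.get("tags", {})}
        for w in api_snapshot
    ]

def propose_groups(assets):
    """'Automate': propose groups from observed attributes.
    (A plain tag lookup stands in for the ML-driven grouping.)"""
    groups = defaultdict(list)
    for asset in assets:
        groups[asset["tags"].get("app", "unclassified")].append(asset["id"])
    return dict(groups)

snapshot = [
    {"id": "vm-1", "ip": "10.0.1.4", "tags": {"app": "web"}},
    {"id": "vm-2", "ip": "10.0.1.5", "tags": {"app": "web"}},
    {"id": "vm-3", "ip": "10.0.2.4", "tags": {"app": "db"}},
    {"id": "vm-4", "ip": "10.0.3.9"},  # untagged: surfaced for human review
]

print(propose_groups(discover_workloads(snapshot)))
# {'web': ['vm-1', 'vm-2'], 'db': ['vm-3'], 'unclassified': ['vm-4']}
```

Re-running the same loop on each new snapshot is what lets proposed groups track changes in scale, size, and location without manual intervention.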

Let’s contrast a system that supports Hyperautomation with traditional controls in a multi-cloud environment. For every infrastructure change, a security review is required to assess whether policies need to be adjusted. In some organizations, operationalizing a new firewall rule takes days to weeks. If an additional instance of a security control is required, that roll-out can take even longer. However, in typical orchestrated infrastructures, we see hundreds of changes happening daily. Waiting for security to react would create discontinuities. As a result, one of the following scenarios happens:

  • Security teams are bypassed and not even informed about changes 
  • Security teams are informed but have no control to hold off changes, then they must struggle to bolt visibility on as an afterthought 
  • Security teams are involved in the change process but can only put controls in place at certain coarse perimeters that misalign with application boundaries. Those controls’ policies are then left wide open to not cause any traffic interference 

Recently, promises for a solution to this problem originated from security vendors with agent-based solutions. Let us take a quick look at how agent-based approaches fare compared to a Hyperautomated system. Interestingly, those vendors are advocating slight variations of the discover, automate and secure methodology to highlight specific features and hide the incomplete nature of their approaches. 

  • Agent-based Discover: anything an agent-based system can discover requires installing and running an agent on it. That means everything not running that agent will not be discovered, leaving potential blind spots wide open. This results in a longer lead time to get to basic visibility, and requires integration with DevOps processes to achieve continuity. Anything that cannot run agents, or is overlooked, stays invisible. Pervasively rolling out agents takes several weeks at least.
  • Agent-based Automate: automation is mostly limited to ACL security policy and does not include grouping. This leaves the problem of properly classifying workloads for security purposes with the security team. Can you think of an individual in your organization who would have comprehensive knowledge about which label to put on each and every workload? Most likely this task keeps many individuals busy for weeks to months. 
  • Agent-based Secure: most agent-based solutions are supplying only basic security controls; some are limited to ACLs, which certainly help reduce the attack surface. However, the initially mentioned security researcher will have no chance of catching an attack or breach over ports that need to stay open for business reasons. 
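The discovery blind spot above is easy to picture as a set difference: assets enumerated from the cloud provider’s API versus assets that report in via an installed agent. All names below are made up for illustration.

```python
# Toy illustration of the agent blind-spot problem: API-enumerated inventory
# vs. what an agent-based system can see. Data is invented for the example.
api_inventory = {"vm-1", "vm-2", "vm-3", "db-1", "legacy-appliance"}
agent_reported = {"vm-1", "vm-2", "db-1"}  # only hosts actually running the agent

blind_spots = api_inventory - agent_reported
print(sorted(blind_spots))  # ['legacy-appliance', 'vm-3'] -- invisible to the agent
```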

To recap, the traditional security approach carries the highest risk, and agent-based solutions offer better segmentation. However, they still incur major limitations in security richness and demand enormous manual effort to roll out.

Each of the compared alternatives to a Hyperautomated system leads to the initially mentioned state wherein security controls are either not in place or are not configured properly for breaches to be detectable. No matter how perfect a security practice in an organization is, and no matter how much is spent on security controls, that gap cannot be closed with an inappropriate approach. 


Capital One Breach—It’s Cloudier than You Think

Looks like another breach—but this one continues a trend we’ve seen on the rise recently. Namely, the attacker took advantage of a poorly configured firewall to access cloud-based data. Some claim it was a web application firewall; other reports aren’t clear. Regardless, as we move into multicloud, this problem is becoming more and more pervasive.

Capital One, like many companies, was stuck in a time warp. Historically, security was done mostly by fortifying the perimeter of the network, assuming that adversaries could be kept out by locking a single gate or chokepoint. More and more, we learn that this architecture is no longer effective, as there is an incongruity between the physical data center boundary and virtual perimeters. Those new perimeters can take on any size and shape and change at cloud speeds, making it impossible for traditional security to follow—especially traditional firewalls. Worse, the security controls offered by cloud vendors are weaker than traditional options and are often no match for sophisticated attacks. In this case, the attacker was a former AWS employee who likely knew the ins and outs of the fragmented, cloud-based network.

What are the lessons?

  1. Without auto-generation of policies, those dynamic environments will always have sub-optimal firewall configurations. Today, many enterprises employ people whose sole function is to update firewall policies—often a full-time role. When you move to the cloud, this isn’t scalable; it’s impossible for humans to keep up.
  2. It’s not just automated security policy generation—you also need automated control deployment. Policies are only as good as the controls that enforce them. Even if you get policies under control, the dynamic nature of the cloud still means the controls must adapt at the same, instant speed.
  3. Intention, intention and intention. Automation isn’t enough if you can’t tell your system what you want it to do. When you input a destination into Waze and hiccups happen, does Waze say, “Sorry, you can’t go there anymore”? No, it adjusts. The same flexibility is required in security: continuous and automated transformation of security intent into security controls, eliminating configuration errors over time.
  4. East West is the new North South.  Tracking lateral movement in a fragmented cloud environment is more critical than ever.
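Lessons 1 and 3 can be sketched in a few lines of Python: a stable, human-stated intent is re-expanded into concrete rules whenever the workload inventory changes. The rule format, tier names, and the `realize_intent` helper are all hypothetical, for illustration only; they are not ShieldX’s actual API.

```python
# Hedged sketch: policies auto-regenerated from a stable human intent
# ("web may reach app on 443") as the workload inventory changes.
def realize_intent(intent, workloads):
    """Expand a tier-level intent into concrete allow rules for the
    workloads currently in each tier."""
    rules = []
    for src in workloads.get(intent["from"], []):
        for dst in workloads.get(intent["to"], []):
            rules.append((src, dst, intent["port"]))
    return rules

intent = {"from": "web", "to": "app", "port": 443}

# Day 1 inventory
rules = realize_intent(intent, {"web": ["10.0.1.4"], "app": ["10.0.2.4"]})
print(rules)  # [('10.0.1.4', '10.0.2.4', 443)]

# The orchestrator scales out the web tier; rules regenerate, no human edit.
rules = realize_intent(intent, {"web": ["10.0.1.4", "10.0.1.5"], "app": ["10.0.2.4"]})
print(len(rules))  # 2
```

The point of the sketch: the intent never changed, only its realization did, which is exactly the Waze-style adjustment described above.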

You’ve moved to the multi cloud—welcome to the new reality.  One of the biggest questions facing every senior security professional is figuring out how to secure enterprise networks as they fundamentally and constantly change over time. This requires a level of flexibility and scale heretofore unknown in the security industry. Traditional appliance-based solutions were built monolithically and are not well suited to cloud architectures. And new cloud friendly products do not provide the depth of security to protect environments from the variety of attacks typically faced.

So what can you do?  Check out our CISO’s Guide to Multi Cloud Security which provides more than a few clues.

PART II: AWS and Azure–Cloud Security Isn’t True Security

This is a continuation of a previous blog.

Cloud security isn’t true security.

Digital transformation, the cloud, and the increased popularity of DevOps may have sped up your business practices and driven innovation. Unfortunately for network and security operations teams, the combination of these factors means a significant increase in the resources that need protection. Securing them can become a complex, multi-vendor, multi-technology, hybrid-cloud issue.


With limited resources and manual processes, it is difficult for cloud-based IT organizations to keep up with demand and document changes. While both Amazon and Azure provide too many services (Wikipedia lists these for your general Azure interest) to detail in a short security article, it suffices to say that with each cloud capability your company utilizes, the importance of securing your presence only increases. Now we’ll go into the most obvious reason security breaches are increasing on all cloud platforms: misconfiguration.


Cloud-provided security is a miss on misconfigs

With financial and time pressure only on the increase for CISOs, there’s a real temptation to “lift and shift,” or drop assets into a cloud without considering the security of their configuration and relationships to on-premises assets. Not the best idea.


We won’t mince words here: misconfigurations are THE BIGGEST driver behind breaches today. The skyrocketing rate of poorly configured cloud infrastructure is the major source of these breaches. According to Computing Cloud, problems arising from this one issue jumped by 424% this year, accounting for nearly 70% of compromised records over the year.


Computing Cloud’s 2018 Review notes that 86% of organizations cite data breaches and loss as the primary reason they hesitate to adopt the cloud. Unfortunately, a simple misconfiguration, even a failure to set a single option in a company’s cloud service, can create a major security risk. Take problems at Equifax, Cathay Pacific and others as an indicator of what’s to come when you leave the door open behind you.
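To make the “single unset option” point concrete, here is a minimal, hypothetical checker in Python that flags security-group rules left open to the entire internet. The rule schema is a simplified stand-in for any real provider’s format, and treating public port 443 as acceptable is just an example policy, not a recommendation.

```python
# Illustrative check for the most common cloud misconfiguration class:
# an ingress rule left open to the world. Rule format is invented.
def world_open_rules(rules):
    """Return rules reachable from anywhere, excluding intended public HTTPS."""
    return [r for r in rules if r["source"] == "0.0.0.0/0" and r["port"] != 443]

rules = [
    {"port": 443, "source": "0.0.0.0/0"},   # public HTTPS: usually intended
    {"port": 22, "source": "10.0.0.0/8"},   # SSH restricted to internal range
    {"port": 3306, "source": "0.0.0.0/0"},  # database open to the internet!
]

for r in world_open_rules(rules):
    print(f"port {r['port']} is reachable from anywhere")
```

Even a toy audit like this catches the class of error behind several of the breaches mentioned above; the hard part at scale is running it continuously across every cloud account.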


So Retro-active!

Cloud companies are learning, but slowly, while customers are left to piece together their security after the fact. Some enterprise customers who already use Azure, for example, may extend Active Directory (AD) controls into the cloud, so they can define “new” security controls using their existing AD controls. AWS is adapting to the new security customer by working toward enterprise-readiness in its network monitoring. It has a ways to go, though. Over time both platforms have added a few resources, but neither offers a comprehensive security stack.


The problem is that the cloud providers are securing their networks, not your applications. They provide insight into and monitoring of suspicious activity on their network, but not on your databases. This is like a security guard who patrols the streets for suspicious activity while the actual break-ins happen inside the homes.


It gets even worse, though, because it’s still up to you to use all the security products they make available—and to use them correctly—while the providers are still learning how to deliver them properly. Add the fact that each cloud provider does it differently, and you’re looking at a security staffing problem: you won’t find enough security professionals to help you secure all the different types of cloud environments.


Now please ask yourself and your team if these add-ons, workarounds, and limited cloud-provided security insight are good enough and consider alternative solutions. Some multi-cloud security vendors, ShieldX included, give customers a single viewpoint or “pane of glass” to define security policies and control. A micro-services driven approach with custom-built orchestration for discovery and scaling makes these solutions very effective in cloud migration.


Other features provided by ShieldX are important when it comes to cloud security readiness: the ability to micro-segment and do DPI for “threat protection” on the traffic to detect any threats within the E/W (core) network.

PART I: AWS and Azure–Cloud Security Isn’t True Security

Like taking flight, most enterprise CISOs begin (and continue) building their security structure while their assets are on the ground, before transitioning a number of them to AWS or Azure cloud storage and apps. Rather than building on a cloud foundation from the very beginning of their business model, the most likely scenario for our readers is to have a fair number of data center assets on their own servers, even after the move to their cloud(s).


However these assets are positioned, assessment of your own security posture should take into account their configuration as well as their location—and cover everything in between. Below we go into the different considerations for secure AWS and Azure storage, as well as the importance of a holistic security plan for whatever your organization has decided to shift—or keep on premises.


The basics

In a general sense, AWS and Azure have grown more similar than different. AWS was initially built to hold Amazon’s own assets and information; its data centers were then converted for customer use. So from day one, the security architecture was not built to allow customer control in several respects, let alone the microsegmentation that you can only get from a third-party provider.


As for Azure, it launched in 2010—starting as an internal project for building and deploying Microsoft’s own applications—but is now a Fortune 500 favorite in the cloud game.


Add-ons add up

Neither AWS nor Azure features security as a pillar on its website, and there’s a reason for that. If there’s anything you take away from this article, it is vital to remember that when it comes to security, anything put in the cloud falls under a shared responsibility model.


If you build an application, do anything on your own that holds customer data, or write code—that’s all your security responsibility. It works well only as long as every user has done their bit.


According to their marketing, both Azure Active Directory and AWS Directory Service profess “reliability” and “scalability” and touch on security features that can basically be categorized into:

  • Visibility
  • Threat protection
  • Security assessment
  • Cloud configuration assessment, and
  • Policies and constraints, including varied microsegmentation


One security researcher summarizes that, though he prefers them for data protection, his main challenge with AWS is that “they don’t offer control over the subnet level. For a security provider to mitigate that issue we need to look at every machine’s traffic.”


We encourage you to visit both websites or this handy comparison guide for specifics, but let’s move forward. Azure’s Advanced Threat Protection, an add-on security feature, professes to:

  • Identify suspicious user and device activity with both known-technique detection and behavioral analytics
  • Analyze threat intelligence from the cloud and on-premise
  • Protect user identities and credentials stored in Active Directory
  • View clear attack information on a simple timeline for fast triage
  • Monitor multiple entry points through integration with Windows Defender Advanced Threat Protection

But a comparison of in-cloud offerings is not the takeaway point of this article; other articles do that. Our point is this: we believe no cloud’s security description should satisfy you. You should leave a cloud’s website with multiple questions and assumptions. Cloud security isn’t true security. Never believe the hype.


Take the above bullets. You may ask yourself, How does Azure identify suspicious user activity via analytics, when a user could be monitoring on-premise apps before breaking in without suspicion? How do they analyze threat intelligence on premises? Would that require timely installation and automatic updates? Yes, Azure monitors multiple entry points—cloud entry points. Is every department of your company using the same cloud login? Is that a good thing?


So let’s pretend, with all your open-ended questions, you’ve opted to purchase their security plan. But you need more. To secure on-premise apps you’ve gone with an agent-based solution. A few other departments have added on a patchwork of virtual appliances to supplement their data security. Like many companies, you may throw consistency out the window and inadvertently end up using multi cloud/platform approaches even across divisions. Suddenly, in Q3, the CFO calls you in a panic, asking why vendors are emailing and asking for overdue licensing and maintenance fees.


We’d like to offer you a little reminder. Rather than relying on multiple add-on security providers, with an agentless network provider like ShieldX you consolidate and apply one set of controls across multiple platforms. It is exactly this patchwork nature of cloud security that led ShieldX to devise the solution in the first place. But enough sales talk.


PART II coming tomorrow.

Agentless Micro Segmentation: How Does ShieldX Do It?

At a recent trade show, I was asked: “How does ShieldX implement agentless micro segmentation?”  Not coincidentally, Gartner recently published a research note (login required) and called ShieldX out for its agentless technology, correctly calling us “microservices-based micro segmentation.”

How do we do it? ShieldX deploys a network-based architecture: we insert into multi-cloud environments to collect and inspect infrastructure traffic for visibility, analytics and security control, instead of relying on endpoints (agents). We implement agentless network traffic inspection using an overlay network. Insertion is handled by the Segment Interface (SI) microservice. In a VMware ESXi environment, we use an SI on a trunk tap for Tap Mode, and Layer 2 VLAN bridging to SIs for Inline and Microsegmentation Mode. In an Azure environment, we use Flow Inspectors (FI), placed inline as a NAT provider for N/S traffic, and route traffic between workloads via User Defined Routes (UDR) for E/W traffic. In an AWS environment, we use the FI as a NAT provider for N/S traffic, and network encapsulation and route entries for E/W traffic. This ensures we are placed in the “Goldilocks Zone”: not on the system (too close), where deployments, upgrades, testing and maintenance become more difficult, but also not removed from the network at the perimeter only (too far), where traffic cannot be rerouted and steered in a timely and effective way. This makes ShieldX the “just right” solution for all of your micro segmentation and cloud security needs.
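The E/W traffic-steering idea above can be modeled in a few lines of Python. This is purely illustrative: real steering happens in the cloud’s own route tables (e.g. Azure UDRs), not in application code. The toy route table sends east-west prefixes through an inline inspector hop, resolved by longest-prefix match, while the default route carries N/S traffic.

```python
# Toy model of route-based traffic steering: east-west prefixes are routed
# via an inline inspector; everything else takes the default N/S path.
# Addresses and names are invented for the example.
import ipaddress

INSPECTOR_HOP = "10.0.9.9"
routes = [
    ("10.0.2.0/24", INSPECTOR_HOP),  # E/W: app subnet via the inspector
    ("0.0.0.0/0", "internet-gw"),    # default: N/S via gateway/NAT
]

def next_hop(dst_ip):
    # Longest-prefix match, as a cloud route table would do.
    best = max(
        (r for r in routes
         if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r[0])),
        key=lambda r: ipaddress.ip_network(r[0]).prefixlen,
    )
    return best[1]

print(next_hop("10.0.2.7"))  # 10.0.9.9 (east-west, inspected)
print(next_hop("8.8.8.8"))   # internet-gw (north-south)
```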

But what is the most important impact? Frictionless deployment and maintenance, with no additional testing at every upgrade. I always say the proof is in the pudding: in their review of ShieldX, Alaska Airlines noted: “We evaluated vArmour and Illumio. They didn’t meet our requirements. ShieldX is a superior solution.” We deploy in just a few hours or less, and immediately begin to provide a full set of security controls and automated micro segmentation. Longer term, machine learning makes managing perpetual flux on an ever-changing network easy: changes to a micro-segmented network take little effort. Again, noting our customer: “The Adaptive Intention Engine is fantastic. It allows us to develop security policies using the language of our internal customers. It’s machine-learning applied to security workflows. That allows us to much more easily construct the policies that will protect those workflows.”

VMWare Security Analysis

Data center virtualization was originally designed to improve the utilization rates of computing, networking and storage assets. As the early pioneer of such technologies, VMWare grew to become the dominant vendor of data center virtualization software. Unfortunately, the limited security solutions cloud providers bundle with their data packages have not kept pace with their popularity and rapid feature expansion.

Unaware customers who migrate their assets via providers like VMWare, without a holistic inter-cloud security strategy in place, are left both insecure and financially vulnerable.

While every cloud provider should be considered an analog, in this advisory we will address VMWare specifically as both a trendsetting example and leading cloud provider. Here we provide users with five reasons to consider an inter-cloud security approach when those assets are in play.


A successful software-defined data center implementation should support scaling of computing resources

A successful implementation allows business units to add new applications rapidly and with enhanced data center security, and it should be enabled in a VMWare-powered data center. But it is not a feature VMWare offers. A barely hidden secret in IT corners is that many previous loyalists have converted to AWS, prompting a rapid rise in demand for cloud computing and IaaS.

A comparison of the growth in AWS-based virtualization and VMWare’s on-premise virtual servers illustrates the movement toward AWS.

Solution   2013     2014     2015     2016      2017
AWS        3,108    4,644    7,880    12,219    17,459
VMWare     5,150    6,040    6,650    7,090     7,920*

(All figures in $mil.)

*Re-statements to account for Dell acquisition


The result has been that enterprises now own two separately virtualized assets. One is in their data centers with VMWare, and the other is in AWS VPCs and/or Azure Vnets. The public cloud has delivered economic benefits for them as well as more flexible control over their resources.


VMWare’s virtual networking and security toolkit is not built to maximize security

While VMWare has robust server virtualization offerings, its security features are simply too underdeveloped for the majority of customers’ needs.

To supplement them, customers seek alternatives with Cisco ACI and a multi-vendor mix for their security needs. Meanwhile, the cumulative cost to VMWare customers keeps rising. Gartner has seen consistent adoption of these offerings over the past year, and Cisco now reports over 3,500 paying ACI customers. (Gartner MQ on Data Centers)


VMWare never quite ‘got’ public cloud standards

VMWare initially took an adversarial stance towards their competitors. Of course, these were public clouds, most notably Amazon’s AWS. Not only did VMWare downplay the compelling benefits of AWS, but more importantly they did little to match their capabilities or provide alternative, legitimate pathways for customer workload migration.

Then they followed up with their own public cloud solution which experienced a myriad of growing pains. Their vCloud Air was sold to OVH in May 2017.


Add-ons add up

After launch, when it was forced to reconsider its position, VMWare offered its cloud customers the option of deploying its virtualization toolset (VMWare Cloud on AWS) on top of the already virtualized AWS cloud (functionality illustrated by VMPro).

The following table quantifies the cost of running VMWare Cloud on AWS compared to native AWS virtual servers, with VMWare providing no additional benefit.

Note the additional cost requirement to heavily invest in VMWare’s private data center in order to access preferred pricing in AWS.


VMWare on-premise license requirements                                    Yearly cost of 10 VMWare servers on AWS (1)    Yearly cost of 10 AWS EC2 instances without VMWare overhead (2)
100 CPUs of vSphere Enterprise Plus                                       $467,883                                       $193.20
100 CPUs of vSphere Enterprise Plus & 10 CPUs of NSX                      $441,890                                       $193.20
100 CPUs of vSphere Enterprise Plus, 20 CPUs of NSX & 20 VSAN licenses    $389,903                                       $193.20

(1)(VMWare data procured from their blog.)

(2)(AWS pricing is based on a reserved instance standard for a 3-year term as derived from their pricing sheet)


VMWare has not delivered on its promise of a robust security platform

When it comes to segmentation and threat prevention across the data center and public cloud, its customers are still waiting for answers. VMWare has underdeveloped inter-cloud security offerings—and they are hampering customer adoption of true multi-cloud infrastructure.

Let’s go back to the very beginning of connective security, starting with virtual servers. Virtual servers naturally gave rise to virtual network switches, which connected them within a single physical server and across their data center. The servers needed to be segmented and inter-server traffic inspected for threats.

Initially, VMWare offered the VMSafe API to allow partners to bring their expertise to bear in order to keep this virtual network safe for their customers. But after getting their partners invested in this approach, VMWare abruptly canceled their API effort in favor of internally developed techniques. The outcome was that the virtual network suffered in its security posture compared to what was delivered on the physical network. This limited security foundation is unfortunately coupled with an aging virtual network and repackaged as an “inter-cloud” offering called NSX-T. NSX-T is not lacking in bold claims.

While the NSX-T design guide claims to provide “micro-segmentation for AWS workloads,” it does not offer any threat mitigation beyond the original NSX offering, which is limited to working on top of AWS with little support for other leading clouds such as Azure and GCP.

The security offered by NSX-T is based on basic firewall functionality for N-S traffic, coupled to the segmentation built into each vNIC.

NSX-T does not begin to address the fundamental requirements of a multi-cloud security solution. The security policy must be expressed as an intention to be applied not to VMs, but to operations from application workloads. The solution must work seamlessly across all major clouds.

Customers who have integrated their assets with VMWare have been struggling to absorb and deploy this limited model as they look to mitigate inter-cloud security challenges.

Take a hint from a major utility company (name withheld), which had to deploy virtual firewalls in addition to NSX to protect its virtualized data center. Operationally, these dual security frameworks were challenging to maintain.

The customer was unsure, after all their efforts, whether they had the protection they needed. When they moved their workloads to AWS, the same data center security implementation could not be deployed there. The increasing opex and capex burdens, and reduced confidence in security, set back their timeline for moving additional workloads to the cloud.

VMWare’s capitulation to AWS has resulted in a new marketing approach wherein VMs from the data center can migrate to AWS. As noted earlier, this doubles their customers’ spend and reduces their flexibility. Additionally, this migration is currently supported on AWS, but not on Azure or Google Cloud.

Meanwhile, VMWare has taken to the airwaves stating that there are too many security offerings in the marketplace. The implication is that customers should turn to VMware for a simplified and seamless security umbrella.



Enterprise customers are attuned to VMware messaging and some have absorbed its technology and marketing pitch. While VMWare has robust server virtualization offerings, its security solutions are inadequate in relation to its features.

When what is being provided does not meet their fast-changing usage needs, customers either turn to complex and costly add-on solutions, or are otherwise hampered in their search for workload security across multi-cloud platforms.

Moving forward, enterprises should consider a true SDDC strategy that offers overarching protection for cross-platform assets. Don’t put blind faith in security for your entire company provided by just one of the several cloud platforms you use.

The ShieldX Serverless Security Philosophy

Serverless architecture, also known as Function as a Service (FaaS), presents new challenges for securing applications built using this architecture.  FaaS is an event-driven architecture in which a small piece of code is executed on an API call or message.  Various cloud vendors support multi-language (Java, Javascript, python, C#, etc.) FaaS to make it very easy for developers to use.  Additionally, FaaS is attractive for economy and maintenance reasons because the cost is based on the execution time and users don’t have to worry about regular maintenance of web-servers or shared resources. But the architecture introduces challenges in terms of how and where to enforce security controls.
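As a concrete illustration of the enforcement-point problem, here is a minimal, hypothetical FaaS-style handler in Python. Because the function is invoked per event and there is no long-lived server to host an agent, any security control (input validation in this toy example) has to run inside the function or a wrapper around it. The handler shape loosely follows AWS Lambda’s (event, context) convention; the validation rule is invented.

```python
# Minimal FaaS-style handler sketch: the security control lives inside the
# function body, the only choke point the developer owns in this architecture.
import json

def validate(event):
    if not isinstance(event.get("user_id"), int):
        raise ValueError("rejecting event: user_id must be an integer")

def handler(event, context=None):
    validate(event)  # enforce the control on every invocation
    return {"statusCode": 200, "body": json.dumps({"user_id": event["user_id"]})}

print(handler({"user_id": 42}))
```

Per-function wrappers like this scale in number with the functions themselves, which is exactly why where and how to enforce controls becomes the challenge the paragraph above describes.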

Why You Need Advanced Micro-Segmentation to Combat Advanced Attacks

Just as we learned thirty years ago, access control alone is not a sufficient defense. Or, to put it another way, it’s déjà vu all over again! Just as the access control provided by those first firewalls in the 1980s was not enough to secure the perimeter, micro-segmentation based on access control alone does not adequately solve the problem of lateral movement inside the multi-cloud.


About Author

Ratinder Ahuja
Founder & CEO

Ratinder leads ShieldX and its mission as its central pivot point, drawing from a career as a successful serial entrepreneur and corporate leader, and bringing with him his unique blend of business acumen, industry network and deep technical knowledge.