Security is on the Verge of a Major Transformation

The connected world has put new requirements for agility and elasticity on the development and architecture of applications. To keep up with the demands of business and users, the practices of developing and operating applications went through a real revolution: the traditional monolith was broken up into small, nimble microservices that leverage the container and orchestration capabilities of cloud native computing.

Very much at the same time, networking went through its own major transformations, resulting in decoupled control and data planes (SDN), a diversity of Network Operating Systems (NOS), and more flexibility to program packet processing directly in the silicon. Open software and hardware initiatives driven by major cloud organizations disrupted the appliance model in favor of a disaggregated hardware and software model, a model which had already proved its worth on the compute side of the data center.

Security gear, however, remained largely unaffected by the application and network transformations; the appliance model, deployed in gatekeeper or cloud backhauling designs, withstood the test of time. That is about to change.

While the internet can adapt to higher throughputs for richer data and more connected devices, and applications can scale to handle those massive numbers of connections in hyperscale data centers, the one thing the internet and the hyperscale data centers cannot adapt to is the new requirement for lower latency.

Innovations in the connected world crave lower latency and near real-time communications and decision making. Autonomous cars, for example, are becoming aware of their surroundings. A car that knows the speed of the cars in front, or when traffic lights are about to turn red, can adapt its behavior. Not only will this result in fewer traffic accidents, reducing the number of traffic victims and jams, but it will also improve fuel or battery efficiency through anticipated coasting and acceleration, eventually resulting in greener cities. Autonomous cars can act on their immediate surroundings using predictive algorithms fed by connected devices in their immediate vicinity, provided this information is accurate and timely.

[You may also like: Getting Ready for 5G & IoT]

Traffic information has a limited validity radius: a car in one part of the city does not care about the real-time state of traffic lights in other parts of the city, at least not if they are not on its navigation path.

Backhauling communications for all connected devices in a larger region to a central cloud instance adds no value and only increases the latency of responses and decisions; hence the need for local, decentralized communications and processing.

There is still a case for central processing for the purpose of statistics reporting, billing, or traffic predictions for navigation systems using big data and deep learning technologies, tasks that are too resource-intensive for local processing. The near real-time decision making and communication of information, however, can be done at the edge of the network.

[You may also like: Network Security in an App-Driven World]

IoT and immersive technologies are driving processing to the edge, the physical location that connects things and people. As a consequence, colocation facilities and smaller data centers are being built in regional locations to bring compute and services closer to the customer. Edge computing is part of a larger distributed computing topology in which data is aggregated in big data processing and storage facilities centralized in the cloud, while local processing and real-time decision making are done in distributed regional data centers or the edge cloud.

Very large cloud facilities hosting cloud services take advantage of locations with cheap electricity, often harnessed from renewable energy, and cool outside air that allows for more efficient cooling systems. Facebook, for example, decided 8 years ago to build a large data center 100km south of the Arctic Circle because of its access to renewable energy and the cold climate.

Smaller data centers are built closer to populated areas, near the connected devices and people they serve. They enable the edge cloud and are the preferred location for colocation and private cloud deployments, as customers’ engineers have easy access to those locations.

Mobile Edge Computing is an ETSI-defined network architecture that enables processing close to the edge of the radio network. Traditionally, radio networks backhauled all mobile communications to a central point for routing, processing and security, resulting in a predominantly north-south traffic pattern.

[You may also like: Edge Computing, 5G & IoT: Risks and Opportunities for Service Providers]

More recent architectures such as 5G significantly change this design and allow for east-west communications and edge compute by putting micro-scale data center facilities close to the radio antennas. This architecture provides the low latency and near real-time decision making that are a hard requirement for the earlier example of cars that are aware of, and act on, their surroundings.

Organizations are well aware of, and versed in, securing online applications, whether these are hosted on-premise, in a private cloud, in a public cloud or across multiple clouds. In each of these cases, all traffic to the application is routed through a central stack of security functions that provide web application and API protection, bot management, DDoS protection, etc.

Many of these online applications are by nature not sensitive to variations in latency and as such are not hurt by backhauling traffic through a central stack of security functions such as cloud managed security services. To increase scale and offload central instances, applications can leverage Content Delivery Networks to cache and serve static and repeated content, but in general all queries and data-related transmissions still go up to the central application.

Cloud native applications leverage the advantages of cloud computing through the use of microservices and containers. These applications can, to some extent, be protected by gateways or backhauled security services. Gateways, however, only provide north-south security inspection, while the inherent nature of microservices generates an increase in east-west activity that can easily undermine the security of the application.

Netflix’s own red team demonstrated this through an application layer DDoS attack in which a limited number of queries through the API gateway resulted in an avalanche of east-west queries inside the microservice architecture, eventually leading to exhaustion and overload of the microservices in the fabric.
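To get a feel for the amplification effect behind such an attack, consider a minimal sketch (this is not Netflix’s actual scenario; the fan-out and depth figures are made up for illustration) of how one north-south request through the gateway multiplies into east-west requests when every service hop calls several downstream services:

```go
package main

import "fmt"

// eastWestCalls counts the internal requests triggered by a single
// gateway request when each service hop calls `fanout` downstream
// services, to a call chain of `depth` hops. Illustrative only.
func eastWestCalls(fanout, depth int) int {
	if depth == 0 {
		return 0
	}
	total := fanout // direct calls made at this hop
	for i := 0; i < fanout; i++ {
		total += eastWestCalls(fanout, depth-1) // calls made further down
	}
	return total
}

func main() {
	// With a fan-out of 4 across 4 internal hops, one API gateway request
	// triggers 4 + 16 + 64 + 256 = 340 east-west requests.
	fmt.Println(eastWestCalls(4, 4)) // prints 340
}
```

A modest burst of crafted gateway requests is thus enough to saturate services deep inside the fabric that a perimeter gateway never sees.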

[You may also like: Cloud-Native Application Security Challenges]

As microservice architectures move to encrypted and authenticated API requests between every node in the fabric, conveniently provided by newer service mesh architectures, all communications turn dark and inspection of east-west traffic within the microservice fabric becomes nearly impossible.

Securing microservices in encrypted service meshes is more effective when the security component moves down, closer to the application or service itself. Each pod in a service mesh contains a sidecar proxy that provides, among other things, the routing and encryption of the communications inside the fabric. The sidecar reverse-proxies traffic in clear text to the microservice running inside the same pod.

Consequently, the best placement for inspection of traffic inside a microservice fabric is between the sidecar and the service. Running as an independent proxy process between the sidecar and the service allows access to all clear-text requests and responses, while avoiding the dependencies on the sidecar or application that come with sidecar plugins or run-time components that hook into the application.
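As a minimal sketch of that placement, the following Go program plays the role of a hypothetical inspection proxy inserted between the sidecar and the service. The ports, the `suspicious` helper and its trivial rule are all illustrative assumptions, not part of any specific product:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// suspicious is a placeholder for real detection logic (signatures,
// rate limits, behavioral models fed by a central control plane).
func suspicious(r *http.Request) bool {
	return r.URL.Query().Get("debug") == "true" // trivial example rule
}

func main() {
	// Hypothetical ports: the mesh sidecar forwards decrypted traffic to
	// 127.0.0.1:8080, where this proxy listens; the actual service was
	// reconfigured to listen on 127.0.0.1:8081.
	service, err := url.Parse("http://127.0.0.1:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(service)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Full clear-text visibility: method, path, headers and body are
		// available here for inspection before the service sees them.
		if suspicious(r) {
			http.Error(w, "blocked by security policy", http.StatusForbidden)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe("127.0.0.1:8080", handler))
}
```

Because the proxy is an independent process on the pod’s loopback interface, neither the sidecar nor the application needs to be rebuilt or patched to add it.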

[You may also like: 10 Commandments for Securing Microservices]

The nature of microservices is such that independent services respond to client requests. Depending on the request, not every component is, or should be, aware of the client’s communication. Microservice architectures can scale and dynamically duplicate services to process higher demands, such as when load on a specific service increases. As a consequence, a security component on one microservice will not have the full context of the communications between a client and the service mesh.

To be aware of the full context of a client’s interactions with a cloud native application, all information regarding the client’s interactions with every microservice in the fabric needs to be centralized. The security components that sit between the sidecars and the services in each pod form the data plane of a cloud native security solution, while the centralized component, which could or should be a cloud native application in its own right, provides the control plane, the ‘brain’ of the solution.
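A sketch of how each per-pod data-plane component might feed that central brain; the event schema, the endpoint URL and the plain HTTP transport are assumptions for illustration (a real solution would more likely batch or stream events, for example over gRPC):

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// SecurityEvent is a hypothetical record each per-pod proxy emits so the
// control plane can correlate one client's activity across every
// microservice it touched.
type SecurityEvent struct {
	PodName   string    `json:"pod"`
	Service   string    `json:"service"`
	ClientID  string    `json:"client_id"` // e.g. mesh identity or token subject
	Method    string    `json:"method"`
	Path      string    `json:"path"`
	Timestamp time.Time `json:"ts"`
}

// reportEvent ships one event to the (hypothetical) control plane endpoint.
func reportEvent(ev SecurityEvent) error {
	body, err := json.Marshal(ev)
	if err != nil {
		return err
	}
	resp, err := http.Post("https://controlplane.example.internal/events",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	err := reportEvent(SecurityEvent{
		PodName: "cart-7f9c", Service: "cart", ClientID: "client-123",
		Method: "GET", Path: "/items", Timestamp: time.Now().UTC(),
	})
	if err != nil {
		log.Println("report failed:", err)
	}
}
```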

Securing the edge is not much different from securing cloud native applications. Cloud native applications leverage containers and use orchestrators such as Kubernetes to provide dynamic management of container fabrics. Improving the security, traceability and availability of the fabric can be achieved by integrating sidecar proxies such as Envoy, and service meshes such as Istio, with the orchestrator.

The cloud edge and mobile edge leverage container technology, whether managed by central orchestrators alone or with the added convenience and security of service meshes. The key technology, however, is containers, and the key design pattern is distributed. As such, the overall architecture to secure in the cloud and mobile edge is one of distributed container services, which can very well leverage the components and security architectures of cloud native applications to secure and manage them at scale.

Network Function Virtualization and service chaining can be achieved by virtualizing physical appliances and adding abstraction layers that provide programmatic configuration and control. A virtual appliance is still very much an appliance in its own right, even if delivered in a software package form factor. The virtual appliance has a data and a control plane of its own and, on top of the existing control plane interfaces, new APIs are added to provide programmatic access to query and change the state and behavior of the appliance’s data plane.

A virtual appliance, a software version of the physical appliance, is still very much a monolith: some configuration changes require reboots, and updates cannot be performed without downtime unless a cluster is introduced. The limitations of physical appliances still very much apply to virtual appliances, as they were designed as appliances in the first place. Applying the same paradigm as with cloud native applications, a network appliance can be treated as a monolith and exploded into its core data plane functions, each exposing a dedicated API. The control plane can be centralized and call on the APIs of the data plane functions, which run as containers distributed in the fabric, whether this fabric is a data center network fabric or a distributed cloud/mobile edge.
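A minimal sketch of one such exploded data-plane function: a containerized blocklist filter whose state is driven entirely by the central control plane through a small API, rather than through a per-appliance CLI. The endpoints, the port and the HTTP transport are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
)

var (
	mu      sync.RWMutex
	blocked = map[string]bool{} // current blocklist, owned by the control plane
)

func main() {
	// Control-plane API: atomically replace the blocklist.
	http.HandleFunc("/v1/blocklist", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPut {
			http.Error(w, "PUT only", http.StatusMethodNotAllowed)
			return
		}
		var ips []string
		if err := json.NewDecoder(r.Body).Decode(&ips); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		next := make(map[string]bool, len(ips))
		for _, ip := range ips {
			next[ip] = true
		}
		mu.Lock()
		blocked = next
		mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	})

	// Data-plane query: is this source allowed? (A real function would sit
	// in the packet path; HTTP keeps this sketch self-contained.)
	http.HandleFunc("/v1/check", func(w http.ResponseWriter, r *http.Request) {
		ip := r.URL.Query().Get("ip")
		mu.RLock()
		deny := blocked[ip]
		mu.RUnlock()
		if deny {
			http.Error(w, "blocked", http.StatusForbidden)
			return
		}
		w.Write([]byte("allowed"))
	})

	log.Fatal(http.ListenAndServe(":9000", nil))
}
```

Because the function holds no configuration of its own beyond what the control plane pushes, instances can be created, duplicated and destroyed freely across the fabric.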

Moving from virtual appliances to cloud native functions provides more scale and granularity. The footprint of a container is typically much smaller than that of a virtual appliance. Starting a container is as fast as starting a process on the server, typically on the order of milliseconds, while booting a virtual machine and initializing its data and control planes takes orders of magnitude longer. Longer start-up times limit the use cases for dynamically scaling in or out, as well as for flexibly setting up and tearing down service chains on demand. And since DevOps teams are very much accustomed to working with containers, the move from virtual appliances to containers should ease integration and acceptance.

[You may also like: Application Delivery Challenges for DevOps]

Designing a security platform as a cloud native application allows one to create a central control plane, hosted in a public or private cloud, that controls a distributed data plane protecting public cloud services, private cloud services, cloud or mobile edge services, etc. By design, the central control plane aggregates all data and can apply big data intelligence and processing to the security of all the different applications and services as a whole. It provides full visibility and deep intelligence centrally, while distributing the local processing and decision making for the lowest latency and best security.

The hyperscale cloud providers such as Facebook, eBay, Twitter, Google, Amazon and Microsoft have always been accustomed to being in full control of their compute software and hardware, up to building their own servers and even designing their own hardware components (Azure SmartNIC) and custom silicon (AWS Nitro).

After compute, the network was next, and the router and switching market went through a transformation. The typically proprietary, single vendor, tightly integrated hardware and software boxes, commonly referred to as middleboxes, had to make room for open, programmable devices in which hardware and software were decoupled. White- or brite-box hardware was decoupled from the network operating system (NOS), and open source NOSes as well as open hardware designs, such as those of the Open Compute Project (OCP), joined established vendors in a new ecosystem that provides richer and more programmable networks.

For high scale, low latency and high throughput applications, vendors provide SDKs that allow direct access to the silicon, while new abstraction interfaces, such as the Switch Abstraction Interface (SAI), OpenNSL, OF-DPA, etc., expose the hardware north-bound to network functions, controllers and orchestrators.

Programmable silicon and new programming languages such as Barefoot’s P4 and Broadcom’s NPL allow third parties to customize the packet processing pipelines of the network hardware even further and perform flexible operations at wirespeed.

[You may also like: The Move to Multiple Public Clouds Creates Security Silos]

Certain NOSes and hardware platforms add host and machine virtualization support and allow third parties to deploy and run containers or VMs on the control plane. Combined with programmatic access to control and data plane functions, third parties can enrich network devices with new wirespeed capabilities that provide scale and throughput with low latency.

These new capabilities allow cloud native network functions to run directly on network (hardware) devices and perform wirespeed processing of packets in data centers, cloud and mobile edges, as well as on Universal CPEs (uCPE) on the enterprise premises, effectively extending the cloud native security platform and moving it closer to the data path without additional hardware (servers) to be deployed.

With new requirements for connected applications come new architectures. Securing these new network architectures will require a transformation from a central to an edge-driven security model, where security decisions are made locally at the edge while being based on centralized intelligence in big data driven control planes.

Networking and applications have gone through their transformation cycles to adapt to new architectures through microservices, DevOps, decoupling of the control and data planes, programmability, and disaggregation. The security world now faces a very similar transformation and will leverage most of the technologies that enabled the network and application transformations.

Read Radware’s “2019-2020 Global Application & Network Security Report” to learn more.

