Thursday, May 23, 2019

Cloud Native way of security





How to secure the cloud in the cloud native era

Introduction
When it comes to cloud security, there are approaches that rely on the same tools that worked for on-premise infrastructure. These approaches are commonly referred to as an on-premise security posture. At the same time, more and more companies are building cloud native applications. Cloud security needs a corresponding shift, from the on-premise security posture to a cloud native approach to security: one that relies on the native primitives offered by the cloud provider.

Cloud native way of security
Cloud infrastructure has become the de facto infrastructure for hosting the applications and workloads that enable a business to provide value. The cloud has proved to be a value multiplier, and more and more businesses are trying to tap into that value. On the application side, companies are shifting to cloud native architectures, and this shift is the right approach because it helps companies realize the value of the cloud. The cloud native way of security takes the cloud native approach used for applications and applies it to cloud security. Just as a cloud native application is architected to use the cloud platform as a resource and not just a hosting environment, cloud native security taps into the cloud platform to provide the security posture.

Note: The left side of the cloud native security diagram shows the conceptual view of defense in depth. The implementation architecture might show each layer as a separate entity at the same level.

Why we need Cloud native way of security
It is all about understanding, and the approach that results from that understanding. There are two mindsets when it comes to cloud security. One mindset looks at what has worked in the on-premise data center and replicates it in the cloud. The other mindset looks at the cloud as a platform that provides the components for cloud security. The on-premise mindset typically views resources as virtual machines that are part of a network and joined to a domain. The security tools that have evolved over the years for on-premise resources essentially address that same scheme. These security tools involve (but are not limited to):

  •       Relying on the host operating system to provide security by managing and restricting incoming and outgoing traffic on specific ports. A good example is the Windows firewall.
  •       Recording all the traffic that flows in and out of the network and then analyzing it to flag anything suspicious. This also shows that security actions are typically thought of as reactive rather than proactive.
The above two points are just two examples of the security tools employed by the on-premise approach. The intention here is to point out the mindset in which security is delegated to tools rather than built into the resources and workloads that make up an on-premise establishment.
Those tools were designed that way because the cloud providers did not offer alternatives that supported the cloud native mindset. If we go back a couple of years and imagine that we had to host a cloud native application, we still had to rely on the tools that worked before.
Now times have changed. The cloud providers have addressed the lack of security tools within the cloud infrastructure itself. We have entered an era where native cloud security tools, or cloud primitive tools, have evolved to fully support cloud native security. The need now is to create cloud based solution architectures that leverage these primitive security capabilities. For example:
·       Instead of relying on the Windows firewall, an NSG (in the case of Azure) should be employed. That also means evolving the network architectures designed for the cloud to support it.
·       Instead of using security tools that record all the traffic, tools like Azure Sentinel should be employed.


Case for Cloud native way of security
Shift-Left Security: With the cloud native way of security, there will be more push towards achieving security using the shift-left approach. This means that instead of security being addressed as a wrapper around the system, it is addressed as a feature that is "baked in" to the architecture from the beginning. That is the very definition of the shift-left approach.
Security by platform: As more and more systems are automated and moved to the cloud, more threats emerge, and these threats are becoming multidimensional in nature. The cloud native way of security is better equipped to cope with present and emerging threats.
Defense in Depth: The cloud native way of security allows multiple mitigations to be applied against threats. If we replaced it with the on-premise way of security, we would need to stand up and maintain multiple layers of security ourselves or through vendors.
Scale: The cloud native way of security is platform based, so it inherently scales with the platform. To understand this, imagine we are back in the 2000s. The data being generated at that time was nowhere near the scale we have now, and on-premise infrastructure was provisioned to cater to that footprint of data. Now we are in the era of “big data”, and to support the scale requirements of that enormous amount of data on premise, we would need ever more dedicated resources.
Focus on business value: With the cloud native way of security, the focus of the business shifts from security and business to only business. It also means that the computing base a business is responsible for is reduced, which makes systems more secure because there is less computing base the business needs to protect.
Containerization (and Kubernetes) revolution: With the advent of containerization, more and more applications have shifted towards cloud native architectures. This shift means the classical cloud security model needs a corresponding shift towards the cloud native way of security.


Example Components of Cloud Native Security Architecture
Azure Sentinel
Each cloud provider has developed its own set of tools that fall in the realm of the cloud native way of security. On the Azure side, the tool that implements the cloud native way of security is Azure Sentinel. Azure Sentinel is a cloud-native SIEM that leverages AI for threat protection. Azure Sentinel works through Data Connectors, which hook into various Azure services, such as Azure Firewall, and feed data to Azure Sentinel. Azure Sentinel runs its security analysis and presents the security landscape on a dashboard. More information on Azure Sentinel can be found in the Azure documentation.

DDoS Protection
This is a platform level service that Azure offers. As mentioned earlier, the cloud providers are now answering the call to provide cloud native security tools. Azure DDoS Protection is one of the newer services that helps defend against DDoS attacks targeting cloud resources.

Azure Firewall
Azure Firewall is an Azure service that helps protect virtual networks. Since it is a managed service offering, it brings advantages such as real time analysis of traffic, which can then be fed into Azure Sentinel.

Web Application Firewall
For applications hosted on Azure App Service, the Web Application Firewall is essential for security. For a truly cloud native application hosted on App Service, adding the Web Application Firewall is a key step towards making its security cloud native.

Azure NSG
An NSG (Network Security Group) is a primitive (native) service offering that helps us implement rules for incoming and outgoing traffic. The beauty of the NSG is that it acts like the Windows firewall but can be applied to resources such as a subnet. If an NSG is attached to a subnet, its rules are enforced on all the VMs and other resources that are part of that particular subnet.
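As a rough sketch, an NSG with a single rule can be expressed as an ARM template resource like the one below. The resource name, rule name, priority, and port here are illustrative assumptions, not values from this article; attaching the NSG to a subnet then enforces the rule on everything in that subnet.

```json
{
  "type": "Microsoft.Network/networkSecurityGroups",
  "apiVersion": "2019-04-01",
  "name": "web-subnet-nsg",
  "location": "[resourceGroup().location]",
  "properties": {
    "securityRules": [
      {
        "name": "allow-https-inbound",
        "properties": {
          "priority": 100,
          "direction": "Inbound",
          "access": "Allow",
          "protocol": "Tcp",
          "sourcePortRange": "*",
          "destinationPortRange": "443",
          "sourceAddressPrefix": "Internet",
          "destinationAddressPrefix": "*"
        }
      }
    ]
  }
}
```

Because the rule lives in the platform rather than in each guest operating system, it applies uniformly to every resource in the subnet without touching any host firewall.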

Conclusion
In this article we have looked at the two approaches to cloud security. As we have seen from the tools and the conceptual architecture, cloud providers offer Managed Services for security. Using these Managed Services for the cloud security infrastructure is a step in the right direction towards cloud-native security.

Thursday, May 16, 2019

Kubernetes multi-service deployment and leveraging mounted volumes


How to deploy multiple services in a Kubernetes cluster and how to inject environment configuration settings at deployment time.


Introduction

The purpose of this article is to show how we can create a companion application to support the API and, most importantly, how to use the mounted volume feature in Kubernetes to pass configuration information to an application deployed on a Kubernetes cluster. This article also serves as the last part of a series of articles about leveraging Kubernetes, Docker, and ASP.NET Core to create a truly cloud native application.

Why

An obvious question is why we care about mounted volumes. We care because this is how we can pass environment and configuration information to an application hosted in a Kubernetes cluster. When creating an ASP.NET Core application, or any application that has configuration, we come across the question of how to handle that configuration. We live in a world where an application does not need to know which environment it is hosted in; it just needs to focus on what it is designed for (creating value). The configuration information should stay in the environment, and the application should pick it up when it is provisioned and run with it. A mounted volume allows us to store environment specific information that the application can use.

Solution

So far we have been successful in creating a Kubernetes cluster that hosts an ASP.NET Core API. The next logical step is to add a front end application that uses the API. We could host that frontend application anywhere, but since we are already using Kubernetes, why not use the same cluster we created before? So what this article details is how we go from a single service (the API) deployed on a Kubernetes cluster to multiple services (the API and a Web App) deployed on the same cluster.
The following image shows the transition from initial state to the final state.


Source code for the API app can be found at: https://github.com/mnabeel/mac-api
Source code for the Web app can be found at: https://github.com/mnabeel/mac-app
Once we deploy the API app and the Web app and query the services on the Kubernetes cluster by running the “kubectl get services” command, we get the following:

The API app and deploying it to Kubernetes is pretty straightforward: it has a self-contained database and no external dependencies. The case is different for the Web App, as it needs to know the reference (endpoint URL) of the API app. Since we are using Kubernetes to deploy both the API app and the Web App, we run into the issue that the URL of the API app can change whenever Kubernetes restarts the pod or we have to resurrect the Kubernetes cluster or services.
One solution could be that every time the API endpoint changes, we update the source code of the Web App, recreate the Docker image, push the image to Docker Hub, and then recreate the Kubernetes deployment and expose the service.
The other solution is to use a volume-mounted secret to supply the URL of the API app whenever the API endpoint changes. That is the purpose of this article. To accomplish this, we will create a store in our web app that carries the base URL of the API app. This is done by creating a folder called “secrets” and then creating a file called “appsettings.secrets.json” as shown below:
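As a minimal sketch, appsettings.secrets.json could look like the following. The article only tells us that the file has a ConnectionString element pointing to localhost; the exact shape of the file and the port are assumptions.

```json
{
  "ConnectionString": "http://localhost:8000"
}
```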

You can see that the ConnectionString element is pointing to “localhost”. When it is time to deploy, we will create “appsettings.secrets.json” as a secret and use it when creating the deployment for the Web app. Let us take a look at the Deployment.yaml, which is also part of the GitHub repo for the web app mentioned above.
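A minimal Deployment.yaml along these lines might look like the sketch below. The image name, labels, and mount path are illustrative assumptions; the secretName value is the one used throughout this article.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mac-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mac-app
  template:
    metadata:
      labels:
        app: mac-app
    spec:
      containers:
      - name: mac-app
        image: mnabeel/mac-app    # illustrative image name
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets # where the app reads appsettings.secrets.json
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings  # must match the name used in "kubectl create secret"
```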

In the Deployment.yaml we have information about the volumeMounts and secrets. This is how we declare where the secrets will be mounted on the deployed service by Kubernetes. An important thing to note here is the secretName field: we must use the same value for secretName that we will use in the step that creates the secret in the Kubernetes cluster. The value for secretName is “secret-appsettings”.
In order to deploy the Web App service in Kubernetes, we also need to create a Service.yaml file.
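A minimal Service.yaml for the web app might look like the sketch below. The service name, ports, and selector label are assumptions; the selector would need to match the pod labels in Deployment.yaml.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mac-app
spec:
  type: LoadBalancer   # exposes the web app with an external IP
  selector:
    app: mac-app       # must match the pod labels in deployment.yaml
  ports:
  - port: 80
    targetPort: 80
```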

We have the source code for the Web App (GitHub links provided in a previous section), we have the Deployment.yaml that describes how the deployment will be created, and we have the Service.yaml that tells Kubernetes how to create the service from the deployment. That completes the prerequisites for deploying the Web app to Kubernetes. We can now proceed to deploy. Here are the steps.
1.     Create secret:
“kubectl create secret generic secret-appsettings --from-file=./appsettings.secrets.json”
Running the above kubectl command will create the secret. One thing to note here is that we are using the same name for the secret as in the Deployment.yaml, which is “secret-appsettings”.
2.     Confirm that new secret has been created by running the following command:
“kubectl get secrets”

3.  Create Kubernetes deployment by running the following command:
“kubectl create -f deployment.yaml”
4.  Create Kubernetes service by running the following command:
“kubectl create -f service.yaml”
That will deploy the web app in your Kubernetes cluster.


How to change the secret

Follow these steps:
1.     Update the appsettings.secrets.json file in the directory from which you run your kubectl commands.
2.     Delete the old secret by running the following command:
“kubectl delete secret secret-appsettings”
3.     Recreate the secret (which will carry the new endpoint, as it references the updated “appsettings.secrets.json” file) by running the following command:
“kubectl create secret generic secret-appsettings --from-file=./appsettings.secrets.json”

4.  Update the Web App deployment, which will push the URL change to the Web App. This can be done in many ways. What I have found easiest is to first delete the old deployment (“kubectl delete -f deployment.yaml”) and then create the deployment again, as shown in the previous section:
“kubectl create -f deployment.yaml”

Here is how the application looks:



This is basically a simple front end that displays the member information from the API and allows the user to enter a new member. In the above screenshot, you will notice that I am printing some information about the host, the environment, and the endpoint URL of the API that the frontend app is using. This way I know which endpoint the app is hitting for data. Notice that the URL value is not http://localhost:8000 but http://35.202.95.20. This means our secret injection at deployment time using a mounted volume worked.

Conclusion

In this article we have seen how we can deploy two different applications in the same Kubernetes cluster and how we can use a mounted volume to inject configuration information into an application deployed on a Kubernetes cluster. This article is the last part of a series of articles about deploying an ASP.NET Core app using containerization technology and leveraging Kubernetes as the container orchestration engine.