In some hosting environments/configurations, the network traffic between pods (applications) may traverse the public Internet. As a result, I'd like to secure the communication between the pods.
For example, I have the following structure:
Service_A - edge service in my product and provides access to my API to external users via public IP.
Service_B and Service_C - microservices that have ClusterIPs.
As I understand it, I can secure the user <-> Service_A traffic by using an Ingress controller with an SSL certificate.
But how should I secure Service_A<->Service_B communication? Create additional ingress services to wrap microservices? Are there any best practices for such cases?
One detail: microservices use gRPC for communication.
Thanks
A simple, generic solution that I like is to run a reverse proxy (such as nginx) in each pod. All of your app containers listen on localhost or Unix sockets, and the SSL proxy terminates external HTTPS connections. This makes it easy to audit your SSL configuration across all your apps, since every connection is terminated by the same nginx config.
Certificate distribution is the primary challenge with this approach. For external services, you can use Let's Encrypt to generate certs. For internal services, you'll need a private CA that is trusted by your SSL proxy. You can mount the CA cert via a ConfigMap at runtime. You'd then generate a cert per app or per pod, and mount that as a Secret consumed by the ssl-proxy container.
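A minimal sketch of that layout, assuming the app listens on 127.0.0.1:8080 and a Secret named my-app-tls holds the cert/key pair (all names here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ssl-conf
data:
  default.conf: |
    server {
      listen 443 ssl;
      ssl_certificate     /etc/nginx/tls/tls.crt;
      ssl_certificate_key /etc/nginx/tls/tls.key;
      location / {
        # the app container on localhost; for gRPC backends you would use
        # "listen 443 ssl http2;" and grpc_pass instead
        proxy_pass http://127.0.0.1:8080;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest            # listens on 127.0.0.1:8080 only
  - name: ssl-proxy
    image: nginx:1.25
    ports:
    - containerPort: 443
    volumeMounts:
    - name: tls
      mountPath: /etc/nginx/tls
      readOnly: true
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: tls
    secret:
      secretName: my-app-tls        # per-app/per-pod cert issued by your private CA
  - name: nginx-conf
    configMap:
      name: nginx-ssl-conf
```

Every app gets the same ssl-proxy container and config; only the mounted Secret differs, which is what keeps the SSL setup auditable in one place.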
If this sounds like too much work, you might want to look at https://github.com/istio/istio, which aims to automate the cluster CA role and the provisioning of per-pod certificates.
Say I have two services, ServiceA and ServiceB. Both are of type ClusterIP, so if I understand correctly, neither service is accessible from outside the cluster.
Do I then need to set up encryption for these services, or is in-cluster communication considered secure?
> Do I then need to set up encryption for these services, or is in-cluster communication considered secure?
The level of security you want to use is up to you. In regulated industries, e.g. banking, it is popular to apply a zero-trust security architecture where no network is considered secure. In such a case it is common to use mutual TLS between applications within the cluster, with authentication, authorization and encryption. On Kubernetes it's common to use a service mesh such as Istio to implement this.
In-cluster networking is typically its own local network; it is up to you to decide whether that is secure enough for your use case.
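For illustration, with Istio installed, a single mesh-wide PeerAuthentication resource is enough to enforce mutual TLS between all sidecar-injected workloads (a minimal sketch, assuming istio-system is the mesh's root namespace):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # plaintext traffic between workloads is rejected
```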
> Say I have two services, ServiceA and ServiceB. Both are of type ClusterIP, so if I understand correctly, neither service is accessible from outside the cluster.
Commonly, yes. But it is now common for load balancers / Gateways to route traffic directly to applications behind a Service of type ClusterIP; whether that works depends on which load balancer / Gateway you use.
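As a sketch, with the Kubernetes Gateway API the route below would let such a load balancer send traffic straight to an ordinary ClusterIP Service (the gateway and service names are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: service-a-route
spec:
  parentRefs:
  - name: my-gateway       # a Gateway managed by your load balancer controller
  rules:
  - backendRefs:
    - name: service-a      # a plain ClusterIP Service
      port: 80
```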
I have a pretty simple setup with an Application Gateway (AG), that sends traffic to a virtual machine running Ubuntu. The AG is loaded with an SSL certificate. The VM is set up to only allow incoming traffic from the AG, but it's an HTTP connection. This works, but I want to secure the traffic between my VM and AG. I can't find any relevant settings or documentation for this however.
How do I encrypt traffic between an Application Gateway and a Virtual Machine? I considered a Private Link to at least force traffic over the Azure network, but Private Link only supports PaaS products, whereas a VM is IaaS.
I assume you use the private IP of your VM in the backend settings of your Application Gateway. If so, the traffic stays within your VNET, and thus on the Microsoft network and within the same region. You do not need something like Private Link here.
So the only thing you could potentially do is to SSL-enable the endpoint on the VM and use an encrypted HTTPS connection between AppGW and your VM.
You have to do the same thing as with the API gateway: load a certificate into the service deployed on the virtual machine and expose that service's API over SSL, so the communication will be encrypted using that certificate.
The way to do it differs depending on which technology you used to deploy your service. For example, if you are using Spring Boot, you can see how to do it here:
https://www.baeldung.com/spring-boot-https-self-signed-certificate
However, you can use mutual TLS if you want the AG to be the only client that can connect to the service deployed on your VM:
https://developers.cloudflare.com/access/service-auth/mtls
I need to host the frontend and backend parts of my application on Kubernetes behind an Ingress. I would like only the frontend to be able to send requests to the backend, even though both are exposed by the Ingress under one host (on different paths). Is it possible to set something like this up in a Kubernetes cluster, so that no other applications can send requests to the backend? Can you do something like this with Kubernetes security headers?
Within the cluster, you can restrict traffic between services by using Network Policies. E.g. you can declare that service A can send traffic to service B, but that service C cannot send traffic to service B. However, you need to make sure that your cluster has a CNI with support for Network Policies; Calico is an example of such an add-on.
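A minimal sketch for the frontend/backend case from the question, assuming the pods carry app: frontend and app: backend labels (both labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend        # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only frontend pods may connect
```

With this in place, any pod in the same namespace without the app: frontend label is denied ingress to the backend pods.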
Ingress is useful for declaring what services can receive traffic from outside of the cluster.
Also, service meshes like Istio are useful to further enhance this security, e.g. by using an egress proxy, mTLS, and requiring JWT-based authentication between services.
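As a hedged sketch of the JWT part, Istio's RequestAuthentication plus AuthorizationPolicy can require a valid token on every request to the backend (the issuer, JWKS URL and labels below are placeholders):

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: backend-jwt
spec:
  selector:
    matchLabels:
      app: backend
  jwtRules:
  - issuer: "https://issuer.example.com"                          # placeholder issuer
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"   # placeholder JWKS endpoint
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-require-jwt
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # only requests carrying a validated JWT are allowed
```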
I'm in the process of containerizing various .NET Core API projects and running them in a Kubernetes cluster on Linux. I'm fairly new to this scenario (I usually use App Services with Windows), and questions regarding best practices for secure connections are starting to come up:
Since these will run as pods inside the cluster, my assumption is that I only need to expose port 80, correct? It's all internal traffic managed by the Service and Ingress. But is this a good practice? Will issues arise once I configure a domain with a certificate and secure traffic starts hitting the running pod?
When the time comes to integrate SSL, will I have to worry about opening up port 443 on the containers or managing any certificates within the container itself, or will this all be managed by the Ingress, Services (or Application Gateway, since I am using AKS)? Right now, when I need to test locally using HTTPS, I have to add a self-signed certificate to the container and open port 443, and my assumption is this should not be in place for production!
When I deploy into my cluster (I'm using AKS) with just port 80 open and assign a LoadBalancer service, I get a public IP address. I'm used to Azure App Services, where you can use the global Microsoft SSL certificate right out of the box, like so: https://your-app.azurewebsites.net. However, when I go to the public IP and configure a DNS label such as your-app.southcentralus.cloudapp.azure.com, it does not allow me to use HTTPS like App Services does. Neither does the IP address. Maybe I don't have something configured properly with my Kubernetes instance?
Since many of these services are going to be public facing API endpoints (but consumed by a client application) they don't need to have a custom domain name as they won't be seen by the majority of the public. Is there a way to leverage secure connections with the IP address or the .cloudapp.azure.com domain? It would be cost/time prohibitive if I have to manage certificates for each of my services!
It depends on where you want to terminate your TLS. For most use cases, the ingress controller is a good place to terminate the TLS traffic and keep everything on HTTP inside the cluster. In that case, any HTTP port should work fine; if port 80 is exposed by .NET Core by default, you should keep it.
You are opening port 443 locally because you don't have an ingress controller configured; you can install one locally as well. In production, you will not need to open any ports beyond a single HTTP port, as long as the ingress controller is handling the TLS traffic.
Ideally, you should not expose every service as a LoadBalancer. The services should be of type ClusterIP, only exposed inside the cluster. When you deploy an ingress controller, it will create a LoadBalancer service, and that will be the only entry point into the cluster. The ingress controller will then accept traffic and route it to individual services by hostname or path.
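A sketch of that shape, with a ClusterIP service and a path-routing Ingress (names, paths and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: ClusterIP          # only reachable inside the cluster
  selector:
    app: service-a
  ports:
  - port: 80
    targetPort: 8080       # the container's HTTP port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /a
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80
      - path: /b
        pathType: Prefix
        backend:
          service:
            name: service-b   # a second ClusterIP service, defined the same way
            port:
              number: 80
```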
Let's Encrypt is a free TLS certificate authority that you can use for your setup. If you don't own a domain name, you can use the http-01 challenge to prove control of the hostname and get the certificate. The cert-manager project makes it easy to configure Let's Encrypt certificates in any k8s cluster.
https://cert-manager.io/docs/installation/kubernetes/
https://cert-manager.io/docs/tutorials/acme/ingress/ (ignore the Tiller part if you have deployed it using kubectl or Helm 3)
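For reference, a sketch of the ClusterIssuer from that tutorial (the email is a placeholder, and the solver assumes an nginx ingress class):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx
```

With the issuer in place, annotating an Ingress with cert-manager.io/cluster-issuer: letsencrypt-prod and adding a tls: section is enough for cert-manager to request and renew the certificate automatically.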
Sidebar: If you are using Application Gateway to front your applications, consider using Application Gateway Ingress Controller
I used a self-signed OpenSSL certificate for my APIs, but when they are consumed client-side, the client shows a certificate error on the secured response. How do I provide a proper SSL cert? I'm using Elastic Beanstalk on AWS to host the APIs, and there I have come across ACM, which is integrated with Elastic Load Balancing and Amazon CloudFront. So which of those two should I use? If I use either of them, will that be enough in production? Or should I use something else?
You can set up a certificate with ACM that matches your DNS record, then point that DNS record to your Elastic Beanstalk environment's DNS record, which will be something like ENV-name.76p5XXXX22.us-east-1.elasticbeanstalk.com
AWS has a document you can follow here.
Let's begin.
For development purposes, a self-signed certificate is okay. You can set NODE_TLS_REJECT_UNAUTHORIZED=0 in your environment variables.
For AWS Elastic Beanstalk behind a load balancer, you have two options:
One-way encryption - you add a certificate to your load balancer only. This way, the client-to-load-balancer leg is encrypted and the load-balancer-to-instance leg is unencrypted. This is safe. I use this; that way I don't have to use any certificates on my instances, and I run a normal HTTP server on the instances. You can choose whether to allow only HTTPS in the load balancer settings (a configuration sketch follows after these options).
End-to-end encryption - you use a certificate on your instances as well, and you can either forward encrypted traffic directly from the load balancer to your instances, or decrypt and re-encrypt the traffic before sending it to the instances. I don't have any experience with this; the first option is suitable for most cases. Refer to: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html
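For the first option, a hedged sketch of an .ebextensions config, assuming an Application Load Balancer (the ACM certificate ARN is a placeholder):

```yaml
# .ebextensions/https-listener.config
option_settings:
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
  aws:elbv2:listener:443:
    ListenerEnabled: true
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example   # placeholder ARN
  aws:elbv2:listener:default:
    ListenerEnabled: false   # optionally disable the plain HTTP listener on port 80
```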