I am working on deploying an API solution on GCP where mutual SSL/TLS is required (server- and client-side certificates). For the traffic ingress (entry point), I found that the Kubernetes ingress controller supports this (it is NGINX-based). I am also interested in Cloud Endpoints, which uses ESP (the Extensible Service Proxy, itself an NGINX-based deployment on Kubernetes).
I couldn't find anywhere in the documentation whether mutual SSL/TLS is available for ESP (Cloud Endpoints). Does anyone know the answer to this?
This might be possible using Istio. Have you come across the following article? It seems to suggest how to achieve mTLS for Endpoints.
https://istio.io/docs/examples/platform/endpoints/
ESP supports mTLS. You can specify the certificate files here:
proxy_ssl_certificate /etc/nginx/ssl/backend.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/backend.key;
Here is its nginx config.
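To show where directives like these sit, here is a minimal sketch of an nginx server block with mutual TLS on both legs. The file paths, the upstream name, and the inbound client-verification directives are illustrative assumptions, not ESP defaults:

```nginx
server {
    listen 443 ssl;

    # Server certificate presented to clients
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Require and verify client certificates (mTLS on the inbound leg)
    ssl_client_certificate /etc/nginx/ssl/ca.crt;
    ssl_verify_client on;

    location / {
        # Client certificate the proxy presents to the upstream (mTLS on the outbound leg)
        proxy_ssl_certificate     /etc/nginx/ssl/backend.crt;
        proxy_ssl_certificate_key /etc/nginx/ssl/backend.key;
        proxy_pass https://backend;
    }
}
```

The `proxy_ssl_*` directives cover the proxy-to-backend connection; `ssl_verify_client` is what enforces client certificates on incoming connections.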
I have recently started learning and implementing Istio in an AWS EKS cluster. To configure TLS for the ingress gateway, I followed this guide, which simply asks you to add the AWS ACM ARN to the istio-ingressgateway as an annotation. So I neither had to create a secret from certs nor use Envoy's SDS.
This setup terminates TLS at the gateway, but I also want to enable mTLS within the mesh to secure service-to-service communication. Following their documentation, I created this policy to enforce mTLS within a namespace:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: xyz-mtls-policy
  namespace: xyz-dev
spec:
  mtls:
    mode: STRICT
But even after applying this, I see one service is still able to call another over plain HTTP.
So my question is: how do I use the ACM certs to implement mTLS in my namespace?
If you're calling from inside the mesh, I would say it's working fine; take a look here and here.
Mutual TLS in Istio
Istio offers mutual TLS as a solution for service-to-service authentication.
Istio uses the sidecar pattern, meaning that each application container has a sidecar Envoy proxy container running beside it in the same pod.
When a service receives or sends network traffic, the traffic always goes through the Envoy proxies first.
When mTLS is enabled between two services, the client side and server side Envoy proxies verify each other’s identities before sending requests.
If the verification is successful, then the client-side proxy encrypts the traffic, and sends it to the server-side proxy.
The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.
I am on Istio 1.6.8; I think it enables mTLS by default.
Yes, it's enabled by default since Istio 1.5. There are related docs about this:
Automatic mutual TLS is now enabled by default. Traffic between sidecars is automatically configured as mutual TLS. You can disable this explicitly if you worry about the encryption overhead by adding the option --set values.global.mtls.auto=false during install. For more details, refer to automatic mutual TLS.
Is there any clear process to prove that it is indeed using mTLS?
I would say there are three ways:
Test with pods
You can change the mode from STRICT to PERMISSIVE and call the service from outside the mesh; it should work. Then change it back to STRICT and call it again; it shouldn't work. Either way, you should still be able to call it from a pod inside the mesh.
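For that test, the PeerAuthentication from the question can simply be switched between modes; a sketch of the PERMISSIVE variant, reusing the names from the question:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: xyz-mtls-policy
  namespace: xyz-dev
spec:
  mtls:
    mode: PERMISSIVE   # accepts both plaintext and mTLS; switch back to STRICT to enforce mTLS only
```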
Kiali
If you want to see it in a visual way, Kiali shows something like a padlock icon when mTLS is enabled; there is a GitHub issue about that.
Prometheus
This was already mentioned in the banzaicloud article, and you mentioned it in the comments: you can check the connection_security_policy metric label. Istio sets this label to mutual_tls if the request has actually been encrypted.
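As a sketch, a query against Istio's standard request metric can show how much traffic is actually mTLS-encrypted; the metric and label names below are Istio's defaults, but the grouping labels are just an illustrative choice:

```promql
sum(rate(istio_requests_total{connection_security_policy="mutual_tls"}[5m]))
  by (source_app, destination_app)
```

If mTLS is working, this should account for all in-mesh traffic; comparing it against the same query without the label filter shows any plaintext remainder.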
Let me know if you have any more questions.
I am looking at using Azure Application Gateway to work around a set of legacy servers (Windows Server 2003) that cannot support TLS 1.2, which means that from March 2020 onward client browsers will not be able to access the site.
So my question is: can I use Azure Application Gateway to terminate SSL and route traffic on to a set of Windows load-balanced servers in our datacentres?
Has anyone done this before?
You can certainly do this, but Azure Front Door would be a better option, I believe (if you trust IP restrictions; I think that would be the only way to secure the endpoints). It would let you offload SSL and offers some other nice features. And you don't have to create and maintain a site-to-site VPN.
What's the recommended way to setup SSL/TLS with AKS for a .NET Core website that uses SignalR?
From what I can tell, Azure Front Door doesn't work because it doesn't support WebSockets.
And AKS doesn't have a managed service for SSL/TLS the way AWS does.
Do I really have to put an nginx proxy on top to make this work?
Also, it looks like the same problem applies to gRPC in .NET Core on Azure. Basically there is no way to host gRPC on Azure at all right now.
Suggestions?
Application Gateway provides native support for WebSocket across all gateway sizes. There is no user-configurable setting to selectively enable or disable WebSocket support; see Overview of WebSocket support in Application Gateway.
With Application Gateway, you can create listeners on port 80/443 to support WebSocket traffic, and health probes support the HTTP and HTTPS protocols. App GW also supports SSL offload and end-to-end SSL traffic.
There are two options for using App GW in conjunction with AKS. One is simply to put the App GW in front of the internal or public load balancers created by AKS; see this blog. Another, currently better, option is the Application Gateway Ingress Controller (AGIC). This is supported by Application Gateway v2 only.
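With AGIC installed, exposing a service through the Application Gateway is done with a standard Ingress object carrying the AGIC ingress class annotation. A minimal sketch; the host, secret, and service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-app
  annotations:
    # Tells AGIC (rather than another controller) to handle this Ingress
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-cert   # TLS secret for termination at the gateway
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: websocket-svc
            port:
              number: 80
```

AGIC translates this Ingress into listener and backend-pool configuration on the Application Gateway v2 instance.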
For more references:
Expose a WebSocket server
How to configure Azure Application Gateway to work with AKS via SSL.
I have created a Node.js application using HTTP/2, following this example:
Note: this application has used a self-signed certificate until now.
We deployed it on GKE, and it has been working so far.
Here is what this simple architecture looks like:
Now we want to start using a real certificate, and we don't know the right place to put it.
Should we put it in the pod (overriding the self-signed certificate)?
Should we add a proxy on top of this architecture and put the certificate there?
In GKE you can use an Ingress object to route external HTTP(S) traffic to the applications in your cluster. With this you have three options:
Google-managed certificates
Self-managed certificates shared with GCP
Self-managed certificates as Secret resources
Check this guide for ingress load balancing.
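For the first option, a minimal sketch of a Google-managed certificate attached to a GKE Ingress; the domain and service names are placeholders:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: app-managed-cert
spec:
  domains:
  - app.example.com   # must point at the load balancer's static IP for provisioning to succeed
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # Attaches the managed certificate to the HTTP(S) load balancer
    networking.gke.io/managed-certificates: app-managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app-service
      port:
        number: 443
```

Google provisions and renews the certificate automatically once the domain resolves to the Ingress IP; the self-managed options instead reference your own certificate via a Kubernetes Secret.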
The client's SSL session terminates at the LB level; the self-signed certificates being used only encrypt communication between the LB and the pods. So if you want the client to use your new valid certificate, it needs to be at the LB level.
On a side note, having your application servers communicate with the load balancer over plain HTTP will give you a performance boost, since the LB is acting as a reverse proxy anyway.
You can read this article about load balancing; it's written by the author of HAProxy.
I built a Kubernetes cluster that contains a UI app, a worker, MongoDB, MySQL, and Elasticsearch, and exposes two routes with an Ingress; there is also an SSL certificate on top of the cluster's static IP. It utilizes Pub/Sub and Storage.
All looks fine.
Now I’m looking for a secure way to expose an endpoint to an external service.
Use case:
A remote app wishes to access my cloud app with a video GUID in the payload, in a secure manner, and get back a URL to a video in the bucket.
I looked at the Google Cloud Endpoints service but couldn't get it to work with Kubernetes.
There are more services that will need an access point to the app.
What is the best way for me to solve this problem?
Solve it by simply adding an endpoint to the ingress controlling the app, and protect it with SSL and JWT. Use this and this guide to add the ingress controller.
This tutorial shows how to integrate Kubernetes with Google Cloud Endpoints.
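If you do go the Cloud Endpoints route, JWT validation is declared in the service's OpenAPI spec rather than in the ingress. A hedged sketch; the scheme name, issuer, JWKS URI, and audience are placeholders you would replace with your identity provider's values:

```yaml
securityDefinitions:
  my_jwt_auth:               # arbitrary name for this security scheme
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "my-issuer@example.com"
    x-google-jwks_uri: "https://example.com/.well-known/jwks.json"
    x-google-audiences: "my-api.endpoints.my-project.cloud.goog"
security:
  - my_jwt_auth: []          # require a valid JWT on every endpoint by default
```

With this in place, ESP rejects requests whose bearer token is missing, expired, or not signed by the configured issuer, before they ever reach your pods.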