See the connected question - Kubernetes pod exec API exception: Response must not include 'Sec-WebSocket-Protocol' header if not present in request.
I have been able to successfully make a WebSocket connection using the Pod exec API. But I am using kubectl proxy on localhost to handle the authorization on behalf of the terminal client.
The next step is to authorize the request directly against the Kubernetes API server, so that there's no need to route the traffic via kubectl proxy. Here's a discussion in the Python community where they have been able to send the Authorization token to the api-server. But I haven't had any success with this in Node.js. I must admit I am not familiar enough with Python to follow that discussion fully.
Can someone from the kubernetes team point me in the right direction?
Thanks
For future wanderers....
Although the exec API supports the Authorization header, the browser WebSocket API doesn't support setting it yet. So the solution for us was to reverse-proxy the connection through our own server APIs.
It went like this...
client browser -wss-> GKE LB (SSL Termination) -ws-> site API (nodejs) -WSS & Authorization-> kube api-server exec API
So to answer my own question: per my tests, GKE Kubernetes supports Authorization only in headers, so you need to reverse-proxy if you want to connect to it from a browser. Per this code, some Kubernetes setups allow tokens in the query string, but I didn't have any success with that on GKE. If you are using a different cluster host, YMMV. I welcome comments from the Kubernetes team on my observations.
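For reference, here is a minimal sketch of the middle hop (the "site API" in the chain above) using the ws package in Node.js. The pod name, namespace, exec query parameters, and the KUBE_API/KUBE_TOKEN environment variables are illustrative assumptions, not the exact values we use:

    // Minimal reverse-proxy sketch: browsers connect here, and we attach the
    // Authorization header on the server-to-apiserver hop.
    const WebSocket = require('ws');

    const server = new WebSocket.Server({ port: 8080 });

    server.on('connection', (client) => {
      // Placeholder pod/namespace/command; adjust to your exec target.
      const target = `wss://${process.env.KUBE_API}/api/v1/namespaces/default` +
        '/pods/my-pod/exec?command=sh&stdin=true&stdout=true&tty=true';

      // The browser WebSocket API cannot set this header, hence the proxy.
      const upstream = new WebSocket(target, 'channel.k8s.io', {
        headers: { Authorization: `Bearer ${process.env.KUBE_TOKEN}` },
      });

      upstream.on('open', () => {
        // Pipe frames in both directions once the upstream is ready.
        client.on('message', (data) => upstream.send(data));
        upstream.on('message', (data) => client.send(data));
      });
      upstream.on('close', () => client.close());
      client.on('close', () => upstream.close());
    });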
If you came here only for an authorization issue, you may stop reading further.
There are still more challenges to overcome though, and there's good news and bad news... the good news first:
The GKE load balancer automatically handles SSL termination even for WebSockets, so you can proxy to either WS or WSS without any issues.
And then the bad news:
The GKE load balancer force-terminates ALL connections after 30 seconds, even if they are in use! There are workarounds, but they either don't stay put, require you to deploy your own controller, or require you to use Ingress. What this means for a terminal session is that Chrome will close the client with a 1006 code, even if a command is running at that time.
For some WS scenarios it may be acceptable to simply reconnect on a 1006 close, but for a terminal session this is a deal-breaker, as you cannot reconnect to the previous terminal instance and must begin with a new one.
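For the scenarios where reconnecting is acceptable, the naive approach looks something like this in the browser (a sketch only; the endpoint URL is illustrative):

    // Reconnect-on-1006 sketch for WS scenarios that tolerate reconnection.
    function connect(url, onMessage) {
      const ws = new WebSocket(url);
      ws.onmessage = onMessage;
      ws.onclose = (event) => {
        // 1006 = abnormal closure, e.g. the load balancer cut the connection.
        if (event.code === 1006) {
          setTimeout(() => connect(url, onMessage), 1000);
        }
      };
      return ws;
    }

    connect('wss://example.com/api/feed', (msg) => console.log(msg.data));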
For now we have resorted to increasing the timeout of the GKE load balancer, but eventually we are planning to deploy our own load balancer that can handle this better. Ingress has some issues we don't want to live with at the moment.
I have a bare-metal Kubernetes cluster running on separate LXC containers as nodes on Ubuntu 20.04. It has an Istio service mesh configured and approximately 20 application services running on it (Istio ServiceEntries are created so external services can be reached). I use MetalLB for the gateway's external IP provisioning.
I have an issue with pods making requests outside the cluster (egress), specifically reaching external web services such as the Cloudflare API or the Sendgrid API to make REST API calls. DNS is working fine, as the hosts I try to reach are indeed resolvable from the pods (containers). The problem is that only the first request from a pod to the internet succeeds; after that, random read ECONNRESET errors happen on REST API calls, and sometimes connect ETIMEDOUT, though less frequently than the first error. Network requests from the nodes themselves to the internet show no problems at all, and pods communicate with each other through Kubernetes services without any of these issues.
My guess is that something is not configured correctly and the packets are not properly delivered back to the pod, but I can't find any relevant help on the internet and I am a little bit lost on this one. I am very grateful for any help, and I will happily provide more details if needed.
Thank you all once again!
We use Google App Engine and the provided load balancer to do SSL offloading for our API requests, which are served by Node.js. A third party is using Fortify and has determined that, even though traffic is HTTPS on the outside, because it is HTTP inside the containers it is considered a vulnerability.
Everything we read suggests setting the environment up this way.
Is this really a vulnerability, and if so, how would we best mitigate against it without having to add paid certificates into our Node app?
Thanks in advance
"Is this really a vulnerability and if so, how would we best mitigate against this without having to add paid certificates into our Node app?"
Yes, the proxy from HTTPS to HTTP is a vulnerability, as data is decrypted in transit. However, the connection between the frontend and your application is very hard to exploit outside the Google data center. I am not aware of a method to exploit it.
In the cloud and in on-premises data centers, proxying HTTPS to HTTP is very popular. This offloads the CPU-intensive work of encryption and decryption.
In security, there are almost always exceptions that need to be documented. This is one of them.
For the second part of your question, the proxy is HTTPS -> HTTP. This means that you cannot add your own SSL certificate to your backend code. If you did, you would have connection protocol errors.
If you must mitigate this problem, then you must select a different service and deploy your code with frontends/backends (web servers/proxies/load balancers) you configure and control.
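One related hardening step (it does not encrypt the internal hop, but it ensures no client ever talks plain HTTP end to end) is to have the Node.js app redirect any request that reached the frontend over HTTP, using the X-Forwarded-Proto header the proxy sets. A minimal sketch, assuming an Express app:

    const express = require('express');
    const app = express();

    // Trust the platform proxy so req.secure reflects X-Forwarded-Proto.
    app.set('trust proxy', true);

    app.use((req, res, next) => {
      if (!req.secure) {
        // Request reached the load balancer over plain HTTP: redirect.
        return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
      }
      next();
    });

    app.get('/api/ping', (req, res) => res.json({ ok: true }));
    app.listen(8080);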
I built a Kubernetes cluster which contains a UI app, a worker, Mongo, MySQL, and Elasticsearch, and exposes 2 routes with Ingress; there is also an SSL certificate on top of the cluster's static IP. It utilizes Pub/Sub and Storage.
All looks fine.
Now I’m looking for a secure way to expose an endpoint to an external service.
Use case:
A remote app wishes to access my cloud app with a video GUID in the payload in a secure manner and get back a URL to a video in the bucket.
I looked at the Google Cloud Endpoints service but couldn't get it to work with Kubernetes.
There are more services that will need an access point to the app.
What is the best way for me to solve this problem?
Solve it by simply adding an endpoint to the Ingress controlling the app, and protect it with SSL and JWT. Use this and this guide to add the ingress controller.
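To illustrate the use case above, here is a sketch of such an endpoint in Node.js: it verifies a JWT, then exchanges the video GUID for a time-limited signed URL. The package names (jsonwebtoken, @google-cloud/storage) are real, but the bucket name, secret handling, and GUID-to-object mapping are assumptions:

    const express = require('express');
    const jwt = require('jsonwebtoken');
    const { Storage } = require('@google-cloud/storage');

    const app = express();
    const storage = new Storage();

    app.get('/videos/:guid', async (req, res) => {
      try {
        // Reject callers without a valid token (verify throws on failure).
        const token = (req.headers.authorization || '').replace('Bearer ', '');
        jwt.verify(token, process.env.JWT_SECRET);

        // Assumed layout: one object per GUID in a known bucket.
        const [url] = await storage
          .bucket('my-video-bucket')
          .file(`videos/${req.params.guid}.mp4`)
          .getSignedUrl({ action: 'read', expires: Date.now() + 15 * 60 * 1000 });

        res.json({ url });
      } catch (err) {
        res.status(401).json({ error: 'unauthorized' });
      }
    });

    app.listen(8080);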
This tutorial shows how to integrate Kubernetes with Google Cloud Endpoints.
I was working on a side project and I decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load-balance requests if I have 2 nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API Gateway approach. All the services, including the Gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using Node.js as well. Some topics to mention related to your question:
Authentication/Authorization: The GW service is the single entry point for the clients. All the authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a Node.js library as well. We keep authorization information such as the user's roles in the JWT token. Once the token is generated in the GW and returned to the client, at each request the client sends the token in an HTTP header; we then check whether the client has the required role to call the specific service and whether the token has expired. In this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session; the required information is in the JWT token.
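A minimal sketch of that token flow, assuming the jsonwebtoken package and Express-style middleware (user lookup and secret management are out of scope):

    const jwt = require('jsonwebtoken');

    // On login, the gateway issues a token carrying the user's roles.
    function issueToken(user) {
      return jwt.sign({ sub: user.id, roles: user.roles },
                      process.env.JWT_SECRET, { expiresIn: '1h' });
    }

    // On each request, verify the token and check the role required by the
    // target service before proxying.
    function authorize(requiredRole) {
      return (req, res, next) => {
        try {
          const token = (req.headers.authorization || '').replace('Bearer ', '');
          const claims = jwt.verify(token, process.env.JWT_SECRET); // throws if expired
          if (!claims.roles.includes(requiredRole)) return res.sendStatus(403);
          req.user = claims;
          next();
        } catch (err) {
          res.sendStatus(401); // missing, invalid, or expired token
        }
      };
    }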
Service Discovery / Load balancing: We use Docker and Docker Swarm, the clustering tool bundled into the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach makes it easy to deploy, maintain, and scale the services. At the beginning of the project, we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating a different network for each environment. Once you create the network and deploy the services, service discovery and load balancing are not your concern. In the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm you can easily scale services with one command, and at each request to a service, Docker distributes (load-balances) the request to an instance of that service.
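As a tiny illustration of that point: inside one Docker network, a service is reachable simply by its service name via DNS, so the calling code needs no client-side registry. The service name and port here are assumptions:

    const http = require('http');

    // "user-service" resolves via Docker's built-in DNS inside the network.
    http.get('http://user-service:8080/health', (res) => {
      console.log(`user-service responded with ${res.statusCode}`);
    }).on('error', (err) => console.error(err.message));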
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of User Service), and also should track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), then your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template, or your own micro discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to track requests by having the API gateway assign a request ID to each one, so its lifecycle can be tracked within the "inner" system.
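A sketch of that idea as Express-style middleware (the X-Request-Id header name is a common convention, not a standard):

    const crypto = require('crypto');

    // Stamp each request with an ID and forward it downstream for tracing.
    function requestId(req, res, next) {
      const id = req.headers['x-request-id'] || crypto.randomUUID();
      req.headers['x-request-id'] = id;   // forwarded to the inner services
      res.setHeader('X-Request-Id', id);  // echoed back to the caller
      next();
    }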
I would add Redis for messaging/workers/queues and fast in-memory stuff like caching and cache invalidation (you can hardly run a full microservice architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
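For the caching part, a minimal cache-aside sketch with the redis package (the key naming and TTL are assumptions):

    const { createClient } = require('redis');

    const redis = createClient(); // call redis.connect() once at startup

    // Return the cached value if present, otherwise load and cache it.
    async function getCached(key, loader, ttlSeconds = 60) {
      const hit = await redis.get(key);
      if (hit !== null) return JSON.parse(hit);
      const value = await loader();
      await redis.set(key, JSON.stringify(value), { EX: ttlSeconds });
      return value;
    }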
Spin all this up in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.
I am looking into using a Service Fabric cluster for a service with a public API. When creating a Service Fabric cluster I have the ability to choose either secured mode with a certificate, or unsecured mode.
In unsecured mode, anyone can call the API, which is what I want; however, it also means that anyone can go to the management page at *northeurope.cloudapp.azure.com:19080 and do anything, which is obviously not OK.
I tried using the secure mode with a certificate, and this prevents anyone without the certificate from using the management page, but it also seems to prevent anyone from calling the API.
Am I missing something simple? How do I keep the management side of the cluster secured, while making the API public so that anyone can call it?
Edit: After looking more carefully, it seems the intended behaviour is that, since I've configured a custom endpoint when setting up the cluster, I should be able to call the service. So I believe it may just be an error in my code.
Securing a cluster has nothing to do with your application endpoints. There is a separation of concerns between securing the system (management endpoints, node authentication) and securing your applications (SSL, user auth, etc.). There is some other problem here; most likely you have not configured the Azure Load Balancer to allow traffic into your cluster on the ports your services are listening on. See here for more info on that: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/#service-fabric-in-azure