How to dynamically assign a port to a helm configuration? - python-3.x

So, I have a really simple Flask app that I'm deploying in a Kubernetes environment using helm. Now, I have the following defined in my values.yaml:
...
service:
  type: ClusterIP
  port: 5000
  targetPort: 5000
  # can add
  # flaskPort: "5000"
ingress:
...
I know that I can set environment variables in my helm install command by typing helm install python-service . --values values-dev.yaml --set flaskPort=5000 and in my Python code just do:
PORT = int(os.environ.get("flaskPort", 5000))
app.run(port=PORT, debug=True, host="0.0.0.0")
I can also define entries for this environment variable flaskPort in my values-dev.yaml and in my templates/deployment.yaml. But what about the port and targetPort entries in my values-dev.yaml? Wouldn't those clash with whatever flaskPort I set? How do I modify my chart so that whatever port I specify in my helm install command, my Python app is started on that port? The Python app is a small mock server which responds to simple GET/POST commands.
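For reference, the env-var wiring described in the question would look roughly like this in templates/deployment.yaml; a sketch using the questioner's flaskPort value, which would come from values-dev.yaml or --set flaskPort=...:
# templates/deployment.yaml, container excerpt (sketch)
env:
  - name: flaskPort
    value: {{ .Values.flaskPort | quote }}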

Each Kubernetes pod has its own IP address inside the cluster, so you don't need to worry about port conflicts. Similarly, each service has its own IP address, distinct from the pod IP addresses, plus its own DNS name, so services can use the same ports as pods or other services without conflicts.
This means that none of this needs to be configurable at all:
Your application can listen on whatever port is the default for its framework; for Flask that is generally port 5000. (It does need to listen on the special "all interfaces" address 0.0.0.0.)
The pod spec should reflect the same (fixed) port number. It can help to give it a name.
ports:
  - name: http
    containerPort: 5000
The service can use any port it likes; for an HTTP-based service I'd recommend the default HTTP port 80. The targetPort: can be a name, which would match the name: of the corresponding pod/container port.
type: ClusterIP
ports:
  - name: http
    port: 80
    targetPort: http
Calls to the service from within the cluster can use plain http://svcname.nsname/ URLs, without really caring how the service is implemented, what the pod IPs are, or what ports the pods happen to be using.
At a Helm level it can make sense to make details of the service configurable; in particular if it's a NodePort or LoadBalancer service (or neither) and any of the various cloud-provider-specific annotations. You don't need to configure the pod's port details, particularly if you've written both the application and the Helm chart. For example, if you run helm create, the template service that you get doesn't allow configuring the pod's port; it's fixed in the deployment spec and available to the service under the http name.
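For comparison, here is a rough sketch of that pattern (paraphrasing the helm create scaffolding, not quoting it verbatim): the service port is read from values, while the container port stays fixed and is referenced by its http name.
# values.yaml (sketch)
service:
  type: ClusterIP
  port: 80
# templates/service.yaml, spec excerpt (sketch)
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
# templates/deployment.yaml, container ports excerpt (sketch)
ports:
  - name: http
    containerPort: 5000   # fixed; matches the port Flask listens on
    protocol: TCP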

Related

accessing the minikube pods externally apart from local machine

I started minikube with the docker driver, but I can only access the data on my local machine. I want to provide that URL to a client.
Can anyone help me with this issue? Is it possible to access the minikube service externally, on other machines apart from the local machine?
my service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
    - port: 8080
      targetPort: xxxx
  type: LoadBalancer
Thank you
Important: minikube is not meant to be used in production. It's mainly an educational tool, used to teach users how Kubernetes works in a safe, controlled (and usually local) environment. Please do not use it in production environments.
Important #2: Under no circumstances should you give anyone access to your local machine - unless it's a server meant to be accessible from outside the organization, and correctly hardened - be it your client or your friend. This is a huge security risk.
Now, off to the question:
Running:
minikube service --url <service name>
will give you a URL with an external IP, probably something in the 192.168.0.0/16 range (if you are on a local network). Then you need to create a port-forwarding rule on your router.

Forward all TCP and UDP ports from load balancer to nginx ingress on Azure Kubernetes Service

I am trying to implement a TCP/UDP gateway using kubernetes and I want to dynamically open and close a lot of ports.
Here is the detailed process:
We have a running container (containerA) that accepts incoming TCP connections on port 8080
We have a load balancer with IP 1.1.1.1; port 9091 is pointed at the nginx ingress
The nginx ingress will manage the connection between the load balancer and containerA using a TCP ConfigMap
Loadbalancer 1.1.1.1:9091 -> nginx tcp stream 9091 -> backend containerA port 8080
When a new client comes, we provision a new container (containerB), but with the same port 8080
We add a new port to the load balancer (port 9092)
Loadbalancer 1.1.1.1:9092 -> nginx tcp stream 9092 -> backend containerB port 8080
Repeat adding ports for new clients
The nginx ingress configmap for TCP connections looks like this:
apiVersion: v1
data:
  "9091": default/php-apache1:8080
  "9092": default/php-apache2:8080
  "9093": default/php-apache3:8080
  "9094": default/php-apache4:8080
kind: ConfigMap
Excerpt from Nginx ingress deployment yaml:
ports:
  - containerPort: 9091
    hostPort: 9091
    name: 9091-tcp
    protocol: TCP
  - containerPort: 9092
    hostPort: 9092
    name: 9092-tcp
    protocol: TCP
I was able to open specific TCP/UDP ports and everything works fine, but right now I have 2 dilemmas:
Adding all the ports one by one in the yaml file is inefficient and hard to manage
Adding a new port (e.g. TCP/9091) by modifying the deployment yaml file causes the existing pods to restart. This behavior is undesirable when new ports are added every now and then
Based on my observation, when adding a new port to the nginx tcp configmap, the changes are reloaded successfully and ports are opened without needing a restart. The problem is, the ports are not yet routed properly unless you modify and add the port to the deployment yaml, which in turn causes the pod to restart.
My questions are:
Is it possible to add the routing rules only so that the nginx pod doesn't have to restart?
Is it possible to route all ports coming from the load balancer directly to NGINX ingress under Azure Kubernetes Service
Other suggestions for my use case
Unless I'm reading this wrong, the question (essentially) is: is it possible to edit a deployment without restarting the pods?
The answer is no. If you need to edit the deployment, it will restart the pods.
But I don't see where the problem is: the pods are not all restarted at the same time, so there should be no performance degradation.
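For what it's worth, with the standard ingress-nginx tcp-services setup the per-client port usually only needs to appear in two objects that can be edited without restarting pods: the TCP ConfigMap and the controller's LoadBalancer Service, rather than the Deployment's hostPort list. A rough sketch, with names assumed:
# tcp-services entry for the new client (ConfigMap name/namespace assumed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9092": default/php-apache2:8080
---
# Matching port added to the ingress controller's LoadBalancer Service;
# editing a Service does not restart the controller pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
  ports:
    - name: proxied-tcp-9092
      port: 9092
      targetPort: 9092
      protocol: TCP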

Https certificates and Kubernetes (EKS)

I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.
I have one front-end service and a dozen back-end services.
The front-end service is a pod running a container which is running on port 80. It is configured to be attached to an ELB which is only accepting traffic from 443 with an https certificate.
apiVersion: v1
kind: Service
metadata:
  name: service_name
  labels:
    app: service_name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
    - port: 443 # Exposed port
      targetPort: 80 # Container port
  selector:
    app: service_name
  type: LoadBalancer
The back-end services are pods running containers also running on port 80. None of them have been configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to http://service_name (NOT https) as I configured them with this template:
apiVersion: v1
kind: Service
metadata:
  name: service_name
spec:
  ports:
    - port: 80 # Exposed port
      targetPort: 80 # Container port
  selector:
    app: service_name
It all works but is it sufficient?
Should the front-end/back-end containers use certificate/443 too with a wildcard https certificate? Should this configuration be done inside the container or on the services' configurations?
I have done quite a bit of investigation, and here is what it comes down to.
All my EKS EC2 instances are running on private subnets, which means they are not accessible from outside. Yes, by default Kubernetes does not encrypt traffic between pods, which means that a hacker who gained access to my VPC (a rogue AWS engineer, someone who manages to physically access an AWS data center, someone who managed to access my AWS account...) would be able to sniff the network traffic. At the same time, I feel that in that case the hacker would have access to much more: if he has access to my AWS account, he can download the https certificate himself, for instance; if he manages to walk into a (high-security) AWS data center and find my server - it's worth comparing the risk he has to take against the value of your data. If your data includes credit card/payment details or sensitive personal data (date of birth, health details...), SSL encryption is a must.
Anyway, to secure pod-to-pod traffic, there are 2 options.
Update the source code of all the pods and add the certificate there. This requires a lot of maintenance if you are running many pods, and certificates expire every other year.
Add an extra 'network layer' like https://istio.io/. This will add complexity to your cluster, and in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support.
For the load balancer, I decided to add an ingress controller to the cluster (Nginx, Traefik...) and set it up with https. That's critical as the ELB sits on the public subnets.
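For reference, a minimal sketch of the kind of ingress the answer describes, assuming an NGINX ingress controller and a TLS certificate stored in a Kubernetes Secret (host and names hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: frontend-tls        # Secret holding the certificate and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service   # the front-end Service from the question
                port:
                  number: 80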

UDP send and receive in kubernetes

For my project, I need to send UDP packets to a Kubernetes Pod from outside the cluster. How can I do this?
I am using kubeadm for creating the cluster. I tried to use NodePort, but it seems that my requirement cannot be fulfilled with NodePort.
Actually, NodePort can be used to expose ports for both the TCP and UDP protocols. What was the problem in your case?
You can consider using the Nginx Ingress Controller and creating a ReplicationController for it in order to expose your Pods over a UDP port.
Create a ConfigMap and specify the external port mapping in the form <namespace>/<service name>:<service port> for the service you want to access from outside the Kubernetes cluster.
Finally, the Nginx ingress can be exposed, e.g., using a Kubernetes external IP.
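For reference, the ConfigMap described above typically follows the ingress-nginx udp-services convention; a sketch with assumed names, mapping external UDP port 10001 to the asker's service:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "10001": "default/udp-server:10001"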
I was able to find a solution for my requirement.
I exposed the UDP port for my pod and it works fine.
Example
kubectl expose pod udp-server-deployment-8c8d6d868-c77zx --port=10001 --protocol=UDP --external-ip=10.1.11.82 --name=udp-server
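(For reference, that imperative command creates a Service roughly equivalent to the manifest below; the selector is copied from the pod's labels, so the label shown here is assumed.)
apiVersion: v1
kind: Service
metadata:
  name: udp-server
spec:
  selector:
    app: udp-server          # assumed; kubectl expose copies the pod's labels
  ports:
    - protocol: UDP
      port: 10001
      targetPort: 10001
  externalIPs:
    - 10.1.11.82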
Thank you all for your support :)

Random characters when describing kubernetes namespaces

I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a javascript (node.js) deployment, both exposed via a default ClusterIP Service. I need websocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be, in a form similar to my-svc.my-namespace.svc.cluster.local?
About your questions:
1- Kubernetes doesn't recommend removing the generated name suffixes: they ensure that each pod name is unique, and the hash part groups all the pods created by the same ReplicaSet.
So, just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus the port, like CLUSTER_IP:PORT, to be able to reach your service.
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
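Alternatively, the same Service is reachable by DNS, so a client container can be pointed at it without relying on the injected environment variables (namespace assumed to be default):
# excerpt from a client Deployment's container spec (sketch)
env:
  - name: BACKEND_URL
    value: http://my-big-deployment.default.svc.cluster.local:8000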
