For my project, I need to send UDP packets to a Kubernetes Pod from outside the cluster. How can I do this?
I am using kubeadm to create the cluster. I tried to use NodePort, but it seems that my requirement cannot be fulfilled with NodePort.
Actually, NodePort can be used to expose ports with both the TCP and UDP protocols. What was the problem in your case?
You can consider using the NGINX Ingress Controller and creating a ReplicationController to run NGINX ingress in order to expose your Pods on a UDP port, as described here, or you can check this link.
Create a ConfigMap whose key is the external port you want to access from outside the Kubernetes cluster and whose value is the target service in the form <namespace>/<service name>:<service port>.
Finally, the NGINX ingress can be exposed, e.g., using a Kubernetes ExternalIP.
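As an illustration, a minimal sketch of such a UDP ConfigMap (the udp-server service name, the default namespace, and the ConfigMap name/namespace are assumptions; the ConfigMap must match whatever the controller is started with via --udp-services-configmap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # external port 10001 -> service udp-server in namespace default, service port 10001
  "10001": default/udp-server:10001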
I was able to find a solution for my requirement.
I have exposed the UDP port for my pod and it works fine.
Example
kubectl expose pod udp-server-deployment-8c8d6d868-c77zx --port=10001 --protocol=UDP --external-ip=10.1.11.82 --name=udp-server
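For reference, a roughly equivalent Service manifest (a sketch; the selector label is an assumption and must match the pod's actual labels, which kubectl expose copies automatically):
apiVersion: v1
kind: Service
metadata:
  name: udp-server
spec:
  selector:
    app: udp-server-deployment   # assumed label; must match the pod's labels
  ports:
    - port: 10001
      protocol: UDP
  externalIPs:
    - 10.1.11.82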
Thank you all for your support :)
So, I have a really simple Flask app that I'm deploying in a Kubernetes environment using helm. Now, I have the following defined in my values.yaml:
...
service:
  type: ClusterIP
  port: 5000
  targetPort: 5000
  # can add
  # flaskPort: "5000"
ingress:
...
I know that I can set environment variables in my helm install command by typing helm install python-service . --values values-dev.yaml --set flaskPort=5000 and in my Python code just do:
PORT = int(os.environ.get("flaskPort", 5000))
app.run(port=PORT, debug=True, host="0.0.0.0")
I can also define entries for this environment variable flaskPort in my values-dev.yaml and in my templates/deployment.yaml. But what about the port and targetPort entries in my values-dev.yaml? Wouldn't those clash with whatever flaskPort I set? How do I modify my chart to make sure that my Python app starts on whatever port I specify in my helm install command? The Python app is a small mock server which responds to simple GET/POST commands.
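For context, one way to wire such a variable through (a sketch, assuming a top-level flaskPort value and that this snippet sits under the container spec in templates/deployment.yaml) would be:
env:
  - name: flaskPort
    value: {{ .Values.flaskPort | default "5000" | quote }}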
Each Kubernetes pod has its own IP address inside the cluster, so you don't need to worry about port conflicts. Similarly, each service has its own IP address, distinct from the pod IP addresses, plus its own DNS name, so services can use the same ports as pods or other services without conflicts.
This means that none of this needs to be configurable at all:
Your application can listen on whatever port is the default for its framework; for Flask that is generally port 5000. (It does need to listen on the special "all interfaces" address 0.0.0.0.)
The pod spec should reflect the same (fixed) port number. It can help to give it a name.
ports:
  - name: http
    containerPort: 5000
The service can use any port it likes; for an HTTP-based service I'd recommend the default HTTP port 80. The targetPort: can be a name, which would match the name: of the corresponding pod/container port.
type: ClusterIP
ports:
  - name: http
    port: 80
    targetPort: http
Calls to the service from within the cluster can use plain http://svcname.nsname/ URLs, without really caring how the service is implemented, what the pod IPs are, or what ports the pods happen to be using.
At a Helm level it can make sense to make details of the service configurable; in particular if it's a NodePort or LoadBalancer service (or neither) and any of the various cloud-provider-specific annotations. You don't need to configure the pod's port details, particularly if you've written both the application and the Helm chart. For example, if you run helm create, the template service that you get doesn't allow configuring the pod's port; it's fixed in the deployment spec and available to the service under the http name.
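As an illustration, a minimal sketch of a Helm-templated Service that keeps the pod port fixed but makes the service type and port configurable (the .Values.service.* names follow the values.yaml excerpt above; the selector label is an assumption and must match the deployment's pod labels):
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-python-service
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: http
  selector:
    app: python-service   # assumed label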
I am trying to implement a TCP/UDP gateway using kubernetes and I want to dynamically open and close a lot of ports.
Here is the detailed process:
We have a running container (containerA) that accepts incoming TCP connections on port 8080
We have a load balancer with IP 1.1.1.1; port 9091 is pointed to nginx ingress
Nginx Ingress will manage the connection between the load balancer and containerA using the TCP configmap
Loadbalancer 1.1.1.1:9091 -> nginx tcp stream 9091 -> backend containerA port 8080
When a new client comes, we will provision a new container (containerB) but with same port 8080
We will add a new port to the load balancer (port 9092)
Loadbalancer 1.1.1.1:9092 -> nginx tcp stream 9092 -> backend containerB port 8080
Repeat adding ports for new clients
The nginx ingress configmap for TCP connections looks like this:
apiVersion: v1
kind: ConfigMap
data:
  "9091": default/php-apache1:8080
  "9092": default/php-apache2:8080
  "9093": default/php-apache3:8080
  "9094": default/php-apache4:8080
Excerpt from Nginx ingress deployment yaml:
ports:
  - containerPort: 9091
    hostPort: 9091
    name: 9091-tcp
    protocol: TCP
  - containerPort: 9092
    hostPort: 9092
    name: 9092-tcp
    protocol: TCP
I was able to open specific TCP/UDP ports and everything works fine but right now I have 2 dilemmas:
Adding all the ports one by one in the yaml file is inefficient and hard to manage
Adding a new port (e.g. TCP/9091) by modifying the deployment yaml file causes the existing pods to restart. This behavior is undesirable when new ports are added every now and then
Based on my observation, when adding a new port to the nginx tcp configmap, the changes are reloaded successfully and ports are opened without needing a restart. The problem is, the ports are not yet routed properly unless you modify and add the port to the deployment yaml, which in turn causes the pod to restart.
My questions are:
Is it possible to add the routing rules only so that the nginx pod doesn't have to restart?
Is it possible to route all ports coming from the load balancer directly to the NGINX ingress under Azure Kubernetes Service?
Other suggestions for my use case
Unless I'm reading this wrong the question (essentially) is: is it possible to edit deployment without restarting the pod?
The answer is no. If you need to edit the deployment, it will restart the pods.
But I don't see where the problem is: they are not all restarted at the same time, so there should be no performance degradation.
I am looking for a one-stop solution to support multiple protocol requests to my backend, such as MSMQ, HTTP, MQTT. Can I achieve this using the Azure Kubernetes NGINX ingress controller?
Nginx ingress only supports HTTP, TCP and UDP:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/#exposing-tcp-and-udp-services
so if you treat those as TCP or UDP (whichever they use; I'm not familiar with those protocols), you can achieve that.
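For instance, MQTT typically runs over TCP on port 1883, so a tcp-services ConfigMap entry could look like the following sketch (the mosquitto service name and the default namespace are assumptions; the ConfigMap name and namespace must match what the controller is started with via --tcp-services-configmap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 1883 -> assumed MQTT broker service "mosquitto" in namespace "default", port 1883
  "1883": default/mosquitto:1883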
I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a JavaScript (Node.js) deployment, both exposed via a default ClusterIP Service. I need WebSocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1. Kubernetes doesn't recommend avoiding the generated name suffixes, because they ensure that pod names are unique, and the first part of the hash groups all the pods created by the same replica controller.
So just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2. kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus the port, like CLUSTER_IP:PORT, to reach your service.
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
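The same Service is also reachable by DNS name, which is what the my-svc.my-namespace.svc.cluster.local form refers to. As a sketch, assuming the Service above lives in the default namespace (and with an environment variable name invented for illustration), the other deployment could point at it like this:
env:
  - name: JAVA_SERVICE_URL   # assumed variable name
    value: "http://my-big-deployment.default.svc.cluster.local:8000"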
I want to expose multiple services through a single load balancer. Each service points to exactly one pod.
So far I tried to:
kubectl expose pod <podName> --port=7000
And in the Azure portal, manually set either load balancing rules or inbound NAT rules pointing to the exposed pod.
So far I can connect to the pod using the external IP and specified port.
It depends on how you want to separate services on the same IP. The two ways that come to my mind are:
use NodePort services and then map some ports from your LB to that port on your cluster nodes. This gives separation by port.
way more interesting, in my opinion, is to use an Ingress/IngressController. You would expose only the IC on standard ports like 80 & 443, and it will then map to your services by hostname and URI (see the sketch below).
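A minimal sketch of the Ingress approach, using the current networking.k8s.io/v1 API (hostnames and service names are assumptions for illustration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  rules:
    - host: app1.example.com            # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service      # assumed service name
                port:
                  number: 7000
    - host: app2.example.com            # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service      # assumed service name
                port:
                  number: 7000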
In Azure container service, Azure will use Load Balancer to expose k8s services, like this:
root@k8s-master-E27AE453-0:~# kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
jasonnginx   10.0.41.194   52.226.33.200   8080:32011/TCP   4m
kubernetes   10.0.0.1      <none>          443/TCP          11m
mynginx      10.0.144.49   40.71.230.60    80:32366/TCP     5m
yournginx    10.0.147.28   40.71.226.23    80:32289/TCP     4m
root@k8s-master-E27AE453-0:~#
Via the Azure portal, check the Azure load balancer frontend IP configuration (different IP addresses):
ACS will create Load Balancer rules and add frontend IP addresses automatically.
How to expose multiple kubernetes services through a single azure load balancer?
ACS exposes k8s services through that Azure Load Balancer; do you mean you want to expose k8s services with a single public IP address?
If you want to expose k8s services with a single public IP address, as Radek said, maybe you should use Nginx Ingress Controller.
The Ingress Controller works like this: it is exposed once through the load balancer, and it then routes incoming requests to the different backend services based on host and path rules.
Thanks guys. I think I have found viable solution to my problem. I should have been more specific about what I'm going to do.
I want to host a game server over UDP, so a kubernetes ingress controller is not really an option, since they rarely support UDP routing.
I also don't need to host a multitude of services on a single machine; 1-4 pods per node is probably the maximum.
I found out about using:
hostNetwork: true
in yaml config and it actually works pretty well for this scenario.
I get the IP directly from the host node. I can then select the matching node within the load balancer and create a NAT or load balancing rule.
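A minimal sketch of what that looks like in a pod template (container name, image, and game port are assumptions for illustration):
spec:
  hostNetwork: true
  containers:
    - name: game-server            # assumed name
      image: example/game-server   # assumed image
      ports:
        - containerPort: 7777      # assumed UDP game port
          protocol: UDP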
Create multiple NodePort type services, and fix the nodePort.
Then, on the cloud load balancer, set up multiple listener groups. The listen port is the same as the service's nodePort, and the targets are all the worker nodes.
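A minimal sketch of such a fixed-nodePort Service (names, labels, and ports are assumptions for illustration; the nodePort must fall within the cluster's node port range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: game-server-a
spec:
  type: NodePort
  selector:
    app: game-server-a        # assumed pod label
  ports:
    - port: 7777              # assumed game port
      targetPort: 7777
      nodePort: 30777         # fixed node port that the LB listener points at
      protocol: UDP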