I have a MySQL pod in my cluster that I want to expose on a public IP. Therefore I changed the Service to a LoadBalancer by running
kubectl edit svc mysql-mysql --namespace mysql
  release: mysql
  name: mysql-mysql
  namespace: mysql
  resourceVersion: "646616"
  selfLink: /api/v1/namespaces/mysql/services/mysql-mysql
  uid: cd1cce11-890c-11e8-90f5-869c0c4ba0b5
spec:
  clusterIP: 10.0.117.54
  externalTrafficPolicy: Cluster
  ports:
  - name: mysql
    nodePort: 31479
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-mysql
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 137.117.40.121
changing the type from ClusterIP to LoadBalancer.
However, I can't seem to reach it with mysql -h 137.117.40.121 -uroot -p*****.
Does anyone have any idea? Is it because I'm trying to forward it over TCP?
For your issue, you want to expose your MySQL pod on a public IP, so you should take a look at Ingress in Kubernetes. It's an API object that manages external access to the services in a cluster, typically HTTP. For Ingress, you need both an ingress controller and ingress rules. For more details, you can read the document I posted.
In Azure, you can get more details from HTTPS Ingress on Azure Kubernetes Service (AKS).
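For reference, here is a minimal sketch of an Ingress rule; the name, host, backend Service and port are illustrative, and it assumes an ingress controller (for example NGINX or the Application Gateway controller) is already running in the cluster. Keep in mind that Ingress is aimed at HTTP traffic, so for a raw TCP service like MySQL the LoadBalancer approach below is usually simpler.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  rules:
  - host: example.com            # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # hypothetical backend Service
          servicePort: 80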
As pointed out by @aurelius, your config seems correct; it's possible that the traffic is getting blocked by your firewall rules.
Also make sure the cloud provider option is enabled for your cluster.
kubectl get svc -o wide will show the status of the LoadBalancer and the IP address allocated.
@charles-xu-msft: using Ingress is definitely an option, but there is nothing wrong with using a LoadBalancer type of Service when the cloud provider is enabled for the Kubernetes cluster.
Just for reference, here is a test config:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
spec:
  containers:
  - name: mysql
    image: mysql:5
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpassword
---
apiVersion: v1
kind: Service
metadata:
  name: test-mysql-lb
spec:
  type: LoadBalancer
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    name: mysql-pod
Setup:
Azure Kubernetes Service
Azure Application Gateway
We have a Kubernetes cluster in Azure which uses an Application Gateway for managing network traffic. We are using appgw rather than a Load Balancer because we need to handle traffic at layer 7, hence path-based HTTP rules. We use the Kubernetes ingress controller for configuring appgw. See the config below.
Now I want a service that accepts requests both over HTTP (layer 7) and TCP (layer 4).
How do I do that? The exposed port should not be public on the internet, but public on the Azure network. Do I need to add another ingress controller that is not configured to use appgw?
This is the config for the ingress controller using appgw:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  labels:
    app: service1
  annotations:
    appgw.ingress.kubernetes.io/backend-path-prefix: /
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    secretName: <somesecret>
  rules:
  - host: <somehost>
    http:
      paths:
      - path: /service1/*
        backend:
          serviceName: service1
          servicePort: http
Generally you have two straightforward options.
Use direct Pod IPs or a headless ClusterIP Service.
I assume your AKS cluster employs either Azure CNI or Calico as its networking fabric.
In both cases your Pods get routable IPs in the AKS subnet.
https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking
Thus, you can make them accessible directly across your VNet.
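For the headless option, a minimal sketch (the name, selector and port are illustrative); the Service DNS name then resolves directly to the Pod IPs:
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc     # hypothetical name
spec:
  clusterIP: None           # headless: no virtual IP, DNS returns the Pod IPs
  selector:
    app: my-app             # illustrative selector
  ports:
  - port: 8200              # illustrative TCP port
    targetPort: 8200
    protocol: TCP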
Alternatively, you could use a Service backed by an internal load balancer.
You can create the internal LB through the appropriate annotation.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
When you view the service details, the IP address of the internal load balancer is shown in the EXTERNAL-IP column. In this context, External is in relation to the external interface of the load balancer, not that it receives a public, external IP address.
https://learn.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer
If needed, you can assign a predefined IP address to your LB and/or place it in a different subnet of your VNet, even a private one, use VNet peering, etc.
Ultimately you can make it routable from wherever you need.
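For example, a predefined private IP and subnet can be requested roughly like this (the name, subnet and IP address are illustrative and must exist in your VNet):
apiVersion: v1
kind: Service
metadata:
  name: internal-app        # illustrative name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: my-subnet   # illustrative subnet
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # illustrative static private IP from that subnet
  ports:
  - port: 8200
  selector:
    app: internal-app           # illustrative selector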
The exposed port should not be public, but public in the kubernetes cluster.
I assume that you mean that your application should expose a port for clients within the Kubernetes cluster. You don't have to do anything special in Kubernetes for Pods to do this; they can accept TCP connections on any port. But you may want to create a Service of type: ClusterIP for this, so it will be easier for clients.
Nothing more than that should be needed.
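A minimal sketch of such a ClusterIP Service (the name, selector and port are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service   # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: my-app               # illustrative selector
  ports:
  - port: 8200                # port in-cluster clients connect to
    targetPort: 8200          # port the container listens on
    protocol: TCP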
Leveraging a Kubernetes Service should address your concern; all you need to do is modify your YAML file.
Here is an example snippet where I have created a service:
apiVersion: v1
kind: Service
metadata:
  name: cafe-app-service
  labels:
    application: cafe-app-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    name: coffee-port
  - port: 8081
    protocol: TCP
    targetPort: 8081
    name: tea-port
  selector:
    application: cafe-app
You can reference this service in the Ingress that you have created.
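For example, an Ingress rule could point at the named ports defined above (the Ingress name, host and paths are illustrative):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress               # hypothetical name
spec:
  rules:
  - host: cafe.example.com         # illustrative host
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: cafe-app-service
          servicePort: coffee-port # referencing the Service port by name
      - path: /tea
        backend:
          serviceName: cafe-app-service
          servicePort: tea-port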
I went with having two Services for the same pod: one for HTTP, handled by an Ingress (appgw), and one for TCP, using an internal Azure Load Balancer.
This is my config that I ended up using:
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-serviceA-cm
  namespace: default
data:
  "8200": "default/serviceA:8200"
TCP service
apiVersion: v1
kind: Service
metadata:
  name: tcp-serviceA
  namespace: default
  labels:
    app.kubernetes.io/name: serviceA
    app.kubernetes.io/part-of: serviceA
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: subnetA
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 8200
    targetPort: 8200
    protocol: TCP
  selector:
    app: serviceA
    release: serviceA
HTTP Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: serviceA
  labels:
    app: serviceA
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    secretName: <somesecret>
  rules:
  - host: <somehost>
    http:
      paths:
      - path: /serviceA/*
        backend:
          serviceName: serviceA
          servicePort: http
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceA
      release: serviceA
  template:
    metadata:
      labels:
        app: serviceA
        release: serviceA
    spec:
      containers:
      - name: serviceA
        image: "serviceA:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
HTTP Service
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: serviceA
    release: serviceA
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods in the kube-system namespace: all of them show a Running status. However, whenever I set the service type to LoadBalancer, no LoadBalancer IP is created and its status always shows Pending. I have also created an Ingress for the nginx service, but it never gets an ingress Address either. While initializing the Kubernetes master, I am using the following command.
kubeadm init
Below are the Deployment, Service and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress test-ingress
Name: test-ingress
Namespace: default
Address:
Default backend: nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to specify any IP ranges (private or public) of the VMs while running kubeadm init? Or
do we need to change any network settings in the Azure Ubuntu VMs?
As you created your own Kubernetes cluster with kubeadm rather than using a managed offering from AWS, Azure or GCP, there is no load balancer integration. That is why the LoadBalancer IP status stays Pending.
But you can work around this by using an Ingress controller or by exposing the Service directly through a NodePort.
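A minimal NodePort sketch (the Service name and nodePort value are illustrative; the service then becomes reachable on <node-ip>:30080):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport      # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080         # illustrative; must be within the 30000-32767 range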
However, I also noticed that your nginx Service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure, and such annotations are platform-specific; that one is AWS-specific.
However, you can give something like this a try: if you would like to experiment directly with public IPs, you can define your Service with externalIPs, provided a public IP is allocated to your node and ingress traffic is allowed to reach it.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But, a good approach to get this done is using an ingress controller if you are planning to build your own Kubernetes cluster.
Hope this helps.
Following on from my question here, I now have the issue that I am unable to connect to the external endpoint. My YAML file is here:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
metadata:
  name: my-test
  labels:
    app: app-label
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: app-label
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 443
I can now see an external IP when I issue the command:
kubectl get service test-service --watch
However, if I try to connect to that IP I get a timeout exception. I've tried running the dashboard, and it says everything is running fine. What can I do next to diagnose this issue?
In this case, the problem was solved by exposing the container on port 80 and routing the external port 6666 to it.
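As a rough sketch, the Service from the question would then look something like this, assuming the container is changed to listen on port 80:
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: app-label
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 6666          # external port on the load balancer
    targetPort: 80      # port the container listens on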
I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/), but it's not working ("This site can't be reached"). With the service IP endpoint everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: test.azurecr.io/my-api:latest
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
The website running in the second pod uses the same endpoint, but it is not connecting to the service.
After fixing the indentation of your YAML file, I was able to launch the deployment and service successfully. The DNS resolution also worked fine.
Differences:
Fixed indentation
Used the test1 namespace instead of default
Used containerPort 80 instead of 3000
Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - image: leodotcloud/swiss-army-knife
        name: my-api
        ports:
        - containerPort: 80
          protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (filtering on the second pod's IP).
From inside the second pod, run curl or dig command using the FQDN.
Check if the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If the DNS resolution is working, then start tcpdump inside your application container and check if the curl packet is reaching the container.
Check the source and destination IP address of the packets.
Check the iptables rules on the hosts.
Check sysctl settings.
If you use a Deployment to deploy your application onto the cluster, where it will be consumed via a Service, you should have no need at all to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it makes sense (an external service consumed from within the cluster), you need to make sure your Endpoints ports definition fully matches the one on the Service (including protocol and potentially name). Such incomplete matching is the most common reason for endpoints not showing up as part of a service.
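A minimal sketch of that external-service pattern, i.e. a selector-less Service with manually defined Endpoints (the name, external IP and port are illustrative; note how the port name and protocol match on both objects):
apiVersion: v1
kind: Service
metadata:
  name: external-db       # hypothetical name
spec:
  ports:
  - name: db              # must match the Endpoints port name
    port: 3306
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db       # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.50         # illustrative external IP
  ports:
  - name: db              # matches the Service port name
    port: 3306
    protocol: TCP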
From the above discussion, what I understood is that you want to expose a service without using the IP address.
A Service can be exposed in many ways; you should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This will create a load balancer and map your service to it.
Later you can add this load balancer to the DNS mapping service provided by Azure to get a domain name you like, e.g. http://my-api.example.com:3000.
I would also like to add that if you define your ports as follows:
ports:
- name: http
  port: 80
  targetPort: 3000
this will redirect traffic coming in on port 80 to 3000, and your service URL will look much cleaner, e.g. http://my-api.example.com.
As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:
az group create --name test-group --location westus
az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys
I created Kubernetes deployment and service files from a docker compose file using Kompose.
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
      - image: nginx:latest
        name: test
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
service file
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: test
  type: LoadBalancer
status:
  loadBalancer: {}
I can then start everything up:
kubectl create -f test-service.yaml,test-deployment.yaml
Once an IP has been exposed, I assign a DNS prefix to it so I can access my running container like so: http://nginx-test.westus.cloudapp.azure.com/.
My question is: how can I access the service using HTTPS, at https://nginx-test.westus.cloudapp.azure.com/?
I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.
I tried mapping port 443 to port 80 in my Kubernetes service config.
ports:
- name: "443"
  port: 443
  targetPort: 80
But that results in:
SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT
How can I view my running container at https://nginx-test.westus.cloudapp.azure.com/?
If I understand it correctly, I think you are looking for the Nginx Ingress controller.
If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the Nginx Ingress controller.
To achieve this, we can follow these steps:
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test HTTP service
4. Configure TLS termination
For more information about configuring the Nginx Ingress Controller for TLS termination on Kubernetes on Azure, please refer to this blog.
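As a rough sketch, TLS termination at the ingress would look something like this; the Ingress name, host and secret name are illustrative, the TLS secret must already exist, and the backend is the test Service from the question:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test-ingress        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com   # illustrative host
    secretName: nginx-test-tls               # hypothetical, pre-created TLS secret
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80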
root@k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.0.113.185 <none> 80/TCP 42m
heapster 10.0.4.232 <none> 80/TCP 1h
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard 10.0.237.125 <nodes> 80:32229/TCP 1h
nginx-ingress-ssl 10.0.92.57 40.71.37.243 443:30215/TCP 13m