I have installed kubeadm on the master and joined 2 worker nodes to it. After that I deployed an nginx pod and exec'ed into it; when I type nslookup google.com or apt update it does not work, I get a connection timeout, so I think the pod is not connecting to the internet. How do I solve this? The 3 VMs are running in the Azure portal and are connected to each other. I am using kubectl v1.24.2 and the Calico network plugin.
The nginx pod is running on worker2, and both the containerd container runtime and Docker Application Container Engine services are in the running state. If I type lsmod | grep br_netfilter I get:
br_netfilter 28672 0
bridge 266240 1 br_netfilter
here is my nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        # command: ["/bin/sh","-c"]
        # args: ["apt update"]
        # securityContext:
        #   privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80
    name: nginx-http
And here is a screenshot of the CoreDNS pods installed in the kube-system namespace.
It seems like the name is not resolving, so check whether CoreDNS is working properly. You can break the issue down like this (see the sketch after these steps):
from inside the pod, first check that you have network reachability to the internet: curl -v telnet://8.8.8.8:53
then check that you have connectivity to CoreDNS: curl -v telnet://coredns_service_name:53 or curl -v telnet://coredns_cluster_ip:53
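If both checks pass, a few kubectl commands can confirm the state of CoreDNS itself. A minimal sketch, assuming a default kubeadm install where CoreDNS runs behind the kube-dns service:
# List the CoreDNS pods and their service in kube-system
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system get svc kube-dns
# Spin up a throwaway pod to test in-cluster DNS resolution
kubectl run -it --rm --restart=Never --image=busybox:1.28 dnstest -- nslookup kubernetes.default
If the last command resolves kubernetes.default but google.com still fails, the problem is upstream resolution or egress from the nodes rather than CoreDNS itself.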
I am trying to create a multi-container pod for a simple demo. I have an app that is built in docker containers. There are 3 containers:
1 - redis server
1 - node/express microservice
2 - node/express/react front end
All 3 containers are deployed successfully and running.
I have created a public load balancer, which is running without any errors.
I cannot connect to the front end from the public ip.
I have also run tcpdump in the frontend container and there is no traffic getting in.
Here is my yaml file used to create the deployment and service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemoapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemoapp
  template:
    metadata:
      labels:
        app: mydemoapp
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: microservices-web
        image: mydemocr.azurecr.io/microservices_web:v1
        ports:
        - containerPort: 3001
      - name: redislabs-rejson
        image: mydemocr.azurecr.io/redislabs-rejson:v1
        ports:
        - containerPort: 6379
      - name: mydemoappwebtest
        image: mydemocr.azurecr.io/jsonformwebtest:v1
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: mydemoappservice
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  selector:
    app: mydemoapp
This is what kubectl describe of my service looks like:
Name: mydemoappservice
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mydemoappservice","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=mydemoapp
Type: LoadBalancer
IP: 10.0.104.159
LoadBalancer Ingress: 20.49.172.10
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 31990/TCP
Endpoints: 10.244.0.17:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 24m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 24m service-controller Ensured load balancer
One more weirdness: when I run the front-end docker container on my machine I can get a shell, run curl localhost:3000, and get some output, but when I do it in the Azure container I get the following response after some delay:
curl: (52) Empty reply from server
Why this container works on my machine and not in Azure is another layer to the mystery.
Referring to the docs here, the container needs to listen on 0.0.0.0 instead of 127.0.0.1, because:
any port which is listening on the default 0.0.0.0 address inside a
container will be accessible from the network
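To verify which address the server is actually bound to, you can check from a shell inside the container. A minimal sketch, assuming netstat (from net-tools) is available in the image:
# Inside the front-end container: list listening TCP sockets on port 3000
netstat -tlnp | grep 3000
# "127.0.0.1:3000" means only loopback inside the container is served;
# it needs to show "0.0.0.0:3000" for traffic from the pod network to get in.
This matches the symptom above: curl localhost:3000 works inside the container, while anything arriving over the pod's network interface gets an empty reply.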
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods' status in the kube-system namespace: all of them show as Running. But whenever I set the service type to LoadBalancer, the LoadBalancer IP is not created and its status always shows as pending. I have also created an Ingress controller for the nginx service, but it does not get an Ingress address either. While initializing the Kubernetes master, I am using the following command:
kubeadm init
Below are the deployment, svc and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress test-ingress
Name: test-ingress
Namespace: default
Address:
Default backend: nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
Host Path Backends
---- ---- --------
*     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to mention any IP ranges (private or public) of the VMs while running kubeadm init? Or do we need to change any network settings on the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than an AWS, Azure or GCP managed one, there is no integrated load balancer. This is why the IP status stays pending. But you can work around this with an Ingress controller or directly through a NodePort service, as in the sketch below.
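A minimal NodePort sketch, assuming the nginx service from the question; the nodePort value is an arbitrary pick from the default 30000-32767 range, and the port must also be allowed in the Azure NSG:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
The service is then reachable at http://<node-public-ip>:30080.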
However, I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. Such annotations are platform specific, and that one is AWS specific, while you said you are using Azure.
If you would like to experiment directly with public IPs, you can also define your service with externalIPs, provided your node has a public IP allocated and ingress traffic is allowed:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But a good approach to getting this done is using an Ingress controller if you are planning to build your own Kubernetes cluster.
Hope this helps.
I've deployed a pod in AKS and I'm trying to connect to it via an external load balancer.
The steps I have taken for troubleshooting are:
Verified (using kubectl) that the pod is deployed in k8s and running properly.
Verified (using netstat) that network port 80 is listening. I logged into the pod using kubectl exec.
The .yaml file I used to deploy is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qubo
  namespace: qubo-gpu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qubo
  template:
    metadata:
      labels:
        app: qubo
    spec:
      containers:
      - name: qubo-ctr
        image: <Blanked out>
        resources:
          limits:
            nvidia.com/gpu: 1
        command: ["/app/xqx"]
        args: ["80"]
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: qubo-gpu
  annotations:
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: qubo
It turned out to be a bug in my code, in how I opened the socket. In hopes this will help someone else, this is how I went about troubleshooting:
Got IP for pod:
kubectl get pods -o wide
Created a new ubuntu pod in cluster:
kubectl run -it --rm --restart=Never --image=ubuntu:18.04 ubuntu bash
Downloaded curl to new pod:
apt-get update && apt-get install -y curl
Tried to curl to the pod IP (from step 1):
curl -v -m5 http://<Pod IP>:80
Step 4 failed for me; however, I was able to run the docker container successfully on my machine and connect. The issue was that I opened the connection on localhost instead of 0.0.0.0.
I am attempting to create a service for creating training datasets using the Prodigy UI tool. I would like to do this using a Kubernetes cluster which is running in Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose that 8880 port at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an ExternalIP and associated Port: 10.*.*.*:34672.
This is not reachable by browser, so I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying Container. What am I missing here?
Note: I am willing to accept that Kubernetes may not be the tool for the job here; it seems enticing because of the ease of scaling and of updating images to reflect more recent configurations.
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service azure-vote-front
The result looks like this:
root@k8s-master-79E9CFFD-0:~# kubectl get service azure
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure 10.0.136.182 52.224.219.190 8080:31419/TCP 10m
Then you can browse it with external IP and port, like this:
curl 52.224.219.190:8080
You can also find the Load Balancer rules via the Azure portal, or with the CLI as sketched below.
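A sketch of the CLI route; the resource group name is left for you to fill in:
az network lb list --resource-group <resource-group> --output table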
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that you can access the Prodigy UI at localhost:8880 in your browser.
As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:
az group create --name test-group --location westus
az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys
I created Kubernetes deployment and service files from a docker compose file using Kompose.
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
      - image: nginx:latest
        name: test
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
service file
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: test
  type: LoadBalancer
status:
  loadBalancer: {}
I can then start everything up:
kubectl create -f test-service.yaml,test-deployment.yaml
Once an IP has been exposed, I assign a DNS prefix to it so I can access my running container like so: http://nginx-test.westus.cloudapp.azure.com/.
My question is: how can I access the service using https, at https://nginx-test.westus.cloudapp.azure.com/?
I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.
I tried mapping port 443 to port 80 in my Kubernetes service config.
ports:
- name: "443"
  port: 443
  targetPort: 80
But that results in:
SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT
How can I view my running container at https://nginx-test.westus.cloudapp.azure.com/?
If I understand correctly, I think you are looking for the Nginx Ingress controller.
If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the Nginx Ingress controller.
To achieve this, we can follow these steps (see the sketch below):
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test http service
4. Configure TLS termination
For more information about configuring the Nginx Ingress controller for TLS termination on Kubernetes on Azure, please refer to this blog.
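A minimal sketch of step 4, assuming a TLS secret named tls-secret (created with kubectl create secret tls from your certificate and key) and the test service from the question; the Ingress name is a hypothetical choice:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-tls
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
The controller then terminates TLS on 443 and forwards plain HTTP to the test service, so nginx itself never needs to see the certificate.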
root@k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.0.113.185 <none> 80/TCP 42m
heapster 10.0.4.232 <none> 80/TCP 1h
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard 10.0.237.125 <nodes> 80:32229/TCP 1h
nginx-ingress-ssl 10.0.92.57 40.71.37.243 443:30215/TCP 13m