Pods do not resolve the domain names of a service through ingress - dns

I have a problem: my pods in the Minikube cluster are not able to reach the service through its domain name.
To start Minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: hello-world
name: hello-world
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-world
spec:
containers:
- image: karthequian/helloworld:latest
imagePullPolicy: Always
name: hello-world
ports:
- containerPort: 80
protocol: TCP
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
this is the service.yaml:
apiVersion: v1
kind: Service
metadata:
labels:
run: hello-world
name: hello-world
namespace: default
selfLink: /api/v1/namespaces/default/services/hello-world
spec:
ports:
- nodePort: 31595
port: 80
protocol: TCP
targetPort: 80
selector:
run: hello-world
sessionAffinity: None
type: ExternalName
externalName: minikube.local.com
status:
loadBalancer: {}
this is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: minikube-local-ingress
spec:
rules:
- host: minikube.local.com
http:
paths:
- path: /
backend:
serviceName: hello-world
servicePort: 80
So, if I go inside the hello-world pod and from /bin/bash run curl minikube.local.com or nslookup minikube.local.com, the name does not resolve.
So how can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the DNS inside Kubernetes?
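(For reference, a hostAliases entry in the pod template would look roughly like this; the IP is just a placeholder:)
spec:
  hostAliases:
  - ip: "192.168.99.100"
    hostnames:
    - "minikube.local.com"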

So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName settings you had), and with the YAML below I can see your service on https://192.168.99.100, where the Ingress controller lives.
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
labels:
run: hello-world
name: hello-world
spec:
ports:
- port: 80
targetPort: 80
selector:
run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: minikube-local-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: hello-world
servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster), and from the outside you'd need to map, say, hello-world.local to 192.168.99.100 in the /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
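To double-check the in-cluster name, something like this can be run from a throwaway pod (a sketch; busybox:1.28 is just a convenient image that ships nslookup):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup hello-world.default.svc.cluster.local
And the /etc/hosts mapping on the host machine is a single line such as:
192.168.99.100 hello-world.local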

Related

AKS Nginx controller tcp port

I have 1 AKS cluster with multiple services and 1 ingress controller.
I have a requirement for 1 of the services to listen on TCP port 11112.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: xxx-py
name: xxx-py
spec:
replicas: 2
selector:
matchLabels:
app: xxx-py
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: xxx-py
spec:
containers:
- image: prodacr01.azurecr.io/xxxpy:#{Build.BuildId}#
name: xxx-py
imagePullPolicy: Always
resources: {}
ports:
- containerPort: 8000
- containerPort: 11112
imagePullSecrets:
- name: regcred
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: xxx-py
labels:
run: xxx-py
spec:
ports:
- name: httpapp
protocol: TCP
port: 80
targetPort: 8000
- name: dapp
protocol: TCP
port: 11112
targetPort: 11112
selector:
app: xxx-py
What changes are required to make it accessible on 11112?
I have created a deployment to expose the service on port 11112.
I have created a sample nginx deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
selector:
matchLabels:
app: APPXXX
department: XXXX
replicas: 3
template:
metadata:
labels:
app: appXXX
department: XXXX
spec:
containers:
- name: hello
image: "IMAGEXXX
I created the deployment using the command below:
kubectl apply -f filename.yaml
To check the deployment's pods: kubectl get pods
I have created the manifest file for a Service of type ClusterIP:
apiVersion: v1
kind: Service
metadata:
name: my-cip-service
spec:
type: ClusterIP # Service type (ClusterIP / NodePort / LoadBalancer)
selector:
app: APPXXXX
department: XXXX
ports:
- protocol: TCP
port: 8000
targetPort: 11112
To deploy the file and check the services we have created, use the commands below:
kubectl apply -f
kubectl get svc
To view the service: kubectl get service my-cip-service --output yaml
If the nodes in the cluster have an external IP address, use the command below:
kubectl get nodes --output wide
We can check with the LoadBalancer/ClusterIP/NodePort IP whether the service is exposed on port 11112 or not:
IPaddress:11112
We can also use the command below to expose the deployment as a service (substituting the desired ports, e.g. 11112):
kubectl expose deployment deployment_name --name service_name \
--type ClusterIP --protocol TCP --port 80 --target-port 8080
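Before involving the ingress controller, it may help to confirm the service itself answers on 11112 (a sketch, assuming the service xxx-py in the default namespace):
kubectl get endpoints xxx-py
kubectl port-forward svc/xxx-py 11112:11112
If the port-forward connects and the application responds on localhost:11112, the remaining work is only about how the ingress controller or load balancer forwards that port.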

Can't access an application deployed on AKS

I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: aspnetapp
spec:
replicas: 1
selector:
matchLabels:
app: aspnet
template:
metadata:
labels:
app: aspnet
spec:
containers:
- name: aspnetapp
image: <my_image>
resources:
limits:
cpu: "0.5"
memory: 64Mi
ports:
- containerPort: 8080
and this is the service .yml
apiVersion: v1
kind: Service
metadata:
name: aspnet-loadbalancer
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
name: aspnetapp
Everything seems deployed correctly.
Another check I did was to enter the pod and run
curl http://localhost:80
and the application responds correctly, but if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an Ingress Controller deployed on your AKS, as you have your application exposed directly. You will need one in order to get Ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080 :
kubectl port-forward aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a workflow from MS to install ingress-nginx as the IC on your cluster.
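One common way to do that is via Helm (a sketch; the release and namespace names here are assumptions, the linked walkthrough has the full steps):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace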
You will then only expose the ingress-controller to the internet and could also specify the loadBalancerIP statically if you created the PublicIP in advance:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
name: ingress-nginx-controller
spec:
loadBalancerIP: <YOUR_STATIC_IP>
type: LoadBalancer
The Ingress Controller then will route incoming traffic to your application with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
spec:
ingressClassName: nginx # ingress-nginx specifix
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test
port:
number: 80
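For the app in the question, the backend Service that such an Ingress points at could look roughly like this (a sketch; the Service name aspnet-clusterip is made up, while the selector and target port come from the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: aspnet-clusterip
spec:
  type: ClusterIP
  selector:
    app: aspnet
  ports:
  - port: 80
    targetPort: 8080
The Ingress backend would then reference aspnet-clusterip with port number 80 instead of the placeholder name test.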
PS: Never expose your application directly to the internet, always use the ingress controller
In your Deployment, you configured your container to listen on port 8080. You need to set the targetPort value to 8080 in the Service definition.
Documentation

How to get an external IP of a VM running Kubernetes services

I have hosted Docker images in an Azure VM and I'm trying to access the Service from outside the VM. This is not working because an external IP is not generated for the Service.
After building the Docker image, I applied a yml file to create the Deployment and Service. My yml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: planservice-deployment
labels:
app: planservice-deploy
spec:
selector:
matchLabels:
run: planservice-deploy
replicas: 2
template:
metadata:
labels:
run: planservice-deploy
spec:
containers:
- name: planservice-deploy
image: planserviceimage
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
name: planservice-service
labels:
app: planservice-deploy
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8086
selector:
run: planservice-deploy
---
After that, I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for the services so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
name: planservice-service
labels:
app: planservice-deploy
spec:
type: LoadBalancer
ports:
- port: 80
protocol: TCP
targetPort: 8086
selector:
run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
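After applying the change, the assigned address can be watched until the cloud provider provisions the load balancer (a sketch; EXTERNAL-IP stays <pending> until then):
kubectl get service planservice-service --watch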

NodeJS Service is not reachable on Kubernetes (GCP)

I have hit a real roadblock here and I have not found any solutions so far. Ultimately, my deployed NodeJS + Express server is not reachable when deployed to a Kubernetes cluster on GCP. I followed the guide & example; nothing seems to work.
The cluster, node and service are running just fine and don't have any issues. Furthermore, it works just fine locally when running it with Docker.
Here's my Node YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2019-08-06T04:13:29Z
generation: 1
labels:
run: nodejsapp
name: nodejsapp
namespace: default
resourceVersion: "23861"
selfLink: /apis/apps/v1/namespaces/default/deployments/nodejsapp
uid: 8b6b7ac5-b800-11e9-816e-42010a9600de
spec:
progressDeadlineSeconds: 2147483647
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: nodejsapp
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: nodejsapp
spec:
containers:
- image: gcr.io/${project}/nodejsapp:latest
imagePullPolicy: Always
name: nodejsapp
ports:
- containerPort: 5000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2019-08-06T04:13:29Z
lastUpdateTime: 2019-08-06T04:13:29Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Service YAML:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-08-06T04:13:34Z
labels:
run: nodejsapp
name: nodejsapp
namespace: default
resourceVersion: "25444"
selfLink: /api/v1/namespaces/default/services/nodejsapp
uid: 8ef81536-b800-11e9-816e-42010a9600de
spec:
clusterIP: XXX.XXX.XXX.XXX
externalTrafficPolicy: Cluster
ports:
- nodePort: 32393
port: 80
protocol: TCP
targetPort: 5000
selector:
run: nodejsapp
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: XXX.XXX.XXX.XXX
The NodeJS server is configured to run on port 5000. I tried it without port-forwarding as well, but the result was no different.
Any help is much appreciated.
UPDATE:
I used this guide and followed the instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
UPDATE 2:
FINALLY - figured it out. I'm not sure why this is not mentioned anywhere but you have to create an Ingress that routes the traffic to the pod accordingly.
Here's the example config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-32064--abfe1f07378017e9":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nodejsapp--abfe1f07378017e9
ingress.kubernetes.io/target-proxy: k8s-tp-default-nodejsapp--abfe1f07378017e9
ingress.kubernetes.io/url-map: k8s-um-default-nodejsapp--abfe1f07378017e9
creationTimestamp: 2019-08-06T18:59:15Z
generation: 1
name: nodejsapp
namespace: default
resourceVersion: "171168"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/versapay-api
uid: 491cd248-b87c-11e9-816e-42010a9600de
spec:
backend:
serviceName: nodejsapp
servicePort: 80
status:
loadBalancer:
ingress:
- ip: XXX.XXX.XXX
Adding this as an answer as I need to include an image (but it is not necessarily an answer):
As shown in the image, besides your backend service, a green tick should be visible.
Probable solution:
In your NodeJS app, please add the following base URL, i.e.,
when the application is started locally, http://localhost:5000/ should return a 200 status code (ideally with "Server is running..." or some message).
And also, if path-based routing is enabled, another base URL is required:
http://localhost:5000/<nodeJsAppUrl>/ should also return a 200 status code.
The above URLs are required for the health checks of both the LoadBalancer and the backend service; then redeploy the service.
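A quick local check of that requirement is (a sketch):
curl -i http://localhost:5000/
The first response line should be HTTP/1.1 200 OK; a 404 or a redirect will typically make the load balancer's health check mark the backend unhealthy.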
Please let me know if the above solution doesn't fix the said issue.
You need an intermediate service to internally expose your deployment.
Right now, you have a set of pods grouped in a deployment and a load balancer exposed in your cluster but you need to link them with an additional service.
You can try using a NodePort like the following:
apiVersion: v1
kind: Service
metadata:
name: nodejsapp-nodeport
spec:
selector:
run: nodejsapp
ports:
- name: default
protocol: TCP
port: 32393
targetPort: 5000
type: NodePort
This NodePort service sits between your Load Balancer and the pods in your deployment, targeting them on port 5000 and exposing port 32393 (as per your settings in the original question; you can change it).
From here, you can redeploy your Load Balancer to target the previous NodePort. This way, you can reach your NodeJS app via port 80 from your load balancer's public address.
apiVersion: v1
kind: Service
metadata:
name: nodejs-lb
spec:
selector:
run: nodejsapp
ports:
- name: default
protocol: TCP
port: 80
targetPort: 32393
type: LoadBalancer
The whole scenario would look like this:
publicly exposed address --> LoadBalancer --> NodePort --> Deployment --> Pods
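Once both services are applied, the public address can be checked with (a sketch):
kubectl get service nodejs-lb
and, per this setup, the app would then answer on port 80 of the EXTERNAL-IP shown there.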

Cannot access application deployed in Azure ACS Kubernetes Cluster using Azure CICD Pipeline

I am following this document.
https://github.com/Azure/DevOps-For-AI-Apps/blob/master/Tutorial.md
The CI/CD pipeline works fine, but I want to validate the application using the external IP of the service that is deployed to the Kubernetes cluster.
Deploy.yaml
apiVersion: v1
kind: Pod
metadata:
name: imageclassificationapp
spec:
containers:
- name: model-api
image: crrq51278013.azurecr.io/model-api:156
ports:
- containerPort: 88
imagePullSecrets:
- name: imageclassificationappdemosecret
Pod details
C:\Users\nareshkumar_h>kubectl describe pod imageclassificationapp
Name: imageclassificationapp
Namespace: default
Node: aks-nodepool1-97378755-2/10.240.0.5
Start Time: Mon, 05 Nov 2018 17:10:34 +0530
Labels: new-label=imageclassification-label
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"imageclassificationapp","namespace":"default"},"spec":{"containers":[{"image":"crr...
Status: Running
IP: 10.244.1.87
Containers:
model-api:
Container ID: docker://db8687866d25eb4311175c5ccb5a7205379168c64cdfe716b09557fc98e2bd6a
Image: crrq51278013.azurecr.io/model-api:156
Image ID: docker-pullable://crrq51278013.azurecr.io/model-api#sha256:766673989a59fe0b1e849469f38acda96853a1d84e4b4d64ffe07810dd5d04e9
Port: 88/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 05 Nov 2018 17:12:49 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qhdjr (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qhdjr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qhdjr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Service details:
C:\Users\nareshkumar_h>kubectl describe service imageclassification-service
Name: imageclassification-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.0.24.9
LoadBalancer Ingress: 52.163.191.28
Port: <unset> 88/TCP
TargetPort: 88/TCP
NodePort: <unset> 32672/TCP
Endpoints: 10.244.1.65:88,10.244.1.88:88,10.244.2.119:88
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I am hitting the URL below, but the request times out:
http://52.163.191.28:88/
Can someone please help? Please let me know if you need any further details.
For your issue, I did a test and it worked on my side. Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer
ports:
- port: 88
targetPort: 80
selector:
app: nginx
And there are some points you should pay attention to.
You should make sure which port the service listens on in the container. For example, in my test, the nginx service listens on port 80 by default.
The port that you want to expose on the node should be idle; in other words, not already used by another service.
When all the steps are done, you can access the public IP with the port you have exposed on the node.
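In this test that means looking up the external IP of the nginx service and calling the exposed port, roughly:
kubectl get service nginx
curl http://<EXTERNAL-IP>:88/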
The screenshots show the result of my test:
Hope this will help you!
We were able to solve this issue after reconfiguring the Kubernetes Service with the right configuration and changing the deploy.yaml file as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: imageclassificationapp
spec:
selector:
matchLabels:
app: imageclassificationapp
replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: imageclassificationapp
spec:
containers:
- name: model-api
image: crrq51278013.azurecr.io/model-api:205
ports:
- containerPort: 88
---
apiVersion: v1
kind: Service
metadata:
name: imageclassificationapp
spec:
type: LoadBalancer
ports:
- port: 85
targetPort: 88
selector:
app: imageclassificationapp
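With this configuration the Service listens on port 85 and forwards to the container's port 88, so once the LoadBalancer gets its public IP the check is roughly:
kubectl get service imageclassificationapp
curl http://<EXTERNAL-IP>:85/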
We can close this thread now.
