I have one AKS cluster with multiple services and one ingress controller.
I have a requirement for one of the services to listen to TCP on port 11112.
Below is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: xxx-py
  name: xxx-py
spec:
  replicas: 2
  selector:
    matchLabels:
      app: xxx-py
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: xxx-py
    spec:
      containers:
      - image: prodacr01.azurecr.io/xxxpy:#{Build.BuildId}#
        name: xxx-py
        imagePullPolicy: Always
        resources: {}
        ports:
        - containerPort: 8000
        - containerPort: 11112
      imagePullSecrets:
      - name: regcred
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: xxx-py
  labels:
    run: xxx-py
spec:
  ports:
  - name: httpapp
    protocol: TCP
    port: 80
    targetPort: 8000
  - name: dapp
    protocol: TCP
    port: 11112
    targetPort: 11112
  selector:
    app: xxx-py
What changes are required to make it accessible on 11112?
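As a sketch of one possible direction (not necessarily the only one), port 11112 could be exposed through its own Service of type LoadBalancer, leaving the existing ingress controller untouched; the name xxx-py-tcp below is only a hypothetical example:
apiVersion: v1
kind: Service
metadata:
  name: xxx-py-tcp   # hypothetical name, not from the question
spec:
  type: LoadBalancer
  selector:
    app: xxx-py
  ports:
  - name: dapp
    protocol: TCP
    port: 11112
    targetPort: 11112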
I have created the deployments below to expose the service on port 11112.
I have created a sample nginx deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: APPXXX
      department: XXXX
  replicas: 3
  template:
    metadata:
      labels:
        app: APPXXX
        department: XXXX
    spec:
      containers:
      - name: hello
        image: "IMAGEXXX"
Created the deployment using the below command:
kubectl apply -f filename.yaml
To check the deployment pods: kubectl get pods
I have created the manifest file for a Service of type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
spec:
  type: ClusterIP  # Service type (ClusterIP / NodePort / LoadBalancer)
  selector:
    app: APPXXX
    department: XXXX
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 11112
To deploy the file and check the service we have created, use the below commands:
kubectl apply -f filename.yaml
kubectl get svc
To view the service: kubectl get service my-cip-service --output yaml
If the nodes in the cluster have an external IP address, use the below command:
kubectl get nodes --output wide
We can check whether the service is exposed on port 11112 or not using the LoadBalancer/ClusterIP/NodePort IP:
IPaddress:11112
We can also use the below command to expose the service on port 11112:
kubectl expose deployment deployment_name --name service_name \
    --type ClusterIP --protocol TCP --port 80 --target-port 8080
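Applied to the deployment in the question, a hedged equivalent (the service name xxx-py-tcp is only an example) would be:
kubectl expose deployment xxx-py --name xxx-py-tcp \
    --type LoadBalancer --protocol TCP --port 11112 --target-port 11112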
The same deployment and service YAML files work properly when I use a standard image from Docker Hub like nginx and set its containerPort to nginx's default port, i.e. 80. But when I change its container port to 8080, I get the same issue.
My deployment.yaml file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-deployment
  labels:
    app: my-test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
      - name: my-test-container
        image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: acr-details
My service.yaml -
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    app: my-test-app
spec:
  selector:
    app: my-test-app
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
There are two quick things that I would check/verify:
Is the test app configured to listen on 8080? The containerPort/targetPort should match what the app is configured to listen on.
Ensure that you have the most recent image. Without a tag, you are using :latest, but with imagePullPolicy: IfNotPresent the node will not pull an updated image if it already has one cached. I'd recommend changing the imagePullPolicy to Always.
-Dave
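Applied to the deployment above, those two suggestions would look roughly like this sketch (the :1234 tag is only a placeholder for a real build tag):
    spec:
      containers:
      - name: my-test-container
        # pin an explicit tag instead of the implicit :latest ...
        image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker:1234
        # ... and always pull, so an updated image is picked up on redeploy
        imagePullPolicy: Always
        ports:
        - containerPort: 8080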
I am not sure why external access is not working, it seems like I followed all tutorials I could find to a T.
In my final Docker image I do the following:
EXPOSE 80
EXPOSE 443
This is my deployment script, which deploys my app and a load balancer service. Everything seems to boot up OK. I can tell my .NET Core application is running on port 80 because I can get live logs using the Azure portal. The load balancer finds the pods from the deployment and shows the appropriate mappings, but I am still unable to access them externally. I cannot access them in a browser, nor via ping or telnet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 2d69-deployment
  labels:
    app: 2d69-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 2d69
  template:
    metadata:
      labels:
        app: 2d69
    spec:
      containers:
      - name: 2d69
        image: 2d69containerregistry.azurecr.io/2d69:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: keyvault-cert
          mountPath: /etc/keyvault
          readOnly: true
      volumes:
      - name: keyvault-cert
        secret:
          secretName: keyvault-cert
---
kind: Service
apiVersion: v1
metadata:
  name: 2d69
  labels:
    app: 2d69
spec:
  selector:
    app: 2d69
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
deployment description:
kubectl -n 2d69 describe deployment 2d69
Name:                   2d69
Namespace:              2d69
CreationTimestamp:      Fri, 11 Dec 2020 13:23:24 -0500
Labels:                 app=2d69
                        deployment.kubernetes.io/revision: 9
Selector:               app=okrx2d69
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=okrx2d69
  Containers:
   2d69:
    Image:        2d69containerregistry.azurecr.io/2d69:5520
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /etc/keyvault from keyvault-cert (ro)
  Volumes:
   keyvault-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyvault-cert
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   2d69-5dbcff8b94 (2/2 replicas created)
Events:          <none>
service description:
kubectl -n 2d69 describe service 2d69
Name: 2d69
Namespace: 2d69
Labels: app=2d69
Selector: app=2d69
Type: LoadBalancer
IP: ***.***.14.208
LoadBalancer Ingress: ***.***.***.***
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32112/TCP
Endpoints: ***.***.9.103:443,***.***.9.47:443
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31408/TCP
Endpoints: ***.***.9.103:80,***.***.9.47:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
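For reference, the service-to-pod mapping shown above can be double-checked with standard kubectl commands, for example:
# confirm the Service has endpoints, i.e. its selector matches running pods
kubectl -n 2d69 get endpoints 2d69
# show the external IP, ports and selector of the Service
kubectl -n 2d69 get service 2d69 -o wide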
I have hosted Docker images in an Azure VM and I'm trying to access the Service from outside the VM. This is not working because an external IP is not generated for the Service.
After building the Docker image, I applied the YAML file below to create the Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
---
After that, I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for all the services, so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
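After switching the type, the external IP usually takes a minute or two to be provisioned by Azure; it can be watched with, for example:
kubectl get service planservice-service --watch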
I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/), but it's not working ("This site can't be reached"). With the service IP endpoint everything is working fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: test.azurecr.io/my-api:latest
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
A website running in the second pod uses the same endpoint, but it is not connecting to the service.
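For reference, DNS resolution from another pod can also be checked directly (nslookup availability depends on the image in that pod), for example:
kubectl exec -it second-pod -- nslookup my-api.default.svc.cluster.local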
Fixing the indentation of your yaml file, I was able to launch the deployment and service successfully. Also the DNS resolution worked fine.
Differences:
Fixed indentation
Used test1 namespaces instead of default
Used containerPort 80 instead of 3000
Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - image: leodotcloud/swiss-army-knife
        name: my-api
        ports:
        - containerPort: 80
          protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (with a filter on the second pod's IP).
From inside the second pod, run a curl or dig command using the FQDN (a rough sketch of these commands follows this list).
Check if the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If the DNS resolution is working, then start tcpdump inside your application container and check if the curl packet is reaching the container.
Check the source and destination IP address of the packets.
Check the iptables rules on the hosts.
Check sysctl settings.
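A rough sketch of the first two steps, assuming tcpdump/dig/curl are already available in the respective containers; <kube-dns-pod>, <dns-container>, <second-pod> and <second-pod-ip> are placeholders to fill in:
# capture DNS traffic inside a kube-dns container, filtered on the second pod's IP
kubectl -n kube-system exec -it <kube-dns-pod> -c <dns-container> -- tcpdump -ni any port 53 and host <second-pod-ip>
# from the second pod, resolve and call the service by its FQDN
kubectl exec -it <second-pod> -- dig my-api.test1.svc.cluster.local
kubectl exec -it <second-pod> -- curl http://my-api.test1.svc.cluster.local:3000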
If you use a Deployment to deploy your application onto a cluster where it will be consumed via a Service, you should have no need at all to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it makes sense (an external service consumed from within the cluster), you need to make sure your Endpoints port definition fully matches the one on the Service (including protocol and potentially name). This incomplete matching is the most common reason for endpoints not being visible as part of a service.
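As an illustration only (the name external-db and the IP 10.0.0.50 are made up), a selector-less Service backed by a manually managed Endpoints object has to keep the port name and protocol in sync:
apiVersion: v1
kind: Service
metadata:
  name: external-db          # hypothetical example
spec:
  ports:
  - name: db
    protocol: TCP
    port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.50            # hypothetical external IP
  ports:
  - name: db                 # name and protocol must match the Service port
    protocol: TCP
    port: 5432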
From the above discussion, what I understood is that you want to expose a service, but not by using the IP address.
A Service can be exposed in many ways. You should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This will create a load balancer and map your service to it.
Later you can add this load balancer to the DNS mapping service provided by Azure to give it the domain name you like, e.g. http://my-api.example.com:3000
Also, I would like to add: if you define your ports as follows:
ports:
- name: http
  port: 80
  targetPort: 3000
This will redirect traffic coming in on port 80 to 3000, and your service call would look much cleaner, e.g. http://my-api.example.com
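Putting both suggestions together, the Service might look something like this sketch:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000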
I have a problem: my pods in the minikube cluster are not able to see the service through its domain name.
To run my minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: hello-world
name: hello-world
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-world
spec:
containers:
- image: karthequian/helloworld:latest
imagePullPolicy: Always
name: hello-world
ports:
- containerPort: 80
protocol: TCP
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
  - nodePort: 31595
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
  - host: minikube.local.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
So, if I go inside the hello-world pod and from /bin/bash run curl minikube.local.com or nslookup minikube.local.com, neither works.
So how can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the DNS in Kubernetes?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had) and with this YAML file I can see your service on https://192.168.99.100 where the Ingress controller lives:
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say hello-world.local to 192.168.99.100 in your /etc/hosts file on your host machine.
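For example, the /etc/hosts entry on the host machine would be a single line like:
192.168.99.100  hello-world.local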
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.