How to get an external IP for a VM running Kubernetes Services - Azure

I have hosted Docker images in an Azure VM and I'm trying to access the Service from outside the VM. This is not working because no external IP is generated for the Service.
After building the Docker image, I applied a YAML file to create the Deployment and Service. My YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
---
Then I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for the services, so that I can access my web services from outside the VM?

You need to change the Service type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
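With type: LoadBalancer, the external IP is provisioned asynchronously by the Azure cloud provider and shows up as <pending> at first. A generic way to wait for it (the file name planservice.yml is an assumption, not taken from the question):
kubectl apply -f planservice.yml
kubectl get service planservice-service --watch
Once EXTERNAL-IP changes from <pending> to a real address, the app should be reachable at http://<EXTERNAL-IP>:80 from outside the VM.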

Related

I am trying to deploy my Docker image from ACR to AKS. The pods are created properly, but I get ERR_CONNECTION_TIMED_OUT through the external IP

The same deployment and service YAML files work properly when I use a standard image such as nginx and set its containerPort to nginx's default port, i.e. 80, but when I change the container port to 8080 I get the same issue.
My deployment.yaml file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-deployment
  labels:
    app: my-test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
      - name: my-test-container
        image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: acr-details
My service.yaml -
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    app: my-test-app
spec:
  selector:
    app: my-test-app
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
There are two quick things that I would check/verify:
1. Is the test app configured to listen on 8080? The containerPort/targetPort should match the port the app is configured to listen on.
2. Ensure that you have the most recent image. Without a tag you are pulling :latest, but with imagePullPolicy: IfNotPresent the new image will not be pulled if an older copy is already on the node. I'd recommend changing the imagePullPolicy to Always.
-Dave
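As a sketch of the second point (the :v1 tag is only a placeholder for whatever tag your pipeline actually pushes), the container spec could be adjusted along these lines:
containers:
- name: my-test-container
  image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker:v1   # pin an explicit tag instead of the implicit :latest
  imagePullPolicy: Always        # always pull, so a rebuilt image is picked up on redeploy
  ports:
  - containerPort: 8080          # must match the port the app really listens on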

How to deploy .NET Core web and worker projects to Kubernetes in a single deployment?

I am relatively new to Docker and Kubernetes. My requirement is to deploy one web project and one worker (.NET background service) project in a single deployment.
This is what my deployment.yml file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is what my service.yml file looks like:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, only one of them is deployed to Kubernetes, but if I comment one out and apply them one by one, each is deployed.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more question: is there any way to look inside the worker service pod, something like taking a remote session to it, to see what exactly is going on there? Even if it's a console application, is there any way to read what it prints to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created

$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
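Regarding the second question (inspecting the worker pod): the usual tools are kubectl logs for the console output and kubectl exec for an interactive session. A generic sketch, assuming the Deployment is named worker as in the question:
kubectl logs deployment/worker --follow          # stream whatever the worker writes to stdout/stderr
kubectl exec -it deployment/worker -- /bin/sh    # open a shell inside the pod (only works if the image contains a shell)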

ERR_NAME_NOT_RESOLVED: Angular pod not communicating with Python backend in Kubernetes

I have deployed an Angular frontend and a Python backend via microk8s as separate pods, and they are running. I have set the backend URL to 'http://backend-service.default.svc.cluster.local:30007' in my Angular code in order to link the frontend with the backend, but this raises ERR_NAME_NOT_RESOLVED. Can someone help me understand the issue?
Also, I have a config file which specifies the IPs, ports and other configuration for my backend. Do I need to make any changes (database host, Flask host, ports?) to that file before deploying it to Kubernetes?
Shown below are my deployment and service files for the Angular app and the backend.
apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: NodePort
  selector:
    app: angular
  ports:
  - protocol: TCP
    nodePort: 30042
    targetPort: 4200
    port: 4200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
  labels:
    name: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      name: angular
  template:
    metadata:
      labels:
        name: angular
    spec:
      containers:
      - name: angular
        image: angular:local
        ports:
        - containerPort: 4200
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    name: backend
  ports:
  - protocol: TCP
    targetPort: 7000
    port: 7000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      name: backend
  template:
    metadata:
      labels:
        name: backend
    spec:
      containers:
      - name: backend
        image: flask:local
        ports:
        - containerPort: 7000
Is your cluster in a healthy state? DNS names are resolved by the coredns pods in the kube-system namespace.
Typically an Angular app calls the API URL from the user's browser, so the API must be exposed publicly. That is not the case here, and I have serious doubts about this part.
Can you share your application architecture?
Moreover, if you expose a service through a NodePort, you should not use that NodePort for internal access, because you never know which node you will hit.
When a service is exposed, apps inside the cluster need to use the port attribute (not the nodePort) to reach the backend pods.
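To make the port point concrete, here is a sketch assuming the backend-service from the question (ClusterIP on port 7000). The cluster DNS name is only resolvable from inside the cluster and must be combined with the Service port, not a nodePort:
# from another pod inside the cluster:
curl http://backend-service.default.svc.cluster.local:7000/
# the browser running the Angular app sits outside the cluster, so this name will not
# resolve there; the backend would have to be exposed via NodePort, LoadBalancer or an
# Ingress and called through that external address instead.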

Why is the Azure AKS Service IP address not accessible?

I am working with the AKS service. I started with a tutorial on Azure that deploys the Azure Voting app.
Then I created my own app, a RESTful service, and built a container image. Now when I deploy my service, the public service endpoint is not accessible. Not only does the app not respond, a traceroute takes me into the MSDN network but not to the IP address, and the address is not pingable either.
Here is the tutorial URL from which I took the sample front-end deployment and service YAML; it works fine.
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal
Here is my YAML. What am I doing wrong?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bwce-simplerest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bwce-simplerest
  template:
    metadata:
      labels:
        app: bwce-simplerest
    spec:
      containers:
      - name: bwce-simplerest
        image: tauqirghani/simplerest:1.0
        ports:
        - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: bwce-simplerest
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: "TCP"
    targetPort: 7070
  selector:
    app: bwce-simplerest
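A few standard commands that are commonly used to verify a LoadBalancer Service like this one (generic kubectl, not a diagnosis of this particular cluster):
kubectl get service bwce-simplerest        # EXTERNAL-IP should show a real address, not <pending>
kubectl get endpoints bwce-simplerest      # confirms the selector actually matches running pods
kubectl describe service bwce-simplerest   # events here surface load-balancer provisioning problems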

Pods do not resolve the domain names of a service through ingress

I have a problem: the pods in my minikube cluster are not able to reach the service through its domain name.
To run minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
      - image: karthequian/helloworld:latest
        imagePullPolicy: Always
        name: hello-world
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
  - nodePort: 31595
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
  - host: minikube.local.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
So, if I go inside the hello-world pod and run curl minikube.local.com or nslookup minikube.local.com from /bin/bash, the name does not resolve.
How can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way to update the DNS in Kubernetes?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had) and with this YAML file I can see your service on https://192.168.99.100 where the Ingress controller lives:
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say hello-world.local to 192.168.99.100 in your /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
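A minimal sketch of both access paths (192.168.99.100 is minikube's usual address; check yours with minikube ip):
# from inside the cluster, using the DNS name Kubernetes assigns to the Service:
curl http://hello-world.default/
# from the host, after pointing the Ingress host at hello-world.local and adding
# "192.168.99.100 hello-world.local" to the hosts file:
curl http://hello-world.local/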
