The Deployment "nodejs-deployment" is invalid spec.template.metadata.labels: Invalid value - node.js

I'm new to Kubernetes and I was trying to deploy a Node.js service to Kubernetes. For that I created a Docker image, uploaded it to Docker Hub, and finally wrote a deployment file that contains all the configuration required to accomplish the deployment.
The deployment file is shown below. I then executed the command 'kubectl apply -f deployment_local.yaml' and came across this error: "spec.template.metadata.labels: Invalid value: map[string]string{"app":"nodejs\u00a0\u00a0"}: selector does not match template labels"
I'm trying to fix this bug but I haven't been able to. Please help me understand this error, because I've been struggling with it for a long time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs  
    spec:
      containers:
      - name: nodeapp
        image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: nodejs
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30001

As the error message is trying to tell you, there are two non-breaking space characters after "nodejs": map[string]string{"app":"nodejs\u00a0\u00a0"}.
I would guess they are a side effect of copy-pasting from a webpage.
If you do a "select all" on your posted question here, you'll see that SO has converted the two characters into normal spaces, but they do show up as the selection extending past the "nodejs" text.
If your editor is not able to show you the characters, either manually retype the labels or try copying this (which is just yours but with the trailing spaces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodeapp
        image: lucasseabra/nodejs-starter
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: nodejs
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30001
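If you want to confirm that invisible characters are really the culprit before editing, a quick check from the shell (assuming GNU grep with -P support and a UTF-8 locale; other grep builds may not accept this syntax) is:

$ grep -nP '\x{00A0}' deployment_local.yaml

Any line it prints contains a non-breaking space and needs its label retyped.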

Related

I am trying to deploy my docker image from ACR to AKS. The pods are getting created properly but getting ERR_CONNECTION_TIMED_OUT through external IP

The same deployment and service YAML files work properly when I use a standard image from Docker Hub like nginx and set its containerPort to nginx's default port, i.e. 80, but when I change its container port to 8080 I still get the same issue.
My deployment.yaml file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-deployment
  labels:
    app: my-test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
      - name: my-test-container
        image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: acr-details
My service.yaml -
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    app: my-test-app
spec:
  selector:
    app: my-test-app
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
There are two quick things that I would check/verify:
1. Is the test app configured to listen on 8080? The containerPort/targetPort should match what the app is configured to listen on.
2. Ensure that you have the most recent image. Without a tag, you are using :latest, but if you update the image, an imagePullPolicy of IfNotPresent will not pull the new image when an older one is already present. I'd recommend changing the imagePullPolicy to Always.
-Dave
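For reference, a minimal sketch of the container section with that change applied, everything else kept exactly as in the question:

containers:
- name: my-test-container
  image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
  imagePullPolicy: Always   # always pull, so an updated :latest image is picked up on pod start
  ports:
  - containerPort: 8080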

How to deploy .NET Core web and worker projects to Kubernetes in a single deployment?

I am relatively new to Docker and Kubernetes technologies. My requirement is to deploy one web project and one worker (.NET background service) project in a single deployment.
This is how my deployment.yml file looks:
apiVersion : apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion : apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is how my service.yml file looks:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file then only one is deployed to Kubernetes, but if I comment one out and apply them one by one then both are deployed.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more question: is there any way to look into the worker service pod, something like taking a remote session into it to see what exactly is going on there? Even if it's a console application, is there any way to read what it's printing to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
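As for the second part of the question (seeing what the worker prints to its console), a pod's stdout/stderr can be read with kubectl logs, and kubectl exec opens a shell inside it; the pod name below is a placeholder and the exec form assumes the image ships a shell:

$ kubectl logs deployment/worker --follow
$ kubectl exec -it <worker-pod-name> -- /bin/sh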

How to get an external IP of a VM running Kubernetes Services

I have hosted Docker images in an Azure VM and I'm trying to access the Service from outside the VM. This is not working because an external IP is not generated for the Service.
After building the Docker image, I applied a YAML file to create the Deployment and Service. My YAML file looks as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
---
I then ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had been created with blank external IPs.
How do I set an external IP for all the services, so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
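Note that service external IPs are shown by kubectl get services rather than kubectl get pods (which lists pod and node IPs), and with a LoadBalancer the EXTERNAL-IP column typically shows <pending> for a short while until the cloud provider finishes provisioning it:

$ kubectl get service planservice-service --watch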

ERR_NAME_NOT_RESOLVED: Angular pod not communicating with Python backend in Kubernetes

I have deployed an Angular frontend and a Python backend in Kubernetes via MicroK8s as separate pods and they are running. I have set the backend URL as 'http://backend-service.default.svc.cluster.local:30007' in my Angular code in order to link the frontend with the backend, but this is raising ERR_NAME_NOT_RESOLVED. Can someone help me understand the issue?
Also, I have a config file which specifies the IPs, ports, and other configuration for my backend. Do I need to make any changes (value of database host? Flask host? ports?) to that file before deploying it to Kubernetes?
Shown below are my deployment and service files for Angular and the backend.
apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: NodePort
  selector:
    app: angular
  ports:
  - protocol: TCP
    nodePort: 30042
    targetPort: 4200
    port: 4200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
  labels:
    name: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      name: angular
  template:
    metadata:
      labels:
        name: angular
    spec:
      containers:
      - name: angular
        image: angular:local
        ports:
        - containerPort: 4200
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    name: backend
  ports:
  - protocol: TCP
    targetPort: 7000
    port: 7000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      name: backend
  template:
    metadata:
      labels:
        name: backend
    spec:
      containers:
      - name: backend
        image: flask:local
        ports:
        - containerPort: 7000
Is your cluster in a healthy state? DNS names are resolved by the coredns pods in the kube-system namespace.
Typically your Angular app calls your API URL from the browser, so the API must be exposed and public. That is not the case here, and I have serious doubts about this setup. Could you share your app architecture?
Moreover, if you expose your service through a NodePort you must not use it for internal access, because you never know which node you will hit.
When a service is exposed, your apps need to use the port attribute (not the nodePort) to reach the backend pods.
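A quick way to check the DNS side is to verify that the CoreDNS pods are running (they normally carry the k8s-app=kube-dns label) and to test resolution from inside a pod rather than from the browser; the pod name is a placeholder and nslookup is only available if the image ships it:

$ kubectl -n kube-system get pods -l k8s-app=kube-dns
$ kubectl exec -it <any-pod> -- nslookup backend-service.default.svc.cluster.local

Also note that, per the backend Service above, an in-cluster caller would use port 7000 (the Service port), not 30007.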

Kubernetes - SSL docker Nodejs

I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Node.js Express API using this deployment.yml:
kind: Service
apiVersion: v1
metadata:
  name: ${GCP_PROJECT_NAME}
spec:
  selector:
    app: ${GCP_PROJECT_NAME}
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: ${STATIC_IP_ADDRESS}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ${GCP_PROJECT_NAME}
  labels:
    app: ${GCP_PROJECT_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${GCP_PROJECT_NAME}
  template:
    metadata:
      labels:
        app: ${GCP_PROJECT_NAME}
    spec:
      containers:
      - name: ${GCP_PROJECT_NAME}
        image: gcr.io/${GCP_PROJECT_ID}/${GCP_PROJECT_NAME}:${CIRCLE_SHA1}
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        env:
        - name: MONGO_URL_PROD
          value: $MONGO_URL_PROD
Everything works great with this setup and it deploys to Kubernetes. When I hit my endpoint, i.e. http://123.345.333.123, there is no SSL, as expected.
I generated my SSL certificates and tried to follow this tutorial (https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs) but I wasn't able to get it working. Could anyone point me in the right direction? What am I doing wrong or what am I missing?
You can use the nginx ingress controller to handle all of your SSL setup and usage. The following is a step-by-step guide:
https://dgkanatsios.com/2017/07/07/using-ssl-for-a-service-hosted-on-a-kubernetes-cluster/
Hope this helps.
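In case it helps, here is a rough sketch of the TLS piece once an nginx ingress controller is installed. The host and secret names are placeholders, the secret is assumed to have been created with kubectl create secret tls from the generated certificate and key, and older clusters may need the extensions/v1beta1 Ingress API instead of networking.k8s.io/v1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com            # placeholder domain pointing at the ingress controller's IP
    secretName: api-tls-secret   # TLS secret holding the certificate and key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ${GCP_PROJECT_NAME}   # the Service defined in the question, port 80
            port:
              number: 80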
This approach didn't work for me. The Ingress was not able to get the cluster IP; it shows <none>.
