I am learning Azure Kubernetes Service (AKS) and trying to deploy a simple app to AKS. I uploaded the image to ACR. It is a very simple app with only one HTML file. I will write down all the files below.
My HTML file is:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Simple</title>
</head>
<body style="background-color:rgb(212, 240, 234);">
    <h1>Simple Application</h1>
</body>
</html>
My Dockerfile is as below, only two lines:
FROM nginx
COPY index.html /usr/share/nginx/html
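For context, getting such an image into ACR typically looks like the following; the registry name simplecc24 is taken from the deployment manifest below, and these exact commands are an assumption rather than a record of what was actually run:

az acr login --name simplecc24
docker build -t simplecc24.azurecr.io/simple:latest .
docker push simplecc24.azurecr.io/simple:latest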
My deployment file is again simple:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
      - name: app1-nginx
        image: simplecc24.azurecr.io/simple:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-loadbalancer-service
  labels:
    app: app1-nginx
spec:
  type: LoadBalancer
  selector:
    app: app1-nginx
  ports:
  - port: 80
    targetPort: 80
When I deploy my YAML file, it has the following outcome:
➜ simple kubectl get no
NAME                                STATUS   ROLES   AGE     VERSION
aks-agentpool-42782457-vmss000000   Ready    agent   8m38s   v1.23.8
aks-agentpool-42782457-vmss000001   Ready    agent   8m52s   v1.23.8
But when I check the status of the pods, it shows this:
➜ simple kubectl get po
NAME                                     READY   STATUS             RESTARTS      AGE
app1-nginx-deployment-7576f5c78b-j59cf   0/1     CrashLoopBackOff   1 (13s ago)   20s
When I try to get the details of my pod, I always see these events:
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  7m59s                   default-scheduler  Successfully assigned default/app1-nginx-deployment-7576f5c78b-j59cf to aks-agentpool-42782457-vmss000001
  Normal   Pulled     7m54s                   kubelet            Successfully pulled image "simplecc24.azurecr.io/simple" in 4.199038838s
  Normal   Pulled     7m52s                   kubelet            Successfully pulled image "simplecc24.azurecr.io/simple" in 146.207115ms
  Normal   Pulled     7m37s                   kubelet            Successfully pulled image "simplecc24.azurecr.io/simple" in 238.000761ms
  Normal   Created    7m10s (x4 over 7m54s)   kubelet            Created container app1-nginx
  Normal   Pulled     7m10s                   kubelet            Successfully pulled image "simplecc24.azurecr.io/simple" in 157.27582ms
  Normal   Started    7m9s (x4 over 7m54s)    kubelet            Started container app1-nginx
  Normal   Pulling    6m29s (x5 over 7m58s)   kubelet            Pulling image "simplecc24.azurecr.io/simple"
  Normal   Pulled     6m29s                   kubelet            Successfully pulled image "simplecc24.azurecr.io/simple" in 178.361528ms
  Warning  BackOff    2m53s (x25 over 7m51s)  kubelet            Back-off restarting failed container
Please help me out with this problem.
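For anyone debugging the same CrashLoopBackOff: the events above only show the restart loop, not its cause, so the usual next step is to read the crashed container's own output (the pod name below is taken from the kubectl get po output above):

kubectl logs app1-nginx-deployment-7576f5c78b-j59cf --previous

The --previous flag prints the logs of the last terminated instance of the container instead of the current attempt.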
Related
So I've read a bunch of similar questions/issues on Stack Overflow, and I understand them well enough, but I'm not sure what I am missing.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev-namespace
  labels:
    web: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      web: nginx
  template:
    metadata:
      labels:
        web: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    web: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
This is my minikube ip:
$ minikube ip
192.168.49.2
This is the service:
$ kubectl get service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.104.139.228   <none>        80:30360/TCP   14
This is the deployment:
$ kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           14h
These are the pods:
$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-5b78696cc8-9fpmr   1/1     Running   0          14h   172.17.0.6   minikube   <none>           <none>
nginx-deployment-5b78696cc8-h4m72   1/1     Running   0          14h   172.17.0.4   minikube   <none>           <none>
These are the endpoints:
$ kubectl get endpoints
NAME            ENDPOINTS                         AGE
nginx-service   172.17.0.4:8080,172.17.0.6:8080   14h
But when I try to curl 10.104.139.228:30360 it just hangs, and when I try to curl 192.168.49.2:30360 I get Connection refused.
I am sure that using NodePort means I need to use the node IP, and that would be the server's local IP, since I am using minikube and the control plane and worker are on the same server.
What am I missing here? Please help, this is driving me crazy. I should mention that I am able to kubectl exec -ti pod-name -- /bin/bash, and if I do a curl localhost I do get the famous "Welcome to nginx" response.
Never mind :/ I feel very foolish. I see that the mistake was the container port :/ my nginx pods are listening on port 80, not port 8080.
For anyone out there, I updated my config files to this:
service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: busy-qa
spec:
  type: NodePort
  selector:
    web: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: busy-qa
  labels:
    web: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      web: nginx
  template:
    metadata:
      labels:
        web: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Now when I curl, I get the nginx response:
$ curl 192.168.49.2:31168
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
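Also worth noting, since this setup is minikube: rather than constructing the URL by hand from minikube ip plus the assigned nodePort, minikube can print the reachable URL for a NodePort service directly:

$ minikube service nginx-service --url

This should print the same http://192.168.49.2:31168 style address used in the curl above.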
I have an Angular Universal application with the following Dockerfile:
FROM node:14-alpine
WORKDIR /app
COPY package.json /app
COPY dist/webapp /app/dist/webapp
ENV NODE_ENV "production"
ENV PORT 80
EXPOSE 80
CMD ["npm", "run", "serve:ssr"]
And I can deploy it to a Kubernetes cluster just fine but it keeps getting restarted every 10 minutes or so:
NAME                      READY   STATUS    RESTARTS   AGE
api-xxxxxxxxx-xxxxx       1/1     Running   0          48m
webapp-xxxxxxxxxx-xxxxx   1/1     Running   232        5d19h
Pod logs are clean, and when I describe the pod I just see:
Last State:     Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Tue, 22 Sep 2020 15:58:27 -0300
  Finished:     Tue, 22 Sep 2020 16:20:31 -0300
Events:
  Type    Reason   Age                      From                           Message
  ----    ------   ----                     ----                           -------
  Normal  Created  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Created container webapp
  Normal  Started  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Started container webapp
  Normal  Pulled   3m31s (x232 over 5d18h)  kubelet, pool-xxxxxxxxx-xxxxx  Container image "registry.gitlab.com/..." already present on machine
This is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $CI_PROJECT_NAME
  namespace: $KUBE_NAMESPACE
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  selector:
    matchLabels:
      app: webapp
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: $CI_PROJECT_NAME
          image: $IMAGE_TAG
          ports:
            - containerPort: 80
How can I tell the reason it keeps restarting? Thanks!
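One way to dig in, given that Last State above shows Reason: Completed with Exit Code: 0 (i.e. the Node process exits cleanly rather than crashing), is to check the previous container instance's logs and the recent cluster events; the pod name here is the one from the get pods output above:

kubectl logs webapp-xxxxxxxxxx-xxxxx --previous
kubectl get events --sort-by='.metadata.creationTimestamp'

The --previous flag prints the logs of the last terminated container rather than the currently running one, which usually reveals what happened right before the exit.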
I am new to Azure Kubernetes. I'm trying to deploy a simple .NET Core web API to Azure Kubernetes Service. I just created the default weather project in VS 2019. I am able to run it in Docker locally fine. I am also able to push the image to an Azure Container Registry without a problem.
I get the error when I do:
kubectl apply -f .\deployment.yml
When I run kubectl get pods after the deploy, I see this:
NAME                               READY   STATUS             RESTARTS   AGE
test-deployment-7564d94c8f-fdz9q   0/1     ImagePullBackOff   0          74s
So then I ran kubectl describe pod test-deployment-7564d94c8f-fdz9q, and these are the errors coming back:
Warning  Failed  (x4 over 15s)  kubelet, aks-agentpool-30270636-vmss000000
Failed to pull image "ipaspoccontreg.azurecr.io/test:dev": [rpc error: code = Unknown desc = image operating system "windows" cannot be used on this platform, rpc error: code = Unknown desc = Error response from daemon: Get https://ipaspoccontreg.azurecr.io/v2/test/manifests/dev: unauthorized: authentication required]
My deployment.yml is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-container
        image: ipaspoccontreg.azurecr.io/test:dev
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
and my service.yml is
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
  - port: 8080
    targetPort: 80
  type: LoadBalancer
You need to create a secret in Kubernetes which will contain your container registry credentials.
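A minimal sketch of that, assuming the registry from the question, an arbitrary secret name (acr-secret), and placeholder credentials:

kubectl create secret docker-registry acr-secret \
  --docker-server=ipaspoccontreg.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>

Then reference the secret from the pod template in deployment.yml:

    spec:
      imagePullSecrets:
      - name: acr-secret
      containers:
      - name: test-container
        image: ipaspoccontreg.azurecr.io/test:dev

Note that authentication is only half of the quoted error: it also says the image's operating system is "windows", so the pull will still fail on a Linux node pool until the image is rebuilt for Linux or scheduled onto a Windows node pool.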
I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image in the repository.
Now I use kubectl to deploy using the deployment file,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created, but when I try to access the external IP of the LB service through a browser, the page is inaccessible. I used the external ip:port and it didn't work.
Any thoughts on why this would be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally built image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
    - protocol: TCP
      port: 80
      targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service but doesn't assign an external IP to the NodePort service, so I am not able to figure out which address I should use to test that the app works. I know I can't choose a LoadBalancer, as that doesn't work locally and needs a cloud provider.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
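A caveat for option 2, assuming the local cluster is minikube (the question doesn't say): a LoadBalancer service on a bare local cluster stays in <pending> with no external IP unless something provisions one. On minikube, running

minikube tunnel

in a separate terminal assigns an external IP to LoadBalancer services; otherwise the NodePort route, <node-ip>:<nodePort>, remains the way in.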
Make sure the deployment has associated healthy pods too (they show as Running, with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry by following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.
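A sketch of that manual assignment, reusing the placeholder names from the command above (this answer suggests the Reader role; newer guidance typically uses the AcrPull role instead):

ACR_ID=$(az acr show --name acrshgpdev1 --resource-group myResourceGroup --query id --output tsv)
CLIENT_ID=$(az aks show --name myAKSCluster --resource-group myResourceGroup --query servicePrincipalProfile.clientId --output tsv)
az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID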
I'm following this Microsoft tutorial to create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. In the "Run the Application" section of this tutorial, I get the following error when running the following command to deploy the application using the YAML config file:
kubectl apply -f sample.yaml
error: error validating "sample.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
Question: As shown in the following sample.yaml file, the apiVersion is already set. So what is this error about, and how can we fix the issue?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample
Issue resolved. The issue was related to copy/paste into Azure Cloud Shell. When you copy/paste content into the vi editor in Azure Cloud Shell, and the content's first letter happens to be 'a', the following may happen:
When vi is opened in normal (command) mode, pasting makes it interpret the leading 'a' as the append command, so you are put into insert mode without that 'a' actually being inserted into the editor. So, in my case the content was pasted as follows (I'm only showing the first few lines here for brevity). Notice the 'a' is missing in the first line, apiVersion: apps/v1:
sample.yaml file:
piVersion: apps/v1
kind: Deployment
metadata:
…..
...
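A way to avoid this entirely, for future readers: before pasting into vi, enable paste mode so the pasted characters are inserted literally instead of being interpreted as commands:

:set paste

Then press i and paste; :set nopaste restores normal behavior afterwards.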
This happens when you use an outdated kubectl. Can you try updating to 1.2.5 or 1.3.0 and running it again?
I fixed it in my case! For more context, feel free to visit here.
Summary:
If there is any file in which you are applying the yaml configs as follows:
kubectl apply -f .
then change that to the following:
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Basically, apply the configs separately, one file at a time; that way you also see exactly which file fails validation.