I'm following this Microsoft tutorial to create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. In the Run the Application section of the tutorial, I get the following error when running the command below to deploy the application using a YAML config file:
kubectl apply -f sample.yaml
error: error validating "sample.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
Question: As shown in the following sample.yaml file, the apiVersion is already set. So what is this error about, and how can we fix the issue?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample
Issue resolved. The issue was related to copying and pasting into Azure Cloud Shell. When you paste content into the vi editor in Azure Cloud Shell and the content's first character happens to be an a, the following may happen:
If vi is in normal (command) mode when you paste, that leading a is interpreted as the "append" command and switches vi to insert mode instead of being inserted as text. So in my case the content was pasted as follows (I'm only showing the first few lines here for brevity); notice that the a is missing from the first line, apiVersion: apps/v1, below:
sample.yaml file:
piVersion: apps/v1
kind: Deployment
metadata:
…..
...
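A quick way to catch this after pasting (a small sketch, assuming the file is saved as sample.yaml as above):
head -1 sample.yaml    # should print exactly: apiVersion: apps/v1
To avoid it in the first place, press i in vi (and optionally run :set paste) before pasting, so the leading a is inserted as text instead of being interpreted as a command.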
This happens when you use an outdated kubectl. Can you try updating to 1.2.5 or 1.3.0 and running it again?
I fixed it in my case! For more context, feel free to visit here.
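If you want to check which client version you are running before upgrading (how you upgrade depends on how kubectl was installed; in Azure Cloud Shell the Azure CLI can install or update it for you):
kubectl version --client
az aks install-cli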
Summary:
If you have any file (for example a deploy script) in which you are applying the YAML configs all at once, as follows:
kubectl apply -f .
then change that to the following:
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Basically, apply the configs separately, file by file.
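If you prefer to keep kubectl apply -f ., a client-side dry run can also point you at the file that is failing validation (a sketch; on older kubectl releases the flag is plain --dry-run):
kubectl apply -f . --dry-run=client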
My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
Now the developer pushes code to this branch.
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one has not been created already.
My application is a Node.js based app, so the Docker container starts with Node.js and deploys the code from the branch "mystory-101".
After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Reformatting answers from the comments: given a Jenkins installation and a Kubernetes cluster, you may automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer using the kubectl client directly, assuming your agents have that binary.
Without going through the RBAC specifics, you would probably need a ServiceAccount for Jenkins and a token for it (which can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy into -- usually the edit ClusterRole, bound with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
    --clusterrole=edit \
    --serviceaccount=my-namespace:jenkins \
    --namespace=my-namespace
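To fetch that token (a sketch: on clusters older than Kubernetes 1.24 a token Secret is created automatically for the ServiceAccount, which is what this reads; on newer clusters you would use kubectl create token jenkins -n my-namespace instead):
kubectl -n my-namespace get secret \
    $(kubectl -n my-namespace get sa jenkins -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d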
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces, and the FQDN requested by your Ingress to match your requirements.
Prepare your deployment YAML, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    metadata:
      labels:
        name: app-BRANCH
    spec:
      containers:
      - image: my-registry/path/to/image:BRANCH
        [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
  [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
  - host: app-BRANCH.my-base-domain.com
    http:
      paths:
      - backend:
          serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
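If the Jenkins agent has no kubeconfig yet, one way to point kubectl at the cluster using the ServiceAccount token from above is roughly the following (a sketch; the API server URL, cluster name, and TLS handling are placeholders you would adapt, and skipping TLS verification is only for illustration):
kubectl config set-cluster my-cluster --server=https://<api-server> --insecure-skip-tls-verify=true
kubectl config set-credentials jenkins --token="$TOKEN"
kubectl config set-context jenkins@my-cluster --cluster=my-cluster --user=jenkins --namespace=my-namespace
kubectl config use-context jenkins@my-cluster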
I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image in the repository.
Now I use kubectl to deploy using the deployment file,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created, but when I try to access the external IP of the LB service through a browser, the page is inaccessible. I used the external IP:port and it didn't work.
Any thoughts why would this be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally created image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service but doesn't assign an external IP to the NodePort service, so I am not able to figure out which service type I should choose to test that the app works. I know I can't choose a LoadBalancer, as that doesn't work locally and would require deploying to a cloud service.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
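Once that Service is applied, you can confirm an external IP gets assigned and then browse to it on port 80 (a quick check using the Service name above):
kubectl get service django-helloworld-service --watch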
Make sure the deployment has associated healthy pods too (they show as Running, with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.
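To confirm whether the pods are actually failing on the image pull, these standard commands help (the label selector matches the deployment above; <pod-name> is a placeholder):
kubectl get pods -l app=django-helloworld
kubectl describe pod <pod-name>    # look for ImagePullBackOff / ErrImagePull events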
So in Docker, I can do a docker run -e to pass in environment variables.
But how does one do that for Azure Kubernetes pods? They aren't username/password kinds of variables, but rather URL segments we would want to use during startup.
For example, http://webapi/august, where august is what we would want to pass in; then in September, we would want to pass in september.
These aren't the best examples, but they show what I'm looking for.
Thanks.
There is a clear example in the Kubernetes documentation for this: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Short example from there:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Take note of the env section.
Later, if you want to change the variable on the fly, you can use the kubectl set env command (see kubectl set env -h).
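For example, reusing the deployment and variable names from the snippet above:
kubectl set env deployment/nginx-deployment DEMO_GREETING="Hello from September"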
I have a private Azure Container Registry that contains two container images, a Windows-based one (mcr.microsoft.com/dotnet/core/samples:aspnetapp) and a Linux-based one (a custom test). I created a secret etc., which seems OK. When I try to deploy those with Kubernetes, the following happens:
The Linux-based container from the private repo starts normally
The Windows-based container from Docker Hub starts normally
The SAME Windows-based container from the private repo throws an error: Back-off pulling image "spintheblackcircleshop.azurecr.io/aspnetapp"
Anyone?
test.yaml:
apiVersion: v1
items:
# basplus deployment
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-private
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: private
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: spintheblackcircleshop.azurecr.io/aspnetapp
        imagePullSecrets:
        - name: mysecret
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-public
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: public
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
        imagePullSecrets:
        - name: mysecret
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: aspnetapp-private-sleep
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: private-sleep
      spec:
        terminationGracePeriodSeconds: 100
        containers:
        - name: xxx
          image: spintheblackcircleshop.azurecr.io/danielm-test-sleep
        imagePullSecrets:
        - name: mysecret
# end
kind: List
metadata: {}
AKS doesn't support Windows nodes yet. There is no way to run Windows containers in AKS at the time of writing (05/05/2019).
Edit: fair point raised by the other answer. You actually can run Windows containers in ACI from AKS, but it's not exactly in AKS :)
Well, AKS does not support Windows nodes currently, but you can still run Windows containers in it by installing the Virtual Kubelet in AKS. It takes advantage of ACI.
See the steps to install the Virtual Kubelet and run a Windows container in the document Use Virtual Kubelet with Azure Kubernetes Service (AKS).
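Once the Virtual Kubelet/ACI connector is installed, the Windows pods have to be scheduled onto the virtual node explicitly. A rough sketch of the extra fields you would add under the pod template's spec (the exact node labels and taints depend on how the connector was installed, so verify them with kubectl describe node on the virtual node):
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: windows
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule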
I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Docker Compose YAML into Kubernetes YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
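For example, assuming the Compose file above is saved as docker-compose.yml and kompose is installed:
kompose convert -f docker-compose.yml
This writes out Kubernetes Deployment and Service manifests that you can then apply with kubectl apply -f.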
So first of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments instead, because Deployments handle Pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: aksrg.azurecr.io/azure-vote-front:voting-dev
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
      - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
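After saving this to a file, you would apply it and wait for the front-end Service to get a public IP (standard commands; the filename is just an example):
kubectl apply -f azure-vote.yaml
kubectl get service azure-vote-front --watch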
In my case, I am deploying my project on GKE via Travis. In my Travis file, I am calling a shell script (deploy.sh).
In the deploy.sh file, I have written all the steps to create kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then, the error is fixed!