kubectl get pods command shows "ErrImageNeverPull" - Azure

I have uploaded my image to ACR. When I try to deploy it using a deployment.yaml with kubectl commands, kubectl get pods shows ErrImageNeverPull for the pods.
Also, I am not using minikube. Is it necessary to use minikube for this?
I am a beginner with Azure/Kubernetes.
I've also used imagePullPolicy: Never in the YAML file. Even without this setting it doesn't work, and instead shows ImagePullBackOff.

As Payal Jindal mentioned in the comments:
It worked fine. There was a problem with my Docker installation.
The problem is now resolved. The way forward is to set the image pull policy to IfNotPresent or Always.
spec:
  containers:
  - imagePullPolicy: Always
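For context, here is a minimal sketch of where that field sits in a Deployment that pulls from ACR; the registry, image, and resource names below are illustrative placeholders, not from the original question.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # Image previously pushed to the Azure Container Registry
          image: myregistry.azurecr.io/myapp:v1
          # Pull from the registry instead of requiring a locally cached image
          imagePullPolicy: Always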

Related

Unable to push Docker Container to Azure Kubernetes Service from Jenkins job build

I am new to Azure and Kubernetes and was trying out the tutorial at https://learn.microsoft.com/en-us/azure/developer/jenkins/deploy-from-github-to-aks#create-a-jenkins-project, but at the last part, deploying the Docker image to AKS, I was unable to do so and ran into errors. I am not familiar with the kubectl set image command and have been searching the web for solutions, to no avail. I would appreciate it if you could share your knowledge if you have experienced this issue before.
The following is the configuration (note: the Docker image is pushed to ACR successfully):
The following is the error from the Jenkins build job:
Most probably you missed the steps in the article you linked where they deploy the app before using Jenkins.
Look, first of all they deploy the azure-vote-front application to AKS:
containers:
- name: azure-vote-front
  image: microsoft/azure-vote-front:v1
Only then will Jenkins see this deployment when it runs kubectl set image deployment/azure-vote-front azure-vote-front=$WEB_IMAGE_NAME --kubeconfig /var/lib/jenkins/config.
So please create the deployment first, as #mmking and common sense suggest.
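In other words, the flow is: create the deployment once, then let Jenkins only swap the image tag. A rough sketch, assuming the manifest file name used by the linked tutorial (azure-vote-all-in-one-redis.yaml) and the $WEB_IMAGE_NAME variable from the question:

# One-time: create the deployment defined in the tutorial's manifest
kubectl apply -f azure-vote-all-in-one-redis.yaml

# In the Jenkins job: point the existing deployment at the freshly pushed image
kubectl set image deployment/azure-vote-front azure-vote-front=$WEB_IMAGE_NAME --kubeconfig /var/lib/jenkins/config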

terraform - mounting a directory in yaml

I am managing instances on Google Cloud Platform and deploying a Docker image to GCP using a Terraform script. The problem I have now with the Terraform script is mounting a host directory into the Docker container when the image starts.
If I run Docker manually, I can do something like this:
docker run -v <host_dir>:<container_local_path> -it <image_id>
But I need to configure the mount directory in the Terraform YAML. This is my Terraform YAML file:
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
I have a directory (/www/project/data) on the host machine. This directory needs to be mounted into the Docker container.
Does anybody know how to mount this directory via this YAML file?
Or any workaround is appreciated.
Thanks.
I found an answer. Please make sure the name ('dataDir') matches between volumeMounts and volumes.
volumeMounts:
  - name: 'dataDir'
    mountPath: '/data/'
volumes:
  - name: 'dataDir'
    hostPath:
      path: '/tmp/logs'
I am assuming that you are loading Docker images onto a container-based Compute Engine instance. My recommendation is to work out your recipe for creating the GCE image and mounting your disk manually using the GCP console. The following gives guidance on that task:
https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers#mounting_a_host_directory_as_a_data_volume
Once you are able to achieve your desired GCP environment by hand, there appears to be a recipe for translating this into a Terraform script as documented here:
https://github.com/terraform-providers/terraform-provider-google/issues/1022
The high-level recipe is recognizing that the Docker command and container specification live in the metadata of the Compute Engine configuration. We can find the desired metadata by running the command manually and looking at the REST request that would achieve it. Once we know the metadata, we can transcribe the equivalent settings into the Terraform script.
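For illustration, the value of that gce-container-declaration metadata key is itself a small YAML container spec, so a host mount can be expressed roughly like the sketch below. This is only an assumption-based sketch of that container spec format; the volume name host-data is made up, and the paths come from the question.

spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      volumeMounts:
        - name: host-data
          # Path inside the container
          mountPath: /container_local_path
  volumes:
    - name: host-data
      hostPath:
        # Directory on the host VM to expose to the container
        path: /www/project/data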

Getting 'didn't match node selector' when running Docker Windows container in Azure AKS

On my local machine I created a Windows (Nano Server) Docker container and was able to push it to an Azure Container Registry using this command. (The reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, and that is not possible on Linux.)
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker container IS visible inside the Azure container registry, which is MyContainerRegistry.
I know that in order to run it I have to create a Container Instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command:
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod is created successfully.
When I run 'get pods' I see the newly created pod.
However, when I look at the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3
nodes are available: 3 node(s) didn't match node selector."
Does it mean that I simply can't run Docker Windows container in Azure using AKS?
Is there any way I can run Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use aks-engine (examples).
Bear in mind that Windows support in Kubernetes is a bit lacking, so you will unfortunately run into issues.
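One quick way to confirm what the scheduler sees is to list the OS label on each node; on a Linux-only AKS cluster every node reports linux, so a windows nodeSelector can never be satisfied. A small check, using the same beta.kubernetes.io/os label as the question:

# Show each node together with its OS label
kubectl get nodes -L beta.kubernetes.io/os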

How to upload a file to kubernetes cluster for my Apps to access it?

Let's say we have an application that accesses a file. The app is a JAR packaged into an image and pushed to a registry for Kubernetes to run. But when we create the Pod, we also need to configure a volume in it. When we specify a volume we give a path, so how do we place the file in that volume from, let's say, our virtual machine?
Please help me understand this with an explanation. Also, should we create storage so that it is accessible from the Kubernetes cluster? Please explain the relevant topics as well so I can understand this.
Note: we are using the Azure CLI.
I think the best approach would be to create a ConfigMap with the data you want your application to use. Then you just need to mount the ConfigMap as a volume in the Pods (explained here) that need the data.
You can easily create a ConfigMap from a file like this:
kubectl create configmap your-configmap-name --from-file=/some/path/to/file
And then mount it in your Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
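One detail worth calling out: the name under configMap: must match the ConfigMap you actually created, so with the create command above it would be your-configmap-name rather than special-config. Assuming the manifest is saved as pod.yaml (a made-up file name), you could verify the mount like this:

kubectl apply -f pod.yaml
# The container just runs "ls /etc/config/" and exits, so the file listing shows up in the logs
kubectl logs dapi-test-pod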

How to specify OpenShift image when creating a Job

Under OpenShift 3.3, I'm attempting to create a Job using the oc command line tool (which apparently lacks argument-based support for Job creation), but I'm having trouble understanding how to make use of an existing app's image stream. For example, when my app does an S2I build, it pushes to the app:latest image stream. I want the Job I'm attempting to create to be run in the context of a new job-specific pod using my app's image stream. I've prepared a test Job using this YAML:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-test-job
spec:
  template:
    spec:
      restartPolicy: Never
      activeDeadlineSeconds: 30
      containers:
      - name: myapp
        image: myapp:latest
        command: ["echo", "hello world"]
When I create the above Job using oc create -f job.yaml, OpenShift fails to pull myapp:latest. If I change image: myapp:latest to image: 172.30.194.141:5000/myapp/myapp:latest (and in doing so, specify the host and port of my OpenShift instance's internal Docker registry), this works, but I'd rather not specify this as it seems like introducing a dependency on an OpenShift implementation detail. Is there a way to make OpenShift Jobs use images from an existing app without depending on such details?
The documentation shows image: perl, but it's unclear how to use a Docker image built and stored within OpenShift.
I learned that you simply cannot yet use an ImageStream with a Job unless you specify the full address to the internal OpenShift Docker registry. Relevant GitHub issues:
https://github.com/openshift/origin/issues/13042
https://github.com/openshift/origin/issues/13161
https://github.com/openshift/origin/issues/12672
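If you do end up hard-coding the registry path, one way to avoid guessing the host and port is to read it from the image stream itself. A sketch, assuming the image stream is named myapp in the current project:

# Print the internal registry repository backing the image stream
oc get imagestream myapp -o jsonpath='{.status.dockerImageRepository}'
# Example output (matching the form used above): 172.30.194.141:5000/myapp/myapp

You can then append the tag (e.g. :latest) and use that value in the Job's image field.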
