How to specify OpenShift image when creating a Job - openshift-enterprise

Under OpenShift 3.3, I'm attempting to create a Job using the oc command line tool (which apparently lacks argument-based support for Job creation), but I'm having trouble understanding how to make use of an existing app's image stream. For example, when my app does an S2I build, it pushes to the app:latest image stream. I want the Job I'm attempting to create to be run in the context of a new job-specific pod using my app's image stream. I've prepared a test Job using this YAML:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-test-job
spec:
  template:
    spec:
      restartPolicy: Never
      activeDeadlineSeconds: 30
      containers:
      - name: myapp
        image: myapp:latest
        command: ["echo", "hello world"]
When I create the above Job using oc create -f job.yaml, OpenShift fails to pull myapp:latest. If I change image: myapp:latest to image: 172.30.194.141:5000/myapp/myapp:latest (and in doing so, specify the host and port of my OpenShift instance's internal Docker registry), this works, but I'd rather not specify this as it seems like introducing a dependency on an OpenShift implementation detail. Is there a way to make OpenShift Jobs use images from an existing app without depending on such details?
The documentation shows image: perl, but it's unclear how to use a Docker image built and stored within OpenShift.

I learned that you simply cannot yet use an ImageStream with a Job unless you specify the full address of the internal OpenShift Docker registry (a workaround sketch follows the issue links below). Relevant GitHub issues:
https://github.com/openshift/origin/issues/13042
https://github.com/openshift/origin/issues/13161
https://github.com/openshift/origin/issues/12672
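If you do need the full registry path, you can at least avoid hard-coding the registry IP by reading it from the image stream itself. As a minimal sketch (assuming the image stream lives in a project named myapp):
# Prints something like 172.30.194.141:5000/myapp/myapp
oc get imagestream myapp -n myapp -o jsonpath='{.status.dockerImageRepository}'
That value, plus a tag such as :latest, is what goes into the Job's image field.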

Related

Azure DevOps Build Agents in Kubernetes

We are planning to run our Azure DevOps build agents in Kubernetes pods, but searching the internet we couldn't find any recommended approach to follow.
Details:
Azure DevOps Server
AKS 1.19.11
Looking for:
An AKS Kubernetes cluster where ADO can trigger its pipelines with the required dependencies.
Pod scaling should happen based on the load coming from ADO.
Is there any default Microsoft-provided image currently available for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions are highly appreciated.
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core (for Windows hosts), or Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer (a sketch follows the list below). Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
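For example, a derivative image that adds a JDK for Java builds could look like the following minimal sketch. It assumes the base agent image from the article was pushed as <acr-server>/dockeragent:latest and uses the distribution's default JDK as a stand-in; if you specifically need the Zulu JDK, install it from Azul's apt repository instead:
# Hypothetical derivative image; use whatever tag you pushed the base agent under
FROM <acr-server>/dockeragent:latest

# Add the build tools your pipelines need (generic JDK and Maven as stand-ins)
RUN apt-get update \
    && apt-get install -y --no-install-recommends default-jdk-headless maven \
    && rm -rf /var/lib/apt/lists/*

# Do not override ENTRYPOINT/CMD: start.sh from the base image must remain the last command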
Note: on AKS, Docker was replaced with containerd as the container runtime in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running Docker inside a container, then you must use Kubernetes version <= 1.18.x (currently not supported on Azure) as shown here, or run the agent in an alternative Docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4 (here) must be changed to the following (the Secret it references is created right after):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: azdevops-agent
        image: <acr-server>/dockeragent:latest
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
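The Deployment above expects a Secret named azdevops holding the organization URL, the PAT and the pool name. Per the linked article, it can be created with something along these lines (all values are placeholders):
kubectl create secret generic azdevops \
  --from-literal=AZP_URL=https://dev.azure.com/<your-org> \
  --from-literal=AZP_TOKEN=<personal-access-token> \
  --from-literal=AZP_POOL=<agent-pool-name>
For Azure DevOps Server, AZP_URL would be your server's collection URL instead.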
Pod scaling should happen based on the load coming from ADO.
You can use cluster-autoscaler and horizontal pod autoscaler. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [Reference]
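As a minimal sketch (assuming the azdevops-deployment above and using CPU utilization as a stand-in scaling signal, which is only a rough proxy for build-agent load and requires CPU requests on the agent container), a horizontal pod autoscaler could look like this:
apiVersion: autoscaling/v2beta2 # autoscaling/v2 on Kubernetes 1.23+
kind: HorizontalPodAutoscaler
metadata:
  name: azdevops-agent-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azdevops-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
The cluster autoscaler is enabled on the AKS node pool itself (for example with az aks update --enable-cluster-autoscaler --min-count 1 --max-count 3), so extra nodes are added when the scaled-up pods no longer fit.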

kubectl get pods command shows "ErrImageNeverPull"

I have uploaded my image on ACR. When I try to deploy it using a deployment.yaml with kubectl commands, the kubectl get pods command shows ErrImageNeverPull in the pods.
Also, I am not using minikube. Is it necessary to use minikube for this?
I am a beginner in azure/kubernetes.
I've also tried imagePullPolicy: Never in the YAML file. It's not working even without it, and then it shows ImagePullBackOff.
As Payal Jindal mentioned in the comment:
It worked fine. There was a problem with my docker installation.
Problem is now resolved. The way forward is to set the image pull policy to IfNotPresent or Always.
spec:
  containers:
    - imagePullPolicy: Always
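In context, the relevant part of a deployment might look like the following sketch (the image reference is a placeholder):
spec:
  containers:
    - name: myapp
      image: <your-acr-name>.azurecr.io/myapp:v1 # placeholder image reference
      imagePullPolicy: IfNotPresent # or Always; Never only works for images already present on the node
Note that pulling from ACR also requires the cluster to be authorized against the registry, for example by attaching the ACR to the AKS cluster or by configuring an imagePullSecret.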

Unable to push Docker Container to Azure Kubernetes Service from Jenkins job build

I am new to Azure and Kubernetes and was trying out the following tutorial at https://learn.microsoft.com/en-us/azure/developer/jenkins/deploy-from-github-to-aks#create-a-jenkins-project; however, at the last part, deploying the Docker image to AKS, I was unable to do so and was faced with errors. I am not familiar with the kubectl set image command and have been searching the web for solutions, but to no avail. I would appreciate it if you could share your knowledge if you have experienced this issue before.
The following is the configuration (note: the Docker image is pushed to ACR successfully):
The following is the error from the Jenkins build job:
Most probably you missed, in the article you linked, the steps where they deploy the app before Jenkins comes into play.
Look, first of all they deploy the azure-vote-front application to AKS:
containers:
- name: azure-vote-front
  image: microsoft/azure-vote-front:v1
Only then will Jenkins see this deployment when it runs kubectl set image deployment/azure-vote-front azure-vote-front=$WEB_IMAGE_NAME --kubeconfig /var/lib/jenkins/config.
So please, create a deployment first as #mmking and common sense suggest.
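In other words, the order of operations is roughly this (the manifest name follows the tutorial's sample voting app; adjust to your own):
# Create the deployment once, outside of Jenkins
kubectl apply -f azure-vote-all-in-one-redis.yaml
# Afterwards, the Jenkins job can update the image on each build
kubectl set image deployment/azure-vote-front azure-vote-front=$WEB_IMAGE_NAME --kubeconfig /var/lib/jenkins/config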

terraform - mounting a directory in yaml

I am managing instances on Google Cloud Platform and deploying a Docker image to GCP using a Terraform script. The problem I have now with the Terraform script is mounting a host directory into the Docker container when the Docker image is started.
If I run Docker manually, I can do something like this:
docker run -v <host_dir>:<container_local_path> -it <image_id>
But I need to configure the mounted directory in the Terraform YAML. This is my Terraform YAML file:
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
I have a directory (/www/project/data) on the host machine. This directory needs to be mounted into the Docker container.
Does anybody know how to mount this directory via this YAML file?
Any workaround is appreciated.
Thanks.
I found an answer. Please make sure the name ('dataDir') matches between volumeMounts and volumes (a full example follows the snippet):
volumeMounts:
  - name: 'dataDir'
    mountPath: '/data/'
volumes:
  - name: 'dataDir'
    hostPath:
      path: '/tmp/logs'
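Put together with the container spec from the question, the whole declaration might look like the following sketch (using the question's host path /www/project/data; the volume name is arbitrary as long as both references match). Note that volumeMounts sits under the container entry, while volumes sits at the spec level:
spec:
  containers:
    - name: MyDocker
      image: "docker_image_name"
      ports:
        - containerPort: 80
          hostPort: 80
      volumeMounts:
        - name: 'dataDir' # must match the volume name below
          mountPath: '/data/' # path inside the container
  volumes:
    - name: 'dataDir'
      hostPath:
        path: '/www/project/data' # directory on the host machine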
I am assuming that you are loading Docker images onto a container-based Compute Engine instance. My recommendation is to work out your recipe for creating your GCE instance and mounting your directory manually using the GCP console. The following will give guidance on that task:
https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers#mounting_a_host_directory_as_a_data_volume
Once you are able to achieve your desired GCP environment by hand, there appears to be a recipe for translating this into a Terraform script as documented here:
https://github.com/terraform-providers/terraform-provider-google/issues/1022
The high-level recipe is recognizing that the Docker command configuration and container specification live in the metadata of the Compute Engine instance. We can find the desired metadata by performing the setup manually and looking at the REST request that would achieve it. Once we know the metadata, we can transcribe it into the equivalent settings in the Terraform script.

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on Elasticsearch, so to run them on Bitbucket Pipelines I would have to execute the docker run command to spin up my Elasticsearch instance during the tests.
But as some of you probably know, there's a limitation on Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how I can spin up my ES cluster with all the configuration I need inside (Painless scripts, mappings, exposed ports) so it is available for my integration tests.
Does anyone have an idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, Bitbucket Pipelines was not showing the error logs for this, so I was completely blind and had to try many things until I found it. Since I can't build and run my own image on Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is how my YAML looked in the end:
image: maven:3.3.9
pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch
definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline
options:
  docker: true
You can try to define your Elasticsearch image as a service as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You could also supply some more configuration inside your elasticsearch.yml:
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
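Putting those options together, the elasticsearch.yml baked into the custom image could look like this:
# elasticsearch.yml for the custom test image
network.host: 127.0.0.1
discovery.type: single-node
# Optional: allow CORS requests during tests
http.cors.enabled: true
http.cors.allow-origin: "*"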
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add the service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # Wait for Elasticsearch; in a real pipeline you should not rely on a fixed sleep.
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048
pipelines:
  pull-requests:
    '**':
      - step: *run-tests
