I'm getting this on Azure CI when running Cypress in a Docker container in a release pipeline. I'm guessing it could be caused by some setting in Cypress, Docker, NodeJS, or the CI itself.
[screenshot of the Cypress output in the CI log]
I tried the NO_COLOR=1 Cypress option in the pipeline, but it didn't seem to have an effect.
The environment variables set in the pipeline will not be active inside the Docker container. The Docker container has its own set of environment variables.
I believe you will be able to resolve your problem by adding
ENV NO_COLOR=1
in the Dockerfile
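A minimal sketch of where that line would go (the base image tag and the test command are assumptions, not taken from the original setup):

FROM cypress/included:12.17.4
ENV NO_COLOR=1
WORKDIR /e2e
COPY . .
CMD ["cypress", "run"]

Alternatively, if you can't change the Dockerfile, passing the variable at run time with docker run -e NO_COLOR=1 ... should have the same effect.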
I'm using an Azure App Service to run my docker image. Running my docker container requires using a couple of environment variables. Locally, when I run my container I just do something like:
docker run -e CREDENTIAL -e USERNAME myapp
However, in the Azure App Service, after defining CREDENTIAL and USERNAME as Application Settings, I'm unsure how to pass these to the container. I see from the logs that on startup Azure passes some of its own environment variables, but if I add a startup command with my environment variables, it tacks them onto the end of the command generated by Azure, creating an invalid command. How can I pass mine to the container?
As I understand it, you want to set environment variables in that Docker container with the -e option.
You don't need to use a startup command for that. Pass these variables as Application Settings:
Application Settings are exposed as environment variables for access by your application at runtime.
Documentation
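For example, the CLI equivalent of adding them in the portal (resource group and app name are placeholders):

az webapp config appsettings set --resource-group <my-resource-group> --name <my-app-name> --settings CREDENTIAL=<value> USERNAME=<value>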
I run tests inside a Docker container on a shell runner (NodeJS/Jest).
How can I fail the GitLab pipeline if tests fail inside the container?
I've tried process.exit(1) in case there are failing tests but it didn't help.
I considered running the tests as part of a Dockerfile RUN step, but I need the environment variables to configure the URLs of the other containers.
Since all the tests are HTTP calls to other containers, I also expect after_script to work for tearing down the environment regardless of the outcome.
When using Docker Compose, you'll need to use the --exit-code-from flag to specify the service whose exit code should be used as the exit code of the docker-compose command.
See also: https://stackoverflow.com/a/43367250/5747944
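For example, a minimal .gitlab-ci.yml sketch (the compose file and the service name tests are assumptions; the runner fails the job whenever the script command exits non-zero, and after_script still runs afterwards):

test:
  stage: test
  script:
    - docker-compose up --build --exit-code-from tests
  after_script:
    - docker-compose down --volumes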
The application was using the Docker CLI to build and then push an image to Azure Container Registry. It used to work fine on Kubernetes using a Python module and docker.sock, but since the cluster was upgraded the Docker daemon is gone. I'm guessing the Kubernetes backend no longer uses Docker or doesn't have it installed. Also, since Docker support is going away in Kubernetes (I think it said 1.24), I want to get away from depending on Docker for the build.
When it was working, the application was a Python application running in a Docker container. It would take the Dockerfile, build it, and push the image to Azure Container Registry. The files that get copied into the image via the Dockerfile all exist in the same directory as the Dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks but I'm not really sure how all the files get copied over to a task and have not been able to find any examples.
I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using Azure ACR Quick Tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, Quick Tasks should work fine for you too.
For simplicity I'm going to list the example for a Quick Task, because that's what I've used most. Try the following steps from your local machine to see how it works. The same steps should also work from any other environment, provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs. You should see every line of the Docker build task appear there.
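If you prefer the CLI, the same run history should also be visible there (the registry name follows the example above):

az acr task list-runs --registry myacr --output table
az acr task logs --registry myacr --run-id <run-id>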
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
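A rough sketch of what that could look like (the pool name, tier and subnet ID are placeholders, and az acr agentpool is a preview feature, so check the current CLI docs):

az acr agentpool create --registry myacr --name myagentpool --tier S1 --subnet-id <subnet-resource-id>
az acr build --registry myacr --agent-pool myagentpool --image myacr.azurecr.io/myimage:v1.0 .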
In my case, I'm using Terraform to run the az acr build command, and the Dockerfile executes the COPY commands without any issues.
In general, I would start the Docker instance on my local machine like docker run -t -i -e 'a=b' ...
Now, I would like to deploy and run my custom Docker image, which I uploaded to the Docker Container Registry before, and start it like the command above, with environment variables.
Checking the Azure CLI for Web Apps, you can see that setting environment variables should in general be possible. But for me it seems these "environment variables" are not the environment variables which are passed to the docker command. Why? Checking the container log, I can see how the Docker container is started; there are no environment variables set.
With Azure Container Instances, it would work like this: az container create ... --environment-variables a=b. These environment variables are passed down to the container/docker. And this is exactly what I am searching for with Web Apps.
Does anyone have experience in deploying Azure Web Apps with custom Docker instances started with environment variables?
I guess I found the solution for the problem:
App Settings are injected into your app as environment variables at runtime.
If you need to set an environment variable for your application, simply add an App Setting in the Azure portal. When your app runs, we will inject the app setting into the process as an environment variable automatically.
How it works via CLI:
az webapp config appsettings set --name <mycontainername> --resource-group <myresourcegroupname> --settings a='b'
Setting all environment variables via the CLI like the command above worked for me. The same is possible via the portal UI in App Settings. If you check how Azure starts the Docker instance, you will see that none of the set environment variables are set during startup (like docker run -d -p 3287:3000 --name <mycontainername> -e a=b), but if you log in to the Docker container and run an echo command for the environment variable, you will see that it has been injected.
Note: Maybe you have to restart the Docker instance in order to have the new environment variables injected.
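For example, a quick way to restart and then verify from inside the container (assuming SSH is enabled for the container; the variable name a and the other names follow the examples above):

az webapp restart --name <mycontainername> --resource-group <myresourcegroupname>
az webapp ssh --name <mycontainername> --resource-group <myresourcegroupname>
printenv a   # run inside the SSH session that opens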
Steps followed during rolling updates:
Create an image for the v2 version of the application with some changes
Re-build the Docker image with Maven (pom.xml). Run this command in SSH or Cloud Shell:
docker build -t gcr.io/satworks-1/springio/gs-spring-boot-docker:v2 .
Push the new, updated Docker image to Google Container Registry. Run this command in SSH or Cloud Shell:
gcloud docker -- push gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Apply a rolling update to the existing deployment with an image update. Run this command in SSH or Cloud Shell:
kubectl set image deployment/spring-boot-kube-deployment-port80 spring-boot-kube-deployment-port80=gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Revalidate the application again through curl or browser
curl 35.227.108.89
and observe the changes take effect.
When do we come across the "CrashLoopBackOff" error and how can we resolve this issue? Does it happen at application level or at kubernetes pods level?
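CrashLoopBackOff generally means the container inside the pod keeps exiting shortly after starting (an application crash on startup, bad configuration, or a failing probe), so the root cause is usually at the application level even though Kubernetes reports it at the pod level. A few commands that typically help narrow it down (the deployment name is taken from the steps above; the pod name is a placeholder):

kubectl rollout status deployment/spring-boot-kube-deployment-port80
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous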