Can't access Azure Pipelines variables in my Docker stack YAML file

I want to deploy a Docker stack from Azure Pipelines. I have set some variables, and I reference these variables in the Docker stack file. However, none of my environment variables are read in the docker stack file. My question: is there any explanation for why I can't read the environment variables in the YAML file?
Below are all my variables.
And here is my docker stack configuration:
version: "3.1"
services:
postgres:
image: "postgres"
volumes:
- /home/db-postgres:/data/db
environment:
POSTGRES_PASSWORD: ${POSTGRESPASSWORD}
POSTGRES_DB: ${POSTGRESDB}
main:
command: "flask run --host=127.0.0.1"
image: "personal-image"
ports:
- 5000:5000
environment:
SECRET_KEY: ${FLASK_SERIALIZER_SECRET}
JWT_SECRET_KEY: ${FLASK_JWT_SECRET}
FLASK_APP: app.py
MAIL_USERNAME: ${MAIL_USERNAME}
MAIL_PASSWORD: ${MAIL_PASSWORD}
APP_ADMINS: ${APP_ADMINS}
SQLALCHEMY_DATABASE_URI: ${SQLALCHEMY_DATABASE_URI}
From the Azure pipeline YAML file, I can read the environment variables, though...
What I don't understand is that in another project I'm doing the exact same thing, and everything works fine.
Edit: here is my azure-pipelines.yml script. The agent is a self-hosted EC2 Linux agent:
steps:
  - bash: |
      echo $(DOCKERREGISTRY_PASSWORD) | docker login --username $(DOCKERREGISTRY_USER) --password-stdin
    displayName: log in to Docker Registry
  - bash: |
      sudo service docker start
      sudo docker stack deploy --with-registry-auth --prune --compose-file stack.staging.yml my_cluster_name
    displayName: Deploy Docker Containers
  - bash: |
      sudo docker system prune --volumes -f
    displayName: Clean memory
  - bash: |
      docker logout
      sudo service docker stop
    displayName: logout of Docker Registry

You can check the Agent Specification of the YAML pipeline in the other projects; they might use different agents.
I created a test pipeline and found that the environment variables in the docker stack file could not be substituted on macOS or Ubuntu agents, but substitution seemed to work on Windows agents.
If you use macOS or Ubuntu agents to run your pipeline, you might need to define the environment variables in the dockerComposeFileArgs field. See below:
- task: DockerCompose@0
  displayName: 'Build services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: Dockerhost
    dockerComposeFileArgs: |
      MAIL_USERNAME=$(MAIL_USERNAME)
      MAIL_PASSWORD=$(MAIL_PASSWORD)
      APP_ADMINS=$(APP_ADMINS)
      SQLALCHEMY_DATABASE_URI=$(SQLALCHEMY_DATABASE_URI)
    action: 'Build services'
Update:
For the bash task, you can try using the env field to map the variables. See below:
- bash: |
    sudo docker stack deploy ...
  displayName: 'Bash Script'
  enabled: false
  env:
    MAIL_USERNAME: $(MAIL_USERNAME)
    MAIL_PASSWORD: $(MAIL_PASSWORD)
    APP_ADMINS: $(APP_ADMINS)
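Applied to the deploy step from the question, that mapping might look like the sketch below. The variable names are taken from the stack file above; note the -E flag, since plain sudo typically resets the environment and would drop the mapped variables. This explicit mapping is required in any case for secret variables, which Azure Pipelines never exposes to scripts automatically.
- bash: |
    sudo service docker start
    # -E asks sudo to preserve the environment so the mapped
    # variables reach docker stack deploy for substitution
    sudo -E docker stack deploy --with-registry-auth --prune --compose-file stack.staging.yml my_cluster_name
  displayName: Deploy Docker Containers
  env:
    POSTGRESPASSWORD: $(POSTGRESPASSWORD)
    POSTGRESDB: $(POSTGRESDB)
    FLASK_SERIALIZER_SECRET: $(FLASK_SERIALIZER_SECRET)
    FLASK_JWT_SECRET: $(FLASK_JWT_SECRET)
    MAIL_USERNAME: $(MAIL_USERNAME)
    MAIL_PASSWORD: $(MAIL_PASSWORD)
    APP_ADMINS: $(APP_ADMINS)
    SQLALCHEMY_DATABASE_URI: $(SQLALCHEMY_DATABASE_URI)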

Related

Unable to deploy to Azure Container Registry from GitLab

I have the following pipeline:
# .gitlab-ci.yml
stages:
  - build
  - push

build:
  stage: build
  services:
    - docker:dind
  image: docker:latest
  script:
    # Build the Docker image
    - docker build -t myfe:$CI_COMMIT_SHA .

push:
  stage: push
  image: bitnami/azure-cli
  script:
    # - echo $DOCKERHUB_PASSWORD | docker login -u $DOCKERHUB_USERNAME --password-stdin
    - echo $ACR_CLIENT_ID | docker login mycr.azurecr.io --username $ACR_CLIENT_ID --password-stdin
    # Push the Docker image to the ACR
    - docker push myfe:$CI_COMMIT_SHA
  only:
    - main
  # before_script:
  #   - echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
  variables:
    DOCKERHUB_USERNAME: $DOCKERHUB_USERNAME
    DOCKERHUB_PASSWORD: $DOCKERHUB_PASSWORD
It results in the following error:
Using docker image sha256:373... for bitnami/azure-cli with digest bitnami/azure-cli@sha256:9128... ...
ERROR: 'sh' is misspelled or not recognized by the system.
Examples from AI knowledge base:
https://aka.ms/cli_ref
Read more about the command in reference docs
Any idea what this might mean?
The bitnami/azure-cli image has an entrypoint of az, so your script lines are being passed to az as parameters.
To solve this, you need to override the entrypoint using entrypoint: [""] in your .gitlab-ci.yml.
For more info check: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#override-the-entrypoint-of-an-image
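For example, the push job from the question could override the entrypoint with GitLab's extended image syntax (a sketch):
push:
  stage: push
  image:
    name: bitnami/azure-cli
    entrypoint: [""]  # run script lines with the default shell instead of az
  script:
    - echo $ACR_CLIENT_ID | docker login mycr.azurecr.io --username $ACR_CLIENT_ID --password-stdin
    - docker push myfe:$CI_COMMIT_SHA
  only:
    - main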
If you want to use an Azure CLI image for this .gitlab-ci.yml file, you should use the official Microsoft image instead:
image: mcr.microsoft.com/azure-cli
Works like a charm!

How to combine docker creation and putting it in DEV Azure Container Registry?

Here is my scenario:
I create a docker image from an SQL dump with the following commands, executed from the command prompt:
docker pull mariadb:10.4.26
docker run --name test_smdb -e MYSQL_ROOT_PASSWORD=<some_password> -p 3306:3306 -d mariadb:10.4.26
docker exec -it test_smdb mariadb --user root -p<some_password>
MariaDB [(none)]> CREATE DATABASE smdb_dev;
docker exec -i test_smdb mariadb -uroot -p<some_password> smdb_dev --force < C:\smdb-dev.sql
But my task now is to create a pipeline that builds this docker image and pushes it to the Azure Container Registry.
I found this link - Build and push Docker images to Azure Container Registry:
https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/containers/acr-template?view=azure-devops
And I see that the result should be a YAML file like this:
- stage: Build
  displayName: Build and push stage
  jobs:
    - job: Build
      displayName: Build job
      pool:
        vmImage: $(vmImageName)
      steps:
        - task: Docker@2
          displayName: Build and push an image to container registry
          inputs:
            command: buildAndPush
            repository: $(imageRepository)
            dockerfile: $(dockerfilePath)
            containerRegistry: $(dockerRegistryServiceConnection)
            tags: |
              $(tag)
But can someone show me how to combine the two things: the docker image creation and pushing it into the Azure Container Registry?
You would need to make a Dockerfile and put it in the repository.
The commands you specified at the top of your question should be your input.
It could look something like this (a rough sketch, untested):
# syntax=docker/dockerfile:1
FROM mariadb:10.4.26
# The image's entrypoint sets the root password and creates the
# database on first startup
ENV MYSQL_ROOT_PASSWORD=<some_password>
ENV MYSQL_DATABASE=smdb_dev
# Any .sql file in this directory is imported on first startup
COPY smdb-dev.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
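With that Dockerfile (and the smdb-dev.sql dump) committed to the repository, the Docker@2 task from the article you linked covers both steps in one go: buildAndPush builds the image and pushes it to ACR. A sketch, with the repository name and tag chosen here purely for illustration:
steps:
  - task: Docker@2
    displayName: Build the MariaDB image and push it to ACR
    inputs:
      command: buildAndPush
      repository: smdb          # hypothetical repository name in your ACR
      dockerfile: Dockerfile
      containerRegistry: $(dockerRegistryServiceConnection)
      tags: |
        dev                     # hypothetical tag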

Running a docker container pulled from Azure Container Registry in Azure DevOps pipeline

I would like to run a docker container which I pull from Azure Container Registry, and I would like to do all of this in an Azure DevOps pipeline.
First I created a sample Node.js app and dockerized it with this tutorial: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
Then I set up my Azure pipeline, which first does build & push and then pull and run.
My pipeline:
stages:
  - stage: Build
    displayName: Build and push stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - task: Docker@2
            displayName: Build and push an image to container registry
            inputs:
              command: buildAndPush
              repository: $(imageRepository)
              dockerfile: $(dockerfilePath)
              containerRegistry: $(dockerRegistryServiceConnection)
              tags: |
                $(tag)
          - task: Docker@2
            displayName: Docker pull
            inputs:
              command: pull
              containerRegistry: $(dockerRegistryServiceConnection)
              arguments: container01.azurecr.io/devopsnodejs:latest
          - task: Docker@2
            displayName: Login to ACR
            inputs:
              command: login
              containerRegistry: $(dockerRegistryServiceConnection)
          - script: |
              docker run -p 49160:8080 -d container01.azurecr.io/devopsnodejs:latest
The pipeline runs every step successfully; the last script with docker run prints this to the Azure DevOps console:
Generating script.
Script contents:
docker run -p 49160:8080 -d ***/devopsnodejs:latest
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/b117892d-e34c-484c-ad8c-f99cd0a97e18.sh
7c6c9d548c4be3e4568e56ffc87cca27e698fc53b5ec15a1595cd45fe72dd143
And now the problem is that I cannot access the app, which should simply respond to a GET request with 'Hello World'.
I'm trying to go to localhost:49160 and to curl -i localhost:49160, but there is only curl: (7) Failed to connect to localhost port 49160 after 2258 ms: Connection refused.
Also, if I do it locally instead of in Azure Pipelines, i.e. I simply run docker pull container01.azurecr.io/devopsnodejs:latest and docker run -p 49160:8080 -d container01.azurecr.io/devopsnodejs:latest in PowerShell, docker ps shows me this container and curl -i localhost:49160 works. Am I only able to access it locally, or can it also work when run in Azure Pipelines?
Have you seen this question already? I think it covers your scenario: Cannot connect to Docker container running in VSTS. A container started on a hosted agent only lives for the duration of the job; once the pipeline finishes, the agent is recycled, so there is nothing left to reach afterwards.
You could use a bash script with ssh to log on to the target machine and perform the docker commands there. The following is just an example; I have something like this in one of my pipelines.
- task: Bash@3
  displayName: "Deploy image $(imageName)"
  inputs:
    targetType: 'inline'
    failOnStderr: false
    script: |
      sshpass -p '$(localroot-userpassword)' ssh -o StrictHostKeyChecking=no $(localroot-username)@$(remoteHost) "{ sudo docker stop $(imageName) ; sudo docker rm $(imageName) ; sudo docker pull $(imageName) ; sudo docker run ... #add other commands separated by semicolon}"

GitLab Container to GKE (Kubernetes) deployment

Hello, I have a problem with GitLab CI/CD. I'm trying to deploy a container to Kubernetes on GKE, but I'm getting an error:
This job failed because the necessary resources were not successfully created.
I created a service account with kube-admin rights and created the cluster via the GitLab GUI, so it's fully integrated. But when I run the job, it still doesn't work.
By the way, I use kubectl get pods in the gitlab-ci file just to test whether Kubernetes is responding.
stages:
  - build
  - deploy

docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    kubernetes:
      namespace: test1
Any ideas?
Thank you.
The namespace should be removed; GitLab creates its own namespace for every project.
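With the namespace block dropped, the deploy job from the question reduces to:
deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production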

Deploy docker container using gitlab ci docker-in-docker setup

I'm currently trying to set up a GitLab CI pipeline, and I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable
services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info

build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action, I actually get the following feedback in my log, but when I check the server, there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to work for deploying containers? What am I missing / doing wrong?
tl;dr: I want to deploy images into production using GitLab CI and a docker-in-docker setup. The job succeeds, but there is no container. The goal is to have a running container on the host after deployment.
Found out that I needed to mount the docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375, I was able to connect to docker on my host system and spawn sibling containers.
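With the socket mounted by the runner, the dind wiring at the top of the .gitlab-ci.yml above becomes unnecessary; a sketch of what changes (assuming the docker:dind service is then unused and dropped as well):
image: docker:stable
# No DOCKER_HOST and no docker:dind service needed: jobs talk to the
# host daemon through the mounted /var/run/docker.sock
variables:
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release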
