In an Azure pipeline I pull and start a Docker image of a MariaDB database:
- bash: |
    docker pull <some_azure_repository>/databasedump:8878
    echo "docker image pulled"
    docker run -d --publish 3306:3306 <some_azure_repository>/databasedump:8878
I would like to make sure that the container has started successfully before continuing with the next steps.
That is why I add this step:
- powershell: |
    if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all')) {
      echo "Will have some sleep"  # should be "Start-Sleep -Seconds 15"
    }
But when this is executed in the Azure pipeline, it gets stuck on this line:
if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
The line
echo "Will have some sleep"
is never reached. Even if I change the condition from negative to positive:
if ((mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
the result is the same.
So, several questions:
1. How do I correctly check whether the MariaDB Docker container is running?
2. Why does the execution get stuck on that line?
3. How can I do this with a while loop (if the check fails, wait 15 seconds, then try another check, and so on), as in the sketch below?
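A minimal retry sketch for question 3, assuming the mysql client is available on the agent. Note there is no space after -p: with a space, the mysql client treats the next token as a database name and prompts interactively for the password, which is the likely reason the step hangs in a non-interactive pipeline.
- bash: |
    # probe until the server accepts connections and the schema exists
    until mysql -h 127.0.0.1 -P 3306 -u root -p"$(SMDB_ROOT_PASSWORD)" -e 'use smdb_all' 2>/dev/null; do
      echo "MariaDB not ready yet, retrying in 15 seconds..."
      sleep 15
    done
    echo "MariaDB is up"
You may also want an upper bound on the retries, so that a wrong password fails the build instead of looping forever.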
I'm trying to see if I can run commands after "entering" a container:
#!/bin/sh
# 1 - create a new user in the db for mautic
podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY /'$1/';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
exit
EOF
This gives me an error: Error: container create failed (no logs from conmon): EOF
But I'm thinking maybe this is not a good use of here docs.
Something like this doesn't work either:
echo $1 | podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p postal-server-1 -e 'select * from deliveries limit 10;'
That's a fine (and common) use of here docs, although you probably want to drop the -t from your podman command line. If I have a mariadb container running:
podman run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret docker.io/mariadb:10
Then if I put your shell script into a file named createdb.sh, modified to look like this for my environment:
podman exec -i mariadb mysql -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY '$1';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
EOF
I've made three changes:
1. I removed the -t from the podman exec command line, since we're passing input on stdin rather than starting an interactive terminal.
2. I removed the unnecessary exit command (the interactive mysql shell will exit when it reaches end-of-file).
3. I removed the weird forward slashes around your quotes (/'$1/' -> '$1').
I can run it like this:
sh createdb.sh secret
And it runs without errors. The database exists:
$ podman exec mariadb mysql -u root -psecret -e 'show databases'
Database
information_schema
mauticdb <--- THERE IT IS
mysql
performance_schema
sys
And the user exists:
$ podman exec mariadb mysql -u root -psecret mysql -e 'select user from user where user="mautic"'
User
mautic
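As for the echo $1 | podman exec ... attempt from the question: piping the password on stdin generally won't work, because the mysql client reads its password prompt from the terminal rather than from stdin. A sketch of that one-liner fixed instead by attaching the password directly to -p (no space) and dropping -t, assuming the database really is named postal-server-1:
podman exec postal-mariadb mysql -h 127.0.0.1 -u root -p"$1" postal-server-1 \
  -e 'select * from deliveries limit 10;'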
I'm experiencing some weird behaviour of Docker in a bash script.
Let's look at these two examples:
logs-are-showed() {
docker rm -f mybash &>/dev/null
docker run -it --rm -d --name mybash bash -c "echo hello; tail -f /dev/null"
docker logs mybash
}
# usage:
# $ localtunnel 8080
localtunnel() {
docker rm -f localtunnel &>/dev/null
docker run -it -d --network host --name localtunnel efrecon/localtunnel --port $1
docker logs localtunnel
}
In the first function, logs-are-showed, the docker logs command returns the logs of the mybash container.
In the second function, localtunnel, the docker logs command doesn't return anything.
After calling the localtunnel function, if I ask for the container's logs from outside the script, the logs show up correctly.
Why does this happen?
Processes take time to react. There may be no logs right after starting a process - it has not written anything yet. Wait a bit.
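A minimal sketch of that waiting, assuming you just want the function to block until the container has produced some output (the 15-second cap is an arbitrary choice):
localtunnel() {
  docker rm -f localtunnel &>/dev/null
  docker run -it -d --network host --name localtunnel efrecon/localtunnel --port "$1"
  # poll until the container has written something, or give up after ~15s
  for i in $(seq 1 15); do
    logs=$(docker logs localtunnel 2>&1)
    [ -n "$logs" ] && break
    sleep 1
  done
  echo "$logs"
}
Alternatively, docker logs -f localtunnel streams the logs as they appear, at the cost of blocking until you interrupt it.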
We have a use case where docker commands are executed remotely on another server:
users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build, and run, which work fine.
I have a command at the end of the script which is supposed to write the docker logs in the background to an EFS file system that is mounted on both servers A and B. This way users can access the log file from server A without actually logging in to server B (to prevent access to the containers).
I have tried all available solutions related to this and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The code below is the script on the remote server. The user calls it from server A over ssh, like ssh id@serverB-IP docker_script.sh:
loc=${args[0]}
cd $loc
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop $imagename && sleep 10 || true
docker build -t $imagename $loc |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p $desired_port:$port)
docker run -d --rm --name $imagename $port_config $imagename
sleep 10
docker_gen_log $loc $imagename
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
}
docker_gen_log(){
loc=${args[0]}
cd $loc
imagename=${args[1]}
docker logs -f $imagename &> run.log &
}
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds that's probably not a concern for you.
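If you do want incremental logs while the container runs, one variation (a sketch using the same names as above) is to stream the logs in the background and then wait for the container:
docker run -d --name "$imagename" $port_config "$imagename"
docker logs -f "$imagename" > "$loc/run.log" 2>&1 &   # stream logs as they are written
docker wait "$imagename"   # block until the container exits
wait                       # let the background log stream drain
docker rm "$imagename"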
I am working with 2 Docker containers. Container 1: when run, it asks the user for input, for example "What's your name?", and stores it. Container 2: takes the user input from container 1 and echoes "Hello <name>".
test.sh:
#!/bin/bash
echo "What's your name?"
read -p "Enter your name: " username
echo "Hello $username!"
Dockerfile:
FROM ubuntu
COPY test.sh ./
ENTRYPOINT [ "./test.sh" ]
You need to connect the two containers together, using the docker network connect command. More information is available at the following webpage:
https://docs.docker.com/engine/reference/commandline/network_connect/
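A minimal sketch of that approach, with hypothetical container names container1 and container2:
# create a user-defined network and attach both running containers to it
docker network create mynet
docker network connect mynet container1
docker network connect mynet container2
# container2 can now reach container1 by name, e.g.:
docker exec container2 ping -c 1 container1
Note that the network only gives the containers connectivity; you still need something inside container1 (a small server, a shared volume, or similar) to actually hand the stored name over to container2.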
I have written a script (test.sh) to reset a MySQL and a Postgres DB in Docker on system A.
When I run test.sh on system A it works fine.
Now I need to run the same file from another system, B.
For this I first have to connect to system A by doing the following in the console:
1. Navigate to the folder
2. Enter the system A id: test@192.111.1.111
3. Enter the password
and then run test.sh from system B.
How can I add the above 3 steps to test.sh so that I don't have to enter them in the console on system B, and can instead just run test.sh on system B and have it do all the work of connecting to system A and resetting the DB?
echo "Resetting postgres Database";
docker cp /home/test/Desktop/db_dump.sql db_1:/home
docker exec -it workflow bash -c "npm run schema:drop"
docker exec -it workflow bash -c "npm run cli schema:sync"
docker exec -it db_1 bash -c "PGPASSWORD=test psql -h db -U postgres -d test_db < /home/db_dump.sql"
echo "ProcessEngine Database Resetting";
docker cp /home/test/test/test/test.sql test:/home
docker exec -it test bash -c "mysql -uroot -ptest -e 'drop database test;'"
docker exec -it test bash -c "mysql -uroot -ptest -e 'create database test;'"
docker exec -it test bash -c "mysql -uroot -ptest -e 'use test; source /home/test.sql;'"
I want to add the ssh connection code to this script so that I can run it from the other system:
1. Navigate to the folder
2. ssh test@192.111.1.111
3. Enter the password
How do I put these 3 steps in my code?
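A minimal sketch of a wrapper you could keep on system B, assuming key-based ssh authentication has been set up once with ssh-copy-id (so no password prompt is needed) and assuming a hypothetical path to the script on system A:
#!/bin/bash
# run from system B; executes the reset script on system A
# one-time setup on system B: ssh-copy-id test@192.111.1.111
ssh -t test@192.111.1.111 'cd /home/test/scripts && ./test.sh'
The -t flag allocates a terminal on the remote side, which the docker exec -it calls in the script expect. If you cannot use keys, sshpass -p <password> ssh ... can supply the password non-interactively, but storing a password in a script is generally discouraged.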