I am working with two Docker containers. Container 1: when run, it asks the user for input (for example, "What's your name?") and stores it. Container 2: takes the user input from container 1 and echoes "Hello <name>".
test.sh:
#!/bin/bash
echo "What's your name?"
read -p "Enter your name: " username
echo "Hello $username!"
Dockerfile:
FROM ubuntu
COPY test.sh ./
ENTRYPOINT ["./test.sh"]
You need to connect the two containers to the same network, using the docker network connect command. More information is on the following page:
https://docs.docker.com/engine/reference/commandline/network_connect/
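As a rough sketch (the network and container names below are placeholders, not from the original post), you could create a user-defined network, run container 1 on it, and attach container 2 to the same network so the two can reach each other by name:
# create a user-defined bridge network
docker network create demo-net
# run container 1 on that network, interactively so it can read the name
docker run -it --network demo-net --name container1 container1-image
# attach an already-running container 2 to the same network
docker network connect demo-net container2
# inside container 2, container 1 is now reachable by its container name, e.g.:
#   ping container1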
In an Azure pipeline I pull and start a Docker image of a MariaDB:
- bash: |
    docker pull <some_azure_repository>/databasedump:8878
    echo "docker image pulled"
    docker run -d --publish 3306:3306 <some_azure_repository>/databasedump:8878
I would like to make sure that the Docker image has started successfully before continuing with the next steps.
That is why I add this step:
- powershell: |
    if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all')) {
        echo "Will have some sleep"  # should be "Start-Sleep -Seconds 15"
    }
But when this is executed in an Azure pipeline, the pipeline gets stuck on this line:
if (!(mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
The line
echo "Will have some sleep"
is never reached. Even if I change the condition from negative to positive:
if ((mysql -h localhost -P 3306 -u root -p $(SMDB_ROOT_PASSWORD) -e 'use smdb_all'))
the result is the same !?!?!
So, several questions:
1. How do I correctly check whether the MariaDB Docker container is running?
2. Why does the execution get stuck on this line?
3. How can I do this with a while loop (if the check is not successful, wait 15 seconds, then try another check, and so on)? See the sketch after this list.
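A minimal sketch of such a retry loop, assuming the container publishes port 3306 on localhost, the mysql client is available on the agent, and the root password is exposed to the script as the environment variable SMDB_ROOT_PASSWORD (all of these are assumptions, not taken from the original pipeline):
#!/bin/bash
# Keep retrying the connection check; give up after 20 attempts.
attempt=0
until mysql -h 127.0.0.1 -P 3306 -u root -p"$SMDB_ROOT_PASSWORD" -e 'use smdb_all' 2>/dev/null; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge 20 ]; then
        echo "MariaDB did not become ready in time" >&2
        exit 1
    fi
    echo "MariaDB not ready yet (attempt $attempt); sleeping 15 seconds..."
    sleep 15
done
echo "MariaDB is up"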
We have a use case where Docker commands are executed remotely on a separate server:
users log in to server A and submit an SSH command which runs a script on remote server B.
The script performs several Docker commands like prune, build, and run, which are working fine.
I have a command at the end of the script which is supposed to write the Docker logs in the background to an EFS file system that is mounted on both
servers A and B. This way users can access the log file from server A without actually logging into server B (to prevent access to the containers).
I have tried all available solutions related to this, and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The code below is the script on the remote server. The user calls this script from server A over SSH, like ssh id@serverB-IP docker_script.sh
loc=${args[0]}
cd $loc
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop $imagename && sleep 10 || true
docker build -t $imagename $loc |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p $desired_port:$port)
docker run -d --rm --name $imagename $port_config $imagename
sleep 10
docker_gen_log $loc $imagename
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
}
docker_gen_log(){
loc=${args[0]}
cd $loc
imagename=${args[1]}
docker logs -f $imagename &> run.log &
}
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds, that's probably not a concern for you.
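As a side note (not part of the original answer), docker wait prints the container's exit status, so a sketch of the same sequence that also propagates a failure could look like this:
status=$(docker wait "$imagename")   # blocks until the container exits and prints its exit code
docker logs "$imagename" > "$loc/run.log" 2>&1
docker rm "$imagename"
[ "$status" -eq 0 ] || { echo "Container $imagename exited with status $status" >&2; exit "$status"; }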
I am completely new to shell scripting. I want to run the SQL image (the image is just there to take a DB dump), take a dump of the DB, and copy the file to the host using a shell script.
How I do it manually is:
1) docker run -it <image_name> bash (this takes me into the image's bash)
2) mysqldump -h <ip> -u <user> -p db > filename.sql
3) docker cp <containerId>:/file/path/within/container /host/path/target (running this on the host machine)
Doing this, I get the dump from the container to the host manually.
But while writing the shell script, I am having a problem with step 1) docker run -it <image_name> bash, since this drops me into the container's bash and I have to type the command manually.
How can I do this in the shell script?
Any help will be greatly appreciated!
If I understand this correctly, you don't want to type those commands manually; instead, the shell script should execute your command as soon as your container is up and running. If you can modify the SQL-related Dockerfile and re-create the image, then use ENTRYPOINT [and, if needed, CMD] to execute a shell script at startup. Check this link for details on ENTRYPOINT shell scripts.
Otherwise, if you cannot recreate the image, check this post, i.e. how to run a bash script from the run command.
NOTE: in both cases you will have to mount your directory/volume, and your mysqldump command should write the dump to this mapped volume/directory.
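A rough sketch of that ENTRYPOINT approach, assuming you can rebuild the image; the base image, script name, /dump mount point, and connection environment variables are all illustrative placeholders, not taken from the original post:
# Dockerfile
FROM mysql:8
COPY dump.sh /usr/local/bin/dump.sh
RUN chmod +x /usr/local/bin/dump.sh
ENTRYPOINT ["/usr/local/bin/dump.sh"]

# dump.sh
#!/bin/sh
# Write the dump into /dump, which is expected to be a host directory mounted into the container.
mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" db > /dump/filename.sql

# run it with the host target directory mounted at /dump:
# docker run --rm -v /host/path/target:/dump -e DB_HOST=<ip> -e DB_USER=<user> -e DB_PASSWORD=<password> <image_name>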
You can pass the command to Bash as a parameter:
docker run -it --name sqldump <image_name> bash -c "mysqldump -h <ip> -u <user> -p db > /tmp/filename.sql"
docker cp sqldump:/tmp/filename.sql /path/on/host/filename.sql
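Note that docker run options such as --name and -it have to appear before the image name; anything placed after the image name is passed as the command and arguments to the container rather than being interpreted by docker.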
Ignore the Docker steps, and just run mysqldump on your host. The -h option is the IP address or DNS name of the host running the database (can be 127.0.0.1 if the container is running on the same host, but not localhost because MySQL misinterprets that); if you mapped the database external port to a non-default port, you also need a -P (capital P) option to specify that port.
For example, if you started the container with
docker run -p 5433:3306 ... mysql:8
then you can take the dump from the host with
mysqldump -h 127.0.0.1 -P 5433 -p db > dump.sql
and not worry about the Docker details at all.
I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after executing the command (binary file), the container stops and I get no shell in it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is a ubuntu:trusty container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash shell (unlike docker run -it fedora bash).
According to its Dockerfile, what it will do is execute
phoronix-test-suite /bin/bash
Meaning, it will pass /bin/bash as a parameter to phoronix-test-suite, which will exit immediately. That leaves you no time to execute a docker exec -it <container> bash in order to open a bash session in the running container.
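If the goal is an interactive shell inside that image, one option (a sketch, not part of the original answer) is to override the image's entrypoint so that bash runs directly:
# run bash instead of the image's phoronix-test-suite entrypoint
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite
# from that shell you can then invoke phoronix-test-suite with whatever arguments you need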
Have you tried restarting Docker? You might need to restart the daemon or even reboot the host.
I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (Hello World), but I want to understand the reason behind the "not found" errors.
Note: This question might look similar to Run shell script on docker from shared volume, but that question addresses permission-related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
The mounted volumes information is captured below.
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should make a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash made by docker run means trying to do DiD (Docker in Docker). It is not relevant for your test.