Run bash commands in docker-compose - Linux

I am using the default image and my requirement is to run a few Linux commands when I run the docker-compose file. The OS is Red Hat. This is my docker-compose file:
version: '3.4'
services:
  rstudio-package-manager:
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    volumes:
      - '$PWD/rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
    command: bash -c mkdir "/tmp/hello"
    ports:
      - '4242:4242'
This is the error:
rstudio-package-manager_1 | mkdir: missing operand
rstudio-package-manager_1 | Try 'mkdir --help' for more information.
Any help would be appreciated
EDIT
I have to run a few commands after the container starts. They can be added as a bash script too. For that, I tried this:
version: '3.4'
services:
  rstudio-package-manager:
    privileged: true
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    environment:
      - RSPM_LICENSE=1212323123123123
    volumes:
      - './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
      - './init.sh:/usr/local/bin/init.sh'
    command:
      - init.sh
    ports:
      - '4242:4242'
Inside init.sh is this:
alias rspm='/opt/rstudio-pm/bin/rspm'
rspm create repo --name=prod-cran --description='Access CRAN packages'
rspm subscribe --repo=prod-cran --source=cran
And that also didn't work. Can anyone help me out?

You are passing 3 arguments to bash:
-c
mkdir
/tmp/hello
but you need to pass only two:
-c
mkdir /tmp/hello
In other words: -c expects a single "word" to follow it. Anything after that is considered a positional parameter.
Therefore:
bash -c 'mkdir /tmp/hello'
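In the Compose file you can also spell the command in list form, which avoids a second round of quote parsing (a minimal sketch of just that key; the rest of the service stays as above):
    command: ["bash", "-c", "mkdir /tmp/hello"]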

Based on your edit, it doesn't sound like you want to change the command when running a container; rather, you want to create a derived image based on an existing one.
You want a Dockerfile which modifies an existing image, adding your files and applying your modifications.
docker-compose.yml:
version: '3.4'
services:
  rstudio-package-manager:
    privileged: true
    build:
      context: folder_with_dockerfile
    image: your_new_image_name
    restart: always
    environment:
      - RSPM_LICENSE=1212323123123123
    volumes:
      - './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
      - './init.sh:/usr/local/bin/init.sh'
    ports:
      - '4242:4242'
folder_with_dockerfile/Dockerfile:
FROM rstudio/rstudio-package-manager:latest
# aliases are not expanded in RUN's non-interactive shell, so call the binary by its full path
RUN /opt/rstudio-pm/bin/rspm create repo --name=prod-cran --description='Access CRAN packages' && \
    /opt/rstudio-pm/bin/rspm subscribe --repo=prod-cran --source=cran
# RUN commands in next layer
# RUN commands in third layer
Then build the new image with docker-compose build and start normally. You could also run docker-compose up --build to build & run.

copy files from docker container to host, or mirror a folder

I have this docker-compose:
version: "3.9"
services:
myserver:
image: <some image>
restart: always
volumes:
- ./res:/tmp/configs
- ./myfolder:/tmp/logs
Should I expect to see the files that are in /tmp/logs inside the container, in the host folder 'myfolder'?
'myfolder' is empty and I want it to be updated with the contents of the /tmp/logs folder in the container.
For formatting purposes, I post this as an answer instead of a comment.
Can you put the following in test.sh on the host?
#!/usr/bin/env bash
testdir=/tmp/docker-compose
rm -rf "$testdir"; mkdir -p "$testdir"/{myfolder,res}
cd "$testdir"
cat << EOF > docker-compose.yml
version: "3.9"
services:
  myserver:
    image: debian
    restart: always
    volumes:
      - ./res:/tmp/configs
      - ./myfolder:/tmp/logs
    command: sleep inf
EOF
docker-compose up
and run bash test.sh on the host and see if it works. The host dir is /tmp/docker-compose/myfolder.
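To check the bind mount from another terminal, something like this should work (myserver being the service name from the generated file):
docker-compose -f /tmp/docker-compose/docker-compose.yml exec myserver touch /tmp/logs/hello
ls /tmp/docker-compose/myfolder   # hello should appear here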
What worked for me was using the full path for the host path, e.g. C:\myfolder.

I am trying to keep a container running using a Docker Compose file. I added tty: true and stdin_open: true, but it still exits with code 0.

Here is my docker compose file.
version: '3'
services:
  web:
    image: emarcs/nginx-git
    ports:
      - 8081:80
    container_name: Avida
    working_dir: /usr/share/nginx/html
    command: bash -c "git clone https://github.com/raju/temp.git && echo "cloned successfully" && mv Avida-ED-Eco /usr/share/nginx/html && echo "Successfully moved the file""
    volumes:
      - srv:/srv/git
      - logs:/var/log/nginx
    environment:
      GIT_POSTBUFFER: 1048576
    stdin_open: true
    tty: true
  firefox:
    image: jlesage/firefox
    ports:
      - 5800:5800
    volumes:
      - my-vol:/data/db
    depends_on:
      - web
volumes:
  my-vol:
    driver: local
  srv:
    driver: local
  logs:
    driver: local
What I am doing is using a Docker nginx image with git installed on it, using that image to clone a repository, and moving that directory to the nginx HTML path to serve it. But after cloning, the container exits and the code goes away. How can I keep the container running without it exiting with code 0? I tried some options such as tty: true and stdin_open: true; nothing works.
The container exits with code 0 because its command finishes. So keep it running:
command:
  - bash
  - -ec
  - |
    git clone https://github.com/raju/temp.git
    echo "cloned successfully"
    mv -v Avida-ED-Eco /usr/share/nginx/html
    echo "Successfully moved the file"
    # keep it running
    sleep infinity
But you would rather create a Dockerfile, in which you would prepare the image.
I changed the format; you can read about | in the YAML documentation. I replaced && with bash's -e option; you can read about it at https://mywiki.wooledge.org/BashFAQ/105 .
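For illustration (my sketch, with cmd1/cmd2 as placeholders), these two invocations abort on the first failing command in much the same way:
bash -c  'cmd1 && cmd2'
bash -ec 'cmd1
cmd2'
(The linked FAQ discusses the edge cases where set -e behaves surprisingly.)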
If you want to do some modification and then start nginx, you should:
docker inspect the image and get the command it uses, or inspect the Dockerfile the image was built with
invoke the shell as you do and do the stuff that you want to do
and then call the command that the image was previously calling, as in the sketch below.
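A sketch of that approach (assuming the image's original command is the stock nginx -g 'daemon off;'; verify with docker inspect --format '{{json .Config.Cmd}}' emarcs/nginx-git):
command:
  - bash
  - -ec
  - |
    git clone https://github.com/raju/temp.git
    mv -v Avida-ED-Eco /usr/share/nginx/html
    # hand control back to the web server, in the foreground
    exec nginx -g 'daemon off;'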
You're trying to set an individual container's command: to copy some code into the container every time it starts up. It'd be better to write a Dockerfile to do this work, only once. You can start a Dockerfile FROM any image you want and do any setup work you need to do in the container. It will inherit the ENTRYPOINT and/or CMD from the base image if you don't override them.
# Dockerfile
FROM emarcs/nginx-git
RUN mkdir /x \
 && git clone https://github.com/raju/temp.git /x/temp \
 && mv /x/temp/Avida-ED-Eco /usr/share/nginx/html \
 && rm -rf /x
# (Can you `COPY ./ /usr/share/nginx/html/` instead?)
Then in your Compose file, specify build: instead of the Docker Hub image:, do not override command:, and do not overwrite parts of the image with named volumes.
version: '3.8'
services:
  web:
    build: .
    ports:
      - 8081:80
    # this is all that's required
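Then rebuild and run in one step:
docker-compose up --build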

Docker compose can't find named Dockerfile

my project structure is defined like this (names are just for example):
- docker-compose.yml
- env_files
  - foo.env
- dockerfiles
  - service_1
    - foo.Dockerfile
    - requirements.txt
- code_folder_1
  - ...
- code_folder_2
  - ...
In my docker-compose:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ./dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ./env_files/foo.env
Dockerfile:
FROM python:3.8-slim
WORKDIR /some_work_dir
COPY ./dockerfiles/intermediary/requirements.txt .
RUN pip3 install --upgrade pip==21.3.1 && \
pip3 install -r requirements.txt
COPY . .
EXPOSE 80
And after I run docker compose build in the directory where the compose file is located, I get this error:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4158855397/Dockerfile: no such file or directory
I really do not understand why this is happening. I need to set context: . because I have multiple folders that I need to COPY inside foo.Dockerfile.
The same error was replicated on macOS Monterey 12.5.1 and Ubuntu 18.04.4 LTS (Bionic Beaver).
I solved a similar issue by writing a script like the following:
#!/bin/bash
SCRIPT_PATH=$(readlink -f "$(dirname "$0")")
SCRIPT_PATH=${SCRIPT_PATH} docker-compose -f "${SCRIPT_PATH}/docker-compose.yaml" up -d --build "$@"
And changing the YAML into:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ${SCRIPT_PATH}/dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ${SCRIPT_PATH}/env_files/foo.env
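Usage would be something like this (assuming you saved the script as up.sh next to docker-compose.yaml; the name is illustrative):
chmod +x up.sh
./up.sh             # extra arguments are passed through to docker-compose up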

Passing variable from makefile to docker compose file?

In my docker-compose file there are more than 3 services. I am passing two variables from the docker command with a makefile. But I'm facing a problem: after executing the first command, the similar second command does not execute.
See this example for a better understanding.
The docker-compose file is:
version: '3.7'
services:
  ping:
    container_name: ping_svc
    image: "${PING_IMAGE_NAME}${PING_IMAGE_TAG}"
    ports:
      - 8080:8080
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=ping
    tty: true
  id:
    container_name: id_svc
    image: "${ID_IMAGE_NAME}${ID_IMAGE_TAG}"
    ports:
      - 8081:8081
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=id
    tty: true
And my makefile command is:
# setting ping_image
@PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" docker-compose up -d
# setting id_image
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" docker-compose up -d
The PING_IMAGE_NAME and PING_IMAGE_TAG settings are applied successfully, but the next line does not execute. Why?
Is there any better way to do this?
I solved this by putting all the variables on one recipe line, like this:
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" \
PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" \
docker-compose up -d ping id
Here ping and id are my service names. Maybe the issue was that I was bringing docker-compose up twice.
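That matches how make works: each recipe line runs in its own shell, so variables set on one line are gone on the next, and the second docker-compose up recreated the services with the first pair of variables unset. A minimal Makefile target using the one-line pattern (the target name up is illustrative):
up:
	@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" \
	PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" \
	docker-compose up -d ping id
The trailing backslashes keep all three lines in a single shell invocation.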

How to pass parameters to a program run in a Docker container?

I'm new to Docker and I want to use an Odoo Docker image.
I want to update a module. Normally I would restart the server and start it again with
odoo -u module_name
But I cannot find how to do this with Docker container.
In other words, I would like to start the application inside the container with specific parameters. I would like to do something like this:
docker start odoo -u module_name
How to do it?
You can use docker-compose and inside the Odoo service add the command below:
tty: true
command: -u target_modules_to_update
If you don't have a docker-compose file, let me know and I will help.
@tjwnuk you can add the following code in a file named docker-compose.yml:
version: '2'
services:
  web:
    image: odoo:version_number_here
    depends_on:
      - db
    ports:
      - "target_port_here:8069"
    tty: true
    command: -u module_name -d database_name  # plus any other config parameters you need
    volumes:
      - odoo:/var/lib/odoo
      - ./config/odoo.config:/etc/odoo/odoo.conf
      - ./addons-enterprise:/mnt/enterprise-addons
      - ./custom_addons:/mnt/extra-addons
    restart: always
  db:
    image: postgres:version_number_here
    ports:
      - "1432:5432"
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
    volumes:
      - "odoo-dbs:/var/lib/postgresql/data/pgdata"
    restart: always
volumes:
  odoo:
  odoo-dbs:
And you should have these dirs:
1. config: inside this dir you should have a file named odoo.config; you can add any configuration there as well.
2. addons-enterprise: this is for enterprise addons, if you have them.
3. custom_addons: this is for your custom addons.
Finally, if you don't have docker-compose installed, you can run the command below:
apt install docker-compose
I hope this answer helps you and anyone who has the same case.
Either switch into the container and start another Odoo instance:
docker exec -ti <odoo_container> bash
odoo -c <path_to_config> -u my_module -d my_database --max-cron-threads=0 --stop-after-init --no-xmlrpc
or switch into the database container and set the module's state to 'to upgrade':
docker exec -ti <database_container> psql -U <odoo_db_user> <db_name>
UPDATE ir_module_module set state = 'to upgrade' where name = 'my_module';
Using the database way requires a restart of Odoo.
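For example (container name as above):
docker restart <odoo_container>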
