Passing variables from a makefile to a docker-compose file? - linux

In my docker-compose file there are more than three services. I am passing two variables to the docker command from a makefile. But I'm facing a problem: after the first command executes, the similar second command does not execute.
See this example for a better understanding.
The docker-compose file is:
version: '3.7'
services:
  ping:
    container_name: ping_svc
    image: "${PING_IMAGE_NAME}${PING_IMAGE_TAG}"
    ports:
      - 8080:8080
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=ping
    tty: true
  id:
    container_name: id_svc
    image: "${ID_IMAGE_NAME}${ID_IMAGE_TAG}"
    ports:
      - 8081:8081
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=id
    tty: true
And my makefile commands are:
# setting ping_image
@PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" docker-compose up -d
# setting id_image
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" docker-compose up -d
The PING_IMAGE_NAME and PING_IMAGE_TAG variables were set successfully, but the next line does not execute. Why?
Is there a better way to do this?

I solved this by putting all the variables on one line, like this:
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" \
PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" \
docker-compose up -d ping id
Here ping and id are my service names.
Maybe the issue was that I was bringing the stack up twice: make runs each recipe line in its own shell, so the variables set on the first line were gone when the second docker-compose up ran.
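An alternative sketch, assuming GNU Make (the up target name is illustrative): exporting the variables makes them visible to every recipe line, so the one-line trick isn't needed:
# hypothetical makefile; values mirror the ones above
export PING_IMAGE_NAME := ping-svc:
export PING_IMAGE_TAG  := 1.0
export ID_IMAGE_NAME   := id-svc:
export ID_IMAGE_TAG    := 1.0

up:
	docker-compose up -d ping id
(Recipe lines must be indented with a tab.)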

Related

Dockerfile and Docker Compose for NestJS app with PSQL DB where env vars are expected at runtime

I'm Dockerizing a simple Node/JS (NestJS -- but I don't think that matters for this question) web service and have some questions. This service talks to a Postgres DB. I would like to write a Dockerfile that can be used to build an image of the service (let's call it my-service) and then write a docker-compose.yml that defines a service for the Postgres DB as well as a service for my-service that uses it. That way I can build images of my-service but also have a Docker Compose config for running the service and its DB at the same time together. I think that's the way to do this (keep me honest though!). Kubernetes is not an option for me, just FYI.
The web service has a top-level directory structure like so:
my-service/
.env
package.json
package-lock.json
src/
<lots of other stuff>
It's critical to note that in its present, non-containerized form, you have to set several environment variables ahead of time, including the Postgres DB connection info (host, port, database name, username, password, etc.). The application code fetches the values of these env vars at runtime and uses them to connect to Postgres.
So, I need a way to write a Dockerfile and docker-compose.yml such that:
- if I'm just running a container of the my-service image by itself, and want to tell it to connect to any arbitrary Postgres DB, I can pass those env vars in as (ideally) runtime arguments on the Docker CLI command (however, remember the app expects them to be set as env vars); and
- if I'm spinning up my-service and its Postgres together via the Docker Compose file, I need to also specify those as runtime args on the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments, and then the container needs to set them as env vars for the web service to use
Again, I think this is the correct way to go, but keep me honest!
So my best attempt -- a total WIP so far -- looks like this:
Dockerfile
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# creates "dist" to run out of
RUN npm run build
# ideally the env vars are already set at this point via
# docker CLI arguments, so nothing to pass in here (???)
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    image: ??? any way to say "build what's in the repo"?
    environment:
      ??? do I need to set anything here so it gets passed to the my-service
      container as env vars?
volumes:
  pgdata:
Can anyone help nudge me over the finish line here? Thanks in advance!
??? do I need to set anything here so it gets passed to the my-service
container as env vars?
Yes, you should pass the variables there. This is a principle of 12-factor design.
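As a small illustration (PG_USER/PG_PASSWORD are assumed names, not from the question), compose can also pass variables straight through from the shell without hard-coding values:
environment:
  PG_USER:        # no value here: taken from the host shell environment
  PG_PASSWORD: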
need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments
If you don't put them directly in the YAML, will this option work for you?
docker-compose --env-file app.env up
Ideally, you also put
depends_on:
  - postgres
so that when you start your service, the database will also start up.
If you want to connect to a different database instance, then you can either create a separate compose file without that database, or use a different set of variables (written out, or using env_file, as mentioned)
Or you can use the npm dotenv or config packages and load different .env files for different database environments, based on other variables such as NODE_ENV, at runtime.
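As a sketch of that idea (the file names and the NODE_ENV convention are assumptions), compose can even pick the env_file per environment, since it interpolates variables in the file path:
my-service:
  env_file:
    - .env.${NODE_ENV:-development}   # e.g. .env.development or .env.production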
??? any way to say "build what's in the repo"?
Use the build directive instead of image.
Kubernetes is not an option for me, just FYI
You could use Minikube instead of Compose... Doesn't really matter, but kompose exists to convert a Docker Compose into k8s resources.
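For reference, kompose's basic conversion command (a sketch; the generated resource files vary with the compose contents):
kompose convert -f docker-compose.yml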
Your Dockerfile is correct. You can specify the environment variables while doing docker run like this (note that all options must come before the image name):
docker run --name my-service -it -e PG_USER='user' -e PG_PASSWORD='pass' \
  -e PG_HOST='dbhost' -e PG_DATABASE='dbname' --expose <port> <image>
Or you can specify the environment variables with the help of an .env file. Let's call it app.env. Its content would be:
PG_USER=user
PG_PASSWORD=pass
PG_DATABASE=dbname
PG_HOST=dbhost
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Now, instead of specifying multiple -e options to the docker run command, you can simply give the name of the file from which the environment variables should be picked up:
docker run --name my-service -it --env-file app.env --expose <port> <image>
In order to run postgres and your service with a single docker compose command, a few modifications need to be done in your docker-compose.yml. Let's first see the full YAML.
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: .  # instead of the image directive, use build to tell docker what folder to build
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres  # note the name of the postgres service in the compose yaml
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
Now you can use the docker compose up command to run the services. If you wish to rebuild the my-service container each time, you can pass the optional --build flag, like this: docker compose up --build.
In order to pass the environment variables from the CLI, there's only one way, which is by use of an .env file. In the case of your docker-compose.yml, the app.env would look like:
PG_USER=user
PG_PASSWORD=pass
#PG_DATABASE=dbname #not required as you're using 'my-service-db' as db name in compose file
#PG_HOST=dbhost #not required as service name of postgres in compose file is being used as db host
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Passing this app.env file using docker compose CLI command would look like this:
docker compose --env-file app.env up --build
PS: If you're rebuilding my-service each time just so your code changes are reflected in the docker container, you could use a bind mount instead. The updated docker-compose.yml in that case would look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: .
    volumes:
      - .:/usr/src/app  # note the use of a bind mount here
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
This way you don't need to run docker compose build each time; a code change in the source folder will be reflected in the docker container.
You just need to add the path of your Dockerfile to the build parameter in the docker-compose.yaml file, and put all the environment variables under environment:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: path_to_your_dockerfile_folder
    environment:
      your_environment_variables_here
volumes:
  pgdata:
I am guessing that you have a folder structure like this:
project_folder/
  docker-compose.yaml
  my-service/
    Dockerfile
    .env
    package.json
    package-lock.json
    src/
    <lots of other stuff>
and your .env contains the following:
API_PORT=8082
Environment_var1=Environment_var1_value
Environment_var2=Environment_var2_value
So in your case your docker-compose file should look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: ./my-service/  # build from the folder containing the Dockerfile
    environment:
      - API_PORT=8082
      - Environment_var1=Environment_var1_value
      - Environment_var2=Environment_var2_value
volumes:
  pgdata:
FYI: with this docker configuration your database connection host should be postgres (as per the service name), not localhost.
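To illustrate (values assumed rather than taken from the question), the connection settings seen from inside the compose network would be:
PG_HOST=postgres   # the compose service name, not localhost
PG_PORT=5432       # the container-internal port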

How to pass parameters to program ran in Docker container?

I'm new to Docker and I want to use an Odoo Docker image.
I want to update a module; normally I would restart the server and start it again with
odoo -u module_name
But I cannot find how to do this with Docker container.
In other words, I would like to start the application inside the container with specific parameters.
I would like to do something like this:
docker start odoo -u module_name
How to do it?
You can use docker-compose and, inside the Odoo service, add the command below:
tty: true
command: -u target_modules_to_update
If you don't have a docker-compose file, let me know and I will help.
@tjwnuk, you can add the following code in a file named docker-compose.yml:
version: '2'
services:
  web:
    image: odoo:version_number_here
    depends_on:
      - db
    ports:
      - "target_port_here:8069"
    tty: true
    command: -u module_name -d database_name  # or any config parameter you need to add
    volumes:
      - odoo:/var/lib/odoo
      - ./config/odoo.config:/etc/odoo/odoo.conf
      - ./addons-enterprise:/mnt/enterprise-addons
      - ./custom_addons:/mnt/extra-addons
    restart: always
  db:
    image: postgres:version_number_here
    ports:
      - "1432:5432"
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
    volumes:
      - "odoo-dbs:/var/lib/postgresql/data/pgdata"
    restart: always
volumes:
  odoo:
  odoo-dbs:
And you should have these dirs:
1. config — inside this dir you should have a file named odoo.config; you can add any configuration there as well
2. addons-enterprise — for enterprise addons, if you have them
3. custom_addons — for your custom addons
Finally, if you don't have docker-compose installed, you can run the command below:
apt install docker-compose
I hope this answer helps you and anyone who has the same case.
Either switch into the container and start another Odoo instance:
docker exec -ti <odoo_container> bash
odoo -c <path_to_config> -u my_module -d my_database --max-cron-threads=0 --stop-after-init --no-xmlrpc
or switch into the database container and set the module's state to 'to upgrade':
docker exec -ti <database_container> psql -U <odoo_db_user> <db_name>
UPDATE ir_module_module set state = 'to upgrade' where name = 'my_module';
Using the database approach requires a restart of Odoo.
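Another hedged option, assuming the compose service is named web as in the file above: run the upgrade in a one-off container that exits when done, since arguments placed after the service name replace the configured command:
docker-compose run --rm web -u module_name -d database_name --stop-after-init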

Run bash commands in docker-compose

I am using a default image and my requirement is to run a few Linux commands when I run the docker-compose file. The OS is Red Hat. This is my docker-compose file:
version: '3.4'
services:
  rstudio-package-manager:
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    volumes:
      - '$PWD/rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
    command: bash -c mkdir "/tmp/hello"
    ports:
      - '4242:4242'
This is the error:
rstudio-package-manager_1 | mkdir: missing operand
rstudio-package-manager_1 | Try 'mkdir --help' for more information.
Any help would be appreciated
EDIT
I have to run a few commands after the container starts; they could be added as a bash script too. For that, I tried this:
version: '3.4'
services:
  rstudio-package-manager:
    privileged: true
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    environment:
      - RSPM_LICENSE=1212323123123123
    volumes:
      - './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
      - './init.sh:/usr/local/bin/init.sh'
    command:
      - init.sh
    ports:
      - '4242:4242'
Inside init.sh is this
alias rspm='/opt/rstudio-pm/bin/rspm'
rspm create repo --name=prod-cran --description='Access CRAN packages'
rspm subscribe --repo=prod-cran --source=cran
And that also didn't work. Can anyone help me out?
You are passing three arguments to bash:
-c
mkdir
/tmp/hello
but you need to pass only two:
-c
mkdir /tmp/hello
In other words: -c expects a single "word" to follow it. Anything after that is considered a positional parameter.
Therefore:
bash -c 'mkdir /tmp/hello'
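In the compose file, that fix becomes:
command: bash -c 'mkdir /tmp/hello'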
Based on your edit, it doesn't sound like you want to change the command when running a container; rather, you want to create a derived image based on an existing one.
You want a Dockerfile which modifies the existing image, adding your files and applying your modifications.
docker-compose.yml:
version: '3.4'
services:
rstudio-package-manager:
privileged: true
build:
context: folder_with_dockerfile
image: your_new_image_name
restart: always
environment:
- RSPM_LICENSE=1212323123123123
volumes:
- './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
- './init.sh:/usr/local/bin/init.sh'
ports:
- '4242:4242'
folder_with_dockerfile/Dockerfile:
FROM rstudio/rstudio-package-manager:latest
# aliases are not expanded in non-interactive RUN shells, so call the binary by its full path
RUN /opt/rstudio-pm/bin/rspm create repo --name=prod-cran --description='Access CRAN packages' && \
    /opt/rstudio-pm/bin/rspm subscribe --repo=prod-cran --source=cran
# RUN commands in next layer
# RUN commands in third layer
Then build the new image with docker-compose build and start normally. You could also run docker-compose up --build to build & run.

How do I get my app in a docker instance to talk to my database in another docker instance inside the same network?

Ok I give up! I spent far too much time on this:
So I want my app inside a docker container to talk to my postgres which is inside another container.
Docker-compose.yml
version: "3.8"
services:
foodbudget-db:
container_name: foodbudget-db
image: postgres:12.4
restart: always
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: foodbudget
PGDATA: /var/lib/postgresql/data
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- 5433:5432
web:
image: node:14.10.1
env_file:
- .env
depends_on:
- foodbudget-db
ports:
- 8080:8080
build:
context: .
dockerfile: Dockerfile
Dockerfile
FROM node:14.10.1
WORKDIR /src/app
ADD https://github.com/palfrey/wait-for-db/releases/download/v1.0.0/wait-for-db-linux-x86 /src/app/wait-for-db
RUN chmod +x /src/app/wait-for-db
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
EXPOSE 8080
But I keep getting this error when I build the Dockerfile, even though the database is up and running when I run docker ps. I tried connecting to the postgres database from my host machine, and it connected successfully...
Temporary error (pausing for 3 seconds): PostgresError { error: Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name does not resolve" })) }
Has anyone created an app that talks to a db in another docker instance before?
This line is the issue:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
You must use the internal port of the docker container (5432) instead of the exposed one within a network:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000
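Note also that a RUN instruction executes at image build time, before the compose network and the foodbudget-db container exist, which can likewise produce "Name does not resolve". A hedged sketch that defers the wait to container start instead (node index.js is a placeholder for the real start command):
CMD ["sh", "-c", "./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000 && node index.js"]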

How to change default user 'flink' on 'root' in docker container?

I run Flink as a docker container from a docker-compose file. Here is a part of it:
jobmanager:
  image: flink:1.7.2-scala_2.11-alpine
  restart: always
  volumes:
    - type: bind
      source: ./app-folders/data__unzip
      target: /data_unzip
  expose:
    - "6123"
  ports:
    - "8081:8081"
  command: jobmanager
  environment:
    - JOB_MANAGER_RPC_ADDRESS=jobmanager
  networks:
    - dwh-network
When I try to add
user: root
to my compose file, it doesn't work, and when Flink starts I see in the logs:
- OS current user: flink
So I see it was somehow baked in, maybe when the image was built... but is there a way to change it to 'root'?
I found an answer: you need to replace docker-entrypoint.sh with your own file by adding a volume from your host machine, and correct the lines in it from "gosu flink ..." / "su-exec flink ..." to "gosu root ..." / "su-exec root ...".
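A minimal sketch of that override, assuming the entrypoint script lives at /docker-entrypoint.sh inside the flink image:
jobmanager:
  volumes:
    - ./docker-entrypoint.sh:/docker-entrypoint.sh   # patched copy using "gosu root" / "su-exec root"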
