Add a Postgres database in a Dockerfile - Linux

I have a problem to solve. I have a Dockerfile based on a Tomcat image, and I would also like to add a Postgres database to it, but I have no idea how to create a database in the Dockerfile or how to put it all together. Here is my Dockerfile:
FROM tomcat:9.0.65-jdk11
RUN apt update
RUN apt install vim -y
RUN apt-get install postgres12
COPY ROOT.war /usr/local/tomcat/webapps/
WORKDIR /usr/local/tomcat
CMD ["catalina.sh", "run"]
There is more, but it is just the usual mkdir and COPY of files. Do you have any idea? Maybe a bash script that runs when the container is built and creates my database? I know some people will tell me to take an Ubuntu image and install Tomcat and Postgres in it, but I want to keep permission handling simple and shorten my work.

Use docker-compose and create a database separately:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_USER: user
    #volumes: # if you want to create your db
    #  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    #ports: # if you want to access it from your host
    #  - 5432:5432
  tomcatapp:
    build: .
    image: tomcatapp
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
Note: the Dockerfile, init.sql (if you want to init your db), and docker-compose.yml should be in the same folder.
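For reference, a minimal init.sql could look something like this (the database, table, and column names here are made up purely to show the idea; any *.sql file placed in /docker-entrypoint-initdb.d runs once, on the first start of the db container with an empty data volume):
-- hypothetical init script, adjust names to your application
CREATE DATABASE mydb;
\c mydb
CREATE TABLE example (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);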

Related

How do I run a PostgreSQL container using docker-compose with startup files that depends on .env values, then running node-pg-migrate?

I want the following flow.
PostgreSQL runs startup script with the values from .env file
PostgreSQL runs successfully
App runs successfully
Run the migration commands. I have already created the migration scripts using the node-pg-migrate module; I simply need to run npm run migrate up after the app starts successfully.
I have the following startup scripts for my PostgreSQL.
First file (./initdb.d/001_create_database.sql)
CREATE DATABASE ${PGDATABASE};
Second file (./initdb.d/002_create_user.sql)
CREATE USER ${PGUSER} WITH ENCRYPTED PASSWORD '${PGPASSWORD}';
GRANT ALL PRIVILEGES ON DATABASE ${PGDATABASE} TO ${PGUSER};
GRANT USAGE, CREATE ON SCHEMA public to ${PGUSER};
I have the following .env file.
#db config
PGUSER={something}
PGHOST={something}
PGPASSWORD={something}
PGDATABASE={something}
PGPORT={something}
I have the following Dockerfile for my Node.js app.
FROM node:14
RUN apt-get update && apt-get install -y gettext-base
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN git clone https://github.com/vishnubob/wait-for-it.git
EXPOSE 5000
CMD [ "npm", "run", "start-dev" ]
Then, I have the following docker-compose.yml file.
version: "3.7"
services:
app:
build: .
ports:
- "${PORT}:${PORT}"
env_file:
- .env
depends_on:
- postgres
- redis
- rabbitmq
restart: on-failure
postgres:
image: postgres:13
env_file:
- .env
ports:
- "${PGPORT}:${PGPORT}"
volumes:
- "./initdb.d:/docker-entrypoint-initdb.d"
- musicapidata:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: root
POSTGRES_DB: root
POSTGRES_USER: root
redis:
...
rabbitmq:
...
migration:
build:
context: .
command:
[
"./wait-for-it/wait-for-it.sh",
"postgres:5432",
"--",
"npm",
"run",
"migrate",
"up"
]
links:
- postgres
depends_on:
- postgres
env_file:
- .env
volumes:
musicapidata:
I have tried using envsubst in my Dockerfile, but it doesn't work: the PostgreSQL service copies the SQL script files with the placeholder values still in them. Could someone guide me on how to do this?
EDIT
I have tried the following as well. I think I'm close.
docker-compose.yml
postgres:
  build:
    context: .
    args:
      - pgversion=13
    dockerfile: Dockerfile-postgres
  env_file:
    - .env
  ports:
    - "${PGPORT}:${PGPORT}"
  volumes:
    - "./initdb.d:/docker-entrypoint-initdb.d"
    - musicapidata:/var/lib/postgresql/data
  environment:
    POSTGRES_PASSWORD: root
    POSTGRES_DB: root
    POSTGRES_USER: root
Dockerfile-postgres
ARG pgversion
FROM postgres:$pgversion
RUN apt-get update && apt-get install -y gettext-base
COPY initdb.d /docker-entrypoint-initdb.d
RUN chmod +x /docker-entrypoint-initdb.d/*
WORKDIR /docker-entrypoint-initdb.d
COPY .env ./
RUN envsubst < ./001_create_database.sql
RUN envsubst < ./002_create_user.sql
Unfortunately, it is copying the .env just fine, but it is not running the envsubst yet.
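One observation (a sketch, not a tested fix): envsubst writes its result to stdout, so the two RUN envsubst lines above execute but discard their output. Assuming the variable values are actually available during the build (for example via ARG/ENV -- merely COPYing .env does not export them), persisting the substituted scripts would look roughly like this:
# hypothetical replacement for the last two lines of Dockerfile-postgres
RUN envsubst < ./001_create_database.sql > /tmp/001.sql && mv /tmp/001.sql ./001_create_database.sql
RUN envsubst < ./002_create_user.sql > /tmp/002.sql && mv /tmp/002.sql ./002_create_user.sql
Alternatively, the official postgres image also runs *.sh files from /docker-entrypoint-initdb.d at first start, where normal shell expansion of the container's environment works without any build-time substitution.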

Dockerfile and Docker Compose for NestJS app with PSQL DB where env vars are expected at runtime

I'm Dockerizing a simple Node.js (NestJS -- but I don't think that matters for this question) web service and have some questions. This service talks to a Postgres DB. I would like to write a Dockerfile that can be used to build an image of the service (let's call it my-service) and then write a docker-compose.yml that defines a service for the Postgres DB as well as a service for my-service that uses it. That way I can build images of my-service but also have a Docker Compose config for running the service and its DB at the same time. I think that's the way to do this (keep me honest though!). Kubernetes is not an option for me, just FYI.
The web service has a top-level directory structure like so:
my-service/
  .env
  package.json
  package-lock.json
  src/
  <lots of other stuff>
It's critical to note that in its present, non-containerized form, you have to set several environment variables ahead of time, including the Postgres DB connection info (host, port, database name, username, password, etc.). The application code fetches the values of these env vars at runtime and uses them to connect to Postgres.
So, I need a way to write a Dockerfile and docker-compose.yml such that:
if I'm just running a container of the my-service image by itself, and want to tell it to connect to any arbitrary Postgres DB, I can pass those env vars in as (ideally) runtime arguments on the Docker CLI command (however remember the app expects them to be set as env vars); and
if I'm spinning up my-service and its Postgres together via the Docker Compose file, I need to also specify those as runtime args in the Docker Compose CLI; then Docker Compose needs to pass them on to the container's run arguments, and then the container needs to set them as env vars for the web service to use.
Again, I think this is the correct way to go, but keep me honest!
So my best attempt -- a total WIP so far -- looks like this:
Dockerfile
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# creates "dist" to run out of
RUN npm run build
# ideally the env vars are already set at this point via
## docker CLI arguments, so nothing to pass in here (???)
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    image: ??? anyway to say "build whats in the repo?"
    environment:
      ??? do I need to set anything here so it gets passed to the
      my-service container as env vars?
volumes:
  pgdata:
Can anyone help nudge me over the finish line here? Thanks in advance!
??? do I need to set anything here so it gets passed to the my-service container as env vars?
Yes, you should pass the variables there. This is a principle of 12-factor design.
need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments
If you don't put them directly in the YAML, will this option work for you?
docker-compose --env-file app.env up
Ideally, you also put
depends_on:
  - postgres
So that when you start your service, the database will also start up.
If you want to connect to a different database instance, then you can either create a separate compose file without that database, or use a different set of variables (written out, or using env_file, as mentioned)
Or you can use NPM dotenv or config packages and set different .env files for different database environments, based on other variables, such as NODE_ENV, at runtime.
??? anyway to say "build whats in the repo?"
Use the build directive instead of image.
Kubernetes is not an option for me, just FYI
You could use Minikube instead of Compose... It doesn't really matter, but kompose exists to convert a Docker Compose file into k8s resources.
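If you ever go that route, converting an existing compose file is a single command (shown only for illustration; see the kompose docs for options):
kompose convert -f docker-compose.yml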
Your Dockerfile is correct. You can specify the environment variables while doing docker run like this:
docker run --name my-service -it -e PG_USER='user' -e PG_PASSWORD='pass' \
  -e PG_HOST='dbhost' -e PG_DATABASE='dbname' --expose <port> <image>
Or you can specify the environment variables with the help of .env file. Let's call it app.env. Its content would be:
PG_USER=user
PG_PASSWORD=pass
PG_DATABASE=dbname
PG_HOST=dbhost
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Now instead of specifying multiple -e options to the docker run command, you can simply pass the name of the file from which the environment variables should be picked up.
docker run --name my-service -it --env-file app.env --expose <port> <image>
In order to run postgres and your service with a single docker compose command, a few modifications need to be done in your docker-compose.yml. Let's first see the full YAML.
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: . # instead of the image directive, use build to tell Docker which folder to build
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres # note: the name of the postgres service in the compose file
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
Now you can use the docker compose up command to run the services. If you wish to rebuild the my-service image each time, you can pass the optional --build argument like this: docker compose up --build.
In order to pass the environment variables from the CLI, there's only one way, which is by the use of an env file. In the case of your docker-compose.yml, the app.env would look like:
PG_USER=user
PG_PASSWORD=pass
#PG_DATABASE=dbname #not required as you're using 'my-service-db' as db name in compose file
#PG_HOST=dbhost #not required as service name of postgres in compose file is being used as db host
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Passing this app.env file using docker compose CLI command would look like this:
docker compose --env-file app.env up --build
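To double-check which values actually reach the containers, one option (assuming a reasonably recent Compose version) is to print the resolved configuration first:
docker compose --env-file app.env config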
PS: If you're rebuilding my-service each time just so code changes are reflected in the Docker container, you could make use of a bind mount instead. The updated docker-compose.yml in that case would look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: .
    volumes:
      - .:/usr/src/app # note the use of a bind mount here
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
This way you don't need to run docker compose build each time; a code change in the source folder is reflected directly in the Docker container.
You just need to add the path of your Dockerfile to the build parameter in the docker-compose.yaml file and put all the environment variables under environment.
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: path_to_your_dockerfile_folder
    environment:
      your_environment_variables_here
volumes:
  pgdata:
I am guessing that you have a folder structure like this:
project_folder/
  docker-compose.yaml
  my-service/
    Dockerfile
    .env
    package.json
    package-lock.json
    src/
    <lots of other stuff>
and your .env contains the following:
API_PORT=8082
Environment_var1=Environment_var1_value
Environment_var2=Environment_var2_value
So in your case your docker-compose file should look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: ./my-service/
    environment:
      - API_PORT=8082
      - Environment_var1=Environment_var1_value
      - Environment_var2=Environment_var2_value
volumes:
  pgdata:
FYI: for this Docker configuration your database connection host should be postgres (as per the service name), not localhost.
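To make that concrete, whatever variables your app reads for its database connection should point at the compose service name; the variable names below are hypothetical placeholders:
# passed to my-service via environment or an env file
DB_HOST=postgres   # the compose service name, not localhost
DB_PORT=5432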

How to pass parameters to program ran in Docker container?

I'm new to Docker and I want to use an Odoo Docker image.
I want to update a module. Normally I would restart the server and start it again with
odoo -u module_name
But I cannot find how to do this with a Docker container.
In other words, I would like to start the application inside the container with specific parameters.
I would like to do something like this:
docker start odoo -u module_name
How to do it?
You can use docker-compose and add the command below inside the Odoo service:
tty: true
command: -u target_modules_to_update
If you don't have a docker-compose file, let me know and I will help.
@tjwnuk you can add the following code in a file named docker-compose.yml:
version: '2'
services:
  web:
    image: odoo:version_number_here
    depends_on:
      - db
    ports:
      - "target_port_here:8069"
    tty: true
    command: -u module_name -d database_name # or any config parameter you need to add
    volumes:
      - odoo:/var/lib/odoo
      - ./config/odoo.config:/etc/odoo/odoo.conf
      - ./addons-enterprise:/mnt/enterprise-addons
      - ./custom_addons:/mnt/extra-addons
    restart: always
  db:
    image: postgres:version_number_here
    ports:
      - "1432:5432"
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
    volumes:
      - "odoo-dbs:/var/lib/postgresql/data/pgdata"
    restart: always
volumes:
  odoo:
  odoo-dbs:
And you should have these dirs:
1. config - inside this dir you should have a file named odoo.config; you can also add any configuration here
2. addons-enterprise - for enterprise addons, if you have them
3. custom_addons - for your custom addons
Finally, if you don't have docker-compose installed, you can run the command below:
apt install docker-compose
I hope this answer helps you and anyone who has the same case.
Either switch into the container and start another Odoo instance:
docker exec -ti <odoo_container> bash
odoo -c <path_to_config> -u my_module -d my_database --max-cron-threads=0 --stop-after-init --no-xmlrpc
or switch into the database container and set the module to state to upgrade:
docker exec -ti <database_container> psql -U <odoo_db_user> <db_name>
UPDATE ir_module_module set state = 'to upgrade' where name = 'my_module';
Using the database way requires a restart of Odoo.
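If Odoo runs under compose as in the example above (where the service is called web), that restart is just one of the following, assuming those names:
docker-compose restart web
# or, for a standalone container:
docker restart <odoo_container>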

docker can't find file to start app after build

I am using TypeScript here and node:latest in Docker, and I am using docker-compose as well.
Running it with docker-compose always fails, but when I run it with docker run (manually) it works fine.
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /home/myapp
WORKDIR /home/myapp
RUN npm i -g prisma2
ENV PATH /home/myapp/node_modules/.bin:$PATH
COPY package.json /home/myapp/
RUN npm install
COPY . /home/myapp
RUN prisma2 lift save --name 'init'
RUN prisma2 lift up
EXPOSE 8100
RUN npm run build
RUN pwd
RUN ls
RUN ls dist
CMD node dist/server.js
and my docker-compose.yml:
version: "3"
services:
app:
environment:
DB_URI: postgres://myuser:password#postgres:5555/prod
NODE_ENV: production
build:
context: .
dockerfile: Dockerfile
depends_on:
- postgres
volumes:
- ./home/edupro/:/home/myapp/
- ./node_modules:/home/myapp/node_modules
ports:
- "8100:8100"
postgres:
container_name: postgres
image: postgres:10-alpine
ports:
- "5555:5555"
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: password
POSTGRES_DB: prod
When it reaches CMD node dist/server.js (the dist folder is what I build, since I am using TypeScript), it gets an error like this:
Cannot find module '/home/edupro/dist/server.js'
I have tried changing the volumes in docker-compose.yml as well, like this:
- /home/myapp/node_modules:/home/myapp/node_modules
or
- ./:/home/myapp/node_modules
but I still get the same error. Am I missing something, or did I mount it wrong?
What is the correct way to resolve this?
You need to remove the volumes section from your compose file, since it will overwrite all the files you built in your Dockerfile. So delete this:
volumes:
  - ./home/edupro/:/home/myapp/
  - ./node_modules:/home/myapp/node_modules
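For reference, the app service would then look something like this (a sketch of the same service with only the two bind mounts removed):
app:
  environment:
    DB_URI: postgres://myuser:password#postgres:5555/prod
    NODE_ENV: production
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - postgres
  ports:
    - "8100:8100"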

Docker-compose and node container as not the primary one

I'm new to Docker and I've successfully set up PHP/Apache/MySQL. But once I try to add the node container (in order to use npm), it always shuts the container down on compose up. And yes, I understand that I can use node directly without involving Docker, but I find this useful for myself.
And as for Compose, I want to use volumes in the node container in order to persist node_modules inside the src folder.
I compose it up using the docker-compose up -d --build command.
During compose-up it shows no errors (even the node container seems to build successfully).
If it might help, I can share the log file (it's too big to include it here).
PS. If you find something that can be improved, please let me know.
Thank you in advance!
Dockerfile
FROM php:7.2-apache
RUN apt-get update
RUN a2enmod rewrite
RUN apt-get install -y zip unzip zlib1g-dev
RUN docker-php-ext-install pdo pdo_mysql mysqli zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer global require laravel/installer
ENV PATH="~/.composer/vendor/bin:${PATH}"
docker-compose.yml
version: '3'
services:
app:
build:
.
volumes:
- ./src:/var/www/html
depends_on:
- mysql
- nodejs
ports:
- 80:80
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: qwerty
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mysql:db
ports:
- 8765:80
environment:
MYSQL_ROOT_PASSWORD: qwerty
PMA_HOST: mysql
depends_on:
- mysql
nodejs:
image: node:9.11
volumes:
- ./src:/var/www/html
As the Dockerfile you are using shows, you are not actually running any application in the node container, so as soon as it builds and starts up it shuts down because it has nothing else to do.
The solution is simple: provide the application that you want to run in the container and run it.
I've modified part of your compose file:
nodejs:
  image: node:9.11
  command: node app.js
  volumes:
    - ./src:/var/www/html
Here app.js is the script in which your app is written; you are free to use your own name.
Edit: providing the small improvement you asked for.
You are not waiting until your database is fully initialized (depends_on is not capable of that), so take a look at one of my previous answers dealing with that problem here.
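As a rough sketch of that idea (reusing the wait-for-it.sh script mentioned earlier on this page, and assuming a copy of it sits in ./src and is executable), the node service could wait for MySQL before starting:
nodejs:
  image: node:9.11
  working_dir: /var/www/html
  command: sh -c "./wait-for-it.sh mysql:3306 -- node app.js"
  volumes:
    - ./src:/var/www/html
  depends_on:
    - mysql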
