Docker Compose with a node container that is not the primary one - node.js

I'm new to Docker and I've successfully set up a PHP/Apache/MySQL stack. But as soon as I add a node container (in order to use npm), that container always shuts down on compose up. And yes, I understand that I could use node directly without involving Docker, but I find this setup useful for myself.
As for the Compose file, I want to use volumes in the node container in order to persist node_modules inside the src folder.
I bring everything up with the docker-compose up -d --build command.
Composing shows no errors (even the node container appears to build successfully).
If it helps, I can share the log file (it's too big to include here).
PS. If you find anything else that can be improved, please let me know.
Thank you in advance!
Dockerfile
FROM php:7.2-apache
RUN apt-get update
RUN a2enmod rewrite
RUN apt-get install -y zip unzip zlib1g-dev
RUN docker-php-ext-install pdo pdo_mysql mysqli zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer global require laravel/installer
ENV PATH="/root/.composer/vendor/bin:${PATH}"
docker-compose.yml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./src:/var/www/html
    depends_on:
      - mysql
      - nodejs
    ports:
      - 80:80
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mysql:db
    ports:
      - 8765:80
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
      PMA_HOST: mysql
    depends_on:
      - mysql
  nodejs:
    image: node:9.11
    volumes:
      - ./src:/var/www/html

As the configuration you posted shows, you are not actually running any application in the node container, so as soon as it builds and starts up, it shuts down because it has nothing else to do.
The solution is simple: give the container an application to run. I've modified the relevant part of your compose file:
nodejs:
  image: node:9.11
  command: node app.js
  volumes:
    - ./src:/var/www/html
Here app.js is the script your app starts from; you are free to use your own name.
Edit, providing the small improvement you asked for:
You are not waiting until your database is fully initialized (depends_on is not capable of that), so take a look at one of my previous answers dealing with that problem here.
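For reference, a minimal sketch of one common way to do that wait: give the mysql service a healthcheck and gate the dependent service on it. The mysqladmin ping test and the timings below are assumptions, and depends_on with condition: service_healthy needs a Compose implementation that follows the newer Compose Specification (the long form of depends_on was dropped from the plain version: '3' file format used above):
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
    healthcheck:
      # assumed check: succeeds once the server answers pings
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-pqwerty"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    build: .
    depends_on:
      mysql:
        condition: service_healthy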

Related

Docker Multi-container connection with docker compose

I am trying to create a composition where two or more Docker services can connect to each other.
Here is my composition.
# docker-compose.yaml
version: "3.9"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    command: sh -c "yarn start"
    restart: always
    ports:
      - "1337:1337"
    env_file: ".env.project"
    depends_on:
      - "database"
    links:
      - "database"
Services
database
This service uses an image built on top of the official Postgres image. Here is its Dockerfile:
FROM postgres:alpine
ENV POSTGRES_USER="root"
ENV POSTGRES_PASSWORD="password"
ENV POSTGRES_DB="strapi-postgres"
It uses the default exposed port 5432, forwarded to host port 5435 as defined in the composition.
So the database service starts at some IP address that can be found using docker inspect.
project
This image runs a Node application (a Strapi project configured to use the Postgres database). Here is its Dockerfile:
FROM node:lts-alpine
WORKDIR /project
ADD package*.json .
ADD yarn.lock .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
and I am building the image using docker build. That gives me an image with no foreground process.
Problems
When I ran the composition, the strapi-project container exited with code 0.
Solution: I added the command yarn start to run a foreground process.
As the project starts, it cannot connect to the database, because it tries to connect to 127.0.0.1:5432 (5432 because it should target the container port of the database service, not 5435). This cannot work, because it is attempting to reach port 5432 inside the strapi-project container itself, where no process is listening.
Solution: I took the IP address found via docker inspect, put it in .env.project, and passed that file to the project service of the composition.
On every docker compose up the composition's IP addresses follow an incrementing pattern (172.17.0.2 the n'th time, 172.18.0.2 the n+1'th time, and so on), so every time I run the composition I need to edit .env.project.
All of this is a hacky way of patching things together. I want a way to have the Postgres database service start first, and then have the project configure itself, connect to the database, and start automatically.
Please suggest any edits, or other ways to configure this.
You've forgotten to put the CMD in your Dockerfile, which is why you get the "exited (0)" status when you try to run the container.
FROM node:lts-alpine
...
CMD yarn start
Compose automatically creates a Docker network and each service is accessible using its Compose container name as a host name. You never need to know the container-internal IP addresses and you pretty much never need to run docker inspect. (Other answers might suggest manually creating networks: or overriding container_name: and these are also unnecessary.)
You don't show where you set the database host name for your application, but an environment: variable is a common choice. If your database library doesn't already honor the standard PostgreSQL environment variables then you can reference them in code like process.env.PGHOST. Note that the host name will be different running inside a container vs. in your normal plain-Node development environment.
A complete Compose file might look like:
version: "3.8"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    restart: always
    ports:
      - "1337:1337"
    environment:
      - PGHOST=database
    env_file: ".env.project"
    depends_on:
      - "database"

Add Postgres database in Dockerfile

I have a problem to solve: I have a Dockerfile based on a Tomcat image, and I would also like to add a Postgres database, but I don't know how to put it all together because I have no idea how to create a database in a Dockerfile. Here is my Dockerfile:
FROM tomcat:9.0.65-jdk11
RUN apt update
RUN apt install vim -y
RUN apt-get install postgres12
COPY ROOT.war
WORKDIR /usr/local/tomcat
CMD ["catalina.sh", "run"]
There is more, but it is the usual mkdir and COPY of files. Do you have any ideas? Maybe write a bash script that runs inside the container when it is built and creates my database? I know some people will tell me I should take an Ubuntu image and install Tomcat and Postgres on it, but I want to simplify assigning permissions and shorten my work.
Use docker-compose and create a database separately:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_USER: user
    #volumes: # if you want to create your db
    #  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    #ports: # if you want to access it from your host
    #  - 5432:5432
  tomcatapp:
    build: .
    image: tomcatapp
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
Notes: the Dockerfile, init.sql (if you want to init your db), and docker-compose.yml should all be in the same folder.
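With the database split out like this, the Tomcat Dockerfile no longer needs any Postgres packages. A minimal sketch based on the question's file; the ROOT.war destination is an assumption, since the original COPY line was missing its target:
FROM tomcat:9.0.65-jdk11
# no Postgres install here: the separate db service provides the database
COPY ROOT.war /usr/local/tomcat/webapps/ROOT.war
WORKDIR /usr/local/tomcat
CMD ["catalina.sh", "run"]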

Running Gulp in a Node.js container, Error: Cannot find module but volumes seem right

I'm setting up an older WordPress theme in a Docker environment, going through the process of creating a docker-compose.yml file that lets me run various services in various containers.
I've gotten very far… nginx, mysql running with php and ssl. Working.
The final piece of the puzzle is setting up a container for Node that will run Gulp to build the final theme (a gulpfile that processes all the css and js).
I've been through dozens of answers on Stack Overflow and looked at many projects on GitHub that are similar but not the same. They've taught me a lot; I think I'm a step or two away from a deep enough understanding to grasp what I'm missing.
The Node service is mounted on a volume mapped to the local directory where Gulp needs to run. npm install builds node_modules somewhere, but not where I want, and even so, the end result is always…
internal/modules/cjs/loader.js:834
throw err;
^
Error: Cannot find module '/gulp'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:831:15)
…despite multiple attempts and mounting, working directories, COPYing and anything else that feels logical.
This is what I have so far…
docker-compose.yml
version: '3.9'
networks:
  wordpress:
services:
  nginx:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    depends_on:
      - php
      - mysql
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  mysql:
    image: mysql:latest
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    environment:
      MYSQL_DATABASE: wpdb
      MYSQL_USER: wpdbuser
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    restart: always
    tty: true
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - wordpress
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  wp:
    build:
      context: .
      dockerfile: wp.dockerfile
    container_name: wp
    entrypoint: ['wp', '--allow-root']
    links:
      - mysql:mysql
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  phpmyadmin:
    build:
      context: .
      dockerfile: phpmyadmin.dockerfile
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    depends_on:
      - mysql
    ports:
      - 8081:80
      - 8082:443
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: secret
    restart: always
    networks:
      - wordpress
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    image: node:12.19.1-alpine3.9
    container_name: nodejs
    volumes_from:
      - nginx
    working_dir: /var/www/html/wp-content/themes/the-theme
    networks:
      - wordpress
Notable and, I think, relevant:
the nginx service mounts a local directory that has WP inside it.
the wp service is wp-cli, not a wordpress image; instead, wp is run to perform tasks such as
docker-compose run --rm wp core download
I want to run gulp via the node service in much the same way as wp.
node.dockerfile is empty right now, but I'm conscious of, and interested in, what someone else is doing with a shell script here.
Basically I'm knowledgeable enough to have a sense of where I'm messing up (with paths) but not enough to work out what to try next. Or to articulate a question.
I can show all the information I've gathered…
docker-compose run --rm node --version
…works great, result: v12.19.0
docker-compose run --rm node npm install succeeds. Then…
docker-compose run --rm node gulp -v
Error: Error: Cannot find module '/var/www/html/wp-content/themes/the-theme/gulp'
Not really knowing what I'm doing, but assuming the node service is mounted at the theme directory, I thought I'd try:
docker-compose run --rm node node_modules/gulp -v but that gave an error:
/usr/local/bin/docker-entrypoint.sh: exec: line 8: node_modules/gulp: Permission denied
I can confirm that a node_modules directory was indeed created where I wanted it when I ran npm install; it read from package.json and installed everything. I could run npm list and everything.
In case it's relevant, my project folder is…
- workspace
-- wordpress
--- wp-content
---- themes
----- the-theme
------ package.json
------ node_modules
-- docker-compose.yml
-- various-docker-files
…and it's inside that the-theme directory that I want to run gulp just like I would locally. It seems I can run npm install there, but nothing installed can be found.
I went on to attempt numerous things, such as setting working_dir: to node_modules, or mounting node_modules.
I try things like docker-compose run --rm node ls and I can see the insides of the_theme, some linux system directories, and node_modules.
docker-compose run --rm node ls -al node_modules
…shows me all the installed node packages.
Multiple answers elsewhere suggest rebuilding with npm install etc., to no effect. And I feel I'm hampered by the fact that many of those questions and answers feature simple Node.js apps where people have an untouched, ready-to-go package.json they can just COPY into a simple directory structure (i.e. /app) in their dockerfile, whereas I'm dealing with a package.json that has to be read from a deeper subdirectory on a mounted local volume. Perhaps that's confusing me.
Additional answers suggested that it would be correct to have my node service volumes like this…
volumes:
  - ./wordpress:/var/www/html:delegated
  - /var/www/html/wp-content/themes/the-theme/node_modules
working_dir: /var/www/html/wp-content/themes/the-theme
But that was more of the same: Gulp not found, and a discrepancy between what was in node_modules via docker-compose run and what was in my local node_modules.
I also, at various points, learned that perhaps my node_modules should not even be in my theme directory as it would be locally, and that they should live off in the node container instead. That is reasonable, but I'm not sure how to approach it while still having access to package.json, which lives persistently in ./wordpress/wp-content/themes/the-theme, and to the css and js being updated in a subdirectory of that.
To summarize:
I want to run
docker-compose run --rm node gulp build
on /wordpress/wp-content/themes/the-theme
and for the gulp command to do its thing, with output at
/wordpress/wp-content/themes/the-theme/dist or similar
Update #1:
node:
  build:
    context: .
    dockerfile: node.dockerfile
  image: node:12.19.1-alpine3.9
  container_name: nodejs
  depends_on:
    - nginx
  ports:
    - 3001:3000
  tty: true
  volumes:
    - ./wordpress:/var/www/html:delegated
    - /var/www/html/wp-content/themes/inti-acf-starter/node_modules
  working_dir: /var/www/html/wp-content/themes/inti-acf-starter
  networks:
    - wordpress
I've added tty: true so that I can exec -it into the node container and poke around.
This drops me in my working_dir. I can npm install here. I can see a node_modules directory. I cd into it and it's full. But if I try to run any of the installed binaries I get:
sh: gulp: not found
OK. So npm install --global I guess.
That doesn't install everything in my package.json, and what is installed and appearing in node_modules under /var/www/html/wp-content/themes/the-theme is not reflected locally in ./wordpress/wp-content/themes/the-theme/node_modules.
I feel I should remove that second volume and just use the locally mapped one. But I'm not sure what to do about the rest. Installing just gulp globally means I can run just that inside the container with -it, but the gulpfile, full of requires, now can't find any of them.
Update #2:
node:
  build:
    context: .
    dockerfile: node.dockerfile
  image: node:12.19.1-alpine3.9
  container_name: nodejs
  depends_on:
    - nginx
  ports:
    - 3001:3000
  tty: true
  volumes:
    - ./wordpress:/var/www/html:delegated
    - /var/www/html/wp-content/themes/the-theme/node_modules
  working_dir: /var/www/html/wp-content/themes/the-theme
  networks:
    - wordpress
I understand now that each npm install in the node service is actually dropping everything into:
/usr/local/lib/node_modules/inti-acf-starter/node_modules
With npm install -g gulp-cli@2.1.0 first, I can now run Gulp!
My project compiles correctly. Well, most of it. But I have to keep the node service tty'd and run npm and gulp inside of that only.
Running things like:
docker-compose run --rm node gulp build
…still can't find gulp.
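One pattern that sidesteps global installs entirely is to run the locally installed Gulp through an npm script, since npm run puts node_modules/.bin on the PATH inside the container. A minimal sketch, assuming the theme's package.json defines a build script such as "build": "gulp build":
node:
  image: node:12.19.1-alpine3.9
  working_dir: /var/www/html/wp-content/themes/the-theme
  volumes:
    - ./wordpress:/var/www/html:delegated
With that, docker-compose run --rm node npm run build resolves gulp from the theme's own node_modules without any global install.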

nginx doesn't see updated static content

I'm using docker-compose to set up nginx and node
services:
  nginx:
    container_name: nginx
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - node:node
    volumes_from:
      - node
    volumes:
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    container_name: node
    build: .
    env_file: .env
    volumes:
      - /usr/src/app
      - ./logs:/usr/src/app/logs
    expose:
      - "8000"
    environment:
      - NODE_ENV=production
    command: npm run package
I have node and nginx share the same volume so that nginx can serve the static content generated by node.
When I update the source code in node, I remove the node container and rebuild it via:
docker rm node
docker-compose -f docker-compose.prod.yml up --build -d node
I can see that the new node container has the updated source code with the properly updated static content:
docker exec -it node bash
root@e0cd1b990cd2:/usr/src/app# cat public/style.css
This shows the updated content I want to see:
.project_detail .owner{color:#ccc;padding:10px}
However, when I log in to the nginx container:
docker exec -it nginx bash
root@a459b271e787:/# cat /usr/src/app/public/style.css
.project_detail .owner{padding:10px}
As you can see, nginx is not able to see the newly updated static files served by node, despite the node update. It does work, however, if I restart the nginx container as well.
Am I doing something wrong? Do I have to restart both the nginx and node containers to see the updated content?
Instead of sharing one container's volume with another, share a common directory on the host with both containers. For example, if the directory is at /home/user/app, then it should appear in the volumes section like:
volumes:
  - /home/user/app:/usr/src/app
This should be done for both containers.
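Applied to the compose file above, a minimal sketch (the /home/user/app host path is an assumption; use wherever your static content actually lives):
services:
  nginx:
    build: ./nginx/
    volumes:
      - /home/user/app:/usr/src/app   # same host directory mounted here...
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    build: .
    volumes:
      - /home/user/app:/usr/src/app   # ...and here, so both always see the same files
Because both containers now read the same host directory, rebuilding node updates the files nginx serves without restarting nginx.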

Docker: how to start a nodejs app with redis in the container?

I have a simple but curious question. I have based my image on the node.js image and installed Redis in it; now I want both Redis and the Node app to start in the container when I run docker-compose up. However, I can only get one of them working, and node always gives me an error. Does anyone have any ideas on:
How to start the Node.js application on docker-compose up?
How to start Redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
The docker-compose.yml is below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A container runs a single foreground process, and when a Dockerfile has both ENTRYPOINT and CMD, the CMD is passed as arguments to the ENTRYPOINT. That is exactly why redis-server complains that it can't open a config file called 'node': your CMD is being handed to it as its config-file argument. You can, however, run multiple processes in a single Docker image using a process manager like supervisord or systemd. There are countless recipes for doing this all over the internet. You might use this Docker image as a base:
https://github.com/million12/docker-centos-supervisor
However, I don't see why you wouldn't use Docker Compose to spin up a separate Redis container, just like you seem to want to do with MySQL. By the way, where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile looking like the one you have, minus trying to start up Redis.
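For illustration, here is the question's Dockerfile with the Redis parts removed; everything that remains is taken from the original:
# Set the base image to node
FROM node:0.12.13
# Redis now runs in its own container (the db service above), so no install or ENTRYPOINT here
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node", "/root/www/helloworld.js"]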
