I have a simple but curious question. I have based my image on the Node.js image and installed Redis on top of it; now I want both Redis and my Node.js app running in the container when I do docker-compose up. However, I can only get one of them working; node always gives me an error. Does anyone have any idea:
How to start the Node.js application on docker-compose up?
How to start Redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
The docker-compose.yml is like below
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A Dockerfile can have only one process entry point, and ENTRYPOINT and CMD are not two of them: when both are present, CMD's values are appended as arguments to ENTRYPOINT. That is exactly your error — the container effectively runs /usr/bin/redis-server node /root/www/helloworld.js, so redis-server tries to open 'node' as its config file. You can, however, run multiple processes in a single Docker image using a process manager like supervisord or systemd. There are countless recipes for doing this all over the internet. You might use this Docker image as a base:
https://github.com/million12/docker-centos-supervisor
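If you really must run both processes in one container without a process manager, one common workaround is a single CMD (and no ENTRYPOINT) that backgrounds Redis and keeps node in the foreground. This is only a sketch: the paths are taken from your Dockerfile, and --daemonize assumes your redis.conf does not already force daemonization.

```dockerfile
FROM node:0.12.13
RUN apt-get update && apt-get install -y redis-server
COPY ./redis.conf /etc/redis.conf
EXPOSE 6379 8400
# One CMD, no ENTRYPOINT: daemonize redis, then exec node in the foreground
# so the container lives exactly as long as the app does.
CMD ["sh", "-c", "redis-server /etc/redis.conf --daemonize yes && exec node /root/www/helloworld.js"]
```

Note the caveat: if redis dies, nothing restarts it, which is exactly why a process manager (or separate containers, below) is the better option.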
However, I don't see why you wouldn't use Docker Compose to spin up a separate Redis container, just like you seem to want to do with MySQL. By the way, where is the mysql definition in the docker-compose file you posted?
Here's an example of a Compose file I use to build a Node image from the current directory and spin up Redis alongside it.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile like the one you have, minus trying to start up Redis.
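For reference, a minimal sketch of that Dockerfile with Redis stripped out (assuming helloworld.js sits in the mounted ./chat directory; adjust the path to your layout):

```dockerfile
FROM node:0.12.13
WORKDIR /root/chat/
EXPOSE 8400
CMD ["node", "helloworld.js"]
```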
Related
I am trying to create a composition where two or more Docker services can connect to each other in some way.
Here is my composition.
# docker-compose.yaml
version: "3.9"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    command: sh -c "yarn start"
    restart: always
    ports:
      - "1337:1337"
    env_file: ".env.project"
    depends_on:
      - "database"
    links:
      - "database"
Services
database
This uses an image built from the official Postgres image.
Here is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_USER="root"
ENV POSTGRES_PASSWORD="password"
ENV POSTGRES_DB="strapi-postgres"
It uses the default exposed port 5432, forwarded to 5435 as defined in the composition.
So the database service starts at some IP address that can be found using docker inspect.
project
This is an image running a Node application (a Strapi project configured to use the Postgres database).
Here is the Dockerfile:
FROM node:lts-alpine
WORKDIR /project
ADD package*.json .
ADD yarn.lock .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
and I am building the image using docker build. That gives me an image with no foreground process.
Problems
When I ran the composition, the strapi-project container exited with code 0.
Solution: so I added the command yarn start to run a foreground process.
As the project starts, it cannot connect to the database, because it tries to connect to 127.0.0.1:5432 (5432, since it should use the container port of the database service, not 5435). This fails because it is connecting to port 5432 inside the strapi-project container itself, where nothing is listening.
Solution: so I took the IP address found via docker inspect, put it in .env.project, and passed that file to the project service of the composition.
For every docker compose up, the composition's IP address follows an incremental pattern (172.17.0.2 the n'th time, 172.18.0.2 the n+1'th time, and so on), so every time I run the composition I have to edit .env.project.
All of these are hacky ways to patch things together. I want the Postgres database service to start first, and then the project to configure itself, connect to the database, and start automatically.
Please suggest edits, or other ways to configure them.
You've forgotten to put the CMD in your Dockerfile, which is why you get the "exited (0)" status when you try to run the container.
FROM node:lts-alpine
...
CMD yarn start
Compose automatically creates a Docker network and each service is accessible using its Compose container name as a host name. You never need to know the container-internal IP addresses and you pretty much never need to run docker inspect. (Other answers might suggest manually creating networks: or overriding container_name: and these are also unnecessary.)
You don't show where you set the database host name for your application, but an environment: variable is a common choice. If your database library doesn't already honor the standard PostgreSQL environment variables then you can reference them in code like process.env.PGHOST. Note that the host name will be different running inside a container vs. in your normal plain-Node development environment.
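If you end up wiring this by hand in code, a minimal sketch of reading those variables might look like the following (the databaseUrl helper and its defaults are illustrative assumptions, not Strapi's API):

```javascript
// Hypothetical helper: build a Postgres connection URL from the standard
// PG* environment variables, falling back to the Compose service name
// "database" and the container-internal port 5432.
function databaseUrl(env = process.env) {
  const host = env.PGHOST || 'database';  // Compose service name, not an IP
  const port = env.PGPORT || '5432';      // container port, not the published 5435
  const user = env.PGUSER || 'root';
  const db = env.PGDATABASE || 'strapi-postgres';
  return `postgres://${user}@${host}:${port}/${db}`;
}
```

In a plain-Node development environment outside Compose, you would instead set PGHOST=localhost and PGPORT=5435 (the published port).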
A complete Compose file might look like
version: "3.8"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    restart: always
    ports:
      - "1337:1337"
    environment:
      - PGHOST=database
    env_file: ".env.project"
    depends_on:
      - "database"
For a university project I am working on, I am trying to get a MEAN-stack website up and running via Docker images and containers. However, when I run the command:
docker-compose up --build
It results in this: nodejs permanently restarting.
When the command is run, I get these messages at various points, which look like errors to me:
failed to get console mode for stdout: The handle is invalid.
and
nodejs exited with code 0
and then it seems like the connection to the MongoDB keeps starting and ending with these errors:
db | 2021-03-30T08:50:22.519+0000 I NETWORK [listener] connection accepted from 172.21.0.2:39627 #27 (1 connection now open)
db | 2021-03-30T08:50:22.519+0000 I NETWORK [conn27] end connection 172.21.0.2:39627 (0 connections now open)
Prior to running the above command, I tested that the website works without a connection to MongoDB by running docker build . in the Angular root folder containing the Dockerfile; the Express API aspect works, as I can visit the dashboard at http://localhost:3000/.
The full sequence of commands I run to reach the failed state linked above is as follows:
docker-compose pull → docker-compose build → docker-compose up --build.
I am using Docker Desktop and running the commands in Powershell on Windows 10 Pro.
My Dockerfile is as follows:
# We use the official image as a parent image.
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
# Set the working directory.
WORKDIR /home/node/app
# Copy the file(s) from your host to your current location.
COPY package*.json ./
# Change the user to node. This will apply to both the runtime user and the following commands.
USER node
# Run the command inside your image filesystem.
RUN npm install
COPY --chown=node:node . .
# Building the website
RUN ./node_modules/.bin/ng build
# Add metadata to the image to describe which port the container is listening on at runtime.
EXPOSE 3000
# Run the specified command within the container.
CMD [ "node", "server.js" ]
And my docker-compose.yml is:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon server.js
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
These are the same files that were provided by the university and supposedly work.
I am new to Docker and containerization, but I shall try to provide any additional information you need.
I'm new to Docker and I've successfully set up PHP/Apache/MySQL. But once I try to add the Node container (in order to use npm), it always shuts down upon composing up. And yes, I understand that I can use Node directly without involving Docker, but I find this approach useful.
As for composing, I want to use volumes in the node container in order to persist node_modules inside the src folder.
I compose it up using the docker-compose up -d --build command.
During composing it shows no errors (even the node container seems to build successfully).
If it might help, I can share the log file (it's too big to include it here).
PS. If you find something that can be improved, please let me know.
Thank you in advance!
Dockerfile
FROM php:7.2-apache
RUN apt-get update
RUN a2enmod rewrite
RUN apt-get install zip unzip zlib1g-dev
RUN docker-php-ext-install pdo pdo_mysql mysqli zip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer global require laravel/installer
ENV PATH="~/.composer/vendor/bin:${PATH}"
docker-compose.yml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./src:/var/www/html
    depends_on:
      - mysql
      - nodejs
    ports:
      - 80:80
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mysql:db
    ports:
      - 8765:80
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
      PMA_HOST: mysql
    depends_on:
      - mysql
  nodejs:
    image: node:9.11
    volumes:
      - ./src:/var/www/html
As the compose file you are using shows, you are not actually running any application in the node container, so as soon as it builds and starts up, it shuts down because it has nothing else to do.
The solution is simple: provide an application that you want to run in the container, and run it.
I've modified the relevant part of your compose file:
nodejs:
  image: node:9.11
  command: node app.js
  volumes:
    - ./src:/var/www/html
where app.js is the script in which your app is written; you are free to use your own name.
Edit: providing the small improvement you asked for.
You are not waiting until your database is fully initialized (depends_on is not capable of that), so take a look at one of my previous answers dealing with that problem here
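For completeness, one way to express that waiting in the compose file itself — a sketch only; it assumes a Compose version that supports healthchecks with conditional depends_on, and the mysql image's mysqladmin tool:

```yaml
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerty
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  app:
    build: .
    depends_on:
      mysql:
        condition: service_healthy   # start app only after the healthcheck passes
```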
Let's say we have three services:
- PHP + Apache
- MySQL
- Node.js
I know how to use docker-compose to link the MySQL service with the PHP/Apache service. I was wondering how to add a Node.js service whose only job is to manage JavaScript/CSS assets. Since Docker provides this flexibility, I would rather use a Docker service than set up Node.js on my host computer.
version: '3.2'
services:
  web:
    build: .
    image: lap
    volumes:
      - ./webroot:/var/www/app
      - ./configs/php.ini:/usr/local/etc/php/php.ini
      - ./configs/vhost.conf:/etc/apache2/sites-available/000-default.conf
    links:
      - dbs:mysql
  dbs:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=rest
      - MYSQL_DATABASE=symfony_rest
      - MYSQL_USER=restman
    volumes:
      - /var/mysql:/var/lib/mysql
      - ./configs/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
  node:
    image: node
    volumes:
      - ./webroot:/var/app
    working_dir: /var/app
I am not sure this is the correct strategy; I am sharing ./webroot with both the web and node services. docker-compose up -d only starts mysql and web, and fails to start the node container — probably no valid entrypoint is set.
If you want to run a Node.js service separate from the PHP service, you must set two more options to keep node up: one is stdin_open and the other is tty, like below:
stdin_open: true
tty: true
This is equivalent to the CLI flag -it, as in:
docker container run --name nodeapp -it node:latest
If your node app runs on a separate port (e.g. your frontend is completely separate from your backend and you run it independently, say with npm run start), you must publish the port, like below:
ports:
- 3000:3000
The ports structure is hostPort:containerPort.
This publishes port 3000 from inside the node container to port 3000 on the host; in other words, it makes port 3000 inside your container accessible on your system, so you can reach it at localhost:3000.
In the end, your node service would look like below:
node:
  image: node
  stdin_open: true
  tty: true
  volumes:
    - ./webroot:/var/app
  working_dir: /var/app
You can also add an nginx service to docker-compose, and nginx can take care of forwarding requests to the PHP or Node.js container. You need a server that binds to port 80 and redirects requests to the designated container.
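A sketch of what such an nginx site config could look like — the service names web and node match the compose file above, but the /assets/ split and node's port 3000 are assumptions about your setup:

```nginx
server {
    listen 80;

    # Route asset requests to the node container, everything else to PHP/Apache.
    location /assets/ {
        proxy_pass http://node:3000/;
    }
    location / {
        proxy_pass http://web:80;
    }
}
```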
I'm using docker-compose to set up nginx and node
services:
  nginx:
    container_name: nginx
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - node:node
    volumes_from:
      - node
    volumes:
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    container_name: node
    build: .
    env_file: .env
    volumes:
      - /usr/src/app
      - ./logs:/usr/src/app/logs
    expose:
      - "8000"
    environment:
      - NODE_ENV=production
    command: npm run package
I have node and nginx share the same volume so that nginx can serve the static content generated by node.
When I update the source code in node, I remove the node container and rebuild it via the commands below:
docker rm node
docker-compose -f docker-compose.prod.yml up --build -d node
I can see that the new node container has the updated source code with the properly updated static content:
docker exec -it node bash
root@e0cd1b990cd2:/usr/src/app# cat public/style.css
This shows the updated content I want to see:
.project_detail .owner{color:#ccc;padding:10px}
However, when I log in to the nginx container:
docker exec -it nginx bash
root@a459b271e787:/# cat /usr/src/app/public/style.css
.project_detail .owner{padding:10px}
As you can see, nginx does not see the newly updated static files served by node, despite the node update. It does work if I restart the nginx container as well.
Am I doing something wrong? Do I have to restart both the nginx and node containers to see the updated content?
Instead of sharing one container's volume with another, share a common directory on the host with both containers. For example, if the directory is at /home/user/app, it should appear in the volumes section like:
volumes:
  - /home/user/app:/usr/src/app
This should be done for both containers.
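Applied to the compose file above, that could look like this sketch (./static is an assumed host directory for the generated assets; adjust names and paths to your project):

```yaml
services:
  node:
    container_name: node
    build: .
    volumes:
      - ./static:/usr/src/app/public
  nginx:
    container_name: nginx
    build: ./nginx/
    volumes:
      - ./static:/usr/src/app/public:ro   # same host directory, read-only for nginx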