Unable to run Gatsby site locally with docker-compose - node.js

This is my first time using Docker. I have downloaded a docker-compose.yml from https://github.com/wodby/docker4wordpress, which is a repo with Docker images for WordPress. I am planning to use WordPress as a backend and consume the blog posts created there from its API in a Gatsby site. My problem is that, since I don't have experience with Docker, I am unable to run my Gatsby site locally using the node image the above repo provides inside my docker-compose.yml. I can run WordPress successfully, however, as my boss helped me configure the file before I tried to add Gatsby and node.
My project structure:
Wordpress_Project
|
+-- numerous wordpress folders
|
+-- gatsby_site folder
|
+-- docker-compose.yml
My docker-compose.yml:
version: "3"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- bedrock_dbdata:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: www-data
PHP_FPM_GROUP: www-data
# # Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_MODE: debug
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_CLIENT_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_CLIENT_HOST: host.docker.internal # Docker 18.03+ Mac/Win
# PHP_XDEBUG_CLIENT_HOST: 10.0.75.1 # Windows
# PHP_XDEBUG_LOG: /tmp/php-xdebug.log
volumes:
- ./:${CODEBASE}:cached
## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
# - mutagen:/var/www/html
# # For XHProf and Xdebug profiler traces
# - files:/mnt/files
crond:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_crond"
environment:
CRONTAB: "0 * * * * wp cron event run --due-now --path=${CODEBASE}/web/wp"
command: sudo -E LD_PRELOAD=/usr/lib/preloadable_libiconv.so crond -f -d 0
volumes:
- ./:${CODEBASE}:cached
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
- "traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
NGINX_SERVER_NAME: ${PROJECT_BASE_URL}
NGINX_SERVER_ROOT: ${CODEBASE}/web
volumes:
- ./:${CODEBASE}:cached
## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
# - mutagen:/var/www/html
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
portainer:
image: portainer/portainer-ce
container_name: "${PROJECT_NAME}_portainer"
command: -H unix:///var/run/docker.sock
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.http.services.${PROJECT_NAME}_portainer.loadbalancer.server.port=9000"
- "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
traefik:
image: traefik:v2.4
container_name: "${PROJECT_NAME}_traefik"
command: --api.insecure=true --providers.docker
ports:
- '80:80'
# - '8080:8080' # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
#this is the node image where I want to run gatsby locally
node:
image: wodby/node:$NODE_TAG
container_name: "${PROJECT_NAME}_node"
working_dir: /app
labels:
- "traefik.http.services.${PROJECT_NAME}_node.loadbalancer.server.port=3000"
- "traefik.http.routers.${PROJECT_NAME}_node.rule=Host(`node.${PROJECT_BASE_URL}`)"
expose:
- "3000"
volumes:
- ./gatsby_site:/app
command: sh -c 'npm install && npm run start'
volumes:
bedrock_dbdata:
external: true
I am trying to configure the above node image but have been unsuccessful so far. When I try accessing the node URL on localhost, I only get a 404 error.
I would appreciate your help.
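Note: in this stack Traefik routes requests by Host header, so visiting plain http://localhost can only return Traefik's own 404, because no router rule matches that hostname; the Gatsby container is meant to be reached at http://node.${PROJECT_BASE_URL}. Separately, gatsby develop binds to localhost:8000 inside the container by default, which is unreachable from outside the container and doesn't match the port 3000 that Traefik targets. A minimal sketch of the node service with both points addressed (assuming a standard Gatsby project; npx gatsby develop with its -H/-p flags stands in for whatever your package.json start script does):

  node:
    image: wodby/node:$NODE_TAG
    container_name: "${PROJECT_NAME}_node"
    working_dir: /app
    labels:
      - "traefik.http.services.${PROJECT_NAME}_node.loadbalancer.server.port=3000"
      - "traefik.http.routers.${PROJECT_NAME}_node.rule=Host(`node.${PROJECT_BASE_URL}`)"
    expose:
      - "3000"
    volumes:
      - ./gatsby_site:/app
    # Bind the dev server to all interfaces, on the port Traefik targets.
    command: sh -c 'npm install && npx gatsby develop -H 0.0.0.0 -p 3000'

If ${PROJECT_BASE_URL} does not end in .localhost, node.${PROJECT_BASE_URL} may also need an /etc/hosts entry pointing at 127.0.0.1 so the browser reaches Traefik at all.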

Related

Redirecting API call to a dockerized node-express server with nginx

I have a node-express server running inside a dockerized container that exposes port 3001. My nginx is installed on the OS with sudo apt install nginx. What I want is that every time a call is made to app.domain.com:3001, that call is redirected to localhost:3001. I am new to nginx configuration and would prefer to do this with a *.conf file in nginx's conf.d folder. The API response should also keep the domain .domain.com so that I can set httpOnly cookies on an Angular app running on app.domain.com.
My node docker-compose file:
version: "3"
services:
container1:
image: node:18.12-alpine
working_dir: /usr/src/app
container_name: container1
depends_on:
- container_mongodb
restart: on-failure
env_file: .env
ports:
- "$APP_PORT:$APP_PORT"
volumes:
- .:/usr/src/app
networks:
- network1
command: ./wait-for.sh container_mongodb:$MONGO_PORT -- npm run dev
container_mongodb:
image: mongo:6.0.3
container_name: container_mongodb
restart: on-failure
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
ports:
- "$MONGO_EXPOSE_PORT:$MONGO_PORT"
volumes:
- container_mongodb_data:/data/db
- ./src/config/auth/mongodb.key:/data/mongodb.key
networks:
- network1
entrypoint:
- bash
- -c
- |
cp /data/mongodb.key /data/mongodb.copy.key
chmod 400 /data/mongodb.copy.key
chown 999:999 /data/mongodb.copy.key
exec docker-entrypoint.sh $$#
command: ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/data/mongodb.copy.key"]
networks:
network1:
external: true
volumes:
container_mongodb_data:
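A minimal conf.d sketch for the redirect, under these assumptions: the compose file publishes the app only on the loopback interface and on a helper port (e.g. ports: - "127.0.0.1:13001:3001", where 13001 is an arbitrary placeholder), so that nginx itself can own the public port 3001:

server {
    listen 3001;
    server_name app.domain.com;

    location / {
        # hand the request to the container's port published on loopback
        proxy_pass http://127.0.0.1:13001;
        # keep the original Host so responses stay on app.domain.com
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The cookie scope is set by the Express app, not by nginx; something like res.cookie('token', value, { domain: '.domain.com', httpOnly: true }) in the handler would make the cookie visible to app.domain.com.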

Docker-compose confuses building a frontend with a backend

Using docker-compose I want to build 3 containers: a backend (Node.js), a frontend (React) and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My file structure:
Everything worked fine until I added a new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend's dependencies!
Obviously I have two Dockerfiles, one for each service; they are almost identical.
Do you have any idea where the problem might be?
RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
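A likely reason the full prune was needed: the backend service mounts an anonymous volume over /app/node_modules, and anonymous volumes survive docker-compose up, so stale modules (including nodemon) can keep reappearing even after the Dockerfiles change. A narrower cleanup sketch that usually suffices in this situation:
$ docker-compose down -v      # remove this stack's containers and volumes
$ docker-compose up --build   # rebuild the images before starting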

Flask App runs locally but errors out on DigitalOcean

It has to do with my docker-compose.yml file, I think. The app builds locally, but when I run "docker-compose up -d" in my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  postgres:
  redis:
My Dockerfile:
FROM python:3.7.5-slim-buster
RUN apt-get update \
&& apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird in my Dockerfile with the ENV, such that it's hard-coded to my local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting Docker images, so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml correctly to the Droplet but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
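As a sketch of that missing step, assuming the compose file lives in /root on the Droplet (the IP is a placeholder). Note that .env files are usually git-ignored, which is why they don't arrive with the rest of the code:
$ scp .env root@<droplet-ip>:/root/.env
$ ssh root@<droplet-ip> 'cd /root && docker-compose up -d'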

Build using docker-compose and run app and mongo using docker run

I have a docker-compose file in the format below:
services:
  production-api:
    build: .
    environment:
      - MONGO_URI=mongodb://mongodb:27017/productiondb
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
      # condition: service_healthy
  mongodb:
    # user: node
    image: mongo
    # volumes:
    #   - ./data:/data/db
    ports:
      - "27017:27017"
    # healthcheck:
    #   test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/productiondb --quiet 1
    #   interval: 10s
    #   timeout: 10s
    #   retries: 5
and a docker-compose.prod.yml:
version: "2.4"
services:
production-api:
container_name: "production"
command: yarn start
environment:
- NODE_ENV=production
How do I build using docker-compose -f and run both mongo and production using docker run?
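One possible flow, as a sketch: build with both files (the later file overrides the earlier one), then recreate by hand what compose's default network provides, so the app can still reach MongoDB under the name mongodb. The image tag below is an assumption; compose derives it from the project directory name, so check docker images for the real one:
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
$ docker network create app_net
$ docker run -d --name mongodb --network app_net -p 27017:27017 mongo
$ docker run -d --name production --network app_net -p 3000:3000 \
    -e NODE_ENV=production \
    -e MONGO_URI=mongodb://mongodb:27017/productiondb \
    <project-dir>_production-api yarn start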

docker-compose up didn't finish npm install.

I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started up OK and looked good, but my Node.js application wasn't installed correctly. It seems npm install didn't complete, and I had to run docker exec -it api bash to run npm i manually inside the container.
Here's my docker-compose file:
version: '2'
services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh
  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"
  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001
  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid
  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000
# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash
# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
  ln -s /usr/lib/app/node_modules ${modules_link}
fi
if [ -n "$STDIN_OPEN" ]; then
  # if we want to be interactive with our app container, it needs to run in
  # the background
  tail -f /dev/null
else
  nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest
RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
&& npm install -g mocha \
&& npm install -g nodemon \
&& rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
&& npm install -g mocha \
This will cause a non-zero exit code, and because of the && the following commands are not executed. This should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. To rebuild the image, use docker-compose build or simply docker-compose up --build. By default, docker-compose up only builds the image if it does not exist yet.
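For example, to force every layer, including the npm install, to run again:
$ docker-compose build --no-cache app
$ docker-compose up -d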
