I'm working on an ecommerce site and I want the ability to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I realized that since we use Docker for our deployment, the directory in which I save the pictures won't persist. From what I've read, it seems I should use volumes and map that directory in docker-compose. I'm a complete novice at backend development (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it inside the container at the path you want. See the top-level volumes key at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it will see the data stored in it. I would suggest using the same approach for mysql instead of saving the data on the host.
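For mysql that could look something like the following sketch (the mysql-data volume name is just an illustration, not from your project):
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  static-files: {}
  mysql-data: {}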
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
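As for referring to the path in your NestJS backend: since the project root is mounted at /home/node/app, ../static/images relative to the project root corresponds to /home/node/static/images inside the container, which is the mount target used above. Below is a minimal sketch of an upload endpoint, assuming you use the standard FileInterceptor from @nestjs/platform-express (multer); the controller, the photo field name, and the IMAGE_DIR env variable are placeholders, not taken from your code.

// products.controller.ts -- hypothetical sketch
import { Controller, Post, UploadedFile, UseInterceptors } from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';
import { diskStorage } from 'multer';
import { extname } from 'path';

// Container-side path of the mounted volume (matches static-files:/home/node/static/images).
const IMAGE_DIR = process.env.IMAGE_DIR || '/home/node/static/images';

@Controller('products')
export class ProductsController {
  @Post('photo')
  @UseInterceptors(
    FileInterceptor('photo', {
      storage: diskStorage({
        destination: IMAGE_DIR,
        // Give each upload a unique name and keep the original extension.
        filename: (_req, file, cb) => cb(null, `${Date.now()}${extname(file.originalname)}`),
      }),
    }),
  )
  uploadPhoto(@UploadedFile() file: Express.Multer.File) {
    // file.path points inside the mounted volume, so it survives container restarts.
    return { filename: file.filename };
  }
}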
Related
After some months of development I got to the point where it is better to dockerize my MERN application. I managed to create a .yaml file and everything works OK, but the problem is that I already have a large amount of collected data. I want to be able to mount this data into the container, but I don't know how. I've read a lot of material, but my data still does not appear after composing the application. Here is what my docker-compose.yaml file looks like:
version: '3.9'
services:
  #MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - /mnt/c/temp/mongo/db:/data/db
  #Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
As you can see in these lines:
volumes:
  - /mnt/c/temp/mongo/db:/data/db
I am trying to point to the path on my C:\ drive, but this doesn't work. I also tried the same line under:
volumes:
  mongo_db:
(at the bottom of the file), but again without success. Basically, my existing DB is at
C:\data\db
How can I point to this as the data source for the MongoDB service?
First, you need to create a dump from your local MongoDB and restore those files into the dockerized MongoDB. You can use these commands:
mongodump --uri 'mongodb://localhost:27017/yourdatabase' --archive=<your file> --gzip
mongorestore --uri 'mongodb://remotehost:27017/yourdatabase' --archive=<your file> --gzip
You should be able to reach the dockerized MongoDB from your local host; with the port mapping in your compose file (2717:27017), the restore URI would point at port 2717 on localhost.
Note: refer to this answer if you don't get it working.
You can change the path you are mounting to make the data persistent. Create a new folder, C:/data/docker_mongo, and point the volume at it:
version: '3.9'
services:
  #MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - C:/data/docker_mongo:/data/db
  #Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
Normally I'd have an instance of neo4j running in Docker, and then in a script I access the driver like so:
self.driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("username", "password"))
I'm now putting this script itself into a Docker container, but I now get the error message:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
What URI (or other parameter) needs changing to access a neo4j Docker instance from another Docker container?
Within my docker-compose.yml, I have:
version: '3'
services:
  neo4j:
    container_name: neo4j
    image: neo4j:3.5
    restart: always
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=neo4j/start
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
  appgui:
    container_name: appgui
    image: python:3.7.3-slim
    build:
      context: ./APPGUI/
    volumes:
      - ./APPGUI/:/usr/src/app/
    restart: always
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 80:80
    depends_on:
      - neo4j
I also can't access my web app (http://localhost:5000)
Your service can't connect to Neo4j on localhost because it is running inside a Docker container, where localhost points to the container itself rather than your local machine.
In this case, it is best to run both containers with docker-compose and set the depends_on option on the other container. Here is an example docker-compose.yml file from my project:
version: '3.7'
services:
  neo4j:
    image: neo4j:4.1.2
    restart: always
    hostname: neo4jngs
    container_name: neo4jngs
    ports:
      - 7474:7474
      - 7687:7687
  api:
    build:
      context: ./API
    hostname: api
    restart: always
    container_name: api
    ports:
      - 3000:3000
    depends_on:
      - neo4j
As you can see, the api container is a service that will connect to Neo4j. Now you can change the driver settings to:
self.driver = GraphDatabase.driver(uri="bolt://neo4j:7687", auth=("username", "password"))
And you are good to go.
I solved it and it was actually a dumb mistake, but one that could happen to others I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
However, Tomaž Bratanič provided me with some helpful tips that got me to a resolution.
I've created two basic MEAN stack apps with a common database (MongoDB). I've also built Docker setups for these apps.
Problem:
When I start a MEAN stack container (example-app-1) using
docker-compose up -d --build
The container runs smoothly and I'm also able to hit the container and view my page locally.
When I try to start another MEAN stack container (example-app-2) using
docker-compose up -d --build
my previous container is stopped and the current container works without any flaw.
Required:
I want both these containers to run simultaneously using a shared database. I need help in achieving this.
docker-compose.yml for example app 1
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for example app 2
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose creates containers named project_service. The project name, by default, comes from the last component of the directory. So when you start the second docker-compose, it stops the containers with those names and starts new ones.
Either move these two docker-compose files into separate directories so the container names are different, or run docker-compose with the --project-name flag set to separate project names so the containers are started with different names.
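For example (the project names below are just illustrative):
docker-compose --project-name example_app_1 up -d --build
docker-compose --project-name example_app_2 up -d --build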
I have 2 Node.js containers, each running a different app, and I want to connect them to the same MongoDB.
Each one works well alone; together, it won't let me have multiple connections to the same container.
version: '3'
services:
  app-dpp:
    container_name: preProcessing-Docker
    restart: always
    build:
      context: "."
      dockerfile: "/preProcessing-Docker/Dockerfile-dpp"
    ports:
      - '80:3000'
    links:
      - mongo
  app-df:
    container_name: dataFusion-Docker
    restart: always
    build:
      context: "."
      dockerfile: "dataFusion-docker/Dockerfile-df"
    ports:
      - '80:3000'
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
    volumes:
      - data-volume:/data/db
volumes:
  data-volume:
I have been getting this error:
"ERROR: for app-dpp Cannot create container for service app-dpp: Conflict. The container name "/preProcessing-Docker" is already in use by container "cc01c6f8189a50f95438309860dcb959232c7da4606054ec9d79b9340a532398". You have to remove (or rename) that container to be able to reuse that name."
I would like to have the same DB, since it would be better to share it between the two modules than to have two separate DBs.
You already have a container named preProcessing-Docker. Because Docker does not allow multiple containers with the same name, it won't work.
Run docker container ls -a to list all of your containers, delete the one that exists with that name, then run docker-compose up again and it should work.
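For example, something along these lines, using the container name from your error message (add -f to docker rm if the old container is still running):
docker container ls -a
docker rm preProcessing-Docker
docker-compose up -d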
Hope it helps.
I am running an application with multiple containers, as below:
feeder - a simple nodejs container from the image node:alpine
api - a nodejs container with expressjs from the image node:alpine
ui-app - a react app container from the image node:alpine
I am trying to call the api service from ui-app and I am getting the error below:
[screenshot: browser console log showing the error]
Not sure what is causing the problem.
If I access the service as http://192.168.99.100/ping it works (that is my docker-machine default IP), but if I use the container name like http://api:3200/ping it is not working. Please help.
Below is my docker-compose file:
version: '3'
services:
  feeder:
    build: ./feeder
    container_name: feeder
    tty: true
    depends_on:
      - redis
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3100:3100"
    networks:
      - hmdanet
  api:
    build: ./api
    container_name: api
    tty: true
    depends_on:
      - feeder
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3200:3200"
    networks:
      hmdanet:
        aliases:
          - "hmda-api"
  ui-app:
    build: ./ui-app
    container_name: ui-app
    tty: true
    depends_on:
      - api
    links:
      - api
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3000:3000"
    networks:
      - hmdanet
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    networks:
      - hmdanet
networks:
  hmdanet:
    driver: bridge
You can only use a service name as a hostname when you are inside a container on the same Docker network. In your case it's the browser making the call, and the browser does not know what api is. In your web app, you should have an env variable, like a base URL, set to the IP of your docker machine or to localhost.
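A minimal sketch of what that could look like in the React app; REACT_APP_API_BASE_URL is a hypothetical env variable (with create-react-app it is baked in at build time), and the fallback assumes the host-published port 3200 from the compose file:

// api.ts -- hypothetical helper, not from the original project
const baseUrl = process.env.REACT_APP_API_BASE_URL || 'http://localhost:3200';

export async function ping(): Promise<string> {
  // From the browser, use the host-published port (e.g. http://192.168.99.100:3200
  // when running under docker-machine), never the container name.
  const res = await fetch(`${baseUrl}/ping`);
  return res.text();
}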