Setting up MongoDB in a Docker container with Certbot's certificates - Linux

How do I configure my MongoDB's SSL certificates?
I want to host my MongoDB myself. I currently have a Linode instance running; on it I've installed Certbot and had it acquire certificates for the domain I want to use for my database.
I'm using this docker-compose.yml file to deploy the MongoDB container:
version: '2'
services:
  mongo:
    image: mongo:latest
    volumes:
      - ./db-data:/data/db
      - ./mongo-config:/data/config
      - ./certs:/data/certs
    ports:
      - "0.0.0.0:27017:27017"
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=exampleuser
      - MONGO_INITDB_ROOT_PASSWORD=examplepassword
    command: --config=/data/config/mongo.conf
And before someone mentions using nginx streams: Certbot doesn't support them (see issue).
And my mongo config, found at ./mongo-config/mongo.conf:
net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: # This is where I need help; it would be in /data/certs from the container's perspective
Every guide mentions copying Certbot's files into a Docker volume, so I set up ./certs to mount into /data/certs and copied them there. I've tried every combination of the files Certbot creates in the PEMKeyFile and CAFile fields and nothing works. I get this error every time:
error:0909006C:PEM routines:get_name:no start line
Guides I've already tried:
How to: Configure SSL For MongoDB
Configure mongod and mongos for TLS/SSL
Certbot User Guide
Setup Mongo 3.6 TLS/SSL with Letsencrypt | this one mentioned that I have to download a certificate that expires TODAY (Sept 30, 2021)
Securing MongoDB with TLS, Authentication and LetsEncrypt
Related: LetsEncrypt SSL Certificate Validation Failed with MongoDB

This is working for me:
docker run -it --name data.domain.com --network docker_network -v /path/to/ssl/certs:/ssl:ro -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=pass mongo --tlsMode requireTLS --tlsCertificateKeyFile /ssl/fullchain-key.pem --tlsCAFile /etc/ssl/certs/ISRG_Root_X1.pem
Important things:
the name of the docker container has to match the certificate's domain
do 'cat fullchain.pem privkey.pem > fullchain-key.pem' to create the combined key file
You have to adapt this to docker-compose; a sketch follows. Try this out and if you need more help, let me know.
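If it helps, here is a rough docker-compose equivalent of that command. This is an untested sketch: the network name, certificate path, and credentials are copied from the run command above, and it assumes fullchain-key.pem was created with the cat command mentioned.
version: '2'
services:
  mongo:
    image: mongo:latest
    container_name: data.domain.com   # must match the certificate's domain
    networks:
      - docker_network
    volumes:
      - /path/to/ssl/certs:/ssl:ro    # holds the combined fullchain-key.pem
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=pass
    command: --tlsMode requireTLS --tlsCertificateKeyFile /ssl/fullchain-key.pem --tlsCAFile /etc/ssl/certs/ISRG_Root_X1.pem
networks:
  docker_network:
    external: true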

Related

I can't connect a nodejs app to a redis server using docker

Good morning guys.
I'm having a problem connecting a Node.js application, running in a container, to another container that contains a Redis server. On my local machine I can connect the application to this Redis container without any problem. However, when I try to run the application in a container, a timeout error is returned.
I'm new to Docker and I don't understand why I can connect to this Redis container from the application running locally on my machine, but that same connection doesn't work when I run the application in a container.
I tried using docker-compose, but from what I understand it will bring up the Redis server in another container, instead of using the Redis container that is already running in Docker.
To connect to redis I'm using the following code:
createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  }
});
Where REDIS_HOST is the address of my container running on the server and REDIS_PORT is the port where this container is running on my server.
To run redis on docker I used the following guide: https://redis.io/docs/stack/get-started/install/docker/
I apologize if my problem was not very clear; I'm still studying Docker.
You mentioned you are using Docker Compose. Here's an example showing how to start Redis in a container, have your Node application wait for that container, and then use an environment variable in your Node application to specify the name of the host to connect to Redis on. In this example it connects to the container running Redis that I've called "redis":
version: "3.9"
services:
redis:
container_name: redis_kaboom
image: "redislabs/redismod"
ports:
- 6379:6379
volumes:
- ./redisdata:/data
entrypoint:
redis-server
--loadmodule /usr/lib/redis/modules/rejson.so
--appendonly yes
deploy:
replicas: 1
restart_policy:
condition: on-failure
node:
container_name: node_kaboom
build: .
volumes:
- .:/app
- /app/node_modules
command: sh -c "npm run load && npm run dev"
depends_on:
- redis
ports:
- 8080:8080
environment:
- REDIS_HOST=redis
So in your Node code you'd then use the value of process.env.REDIS_HOST to connect to the right Redis host. Here I'm not using a password or a non-standard port; you could also supply those as environment variables that match the configuration of the Redis container in Docker Compose if you needed to.
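With that compose file, the connection code from the question can stay almost unchanged and just pick up the service name. A minimal sketch, assuming node-redis v4 (the 6379 fallback is my addition, matching Redis's default port):
const { createClient } = require('redis');

// REDIS_HOST is set to "redis" by the compose file above;
// fall back to the default Redis port if REDIS_PORT is unset
const client = createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT) || 6379
  }
});

client.connect().catch(console.error);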
Disclosure: I work for Redis.

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a Docker container running for neo4j and one for express.js. I can't get the containers to communicate with each other.
I can run neo4j desktop, start a nodemon server and they will communicate.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // for the image; for local it's bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbUserName)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can remote into the neo4j image through http://localhost:7474/browser/, so it's running.
I cannot use the server image to call a local neo4j instance.
When I call the APIs in the server image I get these errors:
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that both containers sat on separate networks and I could not find a way to let the server reach the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: this is indentation-sensitive!!
docker-compose.yml
version: '3.7'
networks:
  lan:

# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to the port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else), while still using change-detection rebuilds for development, and even build images for multiple environments at the same time. NEAT
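One detail worth making explicit: once both services share the compose network, the Express environment file should point the driver at the service name rather than localhost. A sketch, reusing the variable names from the question and the neo service name from the file above (bolt:// being the protocol the question used for direct local connections):
# .env.express (sketch)
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="bolt://neo:7687"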

Docker Express Node.js app container not connecting with MongoDB container giving error TransientTransactionError

I am facing a weird dilemma. I have created a Node application, and this application needs to connect to MongoDB (running in a Docker container). I created a docker-compose file as follows:
version: "3"
services:
mongo:
image: mongo
expose:
- 27017
volumes:
- ./data/db:/data/db
my-node:
image: "<MY_IMAGE_PATH>:latest"
deploy:
replicas: 1
restart_policy:
condition: on-failure
working_dir: /opt/app
ports:
- "2000:2000"
volumes:
- ./mod:/usr/app
networks:
- webnet
command: "node app.js"
networks:
webnet:
I am using the official mongo image. I have omitted my own Docker image from the above configuration. I have tried many configurations, but I am unable to connect to MongoDB (yes, I have changed the MongoDB URI inside the Node.js application too). Whenever I deploy my docker-compose file, my application always gives me a MongoNetworkError of TransientTransactionError on startup. I have been unable to find the problem for many hours.
One more weird thing: when running my docker-compose file I receive the following logs:
Creating network server_default
Creating network server_webnet
Creating service server_mongo
Creating service server_feed-grabber
Could it be that the two services are in different networks? If so, how do I fix that?
Other Info:
The MongoDB URI that I tried in the Node.js application is:
mongodb://mongo:27017/MyDB
I am running my docker-compose with the command: docker stack deploy -c docker-compose.yml server
My Node.js image is based on Ubuntu 18.
Can anyone help me with this?
OK, so I tried a few things and figured it out at last after spending many, many hours. There were two things I was doing wrong, and they were hitting me at the last point:
First, the startup logging showed that Docker was creating two networks, server_default and server_webnet; that was the first mistake. Both containers should be in the same network to work together.
Second, I needed the Mongo container to start first, as my Node.js application depends on the Mongo container being up. This is exactly what I did in my docker-compose configuration by introducing the depends_on property.
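The answer doesn't show the final file, but applying both fixes to the compose file from the question would look roughly like this sketch (not the author's exact configuration):
version: "3"
services:
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
    networks:
      - webnet            # fix 1: put mongo on the same network as the app
  my-node:
    image: "<MY_IMAGE_PATH>:latest"
    working_dir: /opt/app
    ports:
      - "2000:2000"
    volumes:
      - ./mod:/usr/app
    networks:
      - webnet
    depends_on:
      - mongo             # fix 2: start the mongo container first
    command: "node app.js"
networks:
  webnet: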
For me it was:
1. Get your IP by running the command:
docker-machine ip
2. Don't go to localhost:port; go to your IP:port instead, for example: http://192.168.99.100:8080

Alpine Linux docker set hostname

I'm using lwieske/java-8:server-jre-8u121-slim with Alpine Linux
I'd like to set hostname from a text file to be seen globally (for all shells)
/ # env
HOSTNAME=2fa4a43a975c
/ # cat /etc/afile
something
/ # hostname -F /etc/afile
hostname: sethostname: Operation not permitted
Everything is running as a service in Swarm.
I want every node to have a unique hostname based on the container ID.
You can provide the --hostname flag to docker run as well:
docker run -d --net mynet --ip 162.18.1.1 --hostname mynodename
As a workaround, you can use docker-compose to assign the hostnames for multiple containers.
Here is the example docker-compose.yml:
version: '3'
services:
  ubuntu01:
    image: ubuntu
    hostname: ubuntu01
  ubuntu02:
    image: ubuntu
    hostname: ubuntu02
  ubuntu03:
    image: ubuntu
    hostname: ubuntu03
  ubuntu04:
    image: ubuntu
    hostname: ubuntu04
To make it dynamic, you can generate docker-compose.yml from a script.
Then run with: docker-compose up.
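As an illustration, a minimal shell sketch of such a generator (the service names and count are arbitrary):
#!/bin/sh
# emit a compose file with one uniquely-named ubuntu service per suffix
{
  echo "version: '3'"
  echo "services:"
  for i in 01 02 03 04; do
    echo "  ubuntu$i:"
    echo "    image: ubuntu"
    echo "    hostname: ubuntu$i"
  done
} > docker-compose.yml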
docker service create has a --hostname parameter that allows you to specify the hostname. On a more personal note: if you connect to one of your services, any other service on the same network will be pingable and accessible using the service name, with the added benefit of allowing you multiple replicas without worrying about what those will be named.
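Since the question asks for a hostname that is unique per container, note that --hostname accepts Go templates, so each replica can name itself; for example (service name and image are placeholders):
# each replica gets a hostname like web-1, web-2, ...
docker service create --name web --hostname "{{.Service.Name}}-{{.Task.Slot}}" nginx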
Better late than never; I found this question trying to do the same thing myself.
The answer is to give the Docker container the SYS_ADMIN capability, and 'hostname -F' will then set the hostname properly.
docker-compose:
cap_add:
  - SYS_ADMIN
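In context, that fragment sits under a service definition; a sketch using the image from the question:
services:
  app:
    image: lwieske/java-8:server-jre-8u121-slim
    cap_add:
      - SYS_ADMIN   # lets 'hostname -F /etc/afile' succeed inside the container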

How to connect nodeJS docker container to mongoDB

I have problems connecting a Node.js application, which is running as a Docker container, to MongoDB. Let me explain what I have done so far:
$ docker ps
CONTAINER ID   IMAGE       COMMAND                   CREATED       STATUS       PORTS       NAMES
3a3732cc1d90   mongo:3.4   "docker-entrypoint..."   3 weeks ago   Up 3 weeks   27017/tcp   mongo_live
As you can see, there is already a mongo docker container running.
Now I'm running my Node.js application's Docker container (which is built from MeteorJS):
$ docker run -it 0b422defbd59 /bin/bash
In this docker container I want to run the application by running:
$ node main.js
Now I'm getting the error
Error: MONGO_URL must be set in environment
I already tried to set MONGO_URL by setting:
ENV MONGO_URL mongodb://mongo_live:27017/
But this doesn't work:
MongoError: failed to connect to server [mongo_live:27017] on first connect
So my question is how to connect to a DB, which is - as far as I understand - 'outside' of the running container. Alternatively, how do I set up a new DB in this container?
There are a couple of ways to do it.
Run your app in the same network as your mongodb:
docker run --net container:mongo_live your_app_docker_image
# then you can use mongodb in your localhost
$ ENV MONGO_URL mongodb://localhost:27017/
Also you can link two containers:
docker run --link mongo_live:mongo_live you_app_image ..
# Now mongodb is accessible via mongo_live
Use the mongodb container's IP address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' mongo_live
# you will get you container ip here
$ docker run -it 0b422defbd59 /bin/bash
# ENV MONGO_URL mongodb://[ip from previous command]:27017/
You can bind your mongodb port to your host and use the host's hostname in your app
You can use docker network and run both apps in the same network
You could pass --add-host mongo_live:<ip of mongo container> to docker run for your application and then use mongo_live for the mongodb URL (see the sketch below)
You can also use docker compose to make your life easier ;)
...
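As a sketch of the --add-host suggestion (the IP address here is purely illustrative; get the real one with the docker inspect command shown above):
docker run --add-host mongo_live:172.17.0.2 you_app_image
# inside the container, mongo_live now resolves:
# ENV MONGO_URL mongodb://mongo_live:27017/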
When you run containers, each container works in an independent network, so one container can't connect to another point-to-point.
There are 3 ways to connect containers:
Have a little fuss with low-level Docker network magic
Connect containers through localhost. Each container must expose ports on localhost (as your mongo_live does), but you need to add 127.0.0.1 mongo_live to the hosts file on localhost. (This is the simplest way.)
Use docker-compose. It is a convenient tool for working with many containers together. (This is the right way.)
Adding MongoDB to the application container is not the Docker way.
Please use the snippet below for your docker-compose.yml file, replacing the comments with your actual values. It should solve your problem.
version: '2'
services:
  db:
    build: <image for mongoDB>
    ports:
      - "27017:27017" # whatever port u r using
    environment:
      # you can specify mongo db username and stuff here
    volumes:
      - # load default config for mongodb from here
      - "db-data-store:/data/db" # path depends on which image you use
    networks:
      - network
  nodejs:
    build: # image for node js
    expose:
      - # mention port for nodejs
    volumes:
      - # mount project code on container
    networks:
      - network
    depends_on:
      - db
networks:
  network:
    driver: bridge
Please use the below links for reference:
1) NodeJs Docker
2) MongoDb docker
3) docker-compose tutorial
Best of Luck
I had a problem connecting my server.js to MongoDB, and this is how I solved it. Hope you find it useful.
Tap For My Screenshot
