I have the following setup in docker-compose.yml:
....
logstash:
  container_name: logstash
  image: docker.elastic.co/logstash/logstash:6.2.4
node:
  container_name: node
  build:
    context: ./node/
    dockerfile: ./Dockerfile
  depends_on:
    - logstash
....
I'm using the package winston-logstash to wire them together.
This is the transport layer:
const logstashHost = process.env.LOGSTASH_HOST || 'logstash'
const logstashPort = process.env.LOGSTASH_PORT || 5045

new (winstonLogstash.Logstash)({
  host: logstashHost,
  port: logstashPort,
  node_name: 'node',
  timestamp: true,
  max_connect_retries: 5,
  timeout_connect_retries: 1000,
})
And the pipeline configuration:
input {
  tcp {
    port => 5045
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
Using docker-compose up results in Error: Max retries reached, transport in silent mode, OFFLINE
If I delay the server start with a large setTimeout, or increase the number of connection retries, it eventually works. It also works if I start logstash first and only start the node container a while later.
The problem is that this is obviously not good practice: I can't guess how long logstash will take to start, and the depends_on directive in docker-compose.yml doesn't help at all.
I need a way to know when logstash is ready, and to start the node container only after that.
Docker Compose does not wait until a container is ready; it only waits until it is running.
depends_on will only ensure that logstash launches before your node container, but again, this doesn't mean it waits until logstash is ready.
You can either handle the checks yourself in node, or use a wrapper script. The docker-compose documentation recommends wait-for-it or dockerize.
You can read more about this here.
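For example, a minimal sketch of the wait-for-it approach; it assumes your Dockerfile copies wait-for-it.sh into the image and that your entry command is node index.js:

node:
  container_name: node
  build:
    context: ./node/
  depends_on:
    - logstash
  # Block until logstash accepts TCP connections on 5045, then start the app
  command: ["./wait-for-it.sh", "logstash:5045", "--", "node", "index.js"]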
Custom wrapper
Your node container command can change from node index.js (or whatever you have) to bash wait-for-logstash.sh:
#!/bin/bash
## Or whatever command is used for checking logstash availability
until curl 'http://logstash:5045' 2> /dev/null; do
  echo "Waiting for logstash..."
  sleep 1;
done

# Start your server
node index.js
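In the Compose file, the node service would then override its command accordingly (a sketch, assuming the Dockerfile copies the script into the working directory):

node:
  build:
    context: ./node/
  depends_on:
    - logstash
  command: ["bash", "wait-for-logstash.sh"]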
Related
We are using docker-compose to run some API tests. In the background, the API performs CRUD operations on a Cosmos DB. The test run is supposed to work without creating and using a real Cosmos DB, so I use the Cosmos DB emulator as a docker image.
version: "3.7"
services:
cosmosdb:
image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
container_name: cosmosdb
environment:
- AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
- AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
healthcheck:
test:
[
"CMD",
"curl",
"-f",
"-k",
"https://localhost:8081/_explorer/emulator.pem",
]
interval: 10s
timeout: 1s
retries: 5
ports:
- "8081:8081"
init:
build:
context: init
depends_on:
cosmosdb:
condition: service_healthy
The script even has a loop to check whether the DB is ready before writing anything. It works roughly like this:
import { CosmosClient } from '@azure/cosmos';

const client = new CosmosClient({
  endpoint: `https://cosmosdb:8081/`,
  key: 'C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==',
  connectionPolicy: { requestTimeout: 10000 },
});

async function isDbReady(): Promise<boolean> {
  try {
    await client.databases.readAll().fetchAll();
    return true;
  } catch (err) {
    console.log('database not ready', err.message);
    return false;
  }
}

async function waitForDb(): Promise<void> {
  while (!await isDbReady()) {
    await new Promise((resolve) => setTimeout(resolve, 10000));
  }
}
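The init script's entry point then gates all writes on this check; a sketch, where seedTestData is a hypothetical stand-in for whatever the real script creates and inserts:

async function main() {
  await waitForDb();    // block until the emulator answers queries
  await seedTestData(); // hypothetical: create database, collections, insert items
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});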
The problem we have is that when our script (JavaScript with @azure/cosmos) tries to create a database, a couple of collections, and then insert a couple of items, the cosmosdb sometimes (in about 20% of the test runs) just stops responding and runs into timeouts. This persists until we run docker-compose down and rerun docker-compose up for the next try.
We run a slightly modified version of the image, where we just installed curl so the healthcheck can run (the same issue happens when directly using mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator, which is why I simply added that image to the docker-compose snippet). We use the healthcheck for the Cosmos DB emulator as suggested here: How to check if the Cosmos DB emulator in a Docker container is done booting?
Defining a docker-compose volume and mounting it as /tmp/cosmos/appdata doesn't improve the situation either.
We are also not sure how to set AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE, as we actually would like to start each test run with a clean database and have our script insert the data.
How can we get the cosmosdb emulator to be more stable?
I have a microservice application that uses RabbitMQ. How can I run the RabbitMQ consumer from the application backend container only after RabbitMQ is up and running? My compose file is below.
certichain_backend:
  depends_on:
    - rabbitmq
  working_dir: /app/backend/src
  command: sh sleep 20 & nohup node /app/backend/src/services/amqp_consumer.js && npm run start;
rabbitmq:
  image: "rabbitmq:3-management"
  hostname: "rabbitmq"
  restart: always
  expose:
    - 15672
    - 5672
  labels:
    NAME: "rabbitmq"
  volumes:
    - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
I have given the backend depends_on: rabbitmq. But what I have observed is that the rabbitmq container's startup is initiated, while the backend container does not wait for the rabbitmq container to come up completely. My consumer runs along with the backend, so the consumer cannot connect to the AMQP server since it is not running at that moment. Thus I added a sleep, so that rabbitmq gets some time to come up.
This method is very inconsistent. I'm sure this is not the right way to achieve this.
In your nodejs code you can add a feature to terminate the process with exit code 1 if the rabbitmq container is unreachable.
const amqp = require('amqplib');

amqp.connect('amqp://guest:guest@rabbitmq')
  .then(function(connection) {
    console.log('Rabbitmq connection established');
    // other code here
  })
  .catch(function(error) {
    console.error('%s while dialing rabbitmq', error.message);
    process.exit(1);
  });
and in the docker-compose file you can add restart: on-failure, so that if the rabbitmq container has not started yet, the nodejs application fails to start and is restarted until the rabbitmq container is ready.
It can be worthwhile to make establishing the rabbitmq connection one of the first actions the nodejs application performs; that way, if there is no rabbitmq, nothing starts. A sketch of the Compose change follows.
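For example (service names taken from the question; only the relevant keys shown):

certichain_backend:
  depends_on:
    - rabbitmq
  restart: on-failure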
I'm trying to use Docker Compose to connect a Node.js container to a Postgres container. I can run the Node server fine, and can also connect to the Postgres container fine from my local machine as I've mapped the ports, but I'm unable to get the Node container to connect to the database.
Here's the compose YAML file:
version: "3"
services:
api_dev:
build: ./api
command: sh -c "sleep 5; npm run dev" # I added the sleep in to wait for the DB to load but it doesn't work either way with or without this sleep part
container_name: pc-api-dev
depends_on:
- dba
links:
- dba
ports:
- 8001:3001
volumes:
- ./api:/home/app/api
- /home/app/api/node_modules
working_dir: /home/app/api
restart: on-failure
dba:
container_name: dba
image: postgres
expose:
- 5432
ports:
- '5431:5432'
env_file:
- ./api/db.env
In my Node container, I'm waiting for the Node server to spin up and attempting to connect to the database in the other container like so:
const { Client } = require('pg')

const server = app.listen(app.get('port'), async () => {
  console.log('App running...');

  const client = new Client({
    user: 'db-user',
    host: 'dba', // host is set to the service name of the DB in the compose file
    database: 'db-name',
    password: 'db-pass',
    port: 5431,
  })

  try {
    await client.connect()
    console.log(client) // x - can't see this
    client.query('SELECT NOW()', (err, res) => {
      console.log(err, res) // x - can't see this
      client.end()
    })
    console.log('test') // x - can't see this
  } catch (e) {
    console.log(e) // x - also can't see this
  }
});
After reading up on it today in depth, I've seen that the DB host in the connection code above can't be localhost, as that refers to the currently running container, so it must be set to the service name of the container we're connecting to (dba in this case). I've also mapped the ports, and can see the DB is ready and accepting connections well before my Node server starts.
However, not only can I not connect to the database from Node, I'm also unable to see any success or error console logs from the try/catch. It's as if the connection never resolves and never times out, but I'm not sure.
I've also seen that listen_addresses needs to be updated so other containers can connect to the Postgres container, but I'm struggling to find out how to do this and test it, since I can't debug the actual issue due to the lack of logs.
Any direction would be appreciated, thanks.
You are setting the container name and can reference that container by it. For example:

db:
  container_name: container_db

And for host:port, use the container's internal port:

DB_URL: container_db:5432
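Applied to the question's code, the likely fix is to connect to the container's internal port 5432 rather than the host-mapped 5431; a sketch, keeping the question's placeholder credentials:

const { Client } = require('pg')

const client = new Client({
  user: 'db-user',
  host: 'dba',  // the Compose service name resolves on the shared network
  database: 'db-name',
  password: 'db-pass',
  port: 5432,   // the port inside the network, not the published 5431
})

The 5431:5432 mapping only applies to connections from the host machine; container-to-container traffic uses the original container port.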
This is part of my node app:
app.configure('development', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-development');
  app.use(express.errorHandler({ dumpExceptions: true }));
  app.set('view options', {
    pretty: true
  });
});

app.configure('test', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-test');
  app.set('view options', {
    pretty: true
  });
});

app.configure('production', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-production');
});
Edited to
app.set('db-uri', 'mongodb://mongoDB:27017/nodepad-development');
Still the same error.
I have already created a container for my node app, which runs on localhost, but I am unable to connect it to another mongo container, so I cannot do POST requests to the app.
This is my docker compose file
version: '3'
services:
  apptest:
    container_name: apptest
    restart: always
    image: ekamzf/nodeapp:1.1
    ports:
      - '8080:8080'
    depends_on:
      - mongoDB
  mongoDB:
    container_name: mongoDB
    image: mongo
    volumes:
      - ./data:/usr/src/app
    ports:
      - '27017:27017'
And the error I get when I try to register account details in my node app is
Error: Timeout POST /users
    at null._onTimeout (/usr/src/app/node_modules/connect-timeout/index.js:12:22)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
What am I missing?
Basically, how should I connect this type of code with mongodb anyway?
What's the role of 'db-uri'?
Docker Compose aliases service names to host names.
Because you named the container running the mongo image mongoDB, mongodb (host names are case-insensitive) will be the name of the host, and the name by which you should refer to it from your Node.JS container and app.
Replace localhost in the URI with mongodb.
The MongoDB database defaults to port 27017. Unless you've changed the port, you should specify this value.
Add the port to the URI so that you have mongodb:27017.
Optional but good practice: refactor the app to use environment variables rather than hard-coded values (see the sketch after this list).
This has at least 2 benefits:
a. Your code becomes more flexible;
b. Your Compose file, by then specifying these values, will be clearer.
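For instance, a minimal sketch; the variable names DB_HOST and DB_PORT are assumptions, to be supplied by the Compose file:

// Read connection settings from the environment, falling back to local defaults
const DB_HOST = process.env.DB_HOST || 'localhost';
const DB_PORT = process.env.DB_PORT || 27017;
app.set('db-uri', `mongodb://${DB_HOST}:${DB_PORT}/nodepad-development`);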
See the DockerHub documentation for the image here
See MongoDB"s documentation on connection strings here
A Google search returns many examples using Compose, MongoDB and Node.JS
Update: repro
I suspect your issue is related to the timing of the Compose containers: your app tries to connect to the DB before the DB container is ready. This is a common issue with Compose and is not solved by Compose's depends_on. Instead you must find a MongoDB (or perhaps other database) solution to this.
In my repro, index.js (see below) introduces an artificial delay before it tries to connect to the database. 5 seconds is sufficient time for the DB container to become ready, and thus this works:
docker-compose build --no-cache
docker-compose up
Then:
docker-compose logs app
Attaching to 60359441_app_1
app_1 | URL: mongodb://mongodb:27017/example
app_1 | Connected
yields Connected, which is good.
Alternatively, to prove the timing issue, you may run the containers separately (and you could remove the sleep function), ensuring the database is ready before the app:
HOST=localhost # No DNS naming
PORT=37017     # An arbitrary port to prove the point

# In one session
docker run \
  --interactive --tty --rm \
  --publish=${PORT}:27017 \
  mongo

# In another session
docker run \
  --interactive --tty --rm \
  --net=host \
  --env=HOST=${HOST} --env=PORT=${PORT} --env=DATA=example --env=WAIT=0 \
  app

URL: mongodb://localhost:37017/example
Connected
docker-compose.yaml:
version: "3"
services:
app:
image: app
build:
context: ./app
dockerfile: Dockerfile
environment:
- HOST=mongodb
- PORT=27017
- DATA=example
- WAIT=5000
volumes:
- ${PWD}/app:/app
mongodb:
image: mongo
restart: always
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_SERVER: mongodb
# ME_CONFIG_MONGODB_ADMINUSERNAME: root
# ME_CONFIG_MONGODB_ADMINPASSWORD: example
NB Because I followed your naming of mongodb, the mongo-express container must be given this host name through ME_CONFIG_MONGODB_SERVER.
NB The other environment variables shown for the mongo and mongo-express images are the defaults and thus optional.
Dockerfile:
FROM node:13.8.0-slim
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["node","index.js"]
index.js:
// Obtain config from the environment
const HOST = process.env.HOST;
const PORT = process.env.PORT;
const DATA = process.env.DATA;
const WAIT = parseInt(process.env.WAIT, 10);

// Create MongoDB client
var MongoClient = require("mongodb").MongoClient;
let url = `mongodb://${HOST}:${PORT}/${DATA}`;
console.log(`URL: ${url}`);

// Artificially delay the code
setTimeout(function() {
  MongoClient.connect(url, function(err, db) {
    if (!err) {
      console.log("Connected");
    }
  });
}, WAIT);
NB index.js takes its config (HOST, PORT, DATA) from the environment, which is good practice.
Including mongo-express provides the ability to browse the mongo server and readily observe what's going on.
I'm new to MEAN stack development and was wondering what's the ideal way to spin up a mongo + express environment.
Running bash script commands synchronously makes the mongo server stop further execution and listen for connections. What would be a local- and docker-compatible script to initiate the environment?
Many people use docker-compose for a situation like this. You can set up a docker-compose configuration file where you describe services that you would like to run. Each service defines a docker image. In your case, you could have mongodb, your express app and your angular app defined as services. Then, you can launch the whole stack with docker-compose up.
A sample docker-compose config file would look something like:
version: '2' # specify docker-compose version
# Define the services/containers to be run
services:
angular: # name of the first service
build: angular-client # specify the directory of the Dockerfile
ports:
- "4200:4200" # specify port forewarding
express: #name of the second service
build: express-server # specify the directory of the Dockerfile
ports:
- "3000:3000" #specify ports forewarding
database: # name of the third service
image: mongo # specify image to build container from
ports:
- "27017:27017" # specify port forewarding
which comes from an article here: https://scotch.io/tutorials/create-a-mean-app-with-angular-2-and-docker-compose