Azure Cosmos DB emulator unstable when using it for API tests

We are using docker-compose to run some API tests. In the background, the API performs CRUD operations on a Cosmos DB. The test run is supposed to run without creating and using a real Cosmos DB, so I use the Cosmos DB emulator as a docker image.
version: "3.7"
services:
  cosmosdb:
    image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
    container_name: cosmosdb
    environment:
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-f",
          "-k",
          "https://localhost:8081/_explorer/emulator.pem",
        ]
      interval: 10s
      timeout: 1s
      retries: 5
    ports:
      - "8081:8081"
  init:
    build:
      context: init
    depends_on:
      cosmosdb:
        condition: service_healthy
The script even has a loop to check whether the db is ready before writing anything. It works roughly like this:
const client = new CosmosClient({
  endpoint: `https://cosmosdb:8081/`,
  key: 'C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==',
  connectionPolicy: { requestTimeout: 10000 },
});

async function isDbReady(): Promise<boolean> {
  try {
    await client.databases.readAll().fetchAll();
    return true;
  } catch (err) {
    console.log('database not ready', err.message);
    return false;
  }
}

async function waitForDb(): Promise<void> {
  while (!(await isDbReady())) {
    await new Promise((resolve) => setTimeout(resolve, 10000));
  }
}
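One aside on the loop above: it retries forever, so a wedged emulator hangs the whole run. For CI it may help to cap the wait with an overall deadline so the run fails fast with a clear error. A minimal sketch (the `waitFor` helper and its option names are ours, not part of @azure/cosmos):

```javascript
// Generic readiness poller: retries `check` until it returns true or the
// overall deadline passes. Resolves true on success, false on timeout.
async function waitFor(check, { intervalMs = 10000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```

Used as `if (!(await waitFor(isDbReady, { timeoutMs: 180000 }))) throw new Error('emulator never became ready');`, the test run aborts with an explicit error instead of timing out somewhere downstream.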
The problem we have is that when our script (JavaScript with @azure/cosmos) tries to create a database, a couple of collections, and then insert a couple of items, the Cosmos DB emulator sometimes (in about 20% of the test runs) just stops responding and runs into timeouts. This persists until we run docker-compose down and rerun docker-compose up for the next try.
We run a slightly modified version of the image, where we just installed curl so the healthcheck can run (the same issue happens when directly using mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator, which is why I simply put that image in the docker-compose snippet). We use the healthcheck for the Cosmos DB emulator as suggested here: How to check if the Cosmos DB emulator in a Docker container is done booting?
Defining a docker-compose volume and mounting it as /tmp/cosmos/appdata did not improve the situation either.
We are also unsure how to set AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE, as we actually want to start each test run with a clean database and have our script insert the data.
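For what it's worth, even with persistence enabled, one way to start each run clean (a sketch we have not battle-tested; the `resetDatabase` name and the 'apitest' id are ours) is to drop and recreate the test database at the start of each run with @azure/cosmos:

```javascript
// Drop the test database if a previous run left it behind, then recreate it.
// A 404 on delete is ignored so a fresh emulator (nothing to delete) is fine.
async function resetDatabase(client, id) {
  try {
    await client.database(id).delete();
  } catch (err) {
    if (err.code !== 404) throw err;
  }
  const { database } = await client.databases.createIfNotExists({ id });
  return database;
}
```

e.g. `const db = await resetDatabase(client, 'apitest');` before creating the containers and inserting test data.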
How can we get the cosmosdb emulator to be more stable?

Related

Using Docker Compose to connect separate Postgres and Node containers together

I'm trying to use Docker Compose to connect a Node.js container to a Postgres container. I can run the Node server fine, and can also connect to the Postgres container fine from my local machine, as I've mapped the ports, but I'm unable to get the Node container to connect to the database.
Here's the compose YAML file:
version: "3"
services:
  api_dev:
    build: ./api
    command: sh -c "sleep 5; npm run dev" # I added the sleep in to wait for the DB to load but it doesn't work either way with or without this sleep part
    container_name: pc-api-dev
    depends_on:
      - dba
    links:
      - dba
    ports:
      - 8001:3001
    volumes:
      - ./api:/home/app/api
      - /home/app/api/node_modules
    working_dir: /home/app/api
    restart: on-failure
  dba:
    container_name: dba
    image: postgres
    expose:
      - 5432
    ports:
      - '5431:5432'
    env_file:
      - ./api/db.env
In my Node container, I'm waiting for the Node server to spin up and attempting to connect to the database in the other container like so:
const { Client } = require('pg')

const server = app.listen(app.get('port'), async () => {
  console.log('App running...');

  const client = new Client({
    user: 'db-user',
    host: 'dba', // host is set to the service name of the DB in the compose file
    database: 'db-name',
    password: 'db-pass',
    port: 5431,
  })

  try {
    await client.connect()
    console.log(client) // x - can't see this
    client.query('SELECT NOW()', (err, res) => {
      console.log(err, res) // x - can't see this
      client.end()
    })
    console.log('test') // x - can't see this
  } catch (e) {
    console.log(e) // x - also can't see this
  }
});
After reading up on it today in depth, I've seen that the DB host in the connection code above can't be localhost, as that refers to the container which is currently running, so it must be set to the service name of the container we're connecting to (dba in this case). I've also mapped the ports, and I can see the DB is ready and accepting connections well before my Node server starts.
However, not only can I not connect to the database from Node, I'm also unable to see any success or error console logs from the try/catch. It's as if the connection is not resolving and never times out, but I'm not sure.
I've also seen that "listen_addresses" needs to be updated so other containers can connect to the Postgres container, but I'm struggling to find out how to do this and test it, since I can't debug the actual issue due to the lack of logs.
Any direction would be appreciated, thanks.
You are setting the container name, and you can reference that container by it. For example,
  db:
    container_name: container_db
And host:port
  DB_URL: container_db:5432
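Concretely for the question above, the port matters too: inside the Compose network the client must use the container port, not the host-mapped one, because the 5431:5432 mapping only exists on the host. A sketch of the corrected connection settings (values copied from the question):

```javascript
// Connection settings as seen from *inside* the Compose network:
// the service name resolves via Compose's internal DNS, and the port is
// the container port (5432), not the published host port (5431).
const dbConfig = {
  user: 'db-user',
  host: 'dba',   // Compose service name, not 'localhost'
  database: 'db-name',
  password: 'db-pass',
  port: 5432,    // container port; 5431 only works from the host machine
};
```

Passing this object to `new Client(dbConfig)` should let the Node container reach the database.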

Cannot connect Node to mongoDB in docker container on first try

I'm trying to dockerize Node.js application which connects to MongoDB using mongoose. It succeeds anytime I run node index.js from the shell when the connection URL is: mongodb://localhost:27017/deposit.
If I restart my computer and then try to run the dockerized project (with mongo instead of localhost in url) with the command docker-compose up it fails to connect to MongoDB. But after I try again the same command, then it succeeds.
So my question is why node cannot connect to MongoDB on first try after the computer is restarted?
PS. Docker is running when I'm trying it
connection.js
const mongoose = require('mongoose');

const connection = "mongodb://mongo:27017/deposit";

const connectDb = () => {
  mongoose.connect(connection, { useNewUrlParser: true, useUnifiedTopology: true })
    .then(res => console.log("Connected to DB"))
    .catch(err => console.log('>> Failed to connect to MongoDB, retrying...'));
};

module.exports = connectDb;
Dockerfile
FROM node:latest
RUN mkdir -p /app
WORKDIR /app
#/usr/src/app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 7500
# ENTRYPOINT ["node"]
CMD ["node", "src/index.js"]
docker-compose.yml
version: "3"
services:
  deposit:
    container_name: deposit
    image: test/deposit
    restart: always
    build: .
    network_mode: host
    ports:
      - "7500:7500"
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - /data:/data/db
    network_mode: host
    ports:
      - '27017:27017'
In your case, the node application starts before mongo is ready. There are two approaches to tackle this problem: handle it in your docker-compose setup, or in your application.
You can use wait-for-it.sh or write a wrapper script (both described here) to make sure that your node application starts after the db is ready.
But as quoted from docker documentation, it is better to handle this in your application:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code,
both at startup and whenever a connection is lost for any reason
You can implement mongo retry as below (Described in this answer):
var connectWithRetry = function() {
  return mongoose.connect(mongoUrl, function(err) {
    if (err) {
      console.error('Failed to connect to mongo on startup - retrying in 5 sec', err);
      setTimeout(connectWithRetry, 5000);
    }
  });
};
connectWithRetry();
connectWithRetry();
As mentioned in the comment, it is possible that the DB is not yet ready to accept connections, so one option is to add retry logic; the other option is to raise serverSelectionTimeoutMS:
With useUnifiedTopology, the MongoDB driver will try to find a server
to send any given operation to, and keep retrying for
serverSelectionTimeoutMS milliseconds. If not set, the MongoDB driver
defaults to using 30000 (30 seconds).
So try with below option
const mongoose = require('mongoose');

const uri = 'mongodb://mongo:27017/deposit?retryWrites=true&w=majority';

mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  serverSelectionTimeoutMS: 50000
}).catch(err => console.log(err.reason));
But again, if your DB init script grows bigger it will take more time, so fall back to retry logic if this does not work. With the script above, the driver will wait for up to 50 seconds.
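If the init script outgrows that window, the two approaches combine well: keep a modest serverSelectionTimeoutMS and wrap the connect call in a retry loop. A generic sketch (the `retry` helper and its defaults are ours, not part of mongoose):

```javascript
// Retry an async operation: call `fn` until it resolves, or rethrow the
// last error once the attempts are used up.
async function retry(fn, { attempts = 10, delayMs = 5000 } = {}) {
  for (let i = 1; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts) throw err;
      console.log(`attempt ${i} failed (${err.message}), retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

e.g. `await retry(() => mongoose.connect(uri, { serverSelectionTimeoutMS: 5000 }))` retries the whole connection rather than relying on one long driver timeout.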

How can I spin up multiple node js services inside of a single docker compose?

So what I want is to have 2 docker containers running (MongoDB and Postgres), and both of these are working. Now I want to create 2 services (Node.js instances) in separate containers that will connect to, query, and exercise their database. Each service should only access a single database.
To show my docker-compose.yml
services:
  # mongodb set up
  web:
    image: node
    build: ./mongodb_service
    ports:
      - 4000:4000
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    expose:
      - '27017'
  web_postgres:
    image: node
    build: ./postgres_service
    ports:
      - "3000:3000"
    depends_on:
      - postgresdb
  # postgres db definition; the env vars should be put in a properties file and passed to it somehow
  postgresdb:
    image: postgres
    # the health check confirms that the db is ready to accept connections and not just the container
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - PGUSER=user
      - PGPASSWORD=1234
      - PGPORT=5432
      - PGDATABASE=customerdirectory
    # Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fit your needs.
    ports:
      - "54320:5432"
Currently what is breaking is my web_postgres service. When I run docker-compose up, I notice that the console.log() output from my web service ends up in web_postgres (see the "web_postgres error" screenshot). At the top of the image, my web_postgres_1 service is printing output from the web service.
Is my docker-compose.yml set up correctly to have these containers run independently?
If some piece is missing, please let me know and I will upload it.
Thanks a lot for all your help.
This is the server.js file from my web_postgres service.
const express = require("express");
const { Pool, Client } = require('pg')

const app = express()

// pools will use environment variables
// for connection information
const pool = new Pool({
  user: 'user',
  host: 'localhost',
  database: 'customerdirectory',
  password: '1234',
  port: 5432,
})

pool.query('SELECT NOW()', (err, res) => {
  console.log(err, res)
  pool.end()
})

app.get("/", (req, res) => {
  res.send("Hello from Node.js app \n");
});

app.listen('3000', () => {
  console.log("I hate this damn thingy sometimes")
})
This is the web service server.js code that gets printed
const express = require("express");
var MongoClient = require('mongodb').MongoClient;

const app = express();

MongoClient.connect("mongodb://mongodb:27017", function(err, db) {
  if (err) throw err;
  db.close()
})

app.get("/", (req, res) => {
  res.send("Hello from Node.js app \n");
});

app.listen('4000', () => {
  console.log("I hate this damn thingy sometimes")
})
You need to remove the image: from your docker-compose.yml file. When you specify that both containers have the same image:, Docker Compose takes you at face value and uses the same image for both, even if there are separate build: instructions.
The documentation for image: notes:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
So what happens here is that Compose builds the first service's image, and tags it as node:latest (potentially conflicting with a standard Docker Hub image with this same name); and when the second container starts, it sees that image already exists, so it reuses it.
You can demonstrate this with a simple setup. Create a new empty directory with subdirectories a and b. Create a/Dockerfile:
FROM busybox
WORKDIR /app
RUN echo Hello from a > hello.txt
EXPOSE 80
CMD ["httpd", "-f"]
Similarly create b/Dockerfile with a different string.
Now create this docker-compose.yml file:
version: '3'
services:
a:
# image: image
build: a
ports: ['8000:80']
b:
# image: image
build: b
ports: ['8001:80']
You can start this as-is, and curl http://localhost:8000/hello.txt and similarly for port 8001 to see the two different responses. But, if you uncomment the two image: lines, they'll both return the same string. There are also some clues in the Docker Compose output where one path builds both images but the other only builds one, and there's a warning about the named image not existing when you specify the image names.

docker stack with mssql and node, how to properly connect to the db?

I'm trying to build a stack with two containers as a first step: one with the app, one with a MS SQL server. With no stack, a single container with the SQL server plus the app running locally works fine, but I can't figure out the proper way to make the containerised app connect to the DB.
My stack file is as follows :
version: "3.4"
services:
  db:
    image: orizon/training-library-sql
    ports:
      - 1443:1443
    networks:
      - backend
  app:
    image: orizon/training-library
    ports:
      - 4000:4000
    networks:
      - backend
    depends_on:
      - db
    links:
      - db:db
    deploy:
      replicas: 1
networks:
  backend:
The db image is based on microsoft/mssql-server-linux:2017-latest and works fine when the app is not in a container and uses 'localhost' as the hostname.
In the node app, the mssql config is the following:
const config = {
  user: '<username>',
  password: '<password>',
  server: 'db',
  database: 'library',
  options: {
    encrypt: false // Use this if you're on Windows Azure
  }
};
And the message I receive from the node app container:
2018-09-07T10:11:57.404Z app ConnectionError: Failed to connect to db:1433 - getaddrinfo ENOTFOUND db
EDIT
Simplified my stack file, and the connectivity now kind of works.
links seems to be deprecated and replaced by depends_on:
version: "3.4"
services:
  db:
    image: orizon/training-library-sql
    ports:
      - 1443:1443
  app:
    image: orizon/training-library
    ports:
      - 4000:4000
    depends_on:
      - db
    deploy:
      replicas: 1
Now the error message changed, which leads me to think it's more of a delay issue: the database container seems to need a bit more time to get ready before the app container comes up.
I guess I'm now looking for a way to delay connecting to the database, either through docker or in code.
Finally made it work properly.
See the OP for a much simpler and more effective stack file.
In addition, I added a retry strategy in my app code to give the MS SQL server time to start properly in the container.
function connectWithRetry() {
  return sql.connect(config, (err) => {
    if (err) {
      debug(`Connection to DB failed, retry in 5s (${chalk.gray(err.message)})`);
      sql.close();
      setTimeout(connectWithRetry, 5000);
    } else {
      debug('Connection to DB is now ready...');
    }
  });
}
connectWithRetry();
Docker documentation shows a parameter that should answer this (sequential_deployment: true), but docker stack doesn't allow its usage. The Docker documentation itself advises to either manage this issue in code or add a delay script.

Wait node.js until Logstash is ready using containers

I have the following setup in docker-compose.yml:
....
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.4
  node:
    container_name: node
    build:
      context: ./node/
      dockerfile: ./Dockerfile
    depends_on:
      - logstash
....
I'm using the package winston-logstash to wire them together.
This is the transport layer:
const logstashHost = process.env.LOGSTASH_HOST || 'logstash'
const logstashPort = process.env.LOGSTASH_PORT || 5045

new (winstonLogstash.Logstash)({
  host: logstashHost,
  port: logstashPort,
  node_name: 'node',
  timestamp: true,
  max_connect_retries: 5,
  timeout_connect_retries: 1000,
})
And the pipeline configuration:
input {
  tcp {
    port => 5045
  }
}

output {
  stdout{}
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
Using docker-compose up results in Error: Max retries reached, transport in silent mode, OFFLINE
If I manually start the server, either using a large setTimeout or by incrementing the number of connection retries, it finally works. It also works if I start logstash and, after a while, start the node container.
The problem is that obviously this is not good practice: I can't guess how long logstash will take to start, and the depends_on directive inside docker-compose.yml doesn't help at all.
I need a way to know when logstash is ready and start the node container after that.
Docker Compose does not wait until a container is ready; it only waits until it's running.
depends_on will only ensure that logstash launches before your node container, but again, this doesn't mean it will wait until logstash is ready.
You can either handle the checks yourself in node, or use a wrapper script. In the docker-compose documentation, they recommend wait-for-it or dockerize.
You can read more on this here.
Custom wrapper
Your node container command can change from node index.js (or whatever you have) to bash wait-for-logstash.sh:

#!/bin/bash
## Or whatever command is used for checking logstash availability
until curl 'http://logstash:5045' 2> /dev/null; do
  echo "Waiting for logstash..."
  sleep 1;
done

# Start your server
node index.js
