How to get Docker mapped ports from a Node.js application?

I would like to get the mapped port from inside a Node.js application.
For example:
docker-compose:
my-app:
  build:
    context: ./my-app
  ports:
    - "80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - private
docker ps -a:
1234 my-app "/usr/local/bin/dock…" About a minute ago Up About a minute 0.0.0.0:33962->80/tcp
And I would like to get port 33962 from the Node.js app.
Any ideas?

The first thing to note is that Docker currently doesn't expose this metadata inside the container in any direct manner. There are workarounds, though.
Passing the Docker socket into your container is something you can live with in a development environment, but not in production.
So what you want to do is build your own metadata service and let it return the information to your container, without exposing the Docker socket itself.
There are two approaches to getting this metadata.
Metadata service based on container id
In this case, you create a simple endpoint like /metadata/container_id and then return the data you want.
This service can run on the main host itself or on the container which has the docker socket mapped.
In your Node.js service, you get the current container ID and then use it to call this service. This can be done in different ways, as listed in the thread below:
Docker, how to get container information from within the container?
You can restrict the data returned; for example, if I only want the exposed ports, I return only the port details.
The problem with this approach is that you are still letting the service figure out its own container ID, and at the same time trusting that it is sending the correct container ID.
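A minimal Node.js sketch of this first approach (assumptions: the metadata service is reachable at a hostname metaserver on port 10081, the route is /metadata/<container_id> as described above, and the container keeps Docker's default hostname, which is the short container ID):
// Sketch only: "metaserver" and the /metadata/<id> route are assumptions, not an existing service.
const http = require('http');
const os = require('os');

// By default Docker sets the container's hostname to its short container ID.
const containerId = os.hostname();

http.get(`http://metaserver:10081/metadata/${containerId}`, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => {
    // e.g. { "80/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "33962" } ] }
    console.log(JSON.parse(body));
  });
}).on('error', console.error);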
Metadata service without passing any information
To get the metadata without passing any information, we need to identify which container made the call to the metadata server.
The caller can be identified by its source IP. We then find which container that IP belongs to. Once we have the container ID, we can return the metadata.
Below is a simple bash script that does this:
#!/bin/bash
# The caller's IP address, provided by socat
CONTAINER_IP=$SOCAT_PEERADDR
# Find the container whose network IP matches the caller's IP
CONTAINER_ID=$(docker ps -q | xargs docker inspect -f '{{.Id}}|{{range .NetworkSettings.Networks}}{{.IPAddress}}|{{end}}' | grep "|$CONTAINER_IP|" | cut -d '|' -f 1)
# Emit a minimal HTTP response header
echo -e "HTTP/1.1 200 OK\r\nContent-Type: application/json;\r\nServer: $SOCAT_SOCKADDR:$SOCAT_SOCKPORT\r\nClient: $SOCAT_PEERADDR:$SOCAT_PEERPORT\r\nConnection: close\r\n";
# Return the port mappings for that container
METADATA=$(docker inspect -f '{{ json .NetworkSettings.Ports }}' $CONTAINER_ID)
echo $METADATA
To turn this script into a web server, we can use socat:
sudo socat -T 1 TCP-L:10081,pktinfo,reuseaddr,fork EXEC:./return_metadata.sh
Now, inside the container, when we call this metadata server:
root@a9cf6dabdfb4:/# curl 192.168.33.100:10081
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}]}
and the docker ps output for the same container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9cf6dabdfb4 nginx "nginx -g 'daemon of…" 3 hours ago Up 3 hours 0.0.0.0:8080->80/tcp nginx
How you create such a metadata server is up to you. You can use any approach:
the socat approach shown above
build one in Node.js (see the sketch at the end of this answer)
build one using https://github.com/avleen/bashttpd
build one using OpenResty
You can also add the IP of the service using extra_hosts in your docker-compose.yml and make the call like curl metaserver:10081.
This should be a decently secure approach, without compromising on what you need.
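For the Node.js option, here is a minimal sketch of such a metadata server (assumptions: it runs where the Docker socket is available at /var/run/docker.sock, listens on port 10081, and uses the dockerode package; it is a sketch of the idea, not the exact service described above):
const http = require('http');
const Docker = require('dockerode');

const docker = new Docker({ socketPath: '/var/run/docker.sock' });

http.createServer(async (req, res) => {
  try {
    // Identify the caller by its source IP, as the socat script does
    const callerIp = req.socket.remoteAddress.replace(/^::ffff:/, '');
    const containers = await docker.listContainers();
    for (const info of containers) {
      const data = await docker.getContainer(info.Id).inspect();
      const networks = data.NetworkSettings.Networks || {};
      if (Object.values(networks).some(n => n.IPAddress === callerIp)) {
        // Return only the port mappings of the calling container
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify(data.NetworkSettings.Ports));
        return;
      }
    }
    res.statusCode = 404;
    res.end('{}');
  } catch (err) {
    res.statusCode = 500;
    res.end(JSON.stringify({ error: err.message }));
  }
}).listen(10081);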

You can use docker port for this.
docker port my-app 80
That will include the listener IP. If you need to strip that off, you can do that with the shell:
docker port my-app 80 | cut -f2 -d:
Unfortunately, this only works with access to the Docker socket. I wouldn't recommend mounting that socket inside your container just for this level of access.
Typically, most people solve this by passing a variable to their app and controlling what port is published in their docker-compose file. E.g.:
my-app:
  build:
    context: ./my-app
  ports:
    - "8080:80"
  environment:
    - PUBLISHED_PORT=8080
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - private
Then the app would look at the environment variable to reconfigure itself.
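A minimal sketch of how the app could pick that up (PUBLISHED_PORT matches the compose file above; the fallback of 80 is an assumption):
// Read the externally published port from the environment; the default is assumed.
const publishedPort = Number(process.env.PUBLISHED_PORT) || 80;
console.log(`Published on host port ${publishedPort}`);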

I used the dockerode library for this.
Solution:
const Docker = require('dockerode');
const os = require('os');

const docker = new Docker({ socketPath: '/tmp/docker.sock' });
// By default a container's hostname is its short container ID.
const container = docker.getContainer(os.hostname());

container.inspect(function (error, data) {
  if (error) {
    return null;
  }
  // e.g. "33962" for a container published as 0.0.0.0:33962->80/tcp
  const mappedPort = data.NetworkSettings.Ports['80/tcp'][0].HostPort;
});
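dockerode also works with promises, so the same lookup can be written with async/await (a sketch under the same assumptions, i.e. the socket is mounted at /tmp/docker.sock and the container hostname is the container ID):
const Docker = require('dockerode');
const os = require('os');

const docker = new Docker({ socketPath: '/tmp/docker.sock' });

(async () => {
  // inspect() returns a promise when no callback is passed
  const data = await docker.getContainer(os.hostname()).inspect();
  const mappedPort = data.NetworkSettings.Ports['80/tcp'][0].HostPort;
  console.log(mappedPort); // e.g. "33962"
})();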

What about using shelljs?
const shelljs = require('shelljs');
const output = shelljs.exec('docker ps --format {{.Ports}}', {silent: true}).stdout.trim();
console.log(output); // 80/tcp
// 19999/tcp
// 9000/tcp
// 8080/tcp, 50000/tcp
The output is a string; now let's map the response.
const shelljs = require('shelljs');
const output = shelljs.exec('docker ps --format {{.Ports}}', {silent: true}).stdout.trim();
const ports = output.split('\n').map(portList => portList.split(',').map(port => Number(port.split('/')[0].trim())));
This way, ports is an array of arrays containing the port numbers:
[ [ 80 ], [ 19999 ], [ 9000 ], [ 8080, 50000 ] ]
In your case, you want the number between : and ->. So you can do this:
const shelljs = require('shelljs');
const output = shelljs
  .exec('docker ps --format {{.Ports}}', { silent: true })
  .stdout.trim();
const aoa = output.split('\n').map(portList => portList.split(','));
console.log(aoa); // [ [ '9000/tcp', ' 0.0.0.0:9000->123/tcp ' ], [ ' 80/tcp' ] ]
let ports = [];
aoa.forEach(arr => {
  arr.forEach(p => {
    // match strings which contain the :PORT-> pattern
    const match = p.match(/:(\d+)->/);
    if (match) {
      ports.push(Number(match[1]));
    }
  });
});
console.log(ports); // [ 9000 ]
Finally, you need to install the Docker CLI inside your container and mount the Docker socket, as explained here.
Update 29/06/2019:
As said by @Tarun Lalwani, if sharing the Docker socket is a problem, you may create an app that shares the network with your main app and exposes a GET endpoint that returns the ports.

Related

How to pass env variable to a json file when executing the docker run command

I'm executing the below docker run command to run my Node.js-based Docker container:
docker run -p 8080:7000 --env db_url=10.155.30.13 automation:v1.0.3
And I'm trying to access this env variable using a separate config file in my container. The config file is in JSON format, as below:
{
"db_host": "${process.env.db_url}",
}
In my Node.js code, I'm accessing this db_host value to add the host IP to the listener. But when the above code is executed, the Docker container goes down as soon as it is brought up. However, if I replace the JSON file value as below, it works fine and my container is listening. Could someone please help me pass the value and access it within my JSON file?
{
"db_host": "10.155.30.13",
}
You can get the value in the app:
const db_host = process.env.db_url || "10.155.30.13"
instead of reading it from the JSON file.
You cannot substitute environment variables in a JSON file. Instead you can use dotenv or config, which let you keep a default value in the config file and override it from environment variables.
Create the default config: vi config/default.json
{
"db_host": "10.155.30.13"
}
Now read from the environment variable first, else pick the default value from the config file.
app.js
const config = require('config');
const dbConfig = process.env.DB_HOST || config.get('db_host');
console.log(dbConfig)
Now run the Docker container.
Build the image:
docker build -t app .
Run the container:
docker run -it app
Console output:
10.155.30.13
Now we want to override this default value:
docker run -e DB_HOST=192.168.0.1 -it app
Console output:
192.168.0.1
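For completeness, a minimal sketch of the dotenv alternative (the .env contents and the DB_HOST variable name are assumptions, mirroring the config example above):
// .env (only used for local defaults; real deployments pass -e DB_HOST=...):
// DB_HOST=10.155.30.13

require('dotenv').config(); // loads .env into process.env if the file exists
const dbHost = process.env.DB_HOST || '10.155.30.13';
console.log(dbHost);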

Pass filePath to Dockerfile as a variable - Node.js dockerode Docker

In my case, I am creating a config.json that I need to copy from the host to my container.
I figured out there is an option to pass args to my Dockerfile.
So the first step is:
1. Create the Dockerfile:
FROM golang
WORKDIR /go/src/app
COPY . . /* here we have /foo directory */
COPY $CONFIG_PATH ./foo/
EXPOSE $PORT
CMD ["./foo/run", "-config", "./foo/config.json"]
As you can see, I have 2 variables ["$CONFIG_PATH", "$PORT"].
These two variables are dynamic and come from my docker run command.
Here I need to copy my config file from my host to my container, and I need to run my project with that config.json file.
After building the image:
Second step:
Get my config file path from the user and run the Docker image with these variables.
let configFilePath = '/home/baazz/baaaf/config.json'
let port = "8080"
docker.run('my_image', null, process.stdout, { Env: [`$CONFIG_PATH=${configFilePath}`, `$PORT=${port}`] }).then(data => {
}).catch(err => { console.log(err) })
I am getting this error message when I try to execute my code:
Error opening JSON configuration (./foo/config.json): open
./foo/config.json: no such file or directory . Terminating.
You generally don’t want to COPY configuration files like this in your Docker image. You should be able to docker run the same image in multiple environments without modification.
Instead, you can use the docker run -v option to inject the correct config file when you run the image:
docker run -v $PWD/config-dev.json:/go/src/app/foo/config.json my_image
(The Dockerode home page shows an equivalent Binds option. In Docker Compose, this goes into the per-container volumes:. There’s no requirement that the two paths or file names match.)
Since file paths like this become part of the external interface to how people run your container, you generally don’t want to make them configurable at build time. Pick a fixed path and document that that’s the place to mount your custom config file.
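With dockerode, the equivalent bind mount would look roughly like this (a sketch: HostConfig.Binds is dockerode's createOptions field, passed as the fourth argument to run(), and the paths reuse the ones from the question):
const Docker = require('dockerode');
const docker = new Docker();

// Bind-mount the host config file into the container instead of COPYing it at build time.
docker.run('my_image', [], process.stdout, {
  HostConfig: {
    Binds: ['/home/baazz/baaaf/config.json:/go/src/app/foo/config.json']
  }
}).then(data => {
  console.log('container exited with', data[0].StatusCode);
}).catch(err => console.log(err));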

Conditionally detecting whether a Node server is running inside a Docker Container

I have my node.js code where I establish mongodb connections like this: mongodb://localhost:27017/mycollection
Now, I put my server in one container and db in another container and I am able to connect to my db from the server like this: mongodb://mycontainer:27017/mycollection
I have this connection string configured in my server code/config.
Now, how do I detect whether a person is running the server in a container or not and accordingly take the connection string for the db?
If they are running it on the host machine, I want to use the first connection string with localhost and connect to the db on the host machine; if they connect through a container, I want to use the container link name to connect, as in the second case.
Is there any way to do this?
Personally, when I want to accomplish that, I set an ENV variable in the Dockerfile like the following:
ENV DATABASE_HOST db
You can find the full documentation in the Dockerfile reference.
Then, in your Node.js source code, you need to know whether DATABASE_HOST is set or not (I can redirect you to Jayesh's Stack Overflow post: Read environment variables in Node.js):
var dbHost = 'localhost';
if (process.env.DATABASE_HOST) {
  dbHost = process.env.DATABASE_HOST;
}
or in one line:
var dbHost = process.env.DATABASE_HOST || 'localhost';
Then, for MongoDB connection:
var mongodbConnection = 'mongodb://' + dbHost + ':27017/mycollection'
Now, when you run the container, you must link the container in the docker run command with --link <your mongodb container>:db (since db is the value set in the ENV variable).
But, you can also use the option -e DATABASE_HOST=<something else> (again with the docker run command) and use a MongoDB container under another name: -e DATABASE_HOST=anotherOne --link mongo:anotherOne.
And again, you can use an external MongoDB without linking any container if you want (which is not in another container maybe): -e DATABASE_HOST=www.mymongo.com.
EDIT: This solution may be better than just detecting whether the application is run in a Docker container, because with this one your code is usable anywhere.
is-docker is a popular npm package to accomplish this.
import isDocker from 'is-docker';
if (isDocker()) {
console.log('Running inside a Docker container');
}
My purpose in using the dependency is for those who are trying to determine which host to use for their database.
import isDocker from "is-docker";
const host = isDocker() ? "host.docker.internal" : process.env.NODE_DB_HOST;

kafka-python producer not able to send when run in a docker container

I am using kafka-python (pip install kafka-python) in a Flask application to send messages to a Kafka cluster (running version 0.11). The application is deployed to AWS elastic beanstalk via docker. However, I see no messages reaching Kafka (verified that with a console consumer).
I don't know much about docker except how to connect to a running container. So that's what I did. I logged into the beanstalk instance and then connected to the docker container. There, I ran the following commands in Python3.
>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic = 'my_kafka_topic', value = 'message from docker', key = 'docker1')
>>> r.succeeded()
False
>>> p.flush()
>>> r.succeeded()
False
>>> p.close()
>>> r.succeeded()
False
All this while, I had a console consumer running listening to that topic but I saw no messages come through.
I did the same exercise "outside" the docker container (i.e., in the beanstalk instance). I first installed kafka-python using pip. Then ran the following in python3.
>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic = 'my_kafka_topic', value = 'message outside the docker', key = 'instance1')
>>> r.succeeded()
False
# waited a second or two
>>> r.succeeded()
True
This time, I did see the message come through the console consumer.
So, my questions are:
Why is docker blocking the kafka producer's sends?
How can I fix this?
Is this something for which I need to post the docker configuration? I didn't set it up and so don't have that info.
EDIT
I found some docker configuration specific info in the project.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<NAME>:<VERSION>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/eb_log"
}
You will have to bind your Docker container's ports to the local machine. This can be done by using docker run as:
docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 ....
Alternatively you can use docker run with bind IP:
docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 .....
If you want to make docker container routable on your network you can use:
docker run --rm -p <private-IP>:2181:2181 -p <private-IP>:9092:9092 -p <private-IP>:8081:8081 ....
Or, finally, you can avoid containerising your network interface by using host networking:
docker run --rm -p 2181:2181 -p 9092:9092 -p 8081:8081 --net host ....
If you want to bind ports on Elastic Beanstalk and Docker, you'll need to use version 2 of Dockerrun.aws.json, which only works with the multi-container environment. I am having the same issue as above and am curious whether the fix above works.

Why is the Node.js AWS-SDK returning the wrong SQS queue URL when creating a local queue

I'm using ElasticMQ to simulate AWS SQS on my local dev machine. I'm running ElasticMQ inside a Docker container, using docker-osx-dev to host the Docker container on a Linux VM. This means that I access the local ElasticMQ instance at the IP of the VM, not my localhost IP.
When I try and create a queue in EMQ using the code below, it returns a queue URL at localhost and not the IP of the VM hosting the docker container.
var AWS = require('aws-sdk');
var config = {
  endpoint: new AWS.Endpoint('http://192.168.59.103:9324'),
  accessKeyId: 'na',
  secretAccessKey: 'na',
  region: 'us-west-2'
}
var sqs = new AWS.SQS(config);
var params = {
  QueueName: 'test_queue'
};
sqs.createQueue(params, function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data.QueueUrl);
  }
});
Currently this code returns: http://localhost:9324/queue/test_queue, but it should return http://192.168.59.103:9324/queue/test_queue. If I replace 'localhost' in the URL with the actual IP address, I can access the queue with that URL successfully, indicating that it was indeed created, but this is a pretty nasty hack. What do I need to change in the code above to correct this issue?
Update: Invalid Endpoint
I returned to this after another issue came up using an ElasticMQ simulation container. This time it was part of a docker-compose file, and another container was accessing it by its Docker hostname. SQS will not accept host names with underscores in them. This will mess with most people's docker-compose files, as underscores are common. If you get an error message about invalid endpoints, try renaming your container in the compose file with a hyphen instead of an underscore (i.e. http://sqs_local:9324 will fail, http://sqs-local:9324 will be fine).
The problem you're facing is that ElasticMQ is exposing the host and port of the container it's running in. This is a problem I've faced before and fixed by creating a custom Docker image that lets you set the host and port programmatically.
I've written a blog post on how to fix it, but in essence what you need to do is:
Use my custom image tddmonkey/elasticmq
Set the NODE_HOST and NODE_PORT environment variables when you create the container.
Ultimately your command line for this is:
$ docker run -e NODE_HOST=`docker-machine ip default` -e NODE_PORT=8000 -p 8000:9324 tddmonkey/elasticmq
