I have my node.js code where I establish mongodb connections like this: mongodb://localhost:27017/mycollection
Now, I put my server in one container and db in another container and I am able to connect to my db from the server like this: mongodb://mycontainer:27017/mycollection
I have this connection string configured in my server code/config.
Now, how do I detect whether a person is running the server in a container or not and accordingly take the connection string for the db?
If they are running it on the host machine, I want to use the first connection string with localhost and connect to the db on the host machine, and if they run it through a container, I want to use the container link name to connect, as in the second case.
Is there any way to do this?
Personally, when I want to accomplish that, I set an ENV variable in the Dockerfile like the following:
ENV DATABASE_HOST db
You can find the full documentation in the Dockerfile reference.
Then, in your Node.js source code, you need to check whether DATABASE_HOST is set (see Jayesh's Stack Overflow post: Read environment variables in Node.js):
var dbHost = 'localhost';
if (process.env.DATABASE_HOST) {
    dbHost = process.env.DATABASE_HOST;
}
or in one line:
var dbHost = process.env.DATABASE_HOST || 'localhost';
Then, for MongoDB connection:
var mongodbConnection = 'mongodb://' + dbHost + ':27017/mycollection';
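For instance, with the official mongodb driver the connection could then be made like this (a sketch; the mongodb package itself is an assumption, since the question only shows the connection string):
var MongoClient = require('mongodb').MongoClient;

var dbHost = process.env.DATABASE_HOST || 'localhost';
var mongodbConnection = 'mongodb://' + dbHost + ':27017/mycollection';

// Connect using whichever host was resolved above.
MongoClient.connect(mongodbConnection, function (err, client) {
    if (err) throw err;
    console.log('Connected to ' + dbHost);
});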
Now, when you run the container, you must link the container in the docker run command with --link <your mongodb container>:db (since db is the value set in the ENV variable).
But you can also use the option -e DATABASE_HOST=<something else> (again with the docker run command) and use a MongoDB container under another name: -e DATABASE_HOST=anotherOne --link mongo:anotherOne.
And again, you can use an external MongoDB (one that is not in a container at all) without linking any container: -e DATABASE_HOST=www.mymongo.com.
EDIT: This approach may be better than merely detecting whether the application runs in a Docker container, because it keeps your code usable anywhere.
is-docker is a popular npm package that accomplishes this.
import isDocker from 'is-docker';
if (isDocker()) {
    console.log('Running inside a Docker container');
}
The reason I use the dependency is for those who are trying to determine which host to use for their database connection:
import isDocker from "is-docker";
const host = isDocker() ? "host.docker.internal" : process.env.NODE_DB_HOST;
Related
I'm building a Node.js app that allows people to run code on my server, and I'm using Docker to containerise the user's code so that it can't steal data or, in general, do anything it shouldn't. I have a Docker image template that is copied into the user's personal app directory, and I want to build the image using this function I've written:
const util = require("util");
const exec = util.promisify(require("child_process").exec);
async function buildContainer(path, dockerUser) {
    return await exec(`sudo docker build -t user_app_${dockerUser} ${path}`);
}
However, when I go to use it, it requires me to enter my sudo password as if I were executing it manually in a terminal window.
Is there any way I can run this function without having to include the sudo keyword?
Thanks in advance.
You can use Podman instead of Docker. With Podman you don't need sudo, and it supports most of the same commands as Docker.
example:
podman build
podman run
and so on...
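Applied to the buildContainer function from the question, that might look like this (a sketch; it assumes rootless Podman is configured for the current user):
const util = require('util');
const exec = util.promisify(require('child_process').exec);

// Same helper as in the question, but rootless Podman needs no sudo.
async function buildContainer(path, dockerUser) {
    return await exec(`podman build -t user_app_${dockerUser} ${path}`);
}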
hope that helps :)
Regards
I'm executing the docker run command below to run my Node.js-based Docker container:
docker run -p 8080:7000 --env db_url=10.155.30.13 automation:v1.0.3
And I'm trying to access this env variable through a separate config file in my container. The config file is in JSON format, as below:
{
    "db_host": "${process.env.db_url}"
}
And in my Node.js code, I'm accessing this db_host value to add the host IP to the listener. But with the above config, the Docker container goes down as soon as it is brought up. If I replace the JSON file value as below, it works fine and my container listens as expected. Could someone please help me pass the value and access it within my JSON file?
{
    "db_host": "10.155.30.13"
}
You can get the value in the app:
const db_host = process.env.db_url || "10.155.30.13"
instead of reading it from the JSON file.
You cannot substitute environment variables in a JSON file. You can use dotenv or config, which let you keep a default value in the config file and override it with environment variables.
Create the default config in config/default.json:
{
    "db_host": "10.155.30.13"
}
Now read from the environment first, else pick the default value from the config.
app.js
const config = require('config');
const dbConfig = process.env.DB_HOST || config.get('db_host');
console.log(dbConfig)
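For completeness, a minimal Dockerfile for this example could look like the following (a sketch; the base image and file layout are assumptions):
FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "app.js"]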
Now build and run the Docker container.
Build the container:
docker build -t app .
Run the container:
docker run -it app
Console output:
10.155.30.13
Now override the default value:
docker run -e DB_HOST=192.168.0.1 -it app
Console output:
192.168.0.1
I would like to get the mapped port from inside a Node.js application. For example:
docker-compose:
my-app:
  build:
    context: ./my-app
  ports:
    - "80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - private
docker ps -a:
1234 my-app "/usr/local/bin/dock…" About a minute ago Up About a minute 0.0.0.0:33962->80/tcp
And I would like to get 33962 port from node.js app.
Any ideas?
The first thing to note is that currently Docker doesn't expose this metadata inside the container in any direct manner, though there are workarounds.
Passing the Docker socket into your container is something you can live with in a development environment, but not in production.
So what you want to do is build your own metadata service and let it return the information to your container without exposing the Docker socket itself.
Now to get this metadata you can use two approaches
Metadata service based on container id
In this case, you create a simple endpoint like /metadata/container_id and then return the data you want.
This service can run on the main host itself or on the container which has the docker socket mapped.
Now, in your Node.js service, you get the current container ID and use it to call this service. This can be done in different ways, as listed in the thread below:
Docker, how to get container information from within the container?
You can restrict the data returned; if I only want the ports exposed, I return only the port details.
The problem with this approach is that you are still letting the service figure out its own container ID, while also trusting that it sends the correct one.
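For illustration, the client side of this first approach might look like the following sketch (reading the container ID from /proc/self/cgroup is one of the techniques from the linked thread and assumes cgroup v1; the metaserver host, port, and endpoint path are assumptions):
const fs = require('fs');
const http = require('http');

// One technique from the linked thread: parse our own container ID
// out of /proc/self/cgroup (works on cgroup v1 hosts).
function getContainerId() {
    const cgroup = fs.readFileSync('/proc/self/cgroup', 'utf8');
    const match = cgroup.match(/[0-9a-f]{64}/);
    return match ? match[0] : null;
}

// Ask the metadata service for our own port mappings.
http.get('http://metaserver:10081/metadata/' + getContainerId(), function (res) {
    let body = '';
    res.on('data', chunk => body += chunk);
    res.on('end', () => console.log('Mapped ports:', body));
});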
Metadata service without passing any information
To get metadata without passing any information, we need to identify which container made the call to the metadata server.
This can be identified by using the source IP. Then we need to identify which container belongs to this IP. Once we have the container ID, we can return the metadata.
Below is a simple bash script that does this:
#!/bin/bash
# socat provides the caller's source IP in SOCAT_PEERADDR.
CONTAINER_IP=$SOCAT_PEERADDR
# Find the container whose network IP matches the caller's IP.
CONTAINER_ID=$(docker ps -q | xargs docker inspect -f '{{.Id}}|{{range .NetworkSettings.Networks}}{{.IPAddress}}|{{end}}' | grep "|$CONTAINER_IP|" | cut -d '|' -f 1)
# Minimal HTTP response headers.
echo -e "HTTP/1.1 200 OK\r\nContent-Type: application/json;\r\nServer: $SOCAT_SOCKADDR:$SOCAT_SOCKPORT\r\nClient: $SOCAT_PEERADDR:$SOCAT_PEERPORT\r\nConnection: close\r\n"
# Return only that container's port mappings.
METADATA=$(docker inspect -f '{{ json .NetworkSettings.Ports }}' $CONTAINER_ID)
echo $METADATA
Now to convert this to a web server we can use socat.
sudo socat -T 1 TCP-L:10081,pktinfo,reuseaddr,fork EXEC:./return_metadata.sh
Now inside the container when we call this metadata server
root@a9cf6dabdfb4:/# curl 192.168.33.100:10081
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}]}
and the docker ps for the same
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9cf6dabdfb4 nginx "nginx -g 'daemon of…" 3 hours ago Up 3 hours 0.0.0.0:8080->80/tcp nginx
Now, how you create such a metadata server is up to you. You can use any approach:
the socat approach shown above
build one in Node.js (see the sketch below)
build one using https://github.com/avleen/bashttpd
build one using OpenResty
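As an illustration of the Node.js option, a minimal sketch using dockerode could look like this (it assumes dockerode is installed and that the Docker socket is available where this service runs, i.e. on the host or a trusted container):
const http = require('http');
const Docker = require('dockerode');

const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// Identify the caller by its source IP, find the matching container,
// and return only its port mappings (mirrors the socat script above).
http.createServer(async (req, res) => {
    const callerIp = req.socket.remoteAddress.replace(/^.*:/, '');
    for (const info of await docker.listContainers()) {
        const details = await docker.getContainer(info.Id).inspect();
        const ips = Object.values(details.NetworkSettings.Networks || {})
            .map(net => net.IPAddress);
        if (ips.includes(callerIp)) {
            res.setHeader('Content-Type', 'application/json');
            return res.end(JSON.stringify(details.NetworkSettings.Ports));
        }
    }
    res.statusCode = 404;
    res.end('{}');
}).listen(10081);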
You can also add the IP of the service using extra_hosts in your docker-compose.yml and make the call like curl metaserver:10081
This should be a decently secure approach, without compromising on what you need
You can use docker port for this.
docker port my-app 80
That will include the listener IP. If you need to strip that off, you can do that with the shell:
docker port my-app 80 | cut -f2 -d:
Unfortunately, this will only work with access to the docker socket. I wouldn't recommend mounting that socket for just this level of access inside your container.
Typically, most people solve this by passing a variable to their app and controlling what port is published in their docker-compose file. E.g.:
my-app:
  build:
    context: ./my-app
  ports:
    - "8080:80"
  environment:
    - PUBLISHED_PORT=8080
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - private
Then the app would look at the environment variable to reconfigure itself.
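Inside the app that could be as simple as the following (a sketch; PUBLISHED_PORT is the variable from the compose file above, and the fallback value is only illustrative):
// Read the externally published port passed in by docker-compose.
const publishedPort = Number(process.env.PUBLISHED_PORT) || 80;
console.log('Reachable from outside on port ' + publishedPort);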
I used the dockerode library for this.
Solution:
const Docker = require('dockerode');
const os = require('os');

// The socket path matches the volume mapping in the compose file above.
const docker = new Docker({ socketPath: '/tmp/docker.sock' });

// Inside a container, the hostname defaults to the container ID.
const container = docker.getContainer(os.hostname());

container.inspect(function (error, data) {
    if (error) {
        return null;
    }
    // The host port that 80/tcp was mapped to.
    const mappedPort = data.NetworkSettings.Ports['80/tcp'][0].HostPort;
});
What about using shelljs?
const shelljs = require('shelljs');
const output = shelljs.exec('docker ps --format {{.Ports}}', {silent: true}).stdout.trim();
console.log(output); // 80/tcp
// 19999/tcp
// 9000/tcp
// 8080/tcp, 50000/tcp
The output is a string; now let's map the response.
const shelljs = require('shelljs');
const output = shelljs.exec('docker ps --format {{.Ports}}', {silent: true}).stdout.trim();
const ports = output.split('\n').map(portList =>
    portList.split(',').map(port => Number(port.split('/')[0].trim()))
);
This way, ports is an array of arrays containing the port numbers:
[ [ 80 ], [ 19999 ], [ 9000 ], [ 8080, 50000 ] ]
In your case, you want the number between : and ->. So you can do this:
const shelljs = require('shelljs');
const output = shelljs
.exec('docker ps --format {{.Ports}}', { silent: true })
.stdout.trim();
const aoa = output.split('\n').map(portList => portList.split(','));
console.log(aoa); // [ [ '9000/tcp', ' 0.0.0.0:9000->123/tcp ' ], [ ' 80/tcp' ] ]
let ports = [];
aoa.forEach(arr => {
    arr.forEach(p => {
        // match strings that contain the :PORT-> pattern
        const match = p.match(/:(\d+)->/);
        if (match) {
            ports.push(Number(match[1]));
        }
    });
});
console.log(ports); // [ 9000 ]
Finally, you need to install Docker inside your container and connect the Docker socket as explained here.
Update 29/06/2019:
As said by #Tarun Lalwani, if sharing docker socks is a problem, you may create an app which shares the network with your main app and has a GET method that returns ports.
My code looks like the below:
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
var ec2 = new AWS.EC2();
// List the EC2 instances
ec2.describeInstances(function (err, data) {
    if (err) {
        res.status(500).json(err);
    } else {
        res.status(201).json(data);
    }
});
The above code works perfectly. Now, my requirement is that I want to "ssh to the created instance" from my NodeJS code programmatically. What steps should I follow to achieve this? BTW, the whole idea is that once I can ssh to the EC2 instance programmatically, the next step will be to install Docker and other software on that created instance programmatically.
Thanks
As long as you have all the necessary information to be able to connect to and authenticate with your EC2 instance via SSH, you could use a module like ssh2 to connect programmatically to execute commands, transfer files, etc.
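A minimal sketch with ssh2 might look like this (the host, user, key path, and command are placeholders, not values from the question):
const { Client } = require('ssh2');
const fs = require('fs');

const conn = new Client();
conn.on('ready', () => {
    // Run a command on the instance once the SSH session is up.
    conn.exec('sudo yum install -y docker', (err, stream) => {
        if (err) throw err;
        stream.on('close', code => {
            console.log('Command exited with code ' + code);
            conn.end();
        }).on('data', data => process.stdout.write(data));
    });
}).connect({
    host: 'ec2-xx-xx-xx-xx.compute-1.amazonaws.com', // placeholder public DNS
    username: 'ec2-user',
    privateKey: fs.readFileSync('ec2-instance-key.pem')
});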
I think I'm a little late to this question, but I also had this problem a few months ago, and it really took me a few days to find a solution.
The solution is :
ssh -tt -o StrictHostKeyChecking=no -i "ec2-instance-key.pem" ec2-user@PUBLIC_DNS sh ./shellScript.sh
This line of code connects to the EC2 instance and executes a shell script. You can either run a single shell script which has all the commands you want to execute or execute them via the ssh command.
You'll need the authentication key for the instance, as you can see in the command.
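Since the question asks for a programmatic approach, the same command can be wrapped with child_process from Node.js (a sketch; the key file and script name are the placeholders from the command above):
const util = require('util');
const exec = util.promisify(require('child_process').exec);

async function runRemoteScript(publicDns) {
    // Same ssh invocation as above, executed from Node.js.
    const cmd = 'ssh -tt -o StrictHostKeyChecking=no -i "ec2-instance-key.pem" ' +
        'ec2-user@' + publicDns + ' sh ./shellScript.sh';
    const { stdout } = await exec(cmd);
    console.log(stdout);
}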
Hope this helps someone someday!
I'm using ElasticMQ to simulate AWS SQS on my local dev machine. I'm running ElasticMQ inside a Docker container, using docker-osx-dev to host the Docker container on a Linux VM. This means that I access the local ElasticMQ instance at the IP of the VM, not my localhost IP.
When I try to create a queue in ElasticMQ using the code below, it returns a queue URL at localhost rather than the IP of the VM hosting the Docker container.
var AWS = require('aws-sdk');
var config = {
    endpoint: new AWS.Endpoint('http://192.168.59.103:9324'),
    accessKeyId: 'na',
    secretAccessKey: 'na',
    region: 'us-west-2'
};
var sqs = new AWS.SQS(config);
var params = {
    QueueName: 'test_queue'
};
sqs.createQueue(params, function (err, data) {
    if (err) {
        console.log(err);
    } else {
        console.log(data.QueueUrl);
    }
});
Currently this code returns: http://localhost:9324/queue/test_queue, but it should return http://192.168.59.103:9324/queue/test_queue. If I replace 'localhost' in the URL with the actual IP address, I can access the queue with that URL successfully, indicating that it was indeed created, but this is a pretty nasty hack. What do I need to change in the code above to correct this issue?
Update: Invalid Endpoint
I returned to this after another issue came up using an ElasticMQ simulation container. This time it was part of a docker-compose file, and another container was accessing it by its Docker hostname. SQS will not accept host names with underscores in them. This will trip up many docker-compose files, since underscores are common in service names. If you get an error message about invalid endpoints, try renaming your container in the compose file with a hyphen instead of an underscore (i.e. http://sqs_local:9324 will fail, http://sqs-local:9324 will be fine).
The problem you're facing is that ElasticMQ is exposing the host and port of the container it's running in. This is a problem I've faced before and fixed by creating a custom Docker container that lets you set the host and port programmatically.
I've written a blog post on how to fix it, but in essence what you need to do is:
Use my custom image tddmonkey/elasticmq
Set the NODE_HOST and NODE_PORT environment variables when you create the container.
Ultimately your command line for this is:
$ docker run -e NODE_HOST=`docker-machine ip default` -e NODE_PORT=8000 -p 8000:9324 tddmonkey/elasticmq