Creating an environment variable within a Docker container at startup - node.js

How would I get the IP address of a mongo container and set it as an environment variable when creating a node image?
I've been running into an issue with a conflicting tech stack: keystone.js, forever, and Docker. My problem is that I need to set an environment variable pointing at a separate mongo container, which would seem easy to do by running a shell script when I start up the container that includes:
export MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
The issue comes with starting the keystone app. Normally I would place the start command in the same script and call it with docker run, but for this project we need to use forever; the command would be forever keystone.js. The problem is that the Docker container then drops immediately. If I start the app with a simple forever start rather than going through the script, the app starts up fine, but the environment variable it needs is not set. It is currently hard-coded in the Docker image, but that is not a good solution: the IP of the mongodb container may change, and then on a restart the node container would not be able to find the db. I see a couple of possibilities:
Switch to just using node keystone.js, which would lose the functionality of forever (restarting the app if there is a critical failure). I've tested this and it works, but maybe someone knows a way to make forever work, or a viable alternative?
Find a way to set the above export from the Dockerfile when creating the image. I haven't been able to get this to work, but I do know the name that the mongodb container is going to use no matter what, if that helps.
Any help is most appreciated.

The best way is to use a Docker link; this provides you with a hostname plus the environment variables you need.
docker run ... --link mongodb:mongodb ..
You can also use the -e command-line option of docker run:
docker run -e MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
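To tie this back to the forever problem, a minimal sketch (the db name, script names and the forever flags here are placeholders/assumptions, not from the question) would be an entrypoint script that builds the URI from the link variables and runs forever in the foreground so the container stays up:
#!/bin/sh
# entrypoint.sh - build the URI from the variables injected by --link mongodb:mongodb
export MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/mydb"
# running forever without "start" keeps it in the foreground, so the container
# does not exit while the app is still monitored and restarted on crashes
exec forever --minUptime 1000 --spinSleepTime 1000 keystone.js
With --link in place, MONGODB_PORT_27017_TCP_ADDR is filled in by Docker each time the container starts, so the IP never has to be hard-coded in the image.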
An option for dynamic DNS would be SkyDNS + SkyDock.

Related

How do I create a directory in a Docker container that won't start?

I have a Docker container (not image) that crashes when I try to start it. The Docker logs show that it is failing because an Apache2 conf file can't find a directory (/var/www/html/log/ - this is the result of me trying to get SSL set up: I referenced this directory in the 000-default.conf file and restarted Apache, but forgot to create it).
How do I create this directory in the container without having to start the container itself?
You have 4.5 options that come to my mind:
You can rebuild the image and set up the directory while doing it.
You can attach a volume while starting the image, but in this case your changes will live on your disk and not in your container.
You can run the image overriding the entrypoint with --entrypoint="bash" or something. You need to do it with the -ti flags so that it begins in interactive mode. Then make your changes and run docker commit -p <container> <image:tag> (-p pauses the container while committing). I recommend this unless it absolutely needs to be running.
I am not sure if this one works, so I give it half a point :P, but if it does it would actually be the fastest option. You can start the container in interactive mode with docker start -i <container>, which attaches a terminal. If you have time before the container exits (or before it reads that part of the configuration), you can create the folder.
Ah, finally, I have just remembered: you should be able to copy files and folders between your file system and the container using docker cp [container:]<source> [container:]<destination>, even while the container is not running.
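As a concrete sketch of this last option (the container name is a placeholder), you could create the directory locally and copy it into the stopped container before starting it:
mkdir -p log
# docker cp works on stopped containers and creates the target directory
docker cp log my_apache_container:/var/www/html/log
docker start my_apache_container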
In general, if you're using a base Docker image for Apache (for example, httpd/2.4/Dockerfile), it should already have "/var/www/html/log".
SUGGESTION 1: Please make sure you're starting with a "good" base image.
SUGGESTION 2: Add "mkdir -p /var/www/html/log" to your Dockerfile, and rebuild the image.
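As a sketch, assuming the httpd:2.4 base image mentioned below (adapt the FROM line to whatever base your image actually uses):
FROM httpd:2.4
# create the directory referenced by the Apache conf so it exists at startup
RUN mkdir -p /var/www/html/log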
I'm not sure how you're using your image - what you want the image to contain besides Apache - but:
SUGGESTION 3: Google for a simple tutorial that matches your use case, and see what steps you might be "missing". For example: Dockerize your Laravel Application

Azure Docker Container - how to pass startup commands to a docker run?

I have managed to easily deploy a Rails app to Azure on the Docker container App Service, but logging is a pain since the only way to access the logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so it essentially accepts any params?
In this case I'm simply trying to log to a remote service; if anyone also has suggestions for retrieving logs other than FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in anything that you would normally place after the container name in docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py

Live reload Node.js dev environment with Docker

I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
start containers automatically with the docker daemon or with your process manager
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and in your Node.js Dockerfile set the command to:
CMD ["nodemon", "-L", "/app"]

Docker NodeJs for ReactJs

I'm trying to create a container for my ReactJs project with nodejs. At first it seemed to be a success, but I am not able to access it through the port.
Following is my Dockerfile
package.json
Commands I ran:
docker build -t <your username>/node-reactjs .
docker run -p 9090:3333 -d <your username>/node-reactjs
As you can see, the container was created successfully.
But I can't access it through the port from the host. I even went inside the container and ran curl localhost:3333, and it did return the js file.
I tried googling around and many different ways, but it seems I can't make it work. I even tried the docker-compose way, but that was even worse: I can't even create the container.
I would really appreciate it if someone could help me out on this.
Btw, is this the correct way to do it for ReactJs?
Thanks.
After a hard time, I finally found a way to get it to work.
Thanks to this Connecting webpack-dev-server inside a Docker container from the host
All I needed to do was add a parameter to the start script in package.json, as follows.
Notice the
--host 0.0.0.0
That's what was missing.
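In package.json that looks roughly like this, assuming the project is served by webpack-dev-server as in the linked question (the script name is an assumption; the port matches the 3333 used in the question):
"scripts": {
  "start": "webpack-dev-server --host 0.0.0.0 --port 3333"
}
Binding to 0.0.0.0 makes the dev server listen on all interfaces instead of only the container's localhost, which is why the published port then becomes reachable from the host.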

How do I leave Node.js server on EC2 running forever?

As you can tell by my question, I'm new to this...
I built my first website, I set up my first Node.js server to serve it and then pushed everything live on EC2.
I tested everything on my EC2 IP address and everything seems to be working.
Now up until now, I've been testing my app locally so it makes sense that whenever I closed the terminal, app.js would stop running so nothing would be served on localhost.
Now that my server is on EC2, the same thing happens ("obviously" one could say..) whenever I close my terminal.
So my question is how do I keep my Node.js server running on EC2 for like... forever..so that my site stays live.. forever :)
I read something about a node module called "forever", but I'm wondering (being new and all..) why isn't this "forever" functionality a default setting of the Node.js-EC2 system?
I mean, correct me if I'm wrong, but isn't the whole point of setting up a web server and pushing it live to have it stay live forever? Isn't that what servers are supposed to do anyway (infinitely listening for requests)? And if that's the case, why do we need extra modules/settings to achieve that?
Thanks for your help.. As you can tell I'm not only looking for a solution but an explanation as well because I got really confused.. :-)
EDIT (a few details you might need) - After installing my app on EC2 these are the steps that I follow on the terminal (The app is running on Amazon Linux by the way) :
I type ssh -i xxxxxxxxxxx.pem ec2-user@ec2-xx-xx-xx-x.eu-west-1.compute.amazonaws.com in the terminal
After logging onto the Amazon machine I then go to the relevant folder and execute node app.js
There are 3 folders on the machine: node, node_modules and *name of my app*
app.js resides in *name of my app*
After that, the site goes live on my EC2 IP
Once I close the terminal, everything is switched off
Before you invoke Node.js, run the command:
screen
This will create a persistent environment which will allow your process to keep running after you disconnect.
When you reconnect, you can use this command to reconnect to that environment:
screen -r
Here's a random link to learn more about screen:
http://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/
However, this won't help you if your EC2 instance restarts. There are many different ways to do that. Adding your startup command to /etc/rc.local is one way. Here's a link to an Amazon guide which includes adding something to /etc/rc.local.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/building-shared-amis.html
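As a rough sketch (the user, app path and log file here are placeholders; adjust the node location to your instance), the line added to /etc/rc.local might look like:
# start the app as ec2-user at boot and log its output to a file
su - ec2-user -c "cd /home/ec2-user/myapp && node app.js >> app.log 2>&1 &"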
I worked with the accepted answer for a while, but sometimes the screen just ends for no reason. Also, screen has no load balancer or other features that you should care about in a production environment. Currently I use an npm package to do this job.
https://www.npmjs.com/package/pm2
This is so easy to use.
$ npm install pm2 -g
then just start your app with pm2 like this
$ pm2 start app.js
At the link above you can find the different tasks you can perform if you need them.
Hope this helps newbies like me.
There's a better way. Use forever.js.
See it here: https://github.com/foreverjs/forever
This is a nice tutorial on how to use chkconfig with forever on CentOS.
http://aronduby.com/starting-node-forever-scripts-at-boot-w-centos/
Or use tmux.
Just start a tmux session and run the node server inside it.
Then hit Ctrl+b followed by d to detach, and you're done.
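Spelled out, that flow looks like this (the session name is arbitrary):
tmux new -s node-server      # open a named tmux session
node app.js                  # start the server inside it
# press Ctrl+b then d to detach; the process keeps running
tmux attach -t node-server   # reattach later to check on it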
I am very late to join the thread, and it seems this is a basic problem for every newbie. Follow the steps below to set up your first server properly.
Follow these steps on the EC2 instance (before doing this, make sure you have a start script for pm2 in your package.json file):
npm install pm2 -g
pm2 startup systemd
Look at the output; the last lines should look something like this:
You have to run this command as root. Execute the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Take that last command and run it again with root privileges.
(Before running the next command, provide a new start script for pm2 in your package.json file, e.g. "pm2-start": "pm2 start ./bin/www".)
npm run pm2-start
For more info, follow the link:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04
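One step that is easy to miss: once the app is running under pm2, record the current process list with pm2 save so the startup script can resurrect it after a reboot.
pm2 save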
If you are using an Ubuntu EC2 instance, it is better to use the following; we have been using this for the past 6 years and have had no issues with it.
sudo npm i -g forever
Now start your main script, for example:
forever start index.js
forever start src/server.js
To stop the server, use the following command:
forever stop index.js
To list all the servers running under forever:
forever list
