Docker Help : Creating Dockerfile and Image for Node.js App - node.js

I am new to Docker and followed the tutorial on Docker's website for installing boot2docker locally and building my own images for Node apps (https://docs.docker.com/examples/nodejs_web_app/). I was able to complete it successfully, but I have the following questions:
(1) Should I be using the official Node Docker images (https://registry.hub.docker.com/_/node/) instead of CentOS6 as the base of my Docker image? I am guessing the Docker tutorial is out of date?
(2) If I should be basing my image on the official Node images, does anyone have thoughts on whether the slim or the regular official Node image is the better choice? I would assume slim, but I am confused about why multiple variants exist.
(3) I don't want my Docker images to include my Node.js app's source files directly, which would force me to re-create the image on every commit. Instead, I want the Docker container to pull the source for a specific commit from my private Git repository when it starts. Is this possible? Could I use something like ENTRYPOINT to pass my credentials and the commit when running the container, so that it runs a shell script to pull the code and then start the Node app?
(4) I may end up running multiple different Docker containers on the same EC2 hosts. I imagine it would be preferable for the containers to all be based on the same Linux distro, so the host doesn't have to download multiple base images when first starting the instance and running the different containers?
Thanks!

It would have been best to ask 4 separate questions rather than put this all into one question. But:
1) Yes, use the Node image.
2) The "regular" image includes various development libraries that aren't in the slim image. Use the regular image if you need these libraries, otherwise use slim. More information on the libraries is here https://registry.hub.docker.com/_/buildpack-deps/
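In Dockerfile terms, the choice is just the FROM line. A minimal sketch (the tags shown are illustrative; pin the version you actually need):

```dockerfile
# Regular image: built on buildpack-deps, so compilers and common
# development headers are present -- handy when npm install has to
# compile native addons.
FROM node

# Slim variant: much smaller, runtime only -- fine for pure-JS apps.
# FROM node:slim
```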
3) You would probably be better off by putting the code into a data-container that you add to the container with --volumes-from. You can find more information on this technique here: https://docs.docker.com/userguide/dockervolumes/
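A rough sketch of that data-container pattern (all names here are hypothetical, and this assumes the Docker CLI of that era):

```shell
# 1. Create a data container whose only job is to own the /app volume:
docker create -v /app --name app-src busybox /bin/true

# 2. Populate the volume, e.g. by copying the current checkout into it
#    (a git clone inside a throwaway container would work the same way):
docker run --rm --volumes-from app-src -v "$PWD":/src busybox cp -a /src/. /app/

# 3. Run the Node container with the code mounted from the data container:
docker run -d --volumes-from app-src node node /app/server.js
```

Updating the app is then a matter of refreshing the volume (step 2) and restarting the app container; the image itself never has to be rebuilt.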
4) I don't understand this question. Note that Amazon now has a container offering: https://aws.amazon.com/ecs/

Related

How do I create a custom Docker image?

I have to deploy my application software, which is a Linux-based package (.bin file), on a VM instance. As per system requirements, it needs a minimum of 8 vCPUs and 32 GB RAM.
Now, I was wondering if it is possible to deploy this software over multiple containers that share the CPU and RAM load in a Kubernetes cluster, rather than installing the software on a single VM instance.
Is it possible?
Yes, it's possible to achieve that.
You can start using Docker Compose to build your custom Docker images and then build your applications quickly.
First, I'll show you my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so each docker-compose.yml builds the app and you only have to run docker-compose up -d.
If you need to create a custom image with Docker, use this command: docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your Docker Hub user
<image_name> = the image name you choose
<path_of_files> = some local path; if you need to build from the current folder, use . (dot)
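A concrete instantiation, with a hypothetical user and image name, building from the current directory:

```shell
docker build -t alice/node-app .
```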
So, after that, you can upload this image to Docker Hub using the following commands.
You must log in with your credentials:
docker login
You can check your images using the following command:
docker images
Upload the image to the Docker Hub registry:
docker push <user_docker>/<image_name>
Once the image is uploaded, you can use it in different projects. Make sure to keep the image lightweight and useful.
Second, I'll show a similar repo, but this one has a k8s configuration in the folder called k8s. This configuration was made for Google Cloud, but I think you can analyze it and learn how to get started on your new project.
The Nginx service was replaced by an ingress service (ingress-service.yml), and an HTTPS certificate was added (certificate.yml and issuer.yml files).
If you need to dockerize databases, make sure the database is lightweight. You need to make a persistent volume using a PersistentVolumeClaim (database-persistent-volume-claim.yml file), or, if you store larger data, you should use a dedicated database server or a managed database service in the cloud.
I hope this information will be useful to you.
There are two ways to achieve what you want. The first is to write a Dockerfile and build the image from it; more information about how to write a Dockerfile can be found here. Apart from that, you can create a container from a base image, install all the software and packages, and export it as an image. Then you can upload it to an image registry such as Docker Registry or Amazon ECR.
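The second route, sketched with docker commit (names are hypothetical; the Dockerfile route is generally preferred because it is repeatable):

```shell
# Start an interactive container from a base image:
docker run -it --name build-box centos:6 /bin/bash
# ...inside the container: install packages, copy files, then exit...

# Snapshot the container's filesystem as a new image:
docker commit build-box myuser/myapp:v1

# Push it to a registry (Docker Hub shown; ECR works similarly):
docker push myuser/myapp:v1
```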

How to create a Docker image from my app? It's a Node.js application that uses MongoDB

I have a Node application. It runs from the terminal and performs some operations, including work with a MongoDB database.
I need to write a Dockerfile that creates a Docker image from my app.
I have read a lot of information, but the examples I found all describe how to create a web app that listens on some port. I just need to run the app from the terminal.
What steps must I take?
Unfortunately your question is rather vague, and I am not really familiar with Node.js, but as far as my understanding of Docker goes, I will try to point you in a direction where you might find the information you are looking for, or at least help you ask a more specific question.
From what I can tell, what you are looking for is probably a Docker image that contains a Node.js server. You would download that image and create a container from it, then map the folder containing your Node application into the container as a volume, so the Node server inside the container can deploy your application.
At our company we use a JBoss application server inside a Docker container to deploy our Java applications in pretty much the way I described. Maybe you should search for some of the terms I used (docker volume, docker container, etc.); there is actually a lot of documentation and there are many tutorials on Docker out there.
A good way to start would probably be here https://docs.docker.com/ ;)
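For a terminal-only Node app, a Dockerfile can be as small as the sketch below (the file names are assumptions; the key point is that no EXPOSE or port mapping is needed):

```dockerfile
FROM node:slim
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds:
COPY package.json ./
RUN npm install

# Then copy the application code:
COPY . .

# Run the app exactly as you would from the terminal:
CMD ["node", "app.js"]
```

Build with docker build -t myapp . and run with docker run --rm myapp; MongoDB would typically run in its own container (or on the host), with the app reaching it via a connection string.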

Ship docker image as an OVF

I have developed an application and am using docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so customer can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and installed Node.js, MongoDB and other dependencies on it. But I would like to configure my Node.js-based application and the MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is: my target users need not know Docker. The application should be easy to deploy for someone who does not have Docker experience. With all services in a single VM, it is easy to troubleshoot issues: all log files for the different services are saved under /var/log, and we can see the status of all the services at once, rather than the user having to look into each Docker service, and probably troubleshoot issues with Docker itself.
But at the same time I find it convenient to build the application the Docker way.
VMware vApps are usually made of multiple VMs running together to provide a service; they may have startup dependencies, etc.
Using Docker, you can run those VMs as containers on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy calls for microservices; in short, in your case, put each service in a separate container. Then write a docker-compose file to bring the containers up, and run it at startup. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository, and let the customers pull them. Then provide a docker-compose file for them.
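A minimal docker-compose file for that last approach might look like this (the image, service and variable names are all assumptions):

```yaml
version: "2"
services:
  app:
    image: myuser/node-app
    depends_on:
      - db
    environment:
      # The app reaches MongoDB via the service name "db":
      - MONGO_URL=mongodb://db:27017/mydb
  db:
    image: mongo
    volumes:
      - dbdata:/data/db

volumes:
  dbdata:
```

The customer then runs docker-compose up -d after pulling the images.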

Do I first need docker environment before starting my project?

I am going to work with Node.js and PostgreSQL on Linux. I have read for many hours about how Docker actually works. Still, I am not sure: is a Docker environment needed before starting my project, or can I add Docker after the project is complete?
Lets first understand what docker is and how you can use it in your project.
Docker has three core concepts:
1) Docker engine: a lightweight runtime and robust tooling that builds and runs your Docker containers.
2) Docker image: a carbon copy of your project environment, including all environment dependencies such as the base operating system, host entries, environment variables, databases, and web/application servers. In your case: the Linux distribution of your choice, Node.js and the required modules, and PostgreSQL and its configuration.
3) Docker container: can be visualized as a virtual Linux server running your project. Each time you use docker run, a new container is launched from the Docker image.
You can visualize a Docker environment as a lightweight virtual machine where you can run your project without any external interference (host entries/environment variables/RAM/CPU) from other projects.
So as a developer, you can develop your project on your dev machine, and once it's ready to be pushed to QA/staging you can build a Docker image of your project, which can then be deployed in any environment (QA/staging/production).
You can launch multiple containers from your image on a single physical server or on multiple servers.
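In command form, the image/container distinction looks like this (image and container names are hypothetical):

```shell
# Build the image once -- environment plus app baked together:
docker build -t myuser/myproject .

# Launch as many containers from it as you need:
docker run -d --name web1 myuser/myproject
docker run -d --name web2 myuser/myproject
```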
You can introduce Docker whenever you want. If you are using multiple servers, you can put one server in a Docker container and have the other (non-Dockerised) part make requests to it.
Or you could Dockerise them both.
Basically, introduce Docker when you feel the time is right.
I like to divide a large project into multiple sections - e.g. front end web sever, backend authentication server, backend API server 1, backend API server 2, etc.
As each part of the project gets completed, I Dockerise it. The other parts then use the Dockerised solution.

Using Docker to build an image for Node, my Express based Node app, MongoDb, and NodeBB, connected via Passport

I've just been introduced to Docker and the concept is awesome. I've found simple Dockerfiles for building an image for MongoDB and for Node, and was wondering: do I combine those images into one image containing my whole project (a custom Node app built on Express and a NodeBB forum, backed by MongoDB, all wired together with Passport providing single sign-on), or should I keep them as separate images?
Can a Docker image contain its own VPN with the various services running on different VMs?
Docker does not have a standardized way to package and provision applications consisting of multiple images, so if you want to share your application, it's probably best to put everything into a single Dockerfile. That said, if sharing your application isn't a huge priority, using multiple Docker images may be easier to maintain (plus you'll be able to use other MongoDB images). You could then use something like Fig (http://orchardup.github.io/fig/) to orchestrate the entire application.
As for communication between Docker containers, Docker has two options: enabling all communication across containers (this is the default), or disabling all communication except for those specified. You can enable the second option by passing the flag "--icc=false" to the Docker daemon. Afterwards, you'll need to explicitly "expose" and "link" containers in order for them to communicate. The relevant documentation can be found here.
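A sketch of that second option (container and image names are hypothetical; --link is the linking mechanism of this era of Docker):

```shell
# Daemon started with inter-container communication disabled,
# e.g. via its startup flags:
#   docker daemon --icc=false

# Now containers only reach each other when explicitly linked:
docker run -d --name mongo mongo
docker run -d --name app --link mongo:db mynodeapp
```

Inside the app container, the linked MongoDB is then reachable under the alias db, via the environment variables and /etc/hosts entry that Docker injects.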
