I have not used Docker before at all, but I have a Flask app running on an Azure server right now which I would like to mostly replicate to another server.
Ubuntu 16.10
Anaconda for my Python environments
A few systemd files to configure nginx and uwsgi
My goal is to start fresh on my current server without doing a fresh install of the OS (which I am not able to do). I have a few issues with environments and multiple Python versions which I would like to escape from.
I would then like to take this set up and send it over to another server which is completely fresh (a brand new Azure instance which hasn't been touched yet). Is this possible with Docker?
To make things clear, Docker is not a technology for migrating applications from one server to another. Docker is a "virtualization" technology which allows you to isolate applications while they are running. Once you have this isolation, the Docker containers can be migrated to any server that has Docker installed. Thus you relieve yourself from issues like "It works on this machine, but it doesn't work on that one".
In order to do that, you first need to Dockerize your application. Your requirements are very common, and there are many samples online of how to containerize such applications.
However, you first need to learn about Docker to get started (which takes a couple of hours or days). You can start learning about Docker here. Once you have your application dockerized and working on one machine, moving it to another server is a piece of cake.
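To make "Dockerize your application" concrete, here is a minimal sketch of a Dockerfile for a Flask + uwsgi app like the one described above. The file names (app.py with a Flask object called app, requirements.txt), the port and the base image are assumptions, not taken from the question; if you depend on conda packages, a Miniconda base image such as continuumio/miniconda3 is a common alternative.

```dockerfile
# Sketch only: assumes an app.py exposing a Flask object named "app",
# with dependencies listed in requirements.txt -- adjust to your project.
FROM python:3.6

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt uwsgi

# Copy the rest of the application code
COPY . .

# uwsgi serves the Flask app; nginx can stay on the host (or in its own
# container) and simply proxy to this port
EXPOSE 8080
CMD ["uwsgi", "--http", "0.0.0.0:8080", "--module", "app:app", "--processes", "2"]
```

You would build it with `docker build -t myflaskapp .` and run it with `docker run -d -p 8080:8080 myflaskapp`, with nginx proxying to that port.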
Related
I have a react + node app which I need to deploy. I am using nginx to serve my front end but I am not sure what to use to keep my nodejs server running in production.
The project is hosted on a windows VM. I cannot use pm2 due to license issues. I have no idea if running the server using nodemon in production is good or not. I have never deployed an app in production, hence I have no idea about appropriate methods.
You may consider forever or supervisor.
Check this blog post on the same.
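For example, with forever the basic workflow is just a few commands (a rough sketch; it assumes your entry point is server.js):

```sh
npm install -g forever
forever start server.js    # start the app and keep it running
forever list               # check that it is running
forever restart server.js  # restart after deploying new code
```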
You can also use Docker. You can create multiple Docker containers that will run your Node server. Then, at the nginx level on your host machine, you can add a load-balancing configuration that routes traffic evenly across the different Node containers; this will improve your availability and scalability. Under heavy traffic you just need to increase the number of Node containers as and when required. I guess initially 2 containers will be enough to handle the traffic (it depends on your use case though).
Note: You can also use forever or supervisor, as suggested by Rajesh Gupta, inside your Docker containers for running the Node server. We use PM2 for that.
If you have a database then you can create a separate docker container for the database and map it to a volume in your host machine.
You can learn about docker from here.
Also you can read about load balancing in nginx from here.
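As a rough illustration of that load-balancing setup, the nginx config below assumes two Node containers whose port 3000 is published on host ports 3001 and 3002 (e.g. started with `docker run -d -p 3001:3000 my-node-app`); the names and ports are placeholders.

```nginx
upstream node_backend {
    server 127.0.0.1:3001;   # docker node container 1
    server 127.0.0.1:3002;   # docker node container 2
}

server {
    listen 80;

    location / {
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```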
Furthermore, to improve availability you can add a caching layer between nginx and the Docker containers. Varnish is the best caching service I have used to date.
PS: We use a similar but more advanced architecture to run our e-commerce application, which generates 5-10k orders daily, so this is a tested approach with zero downtime.
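If you do add Varnish, a minimal sketch of its configuration (VCL) looks like the following; it simply forwards cache misses to one backend on port 8080, which is a placeholder for wherever your containers (or an internal nginx) listen. The actual caching rules depend entirely on your application.

```vcl
vcl 4.0;

# Single placeholder backend; point this at your app/nginx port
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```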
Try to dockerize the whole app including the db, caching server (if any) etc.
Here are some reasons why:
You can launch a fully capable development environment on any computer supporting Docker; you don't have to install libraries and dependencies, download packages, mess with config files, etc.
The working environment of the application remains consistent across the whole workflow. This means the app runs exactly the same for developer, tester, and client, be it on a development, staging or production server. In short, Docker is the counter-measure for the age-old response in software development: "Strange, it works for me!"
Every application requires a specific working environment: pre-installed applications, dependencies, databases, everything in a specific version. Docker containers allow you to create such environments. Unlike a VM, however, a container doesn't hold a whole operating system, just the application, its dependencies, and configuration. This makes Docker containers much lighter and faster than regular VMs.
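A minimal docker-compose sketch of "dockerize the whole app" might look like this; the service names, images, ports and the throw-away password are placeholders, not something prescribed above.

```yaml
version: "3"
services:
  app:
    build: .                 # your application's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example   # placeholder only
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:5

volumes:
  db-data:
```

A single `docker-compose up -d` then brings up the whole environment on any machine with Docker installed.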
I have a requirement. Is there a way to run Node.js apps inside Golang? I need to wrap the Node.js app inside a Golang application so that the end result is a Golang binary that starts the Node.js server, and then I can call the Node.js REST endpoints. I need to encapsulate in the Golang binary the entire Node.js application with node_modules and, if necessary, the Node.js runtime.
Well, you could make a Go program that includes e.g. a zipped Node application that it extracts and starts, but it will be very hard to do well - you will have huge binaries, delays in extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to that Node app. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a Node.js app into your Golang binary, for various reasons such as size, pushing security updates, etc.
However, if you feel so strongly that they should be together, you could easily create a Docker container with these two (a Golang server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that your Node server as well as the Golang server are brought up when your container is run.
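A rough sketch of that supervisord approach (the binary and script paths below are placeholders): the container's Dockerfile copies both artifacts in and ends with something like `CMD ["supervisord", "-c", "/etc/supervisord.conf"]`, with a config along these lines:

```ini
[supervisord]
; keep supervisord in the foreground as the container's main process
nodaemon=true

[program:goserver]
; compiled Go binary (placeholder path)
command=/app/goserver
autorestart=true

[program:nodeapp]
; Node entry point (placeholder path)
command=node /app/index.js
autorestart=true
```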
If you are planning to deploy via Kubernetes, you can instead create two individual Docker containers (one for the Golang server, one for the Node server) but always deploy them together as a pod.
There are multiple projects to embed binary files and/or file system data into your Go application.
Look at the 'Alternatives' section of the 'vfsgen' project:
https://github.com/shurcooL/vfsgen#alternatives
I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so a customer can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and have installed NodeJS, MongoDB and other dependencies on it. But I would like to configure my NodeJS-based application and MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is that my target users do not necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files for the different services end up under /var/log and we can see the status of all the services at once, rather than the user having to look into each Docker service, and possibly troubleshoot issues with Docker itself.
But at the same time I find it convenient to build the application the Docker way.
VMware vApps are usually made of multiple VMs running together to provide a service. They may have start-up dependencies, etc.
Using Docker, you can have those VMs as containers running on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes you towards microservices - in short, in your case, putting each service in a separate container. Then write a docker compose file to bring the containers up and add it to start-up. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository and let the customers pull them. Then provide a docker compose file for them.
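Going back to the single-VM/OVF route, here is a rough sketch of wiring the compose stack into start-up with systemd inside the Docker host VM; the unit name, paths and docker-compose location are placeholders.

```ini
# /etc/systemd/system/myapp.service  (name and paths are placeholders)
[Unit]
Description=myapp docker compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable myapp` and the stack comes up with the VM. Inside the containers you don't need init.d at all: each container's main process is the service, restart policies keep it alive, and logs are available via `docker logs` or can be written to a host directory through a volume if you want to keep the familiar /var/log troubleshooting experience.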
I am going to work with Node.js and PostgreSQL on Linux. I have read for many hours about how Docker actually works. Still, I am not sure whether a Docker environment is needed before starting my project, or whether I can introduce Docker after the project is complete.
Let's first understand what Docker is and how you can use it in your project.
Docker have three core concepts:
1) Docker engine: a lightweight runtime and robust tooling that builds and runs your Docker containers.
2) Docker image: a carbon copy of your project environment, including all environment dependencies such as the base operating system, host entries, environment variables, databases, and web/application servers. In your case: the Linux distribution of your choice, Node.js and the required modules, PostgreSQL and its configuration.
3) Docker container: can be visualized as a virtual Linux server running your project. Each time you use docker run, a new container is launched from the Docker image.
You can visualize a Docker environment as a lightweight virtual machine where you can run your project without any external interference (host entries, environment variables, RAM, CPU) from other projects.
So as a developer, you can develop your project on your dev machine, and once it's ready to be pushed to QA/Staging you can build a Docker image of your project, which can then be deployed in any environment (QA/Staging/Production).
You can launch multiple containers from your image, on a single physical server or on several.
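For example (image name, tag and ports below are made up):

```sh
docker build -t myshop/api:1.0 .               # build one image from your Dockerfile
docker run -d --name api-1 -p 3001:3000 myshop/api:1.0
docker run -d --name api-2 -p 3002:3000 myshop/api:1.0
docker ps                                      # two containers, both from the same image
```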
You can introduce Docker whenever you want. If you are using multiple servers, you can put one server in a Docker container and have the other (non-Dockerised) one make requests to it.
Or you could Dockerise them both.
Basically, introduce Docker when you feel the time is right.
I like to divide a large project into multiple sections - e.g. front-end web server, backend authentication server, backend API server 1, backend API server 2, etc.
As each part of the project gets completed, I Dockerise it. The other parts then use the Dockerised solution.
I'm planning to set up a jenkins-based CD workflow with Docker at the end.
My idea is to have Jenkins automatically build a Docker image for every green build, then deploy that image either via Jenkins or by hand (I'm not yet sure whether I want to automatically deploy each green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try and muck with the philosophy of Docker.
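In practice that usually means some variant of a blue/green switch; a rough sketch (the registry, tags, ports and nginx details are all placeholders):

```sh
# start the new version alongside the old one
docker pull registry.example.com/myapp:42
docker run -d --name myapp-green -p 8082:8080 registry.example.com/myapp:42

# health-check the new container, then point the load balancer's upstream
# at 8082 instead of 8081 and reload it
nginx -s reload

# retire the old container once traffic has drained
docker stop myapp-blue && docker rm myapp-blue
```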
The Docker container itself should always be treated as immutable, since you replace it for every new deployment. Storing state inside the container will not work when you want to frequently ship new releases built on your CI.
Docker supports volumes, which let you write files that persist in a folder on the host. When you then upgrade the Docker container, you use the same volume, so you have access to the same files written by the old container:
https://docs.docker.com/userguide/dockervolumes/
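A rough sketch of that upgrade pattern (names, tags and paths below are placeholders):

```sh
docker volume create app-data

# old release writes its data to the named volume
docker run -d --name myapp-v1 -v app-data:/var/lib/myapp myrepo/myapp:1.0

# new release: replace the container, keep the volume
docker stop myapp-v1 && docker rm myapp-v1
docker run -d --name myapp-v2 -v app-data:/var/lib/myapp myrepo/myapp:2.0
```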