Dev environment for multiple server setup (Node.js)

This is my first time building out something with multiple servers. I wanted to know if anyone could point me towards a guide for setting up a dev environment (Windows) for a backend that will be set up on multiple servers, i.e. one server for the API, one for another set of processes (e.g. file compression), and one for everything else.
Again, just trying to figure out if it's possible to set up a dev environment to test out the system on my local machine.
Thanks

You almost certainly want to run virtual machines (on something like VMware or VirtualBox) to really test multi-machine stuff. However, I also develop for multiple machines every day (we have an array of app servers, an array of background worker servers, e-commerce servers, cache stores and front proxies), and I still just develop on one virtual machine that has all of that stuff running on it. Provided you make hostnames and ports configurable for everything, there's not much difference between localhost port 9000 and some.server.tld port 8080. Actually running all the VMs on a single computer would likely be painful, both in terms of system resources and complexity.
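For example, here is a minimal sketch of what "configurable hostnames and ports" can look like in a Node.js app; the service names, environment variables and default ports below are hypothetical:

// config.js - every service endpoint comes from the environment, with
// localhost defaults for development (all names and ports are made up).
module.exports = {
  api: {
    host: process.env.API_HOST || 'localhost',
    port: parseInt(process.env.API_PORT, 10) || 9000,
  },
  compression: {
    host: process.env.COMPRESSION_HOST || 'localhost',
    port: parseInt(process.env.COMPRESSION_PORT, 10) || 9001,
  },
};

On your dev machine everything resolves to localhost; on the real deployment you only change the environment variables, not the code.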
There are also tools to help set up VMs with similar or identical configurations. Take a look at http://vagrantup.com/ and http://babushka.me/.
Just my $0.02.

Related

Making multiple Docker Machines accessible across local network. Linux & Mac

I know there are several questions similar to this, but as far as I can see there's not an answer for the setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network, on which I can run multiple Docker machines/containers, one for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access it by publishing the ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just not been able to find a single tutorial or set of instructions to get this sort of set up running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this, I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the host on the dev box and setting up config rules to proxy to the local machines. I'm unsure how well this would work (it works for web servers, but what if I want to SSH or bash into one of those machines, or if one has a MySQL container I want to connect to?).
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way I resolved this is by using NAT on the Linux box and assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally let me share multiple Docker machines using the same port (80) on different IPs.

Setting up a web server for access outside of subnetwork (Node.js, Nginx maybe, Ubuntu server)

A little bit of context: I have developed a webapp on Node.js (and a glamorous set of extensions). It has been approved for testing with real users at my company, and I am supposed to deploy it now. The problem is that I basically have no idea how to attack this. I have so many questions.
For the moment I have created a virtual machine on the local server. I have installed Ubuntu Server on it, and I have an intuition about how to deploy the app on it (I suppose following the same steps as when I started working on this project). I do not know, however, whether I can have remote access to this virtual machine from outside my network. I also don't know whether additional configuration on Ubuntu's side is needed to make such an idea work (for example, during installation there was a part about proxies that I decided to ignore at the time).
From the few documents I have read since I was assigned this, a solution may lie in using nginx. The logic behind it, if I am not mistaken (and please correct me if I am), is that nginx can accept HTTP requests (through port 80, which is normally open on most machines) and forward them to a specific port on the machine (the app I have developed).
At an earlier stage, what resources would I need to start this off? Would I need a domain name? Is it necessary? Do I need a different virtual server to link the apps, or can they be on the same machine?
If you have additional comments or tips for someone who is learning to do this kind of thing, please share them.
For remote access, you will need a couple of things. First of all, you will need to make sure that your virtual machine is on a bridged adapter. I'm not sure which virtualization software you are using, or I'd give you more detail on how to do this. Second, you will need to make sure that your router has port 80 (or whatever port you chose to use) set up via port forwarding so that incoming requests map to the server (a request comes to the router on that port, and the router must then know where to send it). Finally, if you want to use a port other than 80, you should be able to configure this in the Node.js application. This may also be configurable in the router, so that requests coming in on port 80 are mapped to, say, 8080, but given that this is a company, it's probably easier to reconfigure the Node.js server than to set up a special mapping.
This comes from personal experience with hosting web servers at home. Corporate routers should need similar configuration, unless each system has a public IP address on the internet, which is unlikely.
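If you do go the route of reconfiguring the Node.js server rather than the router, here is a minimal sketch of reading the listening port from the environment; the file name and the 8080 default are just examples:

// server.js - listen on whatever port the environment specifies,
// falling back to 8080 when PORT is not set (hypothetical default).
const http = require('http');

const port = parseInt(process.env.PORT, 10) || 8080;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the app\n');
}).listen(port, () => {
  console.log('Listening on port ' + port);
});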

Transition Virtual Hosts to Docker Containers

I currently run a Red Hat Linux server with Plesk to host a hundred or so domains. For multiple reasons I'd like to transition away from Plesk and to Docker containers with each virtual host as one or more containers. I'm unclear from what I've read so far what would be the best approach to this.
A typical site includes the doc root file area and one or two MySQL databases. We run PHP on all the sites. Some sites may have constraints on the version of PHP they can run. Some of the sites use SSL. I don't believe there are any constraints on the MySQL versions, but it's of course possible that future MySQL versions could deprecate some feature that is needed. I don't believe there's any dependency on the Apache version, but I do rely on some specific Apache modules being installed. There may be a site or two that have dependencies outside of their doc root and not part of the basic virtual host setup, but I don't believe any require a specific version of Linux.
I would like the containers to have maximum portability so that I can have flexibility in moving sites to whatever server or cloud service I choose. Part of my goal is to retire the current server and move sites to servers which best fit them.
I would also like to try upgrading the PHP version after the containers are created.
So would a single container include the entire doc root file system, including the data directories where users can upload/ftp files? Would it include the MySQL database, or would that be separate? I assume I would include the current version of PHP so that I could upgrade each one when I was ready. Would it include Apache when specific Apache modules are required? Is there a reason to include Apache and/or MySQL in all containers?
One last piece. I'm looking into using CoreOS which utilizes Docker as an integral part.
Any and all inputs are appreciated.
The whole idea of Docker is to run processes/components in isolation, to keep them easily upgradable. I have tinkered with this in the past and came up with the following.
Create four containers per instance (customer):
Apache or nginx
php-fpm
MySQL
Busybox (as a data container)
Link all of them together and set volumes for all data that should persist in the data container: the MySQL data and /var/www plus the site config files, for example.
This way you can always switch out one of the components while keeping the others. It's questionable, though, whether Docker is a solution for a full virtual server, as Docker containers do not have a full init system and you'll have to bend things quite a bit to resemble a full virtual machine. Think of them more as "application containers", hence the idea of separating concerns.
Update:
Newer Docker versions come with the docker-compose tool which greatly eases this task.
I am trying to solve the same issues with cPanel instead of Plesk.
We can try to accomplish this using plugins for cPanel or Plesk; however, we have to worry about a few things.
We have to create some premade template images for the containers that our clients can use. It cannot be just any container from Docker Hub, user Dockerfiles, etc., because cPanel/Plesk will look for specific log files in specific locations for bandwidth calculations, disk quotas, etc.
The biggest advantage of this solution is that we can provide CloudLinux-style isolation and easy resource allocation/fair sharing. However, it is not that easy.
To answer your question:
Every container will be nearly a complete system, so you will be able to fit fewer clients per host: each container might be around 1 GB and by default has to run its own web server/PHP, and hence has a larger RAM footprint.
It's painful to run MySQL inside each container; it is better to run MySQL on the host or in one dedicated container and share it. That way Plesk's tools will help.
You may also have to use the standard Apache and reverse proxy to each container after SSL termination so that Plesk's standard tools are used, but then each container will still have to run its own web server, or we may have to do some trickery with php-fpm to allow the host's Apache to talk to each container's php-fpm processes. This is more painful than letting each container just run its own nginx, but it is possible.
This doesn't prevent users from installing their own MySQL server within their container if they need to.
This kind of thing is easy for someone from cPanel or Plesk to do, but for others it will take a lot of dedicated development time plus testing to make sure all of this works.
I was going to invest some time in creating this kind of plugin for cPanel, but I'm still undecided. I may try it if I can rope in some investors.
You can see the amount of interest cPanel shows in this issue here: http://features.cpanel.net/responses/dockerio-support
I will leave you to decide.
Also, as an alternative solution:
Instead of playing to cPanel's tune, I created this: https://github.com/paimpozhil/WhatPanel
Here every site runs in its own container (and its own VM if needed).
Migration is as simple as exporting/importing a container with tools like paimpozhil/docker-volume-backup and acaranta/docker-backuper on GitHub.
I didn't complete the migrator/PHP upgrade tools, etc. here, but will do so when I have free time.

Separate service on NodeJS server

I want to know how to structure my NodeJS server.
I want to separate the services offered on my website so that I can set up a cluster in the future and have many servers (each dedicated to one specific task).
Example:
The 'main' server, which has one project: ExpressJS and the database
The 'communication' server, which has one project: chat + forum
Other projects: for complex computing (generating charts/stats, emailing)
Could you explain the different approaches for this type of complex website?
Like Benjamin Gruenbaym said, the architecture belongs somewhere else.
If you are wondering about how to setup the applications on an individual server, there are a few things to keep in mind.
NodeJS runs in a single process, so it should ideally take up 1 core of the CPU. If you run a database on the same server, that is another core. So it may be fine to host all node applications on the same server, if it has a sufficient number of cores.
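As an aside, if one of those Node apps ever needs more than one core on the same machine, Node's built-in cluster module can fork one worker per core; a minimal sketch (port 8000 is arbitrary):

// cluster-sketch.js - fork one worker per CPU core; the workers share the port.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.end('Handled by worker ' + process.pid + '\n');
  }).listen(8000);
}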
To run two different Node processes on the same machine, you simply start them one after another, but make sure that they listen on different ports.
To make sure that you can scale out your application later, it is important that you use domain names instead of IP addresses when you identify your services to each other. So the Node.js app should know about the database as mydatabase.mycompany.com, not as 192.168.1.10 or any other IP address. This will allow you to later move the database to another network address or to use a load balancer.
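Putting those two points together, each service might look something like this minimal sketch; the DB_HOST variable, the default hostname and the port are illustrative only:

// service.js - started once per service with a different PORT, and the
// database is addressed by a configurable hostname, never a hard-coded IP.
const http = require('http');

const dbHost = process.env.DB_HOST || 'mydatabase.mycompany.com';
const port = parseInt(process.env.PORT, 10) || 3000;

http.createServer((req, res) => {
  res.end('This service talks to the database at ' + dbHost + '\n');
}).listen(port, () => {
  console.log('Service listening on port ' + port);
});

Start a second copy with a different PORT and you have two Node processes coexisting on the same machine; later, moving the database only means changing DB_HOST.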

Virtualization: Do you store web content inside the virtual machines?

In creating virtual web servers (VirtualBox for me), does web content usually go "inside" the virtual machine or is there usually a file server running on bare metal serving files (content) to all the virtual machines?
Is there a typical scenario for storing content for virtual servers?
It depends: is this development or production? If it's production with multiple servers, store everything on a file share with fast network access so you don't have to deal with the hassle of updating every server. If it's development, just put everything on the virtual server, since performance isn't critical and it's simpler.
What are you creating the virtual servers for?
I usually use VMs as testing machines, in which case the content goes "inside" (where Apache is running).
If you are trying to simulate a networked system (like a load balancer or something), you will set up the content on the simulated machine that represents the file server in your scenario.
It depends on your architecture and how the information is going to be updated. If the information is going to be updated frequently and replicated across multiple machines, it may make sense to have a file server on bare metal that the VMs connect to over the network. However, you should be aware that with VirtualBox in particular you're going to take a pretty significant hit on network performance, even with the guest tools installed. I've noticed I get about 20% of the network speed of VMware from VirtualBox.
