IIS (Win2012) on EC2 with auto scaling

We are looking at moving around 100 websites that we have on a dedicated web server at our current hosting company, and hosting these sites on an EC2 Windows Server 2012 instance.
I've looked at the types of EC2 instances available. Am I better off going for an m1.small (or a t1.micro with auto scaling)? With regard to auto scaling, how does it work: if I upload a file to the master instance, when are the other instances updated? Is it when the instances are auto scaled again?
Also, I will need to host a MailEnable (mail server) application. Any thoughts on best practice for this? Am I better off hosting one server for everything, or splitting it across instances?

When you are working with EC2, you need to start thinking differently about how your applications are designed and deployed.
Auto scaling works best when your instances follow a shared-nothing architecture: the instances themselves should never store persistent data, and they should be able to set themselves up automatically at launch.
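As a rough illustration of "set themselves up at launch", an instance can pull everything it needs from shared storage when it boots instead of relying on files copied to it by hand. The sketch below is a hypothetical Node.js bootstrap script that syncs site content down from S3; the bucket name, key prefix, and web root are invented for the example, and on a Windows/IIS fleet you would more likely do the equivalent from the launch user data.

```js
// Hypothetical bootstrap script: pull this instance's site content from S3 at
// launch so the instance itself never stores the only copy of anything.
// Bucket, prefix, and target directory are made-up example values.
const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');

const s3 = new AWS.S3({ region: 'us-east-1' });
const BUCKET = 'example-site-content';   // assumption: content lives in S3
const PREFIX = 'sites/';                 // assumption: one prefix for the site tree
const WEB_ROOT = '/var/www';             // assumption: local web root to populate

async function bootstrap() {
  let token;
  do {
    const page = await s3.listObjectsV2({
      Bucket: BUCKET, Prefix: PREFIX, ContinuationToken: token,
    }).promise();
    for (const obj of page.Contents || []) {
      if (obj.Key.endsWith('/')) continue;              // skip "directory" keys
      const dest = path.join(WEB_ROOT, obj.Key.slice(PREFIX.length));
      fs.mkdirSync(path.dirname(dest), { recursive: true });
      const body = await s3.getObject({ Bucket: BUCKET, Key: obj.Key }).promise();
      fs.writeFileSync(dest, body.Body);                // Body is a Buffer
    }
    token = page.NextContinuationToken;
  } while (token);
}

bootstrap().catch(err => { console.error(err); process.exit(1); });
```

Any instance the Auto Scaling group launches then ends up with the same content, and uploading a new file means pushing it to S3 rather than to one "master" instance.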
Some applications are not designed to work in this environment, because they require local file storage or have other dependencies that tie them to a single machine.
You probably won't be using micro instances; they are mostly designed for very specific, low-utilization workloads.
You can run a mail server on EC2, but you will have to use an Elastic IP and get the sending instances whitelisted. By default, EC2 addresses are on the Spamhaus block lists.

Related

NodeJS TLS/TCP server in need of an external firewall

Problem:
I have an AWS EC2 instance running FreeBSD. In there, I'm running a NodeJS TLS/TCP server. I'd like to create a set of rules (in my NodeJS application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load-balancer, that I can control from NodeJS programmatically, such that when certain conditions are given, I can block a specific remote-address(IP) before it reaches the NodeJS instance.
Things I've tried:
I initially looked into nginx as an option, running it on a second instance and placing my NodeJS server behind it, but after skimming through the NGINX Cookbook: Advanced Recipes for High Performance Load Balancing, I learned that only NGINX Plus (the paid version) allows for remote/API control and customization. While I believe that $3,500 per license is not too much (considering all of NGINX Plus's features), I simply cannot afford to buy it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and the IP address blocking.
My second thought was to go with AWS ELB (Elastic Load Balancing) by integrating the AWS SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of the documentation (unless I'm mistaken), it seems the two features I need are not available on ELB. AWS seems to offer an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into CloudFlare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell whether their firewall would allow this level of (remote) control.
Question:
What are my options? What would you guys recommend I do?
I think Nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can simply edit an nginx config file and add a deny rule for that IP address.
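If you go that route, a minimal sketch of the Node side could look like the following. It assumes an nginx config that already has an `include /etc/nginx/conf.d/blocklist.conf;` line in the relevant block and a process that is allowed to run `nginx -s reload`; both the file path and the reload step are assumptions, not something from the question.

```js
// Rough sketch: append a deny rule for an IP to an nginx include file and
// reload nginx so the rule takes effect. Paths/permissions are assumed.
const fs = require('fs');
const { execFile } = require('child_process');

const BLOCKLIST = '/etc/nginx/conf.d/blocklist.conf'; // assumed include file

function blockIp(ip, callback) {
  fs.appendFile(BLOCKLIST, `deny ${ip};\n`, (err) => {
    if (err) return callback(err);
    // Reload nginx so the new deny rule is picked up.
    execFile('nginx', ['-s', 'reload'], callback);
  });
}

blockIp('203.0.113.7', (err) => {
  if (err) console.error('failed to block IP:', err);
  else console.log('IP blocked at the nginx layer');
});
```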
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can do this with Node.js alone.
In Node.js you could keep a global array variable storing all blocked IP addresses, and on each connection check whether the connecting host's IP is in that array. However, a problem occurs when the machine or application is restarted: you lose all information about the blocked IPs. As a solution, you can set up Redis (a key-value database that also supports other data types) and store the blocked IPs there. Because Redis keeps its data in RAM, all interaction with the database is effectively instant, and when the machine or node is restarted, Redis restores from its on-disk backup and continues working in RAM with the old data.
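A minimal sketch of that idea, assuming the node-redis v4 client, a Redis set named `blocked_ips`, and locally generated `server.key`/`server.crt` files (all of those names are assumptions):

```js
// Sketch of a TLS server that rejects connections from IPs stored in a Redis set.
// Assumes Redis is running locally and that server.key / server.crt exist.
const tls = require('tls');
const fs = require('fs');
const { createClient } = require('redis'); // node-redis v4

const BLOCKED_SET = 'blocked_ips';         // assumed key name

async function main() {
  const redis = createClient();            // defaults to localhost:6379
  await redis.connect();

  const server = tls.createServer({
    key: fs.readFileSync('server.key'),
    cert: fs.readFileSync('server.crt'),
  }, async (socket) => {
    const ip = socket.remoteAddress;
    if (await redis.sIsMember(BLOCKED_SET, ip)) {
      socket.destroy();                    // drop blocked clients immediately
      return;
    }
    socket.write('hello\n');               // ...application protocol here...
  });

  // Example of adding an address to the block list from application logic.
  await redis.sAdd(BLOCKED_SET, '203.0.113.7');

  server.listen(8000);
}

main().catch(console.error);
```

Because the set lives in Redis rather than in process memory, restarting the Node application does not lose the block list.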

Deploy a MEAN stack application to an existing server

I have an Ubuntu server on DigitalOcean which hosts a website, and a Windows server on AWS which hosts another website.
I just built a mean.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet on DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu, you can read this post, which will help you serve multiple apps.
That example uses Nginx, but you can follow it and run your app without a web server like Apache or Nginx in front. If you need subdomains, I would suggest using Apache virtual hosts with the reverse proxy module, plus pm2 to keep the Node processes running; a rough pm2 example follows below.
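For the pm2 part, a hedged example of an `ecosystem.config.js` that runs the new MEAN app on its own port next to whatever the instance already serves; the app name, directory, and port are made up for the illustration:

```js
// ecosystem.config.js - example pm2 process file for running the MEAN app
// alongside existing sites on the same instance. Names and ports are examples.
module.exports = {
  apps: [
    {
      name: 'mean-app',            // hypothetical app name
      script: './server.js',       // entry point of the MEAN app
      cwd: '/var/www/mean-app',    // assumed deploy directory
      instances: 1,
      env: {
        NODE_ENV: 'production',
        PORT: 3001,                // pick a port not used by the other sites
      },
    },
  ],
};
```

You would start it with `pm2 start ecosystem.config.js`, and the Apache (or Nginx) virtual host for the subdomain then just reverse-proxies to 127.0.0.1:3001.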
For Windows and its IIS, I would suggest using iisnode; you can find a lot of articles on Google about how to configure it.
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle heavy traffic and need a lot of CPU and memory, I would not suggest running multiple apps on the same instance; but if these are simple web apps, you can easily use the same instance.
Hope this answer will help you!

Setting up ELB on AWS for Node.js

I have a very basic question but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by splitting the one machine that currently hosts both the server and the database into:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which as well is never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly, I am very confused about the way this should work in practice:
Do all deployed instances run using the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized for every case that I cannot seem to figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an Auto Scaling group with a minimum size of 1 that is configured to use an AMI built from your NodeJS server. The Auto Scaling group adds instances to and removes them from the ELB as instances are created and deleted.
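For a more concrete picture, here is a hedged sketch using the AWS SDK for Node.js (v2) to create such a group plus a CPU-based scaling policy; the launch configuration name, target group ARN, and subnet IDs are placeholders you would replace with your own:

```js
// Sketch: create an Auto Scaling group of NodeJS app servers behind a load
// balancer target group, scaling on average CPU. All IDs/ARNs are placeholders.
const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

async function createGroup() {
  await autoscaling.createAutoScalingGroup({
    AutoScalingGroupName: 'node-app-asg',
    LaunchConfigurationName: 'node-app-lc',   // built from your server AMI
    MinSize: 1,                               // always keep one app server
    MaxSize: 4,
    TargetGroupARNs: ['arn:aws:elasticloadbalancing:...:targetgroup/node-app/...'],
    VPCZoneIdentifier: 'subnet-aaaa1111,subnet-bbbb2222',
  }).promise();

  // Scale out/in automatically around 50% average CPU across the group.
  await autoscaling.putScalingPolicy({
    AutoScalingGroupName: 'node-app-asg',
    PolicyName: 'cpu-target-tracking',
    PolicyType: 'TargetTrackingScaling',
    TargetTrackingConfiguration: {
      PredefinedMetricSpecification: { PredefinedMetricType: 'ASGAverageCPUUtilization' },
      TargetValue: 50,
    },
  }).promise();
}

createGroup().catch(console.error);
```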
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume, you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the Auto Scaling group.

Keeping Multiple Servers in a Cluster In-Sync?

I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only changes it on a single machine (one of the app servers) in the cluster.
Is there an open-source application for Linux that may allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients running on each machine to be synced; the clients would talk to a master server, which would then decide how and what to sync from each machine.
I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient: the cron job would have to be modified on each machine to send files to every other machine in the cluster, which would just be a whole bunch of unnecessary data transfers and SSH connections.
I'm dealing with setting up a similar solution, and I'm halfway there. I would recommend you use lsyncd, which basically monitors the disk for changes and then immediately (or at whatever interval you want) syncs files to a list of servers using rsync.
The only issue I'm having is keeping the server list up to date: since I can spin up additional servers at any time, each machine in the cluster needs to be notified whenever a machine is added to or removed from the cluster.
I think lsyncd is a great solution that you should look into. The issue I'm having may turn out to be a problem for you as well, and that remains to be solved.
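One way to attack that server-list problem, sketched under the assumption that the app servers are EC2 instances tagged `Role=app`: periodically ask AWS for the running instances and regenerate the lsyncd target list from the result (the tag name and output path below are assumptions).

```js
// Sketch: rebuild the list of rsync/lsyncd targets from the EC2 instances
// currently tagged Role=app. Tag name and output file are assumptions.
const AWS = require('aws-sdk');
const fs = require('fs');

const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function refreshTargets() {
  const result = await ec2.describeInstances({
    Filters: [
      { Name: 'tag:Role', Values: ['app'] },
      { Name: 'instance-state-name', Values: ['running'] },
    ],
  }).promise();

  const ips = [];
  for (const reservation of result.Reservations) {
    for (const instance of reservation.Instances) {
      if (instance.PrivateIpAddress) ips.push(instance.PrivateIpAddress);
    }
  }

  // One target per line; a small wrapper can regenerate the lsyncd config
  // from this file and reload lsyncd whenever the list actually changes.
  fs.writeFileSync('/etc/lsyncd/targets.txt', ips.join('\n') + '\n');
  return ips;
}

refreshTargets().then(ips => console.log('sync targets:', ips)).catch(console.error);
```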
Instead of keeping tens or hundreds of servers cross-synchronized, it would be much more efficient, more reliable, and above all simpler to maintain just one "admin node" and replicate changes from it to all of your "worker nodes".
For instance, at our company we use a Development server -> Staging server -> Live backends workflow, where all the changes are transferred across servers using a custom PHP+rsync front end. That allows the developers to push updates to a Staging server in the live environment, test out the changes, and roll them out to the Live backends incrementally.
A similar approach could very well work in your case as well. Obviously it's not a plug-and-play solution, but I see it as the easiest way to go - both in terms of maintainability and scalability.
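The front end described above is PHP+rsync; a rough Node.js equivalent of the same "admin node pushes to the workers" step, with made-up hostnames and paths and assuming passwordless SSH to the workers, might look like this:

```js
// Rough Node equivalent of an "admin node pushes to workers" deploy step.
// Hosts and paths are made-up examples; assumes passwordless SSH to the workers.
const { execFile } = require('child_process');

const WORKERS = ['worker1.internal', 'worker2.internal']; // example hosts
const SRC = '/var/www/site/';                             // copy on the admin node
const DEST = '/var/www/site/';                            // path on each worker

function pushTo(host) {
  return new Promise((resolve, reject) => {
    execFile('rsync', ['-az', '--delete', SRC, `deploy@${host}:${DEST}`],
      (err, stdout, stderr) => (err ? reject(stderr || err) : resolve(host)));
  });
}

Promise.all(WORKERS.map(pushTo))
  .then(hosts => console.log('pushed to:', hosts.join(', ')))
  .catch(err => console.error('push failed:', err));
```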

Dev environment for multiple server setup - Nodejs

This is my first time building out something with multiple servers. I wanted to know if anyone could point me towards a guide for setting up a dev environment (Windows) for a backend that will be spread across multiple servers, i.e. one server for the API, one for another set of processes (e.g. file compression), and one for everything else.
Again, just trying to figure out if it's possible to set up a dev environment to test out the system on my local machine.
Thanks
You almost certainly want to run virtual machines (on something like VMware or VirtualBox) to really test multi-machine setups. However, I also develop for multiple machines every day (we have an array of app servers, an array of background worker servers, e-commerce servers, cache stores, and front proxies), and I still just develop on one virtual machine that has all of that stuff running on it. Provided you make hostnames and ports configurable for everything, there's not much difference between localhost port 9000 and some.server.tld port 8080. Actually running all the VMs on a single computer would likely be painful, both in terms of system resources and complexity.
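In practice, "make hostnames and ports configurable" can be as simple as a small config module that reads everything from environment variables with localhost defaults; the service names and ports below are invented for the example:

```js
// config.js - example of keeping every host and port configurable so the same
// code runs against localhost in development and real servers in production.
// Service names and default ports here are invented for illustration.
module.exports = {
  api: {
    host: process.env.API_HOST || 'localhost',
    port: Number(process.env.API_PORT) || 9000,
  },
  worker: {
    host: process.env.WORKER_HOST || 'localhost',
    port: Number(process.env.WORKER_PORT) || 9001,
  },
  cache: {
    host: process.env.CACHE_HOST || 'localhost',
    port: Number(process.env.CACHE_PORT) || 6379,
  },
};
```

In development everything resolves to localhost; in production each entry points at its own server, and the application code itself never changes.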
There are tools to help with setting up VMs with similar or the same configurations too. Take a look at http://vagrantup.com/ and also http://babushka.me/.
Just my $0.02.

Resources