AWS Node.js multiple apps / urls - node.js

Is it possible to run multiple node apps on multiple domains on a single AWS EC2 instance?
If so, what kind of stack would you need?

The easiest way is to use nginx: run each Node.js app on its own port, then set up nginx virtual hosts (server blocks) that proxy each domain to the corresponding port.
Once you know which port each app listens on, you can map a different domain to each one.
https://www.digitalocean.com/community/tutorials/how-to-host-multiple-node-js-applications-on-a-single-vps-with-nginx-forever-and-crontab
Here's a guide that walks through the setup. It's written for DigitalOcean, but it applies just as well to EC2, since an EC2 instance is an ordinary Linux server anyway.
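A minimal sketch of the nginx side, assuming two apps already listening on ports 3000 and 3001 and two hypothetical domains (adjust names, ports and paths to your own setup):

# /etc/nginx/conf.d/node-apps.conf (hypothetical file name)

server {
    listen 80;
    server_name app-one.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;          # first Node app
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;    # keep websockets working
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 80;
    server_name app-two.example.com;

    location / {
        proxy_pass http://127.0.0.1:3001;          # second Node app
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}

Point both domains' DNS at the instance's public IP (an Elastic IP is a good idea so the address survives restarts), and nginx routes each hostname to the right app.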

Related

Hosting multiple .NET Core applications on Ubuntu using Nginx

I created a sample Web API in .NET Core, registered it in the default site file in nginx, and was able to access it from outside.
The API URL looked like https://<>/api/values.
Now I want to add more configuration to host more Web APIs on different port numbers. The problem is how the default file will differentiate between the multiple APIs, since the base URL is the same, i.e. localhost\<>, for all of them.
You need to create server blocks. Each server block listens for and responds to a different app, so you can host as many apps as you want on a single Ubuntu machine with nginx this way.
A guide to setting up nginx server blocks will walk you through the entire process.
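The answer above suggests one server block per app. If everything really has to stay under the same hostname, a common alternative is to differentiate by path inside a single server block; a rough sketch, assuming two Kestrel instances on hypothetical ports 5000 and 5001:

server {
    listen 80;
    server_name _;

    # First .NET Core API, assumed to be running behind Kestrel on port 5000.
    # The trailing slash on proxy_pass strips the /api-one/ prefix before forwarding.
    location /api-one/ {
        proxy_pass http://127.0.0.1:5000/;
        proxy_set_header Host $host;
    }

    # Second API on port 5001.
    location /api-two/ {
        proxy_pass http://127.0.0.1:5001/;
        proxy_set_header Host $host;
    }
}

Reload nginx (sudo nginx -s reload) after adding or changing the blocks.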

Deploy a MEAN stack application to an existing server

I have an Ubuntu server on DigitalOcean which hosts a website, and a Windows server on AWS which hosts another website.
I just built a MEAN.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet on DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu you can read this post, which will help you serve multiple apps.
That example uses nginx, but you can follow it and run the apps without a web server like Apache or nginx in front. If you need subdomains, I would suggest using Apache virtual hosts with the reverse proxy module and pm2 (see the sketch below).
For Windows and IIS I would suggest using iisnode; you can find plenty of articles on Google about how to configure it.
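Regarding the pm2 suggestion above, a minimal ecosystem file might look like this (the app names, paths and ports are placeholders):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "site-one",                  // hypothetical first app
      script: "./site-one/server.js",
      env: { PORT: 3000, NODE_ENV: "production" }
    },
    {
      name: "site-two",                  // second hypothetical app on its own port
      script: "./site-two/server.js",
      env: { PORT: 3001, NODE_ENV: "production" }
    }
  ]
};

Start everything with pm2 start ecosystem.config.js; pm2 keeps both apps running in the background and restarts them if they crash.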
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle heavy traffic and need CPU and memory, I would not suggest running multiple apps on the same instance; but for simple web apps, you can easily use the same instance.
Hope this answer helps you!

Scaling a two-tier MEAN application with Docker on AWS

So my current set-up is a Node.js application that serves up an Angular front-end, a second Node.js application that runs Express and serves as an API, and a MongoDB instance. Simply put, the client-side app talks to the back-end app, and the back-end app talks to MongoDB.
I was looking into how to Dockerize these applications, and it seems like some examples use linking. So my question is: does linking only work on the same host (meaning a single EC2 instance on AWS), or across multiple EC2 instances? If only the former, and I have both apps and Mongo containerized on one instance, how do I scale out? If I spin up a second EC2 instance, would I put both containerized Node apps and Mongo again on that second instance? Is having a Mongo container on the same instance as the Node apps a single point of failure? How is that fault tolerant?
Just trying to wrap my head around this; apologies for my ignorance on the subject. Thanks!
You should put each app, as well as the MongoDB server, in its own container (which is what I think you intend); the linking (via Docker Compose or another method) is just networking. Docker links only work between containers on the same host: they create a private network there. You can create other networks so containers can talk to each other, and also to a LAN, a WAN, whatever.
Yes, putting them all on the same EC2 instance creates a SPOF.
If that's a concern, look into: https://docs.docker.com/swarm/networking/
Docker Swarm is fully compatible with Docker’s networking features. This includes the multi-host networking feature which allows creation of custom container networks that span multiple Docker hosts.
Or load balance your apps and use an AWS-hosted MongoDB cluster. There are many possible approaches based on your needs and budget.
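For reference, a minimal docker-compose sketch of the single-host layout described above (the service names, build paths and environment variable are assumptions, not something from the question):

# docker-compose.yml
version: "3"
services:
  frontend:                  # Node app serving the Angular front-end
    build: ./frontend
    ports:
      - "80:3000"
    depends_on:
      - api
  api:                       # Express API
    build: ./api
    environment:
      - MONGO_URL=mongodb://mongo:27017/app   # services reach each other by name on the compose network
    depends_on:
      - mongo
  mongo:                     # single MongoDB container -- the SPOF discussed above
    image: mongo
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: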

IIS (Win2012) on EC2 with auto scaling

We are looking at moving around 100 websites that we have on a dedicated web server at our current hosting company, and hosting these sites on an EC2 Windows 2012 server.
I've looked at the types of EC2 instances available. Am I better off going for an m1.small (or a t1.micro with auto scaling)? With regard to auto scaling, how does it work? If I upload a file to the master instance, when are the other instances updated? Is it when the instances are auto scaled again?
Also, I will need to host a MailEnable (mail server) application. Any thoughts on best practice for this? Am I better off hosting one server for everything, or splitting it across instances...?
When you are working with EC2, you need to start thinking differently about how your applications are designed and deployed.
Auto scaling works best when your instances follow a shared-nothing architecture: the instances themselves should never store persistent data, and they should be able to set themselves up automatically at launch.
Some applications are not designed to work in this environment; they require local file storage or have other constraints.
You probably won't be using micro instances; they are mostly designed for very specific low-utilization workloads.
You can run a mail server on EC2, but you will have to use an Elastic IP and whitelist the instances that send mail. By default, EC2 instances are on the Spamhaus block list.
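As an illustration of "set up automatically at launch", EC2 user data on a Windows AMI can bootstrap the instance. This sketch assumes the site content lives in an S3 bucket (the bucket name and paths are hypothetical) and uses the AWS Tools for PowerShell that come preinstalled on Amazon's Windows AMIs:

<powershell>
# Hypothetical bootstrap: pull site content from S3 at launch so the instance stays stateless.
Import-Module AWSPowerShell
Read-S3Object -BucketName my-site-content -KeyPrefix sites/ -Folder C:\inetpub\wwwroot
# IIS site and app-pool creation would be scripted here as well.
</powershell>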

Expressjs to production

I am new to Express.js, and I want to deploy an Express.js app to production. Based on my googling, here's the setup on Rackspace I am thinking of:
1 load balancer + 2 servers + run the app with forever
My questions are:
What engine shall I use to run the app? nginx?
How many apps can I run per server?
Thank you.
If you are serving static files or using any of nginx's reverse proxy features, you can use nginx. But if not, since your servers are behind a load balancer, nginx isn't necessary at all.
The rule of thumb is one node.js/express.js process per core. Have a look at cluster to help you manage this. Make sure your load balancer knows about all the node.js processes you are running (and is not just load balancing between one IP/port pair on each server).
Update: cluster is now built into Node.js out of the box.
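A minimal sketch of the cluster approach (one worker per core; the port number is an assumption, and in a real deployment the worker would load your Express app):

// cluster-sketch.js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(() => cluster.fork());

  // Replace workers that die -- roughly what forever/pm2 do for a single process.
  cluster.on('exit', (worker) => {
    console.log('worker ' + worker.process.pid + ' died, starting a new one');
    cluster.fork();
  });
} else {
  // A bare http server keeps the sketch self-contained; swap in your Express app here.
  http.createServer((req, res) => {
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(3000);
}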
Also, if you are deploying on Ubuntu you can use upstart instead of forever if you like.
You need Node.js installed on the machine to run a Node.js app; nginx is a server used as a reverse proxy and load balancer. You can also run the app through pm2 instead of forever, which will handle clustering and run your app in the background.
