How does one deploy multiple micro-services in Node on a single AWS EC2 instance?

We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL.
Services may talk to each other.
Services can be updated/deployed separately.
Do we need a beanstalk for each? I hope not.
Thank you in advance

The way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief: we kept a dedicated repository in ECR (Elastic Container Registry) for each of our services, and a deploy script built and pushed the "versioned" images to those repositories.
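As a rough illustration, such a deploy script could look like the sketch below. The region, account id, and service name are placeholders, and the login command assumes AWS CLI v2 (older CLIs used aws ecr get-login instead):

    #!/usr/bin/env bash
    # Hypothetical deploy script: build, tag, and push one service's image to ECR.
    set -euo pipefail

    REGION=us-east-1                # assumption: your AWS region
    ACCOUNT_ID=123456789012         # assumption: your AWS account id
    SERVICE=orders-service          # assumption: one ECR repo per service
    VERSION=$1                      # e.g. ./deploy.sh 1.4.2
    REPO="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$SERVICE"

    # Log in to ECR (AWS CLI v2 syntax).
    aws ecr get-login-password --region "$REGION" \
      | docker login --username AWS --password-stdin "$REPO"

    docker build -t "$SERVICE:$VERSION" .
    docker tag "$SERVICE:$VERSION" "$REPO:$VERSION"
    docker push "$REPO:$VERSION"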
Once that is configured and set up, all you need to deploy is a Dockerrun.aws.json file, which lists all the apps you want to run as part of the Docker cluster on one EC2 instance (make sure the instance is big enough to handle multiple applications). This file is also where you declare links between applications (so they can talk to one another), port configurations, logging drivers and groups (yes, we used AWS CloudWatch for logging), and many other fields. The JSON is very similar to the docker-compose.yml you would use to bring up your stack for local development and testing; a trimmed-down sketch follows.
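Here is a minimal sketch of a version-2 (multi-container) Dockerrun.aws.json. The image names, ports, memory sizes, and log group are all placeholders, not values from the original setup:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "orders-service",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.4.2",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 3001, "containerPort": 3001 }
          ],
          "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
              "awslogs-group": "my-app-logs",
              "awslogs-region": "us-east-1"
            }
          }
        },
        {
          "name": "users-service",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/users-service:2.0.0",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 3002, "containerPort": 3002 }
          ],
          "links": ["orders-service"]
        }
      ]
    }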
I would suggest checking out the sample configuration that Amazon provides for more information. Also, I found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!

It is not clear whether you have a particular tool in mind. Whatever tool you use to deploy a single micro-service, deploying multiple ones should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine; a minimal sketch follows.
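For example, assuming a process manager such as pm2 and Node services that read their port from the PORT environment variable (both assumptions about your setup):

    # Each service runs independently on its own port and can be
    # restarted/redeployed without touching the others.
    PORT=3001 pm2 start users/index.js  --name users
    PORT=3002 pm2 start orders/index.js --name orders

    # Redeploy just one service after pulling new code:
    cd orders && git pull && pm2 restart orders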
Each service will have its own endpoint URL
You can use nginx as a reverse proxy to redirect requests from port 80 to the required port of each micro-service; a minimal config sketch appears after the next point.
Services may talk to each other
This again should not be an issue. You can either call them directly with the port number, or via a fully qualified name that comes back through nginx.
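A minimal nginx sketch along these lines, where the domain, paths, and upstream ports are hypothetical, gives each service its own endpoint URL on port 80:

    # /etc/nginx/conf.d/microservices.conf (hypothetical paths and ports)
    server {
        listen 80;
        server_name example.com;   # assumption: your domain

        # /users/... -> Node service on port 3001
        location /users/ {
            proxy_pass http://127.0.0.1:3001/;
        }

        # /orders/... -> Node service on port 3002
        location /orders/ {
            proxy_pass http://127.0.0.1:3002/;
        }
    }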

Related

Azure Docker Container Service Multiple Instances

I'm using Azure for the first time to deploy a web app. More specifically, I'm using the Docker Container Service to do this. I already have one instance of it deployed. But, I want to also deploy 2 more instances of the same web app. I want each instance to have a different URL. What is the best way of doing this? Do I have to add a new container service for each new instance and repeat the steps I did for deploying the first instance?
In the case of Azure Container Service (ACS), you first choose a container orchestrator, i.e. Swarm, DC/OS, or Kubernetes (creating replicas works differently in each of them).
If you choose Swarm, then either create a separate application container for the same application with a different endpoint and use the automatic discovery feature for the different URL(s), or put a reverse-proxy load balancer such as Nginx in front, which will work on the same URL but route to a different port (my blog here might help you).
Creating replicas across a cluster is used for load balancing, routing mesh, scaling, rolling updates, etc. A short sketch of the Swarm route follows.
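As a rough sketch of the Swarm approach (image and service names are placeholders, and the node must already be in swarm mode via docker swarm init), you can either publish the same image several times on different ports, or scale one service and let the routing mesh balance it:

    # Two instances of the same app, each reachable on its own port/URL:
    docker service create --name web-a --publish 8081:80 myorg/webapp:latest
    docker service create --name web-b --publish 8082:80 myorg/webapp:latest

    # Or one service with replicas behind Swarm's routing mesh:
    docker service create --name web --publish 80:80 --replicas 3 myorg/webapp:latest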

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsofts new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using the Mesos DNS to get ahold of the MySQL container host (for now I don't really care which container I get ahold of). I set the WORDPRESS_DB_HOST environment var to mysql.marathon.mesos and specified the host of MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a Probe for port 3306 in Azure itself, this worked but seems like a very complicated way to achieve something so simple. In Kubernetes and ECS links can be simply defined by using the container name as hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See screenshot attached.)
Update: If I ssh into the master node, then I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; both are Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a more robust solution like I showed here (essentially mounting a file share). A minimal sketch of the persistent-volumes route follows.
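A hedged sketch of a MySQL app definition using Marathon's local persistent volumes, following the two-volume pattern from the Marathon docs (image tag, sizes, and password are placeholders; the data stays pinned to one agent):

    {
      "id": "/mysql",
      "cpus": 1,
      "mem": 1024,
      "instances": 1,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "mysql:5.7",
          "network": "BRIDGE",
          "portMappings": [{ "containerPort": 3306, "hostPort": 0 }]
        },
        "volumes": [
          { "containerPath": "mysqldata", "mode": "RW", "persistent": { "size": 1024 } },
          { "containerPath": "/var/lib/mysql", "hostPath": "mysqldata", "mode": "RW" }
        ]
      },
      "residency": { "taskLostBehavior": "WAIT_FOREVER" },
      "env": { "MYSQL_ROOT_PASSWORD": "changeme" }
    }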
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details. A sketch of that section of an app definition follows.
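To make the distinction concrete, here is a minimal sketch of the Docker/BRIDGE part of a Marathon app definition (all values are placeholders). The portMappings block is what the UI's Port Mappings section corresponds to; hostPort 0 tells Marathon to assign a host port itself:

    {
      "id": "/wordpress",
      "cpus": 0.5,
      "mem": 512,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "wordpress:latest",
          "network": "BRIDGE",
          "portMappings": [
            { "containerPort": 80, "hostPort": 0, "servicePort": 10000, "protocol": "tcp" }
          ]
        }
      },
      "env": { "WORDPRESS_DB_HOST": "mysql.marathon.mesos:3306" }
    }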

Making locally hosted server accessible ONLY by AWS hosted instances

Our system has 3 main components:
A set of microservices running in AWS that together comprise a webapp.
A very large monolithic application that is hosted within our network, comprises several other webapps, and exposes a public API that is consumed by the AWS instances.
A locally hosted (and very large) database.
This all works well in production.
We also have a testing version of the monolith that is inaccessible externally.
I would like to be able to spin up any number of copies of the AWS environment for testing or demo purposes that can access the testing version of the monolith. However, because it's a test system, it needs to remain inaccessible to the public. I know how to achieve this with AWS easily enough (security groups etc.), but how can I secure the monolith so it can be accessed ONLY by any number of dynamically created instances running in AWS (given that the IP addresses are dynamic and can therefore not be whitelisted)?
The only idea I have right now is to use an access token, but I'm not sure how secure that is.
Edit - My microservices are each running on an EC2 instance.
Assuming you are running your microservices on EC2: if you want API calls from your application servers in AWS to come from a known IP (or set of IPs), this can be accomplished with a NAT instance or a proxy. That way, even though your application servers are dynamic, the apparent source of the requests is not.
For a NAT, you would run your EC2 instances in a private subnet and configure them to send all of their Internet traffic out over the NAT instance, which has a constant IP (a rough CLI sketch follows). A proxy server or fleet of proxy servers can be set up in much the same way, but would require your microservice applications to be configured to use it.
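A hedged sketch of the NAT-instance wiring with the AWS CLI; all IDs are placeholders. After this, the monolith's firewall only has to whitelist the NAT instance's Elastic IP:

    # Disable source/dest checking on the NAT instance (required for NAT).
    aws ec2 modify-instance-attribute --instance-id i-0nat0000000000000 \
        --no-source-dest-check

    # Route all Internet-bound traffic from the private subnet through it.
    aws ec2 create-route --route-table-id rtb-0priv000000000000 \
        --destination-cidr-block 0.0.0.0/0 --instance-id i-0nat0000000000000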
The better approach would be to simply not send the traffic to your microservices over the public Internet.
This can be accomplished by establishing a VPN from your company network to your VPC. Alternatively, you could establish a Direct Connect to bridge the networks.
Side note, if your microservices are actually running in AWS Lambda then this answer does not apply.

How to deploy a website created using eclipse jee and tomcat in amazon website?

I have created a website using Eclipse and Tomcat. I want to deploy it to a real web host so that clients can use it. How do I do that using AWS?
Amazon Web Services have a number of different tools to deploy applications on their infrastructure. It really depends on the level of control you want and need. Your options are as follows:
Elastic Beanstalk - You can simply upload your code and AWS will handle the entire deployment from capacity provisioning, load balancing, auto-scaling to application health monitoring. You can read more about it here - https://aws.amazon.com/elasticbeanstalk
AWS OpsWorks - You can define the application's architecture and the specification of each component. You can read more about it here - https://aws.amazon.com/opsworks
There are also other services such as CodeDeploy which could form part of your release cycle. From your question, it sounds like Elastic Beanstalk would be the most suitable. If you have little experience deploying web applications it might be better to look for a managed hosting platform. AWS expects you to have in-depth knowledge of architecting and developing web applications.
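If you go the Elastic Beanstalk route, a rough sketch with the EB CLI looks like this. The app and environment names are placeholders, and the exact platform string varies by region and EB CLI version (eb platform list shows the valid ones):

    # From the directory containing your exported WAR (or a ROOT.war):
    eb init my-app --platform tomcat --region us-east-1   # platform string is an assumption
    eb create my-app-env   # provisions EC2, load balancer, etc.
    eb deploy              # redeploy after changes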

Choosing shared Linux AMI machine image for AWS

I know next to nothing about server management and just got started with Amazon Web Services.
I want to deploy a Linux server which runs Apache, MySQL, phpMyAdmin as well as email capabilities (account mgmt and webmail interface) and backup capabilities. I want to administer the server with a nice web user interface like cPanel, doing things like file management, email account management, access to phpMyAdmin.
Therefore I thought about deploying a shared Linux AMI, instead of building and configuring the server myself. I want to make my life easy, that is, deploying something pre-existing which is easy to manage (web user interface) since I haven't got time to learn all about server management right now.
I found this list of images. Which one of these would fit my requirements?
This is an inappropriate use case for EC2. As Amazon's CTO Werner Vogels said a few months ago, "an EC2 instance is not a server, it's a building block." EC2 is meant to provide computing resources to an application that spans multiple, loosely-coupled services. It's not a drop-in replacement for a standard VPS.
That's not to say that a lot of people aren't using EC2 instances as servers. However, these are often the same people who bitterly complain about excessive downtime on AWS without realizing that it's mostly their own fault. An application must be designed to be deployed in a cloud-based environment when it's built on an IaaS platform like AWS. If your application is not aware of autoscaling groups and other high-availability features then traditional dedicated hosting will be cheaper, less complex, and more durable than AWS.
I am aware of AMIs for Webmin, but not for cPanel. Here is the link:
https://www.virtualmin.com/documentation/aws/virtualmin_gpl_ami
I would echo the comments made by @jamieb, however, in that this is really not a good use case for EC2. You are limited to a single Elastic IP per instance, so you have no ability to do IP-based virtual hosting as you would with a typical VPS.
