EC2 vs Elastic Beanstalk vs Lambda - node.js

I have a simple API with a connection to a DB, calls to the Facebook API, etc.
What is the best way to serve it?
1) I started with EC2 first.
Good: Cheap enough. I can control everything.
Bad: Long setup process. I need to control everything, set up monitoring tools etc. by myself, and keep a lot in mind.
2) Next I moved the Node.js app to Elastic Beanstalk and the DB to RDS.
Good: Just commit the code; everything else is handled by the service.
Bad: Load balancer + multiple instances + RDS costs a lot.
3) Lambda: I am thinking about moving to a Lambda + API Gateway setup.
It looks easy to implement, monitor and support.
I have no idea how much it will cost.
I know that there is a lot of configuration involved.
Do you have any suggestion as to what would be best for a simple API?
I am also thinking about moving only the picture generation to Lambda,
and keeping the simple API endpoints like auth, GET users etc. on EB.

If you are sure that the processing logic does not exceed 5 minutes, then option 3 is definitely the way to go: you just write functions and deploy them to Lambda, with no other deployment or auto-scaling worries.
Of course, this is subject to other factors, such as your logic's dependencies on third-party libraries and their compatibility with the underlying Lambda image.


Microservices on GCP

I am looking to use GCP for a micro-services application. After comparing AWS and GCP I have decided to go with Google because one major requirement for the project is to schedule tasks to run in the future (Cloud Tasks) which AWS does not seem to offer an equivalent of.
I am planning on containerizing my services and deploying to GCP using Cloud Run with a Redis cluster running as well for caching.
I understand that you cannot have multiple Firestore instances running in one project. Does this mean that all of my services will be using the same database?
I was looking to follow a model (possible on AWS) where each service had its own database instance that it reached out to.
Is this pattern possible on GCP?
Firestore is indeed limited, for the moment, to a single database instance per project. For performance that is usually not a problem, but for isolation, as in your use case, it can indeed be a reason to look elsewhere.
Firebase's original Realtime Database does allow multiple instances per project, and recently added a REST API for provisioning database instances. Since each Google Cloud Project can be toggled to also be a Firebase project, you could consider that.
Does this mean that all of my services will be using the same database?
I don't know all the details of your case, but could you deploy one "microservice" per project? It's not ideal, especially if they communicate via Pub/Sub, but it may be an option. In that case every "microservice" can get its own Firestore if that is a requirement.
I don't think one should consider a GCP project as some kind of "hard boundary". To me, projects are just another level of granularity, in addition to folders, etc.
There can be benefits to the "one microservice, one project" approach as well: for example, less coupled lifecycles, tighter (more fine-grained) security, and perhaps simpler development workflows.

Is Amazon EC2 free tier server appropriate for my little web application?

I'm building a little software activation web service in Java, so I need a cloud-based server which will run Apache and Tomcat and MySQL.
It will get very little usage as I don't expect to sell very much product at first. I'll be very lucky if the server handles one quick activation a day ... if it got 20 in a day that would be an amazing success.
I'm looking at Amazon EC2 pricing here ...
https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc
I see that there is a "Free Tier" which provides "750 hours per month of Linux t2.micro or t3.micro instance". And it's free for a year.
STUPID QUESTION #1: 24h/day x 31 days/month is 744 hours ... so, does that mean I'm getting a free linux server running 24/7 for a year or is there a catch that I'm missing?
STUPID QUESTION #2: t2.micro/t3.micro has 1 vCPU, 1GB Memory ... is that enough power to run a simple Apache + Tomcat + MySQL web service reliably?
STUPID QUESTION #3: Any reason why I should skip the free tier and invest in a powerful pay $$$ option?
Yes. No catch. It's just not a very strong server.
That really depends on what that service does. Performance wise you need to pay attention to t2 instances being optimized for burst operations. That means they run full speed for a little while and then get throttled. But if you're talking about reliability, it's a whole other story. Just one machine is usually not enough for that. You want multiple machines in multiple data centers. What if one machine goes down? What if the whole data center goes down? It really depends on just how reliable you want it.
That really depends on what you're looking for. If you don't know yet, stick to free until you figure it out. I would even go for something simpler like Heroku at first. At least you won't have to take care of the reliability aspect as much.
You describe your service as: "Accept an encrypted license key, decrypt it, verify it, return an encrypted boolean response".
This sounds like an excellent candidate for a serverless solution:
AWS API Gateway providing an HTTPS endpoint that the application can call
It then triggers an AWS Lambda function that performs the logic and exits
However, you also mention a MySQL database. This could be provided by Amazon RDS. Or, you could go serverless and use DynamoDB (a NoSQL database).
The benefit of a serverless architecture is that it can scale to handle high loads and doesn't cost you anything (except potentially for the database) when not being used.
There is a free tier available for AWS API Gateway, AWS Lambda, Amazon DynamoDB and Amazon RDS.
There might be a limitation on network traffic for EC2 instances. You should look into that before deciding to host a web service on it. There is even a possibility it could charge you for using too much network bandwidth, so scalability might be an issue. I suggest you try Heroku instead, and then switch to other app hosting services if and when you need to scale.
Yes, I have developed a low-to-medium traffic web application with a MySQL backend on it. But be sure about the number of users, as performance and scalability depend on that.
If you are looking at very little usage, EC2 is the best-matching free tier that AWS provides.
Keep to EC2 micro instances to stay within the AWS Free Tier, which covers 750 hours of t2.micro usage per month. Both Linux and Windows servers are available.
As for your second question, it depends on your application type, but for what you describe, the 1 GB of memory is enough to run Apache and MySQL.
But when it comes to reliability, it's a different story. In most cases, one machine is insufficient: you'd want multiple machines in different data centers. So, in that case, it is better to move to another service.
As for your third question, it also depends on your application. If it has a high number of users and many concurrent processes, and you need to improve reliability, it is worth moving to a paid subscription.

How to serve node.js service for worldwide customers and fast?

I have a local VPS in my country that hosts and serves my Node.js REST API.
However, soon I will need to open it up to other countries.
That means remote clients will be calling my services.
Since they are far away, the connection will probably be slow.
How can I avoid this? Maybe I need more servers located in their countries too, but then how can the data be shared through one DB?
I am not looking for a full tutorial on how to do this (though that would be nice to have); I am looking for information about the methodology.
What do you recommend: keep buying servers in remote countries and share their data between them somehow, or use a cloud service like Firebase? And how do cloud services work in the first place?
Without going into too much detail on each item, here are some key points I think you should focus your learning on to solve your problem.
For data storage, look into Firestore (not the JSON database), as Firestore scales globally.
For your REST endpoints I would use Google Cloud Functions, but without knowing the nature of your application it's hard to say whether it's suitable. The key to reaching global scale is having cacheable endpoints: then you are leveraging Google's global CDN, which is much faster than hitting the origin server. Note: the Firebase Cloud Functions infrastructure WILL face cold-start issues, which may or may not be a problem for you.
Cache invalidation is a little lacking, so you can use longer max-age cache settings but combine them with cache busting and/or the stale-while-revalidate header directive to help with this.
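To make the caching idea concrete, an endpoint declares how long CDNs may cache it, and how long a stale copy may still be served while revalidating, via the Cache-Control header. A small sketch (the route, durations, and Express-style handler signature are arbitrary examples):

```javascript
// Sketch: building a Cache-Control value so a CDN can serve cached
// responses and revalidate stale ones in the background.
function cacheControl(maxAgeSec, staleWhileRevalidateSec) {
  return `public, max-age=${maxAgeSec}, stale-while-revalidate=${staleWhileRevalidateSec}`;
}

// Hypothetical request handler (Express-style signature) for a news endpoint.
function newsHandler(req, res) {
  // Fresh for 5 minutes; a stale copy may be served for 10 more minutes
  // while the CDN refetches from the origin in the background.
  res.setHeader('Cache-Control', cacheControl(300, 600));
  res.end(JSON.stringify({ articles: [] })); // stand-in for real data
}
```

With headers like this, most requests from far-away clients are answered by a nearby CDN edge rather than by your origin server.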
There is some great info here https://www.youtube.com/watch?v=dbV-293m1dQ that covers some of what I have mentioned in more detail.

Performant API to handle many requests

I am developing an iOS app to show news from a MongoDB database. The app has around 50,000 active users, so it's quite heavy on the server. I am trying to rethink how the API should be built. I have just learned a little about AWS API Gateway, Google Cloud Functions, Firebase, etc.
If I simply need a few functions to extract list of news, list of users, etc., what would be the best way to build this API as of 2017? I have always thought I should simply create a Node.js server with some endpoints. But now it seems it's more performant to create separate endpoints with, for instance, AWS API Gateway which each points to an AWS Lambda function.
But what is really the most scalable option?
CloudFront (with caching) --> API Gateway --> Lambda would be a scalable solution. Since you have not chosen DynamoDB, you will need to manage Mongoose for your storage and availability yourself.
To make it a bit more advanced, you can use Lambda@Edge with CloudFront to make it active/active across more than one region, so if one region goes down, the app will still be available.
When you say "Performant", what does that mean in your scenario?
If you need a few functions that extract a list of news, a list of users and etc, it doesn't sound like performance would be an issue.
If you're wondering if the AWS Serverless Stack can handle thousands or millions of requests, the answer is yes, API Gateway + Lambda can handle requests at any scale.
If you only need to query data by its hash keys, DynamoDB would fit very well! Also, there's an auto-scaling feature. But if you need to perform scans or some complex queries, that can be a little expensive and sometimes slow. For this scenario you can stream your DynamoDB data to Elasticsearch or a data warehouse solution.
I'm not sure combining Lambda with MongoDB would be that great. Say you choose a hosted MongoDB service like MongoDB Atlas: you'll need to add more complexity to your system, like VPC peering, and also manage/optimize Lambda's connections to MongoDB (see "Optimizing AWS Lambda Performance with MongoDB Atlas"). What if you're handling a million request calls? A lot of Lambda functions will start running in parallel; how many connections would be opened against your MongoDB?
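One standard mitigation for the connection problem, sketched here generically rather than tied to any particular driver, is to cache the connection promise outside the handler. Variables declared outside the handler survive across invocations on a warm Lambda container, so each container reuses one connection instead of opening a new one per request (the `connect` parameter below is a hypothetical stand-in for e.g. the MongoDB driver's connect call):

```javascript
// Sketch of the connection-reuse pattern for Lambda: module-level state
// persists across invocations while the container stays warm.
let cachedConnection = null;

async function getConnection(connect) {
  if (!cachedConnection) {
    // Cache the promise itself so concurrent invocations in the same
    // container don't race to open several connections.
    cachedConnection = connect();
  }
  return cachedConnection;
}

// Hypothetical handler shape: reuse the cached connection every time.
async function handler(event, connect) {
  const db = await getConnection(connect);
  return db; // a real handler would run its query here
}
```

This bounds the number of open database connections by the number of warm containers rather than the number of invocations, which is what the MongoDB-on-Lambda guidance is driving at.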
Besides the AWS serverless stack, there's Elastic Beanstalk, which can easily scale your system.
All the solutions have pros and cons; you can scale with either API Gateway + Lambda or Elastic Beanstalk (or some other vendor's solution). I guess "scalable" is a built-in feature cloud vendors offer these days, and "performance" is just one of the topics that should be analyzed when designing your infrastructure.

Easiest server and database services available for deploying an application (AWS specifically)

I have written a real-time multiplayer game and currently writing its server in NodeJS. I want my game to have login, level up etc, so I need to have a database. This is the first time I am deploying something and I am mostly self taught, so please correct me if I am mixing things up. Since this is my first trial, I do not want to make much commitment right away so I am looking for free options only. And since this should be a real-time game, I need a relatively fast server response. That is why I am looking for the easiest database and server provider that would do and I am aware that with those restrictions I have limited choices and functionality.
As far as I have read online, Heroku seems to be my simplest option for a server (that is why I started writing in NodeJS). However, it seems like there is no free database service, since all options on https://devcenter.heroku.com/articles/heroku-postgres-plans have a monthly fee. I did not want to use Google App Engine since I am new (it certainly is not mentioned as beginner friendly).
So, following the Free Cloud Database Service for home development post, I found AWS; it seems I could use Amazon Web Services for both the server and the database. However, most posts I have encountered suggest Google App Engine or Heroku, with little mention of AWS. Is this because I am mixing concepts up, or does AWS have drawbacks that I am not aware of? Do you think it is a good idea to use AWS both as server and database, is it possible to use Heroku as the server while using AWS as the database, or do you have any other suggestion?
Note: Sorry for the question bombardment but those are all related and I am sort of lost in this topic so I had to ask...
Use AWS EC2 for the server and RDS for the database. The reason people use Heroku is that it deploys to a custom URL very quickly (it's easy to set up). Setting up AWS requires some knowledge of how servers work, but it's not that complicated (and it's free for small apps). Best of luck!
