NodeJS / ERP - Performance / Scalability

In the company I work for,
we plan to renew and re-code our 12-year-old online sales web application.
Our traffic is fairly high: over 100,000 sales orders a day
means there will be at least 1 million interactions a day on the web application.
I want to use NodeJS as the web server, integrated with our ERP system running on an Oracle Exadata database.
My question is:
performance is very critical for us, and I'm not sure NodeJS is scalable enough for this high transaction count.
I've read some blog posts stating that some very big companies already use NodeJS,
but I'm not sure whether they use it as their main backbone system or only for smaller internal applications.
Can you share your experiences, if possible with examples including transaction counts?
Thanks in advance!

Why are you looking at Node.js? What other options are you considering? Why choose one over the other? What expertise does your team have?
Node.js is quite scalable, provided you know what you're doing. How much of your load is mid-tier vs database? If there's a lot going on in the mid-tier, then you need to be able to scale it out horizontally. Here are a few high-level things to consider:
Many people use Docker to containerize their apps and scale them out with Kubernetes (though those aren't Node.js specific).
You'll likely want to learn about PM2 to keep your Node.js processes running.
Use node-oracledb connection pools (see the sketch after this list).
Use bind variables for security and performance.
Look into using DRCP if you are using Kubernetes and each container has its own connection pool.
Consider looking through this guide to creating a REST API with Node.js and Oracle Database to get an idea of how things work:
https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/
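As a concrete starting point, here is a minimal sketch of a pooled node-oracledb query using bind variables. The credentials, connect string, table name, and pool sizes are placeholders, not recommendations:

```javascript
const oracledb = require('oracledb');

async function init() {
  // One pool per process; size it based on load testing.
  await oracledb.createPool({
    user: 'app_user',                          // placeholder credentials
    password: process.env.DB_PASSWORD,
    connectString: 'exadata-host/sales_svc',   // with DRCP, append ':pooled' to the service
    poolMin: 4,
    poolMax: 16,
    poolIncrement: 2
  });
}

async function getOrder(orderId) {
  let conn;
  try {
    conn = await oracledb.getConnection();     // borrows from the default pool
    // The bind variable :id prevents SQL injection and lets Oracle reuse the parsed statement.
    const result = await conn.execute(
      'SELECT order_id, status FROM sales_orders WHERE order_id = :id',
      { id: orderId },
      { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result.rows[0];
  } finally {
    if (conn) await conn.close();              // releases the connection back to the pool
  }
}
```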

Related

Is Amazon EC2 free tier server appropriate for my little web application?

I'm building a little software activation web service in Java, so I need a cloud-based server which will run Apache and Tomcat and MySQL.
It will get very little usage as I don't expect to sell very much product at first. I'll be very lucky if the server handles one quick activation a day ... if it got 20 in a day that would be an amazing success.
I'm looking at Amazon EC2 pricing here ...
https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc
I see that there is a "Free Tier" which provides "750 hours per month of Linux t2.micro or t3.micro instance". And it's free for a year.
STUPID QUESTION #1: 24h/day x 31 days/month is 744 hours ... so, does that mean I'm getting a free linux server running 24/7 for a year or is there a catch that I'm missing?
STUPID QUESTION #2: t2.micro/t3.micro has 1 vCPU, 1GB Memory ... is that enough power to run a simple Apache + Tomcat + MySQL web service reliably?
STUPID QUESTION #3: Any reason why I should skip the free tier and invest in a powerful pay $$$ option?
Yes. No catch. It's just not a very strong server.
That really depends on what that service does. Performance wise you need to pay attention to t2 instances being optimized for burst operations. That means they run full speed for a little while and then get throttled. But if you're talking about reliability, it's a whole other story. Just one machine is usually not enough for that. You want multiple machines in multiple data centers. What if one machine goes down? What if the whole data center goes down? It really depends on just how reliable you want it.
That really depends on what you're looking for. If you don't know yet, stick to free until you figure it out. I would even go for something simpler like Heroku at first. At least you won't have to take care of the reliability aspect as much.
You describe your service as: "Accept an encrypted license key, decrypt it, verify it, return an encrypted boolean response".
This sounds like an excellent candidate for a serverless solution:
AWS API Gateway providing an HTTPS endpoint that the application can call
It then triggers an AWS Lambda function that performs the logic and exits (a minimal handler sketch appears at the end of this answer)
However, you also mention a MySQL database. This could be provided by Amazon RDS. Or, you could go serverless and use DynamoDB (a NoSQL database).
The benefit of a serverless architecture is that it can scale to handle high loads and doesn't cost you anything (except potentially for the database) when not being used.
There is a free tier available for AWS API Gateway, AWS Lambda, Amazon DynamoDB and Amazon RDS.
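For illustration, here is a minimal sketch of the Lambda side in Node.js. The event shape assumes an API Gateway proxy integration, and the license check is a stand-in for your real decrypt/verify/encrypt logic:

```javascript
// Placeholder for real license logic: decrypt the key, verify it,
// and encrypt the boolean response before returning it.
function verifyLicense(licenseKey) {
  return typeof licenseKey === 'string' && licenseKey.length > 0;
}

exports.handler = async (event) => {
  const { licenseKey } = JSON.parse(event.body || '{}');
  const valid = verifyLicense(licenseKey);
  return {
    statusCode: 200,
    body: JSON.stringify({ valid })
  };
};
```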
There might be a limitation on network traffic for EC2 instances; you should look into that before deciding to host a web service on one. It could even charge you for using too much network bandwidth, so scalability might be an issue. I suggest you try Heroku instead, and then switch to other app hosting services if and when you need to scale.
Yes, I have developed a low-to-medium-traffic web application with a MySQL backend. But be sure about the expected number of users, since performance and scalability depend on it.
If you are expecting very little usage, the EC2 free tier provided by AWS is a good match.
The AWS Free Tier covers 750 hours per month of t2.micro instances, and the servers are available as Linux as well as Windows.
As for your second question, it depends on your application type. For the light workload you describe, a t2.micro's 1GB of memory is enough to run Apache and MySQL.
But when it comes to reliability, it's a different story. In most cases, one machine is insufficient; you'd want multiple machines in different data centers. In that case, it is better to move to another service.
As for your third question, it also depends on your application. If it has a high number of users and many concurrent processes, and you need to improve reliability, it is worth moving to a paid subscription.

Which Heroku dyno type is best for 1500+ active users on an application?

I have deployed my NodeJS backend on Heroku Hobby dynos. There are 1500+ active users, so the API response time is sometimes very slow. Please help me figure out which dyno type is better for backend deployment.
It always depends on your application. What type of operations and workload are you handling in your API, do you have any synchronous/blocking operation? Is there a lot of I/O involved? More information about what you are trying to achieve would be helpful to give a better recommendation.
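For example, a single synchronous call can stall every request on the dyno, because Node.js serves all traffic from one event loop. A minimal illustration of the difference (the routes and hashing parameters are just for demonstration):

```javascript
const crypto = require('crypto');
const express = require('express');
const app = express();

// Blocks the event loop: every other request waits until this call finishes.
app.get('/hash-sync', (req, res) => {
  const hash = crypto.pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
  res.send(hash.toString('hex'));
});

// Non-blocking: the hashing runs in libuv's thread pool and the event loop stays free.
app.get('/hash-async', (req, res) => {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, hash) => {
    if (err) return res.status(500).end();
    res.send(hash.toString('hex'));
  });
});

app.listen(process.env.PORT || 3000);
```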
One best practice for Node.js is to scale horizontally. This means having multiple small servers handle traffic instead of one big server (vertical scaling). So, a good recommendation is to scale using multiple dynos: try scaling to 2 and measure again to see if that meets your performance needs.
Some recommended readings:
Good practices for high-performance and scalable Node.js applications
Optimizing Node.js Application Concurrency

Sails.js (Node.js) server architecture, scaling and performance

I want to create a Sails.js (Node.js) server app, which will provide an API for a single-page app. This server will consist of multiple modules:
user management
forum
chat
admin GUI
content management
payment gateway
...
All these modules will share one database. The server must be able to handle as many requests and web sockets as possible. Clean architecture and performance are my primary goals.
My questions:
Should I create multiple servers running on multiple ports? I mean, one server for the content management module, another server for the forum module.
Or is it better to create one big universal server that consists of multiple separate modules (hooks in Sails.js) and runs on one port? Will the server's performance decrease in this case?
I was thinking about vertically scaling one big universal server running on a single port with pm2. Or is it better to scale Node.js horizontally and split the server into multiple smaller servers?
I'm new to Node.js, so I appreciate any advice.
I think it really boils down to the scale of the project.
For very simple things there's no real reason to scale past a single but reliable server, is there?
However, for broader projects with a resource-intensive back end and a lot of users and traffic, you may want to split the back-end and front-end aspects depending on the requirements.
In that case you might have a single server (or more) dealing with specific administrative requests or routines, then have the client/user API running through a load balancer, spread across multiple servers in multiple regions, or broken down further into an auto-scaling group to accommodate fluctuations in traffic.
It is worth noting that this approach suits higher volumes of traffic or resource usage, since you're dedicating server infrastructure to that purpose. For smaller applications with infrequent usage, breaking things down into microservices from the start and being billed for runtime rather than dedicated infrastructure might make more sense. You could take a look at the AWS API Gateway and Lambda services for more information on that (I am not affiliated with AWS in any way, I just appreciate what they have managed to put together there).
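Since the question mentions pm2: if you do start with one universal server, pm2's cluster mode will at least spread the load across every core on the box. A minimal sketch of an ecosystem file (the file, app, and script names are placeholders):

```javascript
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'sails-api',        // placeholder app name
    script: 'app.js',         // Sails.js entry point
    instances: 'max',         // one worker per CPU core
    exec_mode: 'cluster',     // pm2 load-balances incoming connections across workers
    env: { NODE_ENV: 'production' }
  }]
};
```

Start it with `pm2 start ecosystem.config.js`. Note this is still vertical scaling on one machine; a load balancer in front of several such machines remains the path to true horizontal scale.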

How do I make NodeJS applications scalable

I am designing a chat application in NodeJS using Express, MongoDB, and Socket.IO. What points should I keep in focus while designing the architecture for this application? The target audience for this app is going to be more than 50K concurrent users.
I have previously in my career designed apps that were used by 2k end users at max. But this is something new for me. I did a lot of research on it and came up with the following points.
1- Start using queuing services like RabbitMQ
2- Increase your server space/RAM as the usage increases.
Can someone please point me in the right direction, such as a book on NodeJS architecture patterns and scalability? A guide or a walkthrough of any sort is highly appreciated.
Here are some tips:
You should take a look at the Cluster module (see the sketch after this list). You can also use wrk for HTTP benchmarking.
Make sure you use caching.
If you are using Docker, you should use swarm mode.
Use Amazon Elastic Compute Cloud (EC2): https://aws.amazon.com/ec2/
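A minimal sketch of the Cluster module mentioned above: the primary process forks one worker per core, and the workers share a single listening port (the port number is arbitrary):

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  // Replace workers that die so the pool stays full.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  // Workers share the same port; the cluster module distributes incoming connections.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```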

Testing a Windows Azure web app for maximum user load

I am conducting some research on emerging web technologies and have created a very simple Azure website which makes use of web sockets and MongoDB as the database. I have managed to get all the components working together and now must perform load testing on the application.
The main criterion is the maximum user load that the app can support. At the moment there is 1 web role instance, so I would probably need to test the maximum user load for that instance, then try with 2 instances, and so on.
I found some solutions online such as Loadstorm, however I cannot afford to pay to use these services so I need to be able to do this from my own development machine OR from another cloud service.
I have come across Visual Studio Load Tests and they seem quite useful; however, they appear to require VS Ultimate and an active MSDN subscription - the prerequisites are listed here. Also, from this video which shows the basics of load tests, it seems these load tests are created completely separately from the actual web project. Does that mean I can only see metrics related to the user, i.e. that I cannot see the amount of RAM being used, processor usage, etc.?
Any suggestions?
You might create a Linux virtual machine in Azure itself or another hosting provider and use ApacheBench (ab) or JMeter to do simple load testing on your application. Be aware that in such a setup your benchmark servers may be a bottleneck themselves.
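For instance, a quick smoke test with ApacheBench might look like the following (the URL, request count, and concurrency are placeholders to adjust for your app):

```
ab -n 10000 -c 100 http://your-app.azurewebsites.net/
```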
Another approach is to use online load testing services which allow some free usage, such as:
loader.io, by SendGrid Labs
LoadStorm
Blazemeter
Blitz
Neotys
Loadimpact
For load testing, LoadStorm is very reasonably priced, especially compared to on-premises software (and it has a free tier with up to 25 virtual clients). You can install software such as JMeter, but you'll still need machines (or VMs) to host and run it from, and you need to make sure that the load-generator machines aren't the bottleneck in your tests.
When you run your tests, you may want to consider separating your web tier from MongoDB. MongoDB will consume as much memory as possible (as that's what gives MongoDB its speed). In a real-world scenario, you'll likely have MongoDB in its own environment. So for your tests, I'd consider offloading MongoDB to its own instance(s), and 10gen has a Worker Role setup that's fairly straightforward to install.
Also remember that NIC bandwidth is 100Mbps per core, which could be a limiting factor on your tests, depending on how much load you're driving.
One alternative to self-hosting MongoDB: Offload MongoDB to a hoster such as MongoLab. This will allow you to test the capacity of your web app without worrying about the details around MongoDB setup, configuration, optimization, etc. Currently MongoLab offers their free tier hosted in Azure, US West and US East data centers.
Editing my response; I didn't read the question carefully.
Check out this thread for various tools and links:
Open source Tool for Stress, Load and Performance testing
If you are interested in the performance counters of the application under test, you can look at some of the latest features added to the Visual Studio cloud-based load test service.
http://blogs.msdn.com/b/visualstudioalm/archive/2014/04/07/get-application-performance-data-during-load-runs-with-visual-studio-online.aspx
To get more info on Visual Studio Cloud Load Testing solution - https://www.visualstudio.com/features/vso-cloud-load-testing-vs
