POST API request failing when deployed on servers; works locally - Node.js

Our REST API is built on Node.js. A few requests return large payloads, around 50-100 MB of data. When I make the POST request locally, I get the data back fine. But when I deploy behind Apache servers and try the same POST, I get "could not get any response" within a minute. Fetching that much data usually takes a minimum of 4 minutes, yet on the Apache servers the request fails in under a minute and the instances start restarting. I thought it was a JS heap out-of-memory error, so in package.json I updated the start command to node --max_old_space_size=8092 --stack-size=85500 server.js.
Since then I haven't seen any memory errors, either locally or in staging. Any idea where I am going wrong when deploying to the servers?
Note: requests for other, smaller data work fine.
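A minimal sketch, assuming the real problem is a proxy/server timeout rather than memory: many Apache setups kill proxied requests after 60 seconds (the Timeout/ProxyTimeout directives), which would match "fails below a minute". The Node side can be told to wait longer per request, though Apache's own timeout must be raised in its config as well. The route and port here are hypothetical.

```js
// Sketch (assumption: the ~60 s failure is a proxy/server timeout, not memory).
// Node's HTTP server closes sockets that idle past its timeout; raising it
// lets a ~4 minute response finish. Apache's Timeout/ProxyTimeout (often 60 s
// by default) must be raised in the Apache config as well.
const express = require('express');
const app = express();

app.post('/large-data', (req, res) => {
  // ... build and stream the 50-100 MB response here ...
});

const server = app.listen(3000);
server.setTimeout(5 * 60 * 1000); // allow up to 5 minutes per request
```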

Related

How to make sure all incoming requests are properly executed?

I have a Node.js / Express app that receives periodic requests to update some values in my database (MongoDB).
The problem is that when my app receives too many requests in a very short time span, it acts strangely and not all requests are executed. For example, when it receives 400 requests in 10 minutes, some of them fail, but the same number of requests spread over 40-60 minutes gives me a 100% success rate.
My app is hosted on Heroku, and according to my metrics the dyno is not running out of memory. So I don't understand why my failure rate increases as I shrink the time window.
Any suggestions?
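A minimal sketch, assuming the failures come from bursts exhausting the MongoDB connection pool (or hitting Heroku's 30 s request timeout) rather than memory: a small in-process queue caps how many updates run at once, so a burst of 400 requests is absorbed instead of partially failing. The 'Value' model and the route are hypothetical.

```js
// Sketch (assumption: bursts overwhelm the MongoDB connection pool or the
// 30 s Heroku router timeout). A small in-process queue caps concurrency
// so bursts are absorbed rather than dropped.
const express = require('express');
const mongoose = require('mongoose');
const app = express();
app.use(express.json());

const Value = mongoose.model('Value', new mongoose.Schema({ value: Number }));

const queue = [];
let active = 0;
const MAX_CONCURRENT = 5; // tune to the MongoDB pool size

function enqueue(task) {
  queue.push(task);
  drain();
}

function drain() {
  while (active < MAX_CONCURRENT && queue.length > 0) {
    const task = queue.shift();
    active++;
    Promise.resolve(task())
      .catch(console.error)
      .finally(() => {
        active--;
        drain();
      });
  }
}

app.post('/update', (req, res) => {
  enqueue(() => Value.updateOne({ _id: req.body.id }, { $set: { value: req.body.value } }));
  res.sendStatus(202); // accepted; the write is processed asynchronously
});
```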

AWS RDS seems to only process 3 requests at a time

I've got a Laravel service that loads a React page, which fires off around 30+ axios calls after loading. When I look at the network tab, it looks like only 3 of the calls are being processed at a time.
I'm testing this by connecting to the AWS RDS instance from my local environment. I tried using a db.t3.medium and a db.t3.large with no noticeable change.
The application has multiple database connections. Each request uses all three connections to gather the required data. All of the requests execute the exact same query against one database, and then each request executes a query on a different table in the second database.
Is there a reason why AWS isn't processing all of my requests simultaneously?
You aren't looking at the right performance indicator. You are looking at your browser's network console. Your browser limits the number of requests it can make to the same host simultaneously.
You can find more information here: Max parallel http connections in a browser?
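An illustration of that browser limit, assuming HTTP/1.1, where most browsers allow only around 6 parallel connections per host. All 30 requests are issued at once, but the timestamps show them completing in small batches because the browser queues the rest. It assumes axios is already loaded in the page; the '/api/item/:i' endpoint is hypothetical.

```js
// All 30 requests start "simultaneously", but the browser only keeps ~6
// connections open per host on HTTP/1.1; the completion times reveal the
// queueing the question is observing.
const start = Date.now();
const requests = Array.from({ length: 30 }, (_, i) =>
  axios.get(`/api/item/${i}`).then(() => {
    console.log(`request ${i} done after ${Date.now() - start} ms`);
  })
);
Promise.all(requests).then(() => console.log('all 30 done'));
```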

Express/Node.js + Mongoose app server response slow

Issue
I have an Express (Node.js) + MongoDB app with a server response load time of 4-7 seconds on average (slow).
I understand that the average server response time should be under 200 ms, per Google's PageSpeed tools.
The app fetches data from MongoDB asynchronously, but the round trips to the database are extremely slow, with each call averaging about 500 ms - 1 s. These are simple findAll calls retrieving fewer than 100 records.
Context
Mongoose version: 4.13.14
DB server's MongoDB version is 3.4.16
DB server is hosted on MongoDB Atlas M10 in AWS / Oregon (us-west-1)
Web server is hosted with now.sh in SFO1 (us-west-1)
Created the recommended indexes as advised by MongoDB Atlas's Performance Advisor
Data fetching is perfectly fine in the local environment (local server + local DB), where data is queried in a matter of a few ms
Mongoose logs for the affected page can be found in this gist
Mongo Server configuration
Mongo Atlas M10
2GB Ram
10 GB Storage
100 IOPS
Encrypted
Auto-expand storage
Attempted solutions:
I have checked my DB metrics and they look fine. There are also no slow queries; these are simple findAll queries. The Performance Advisor on MongoDB Atlas reports nothing unusual.
The production application and database are both hosted in the same region.
I have already tried optimising the application layer of the query (Mongoose) by running .lean(), as sketched below
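For reference, a sketch of the .lean() optimization mentioned above; the 'Record' model is hypothetical. lean() tells Mongoose to return plain JavaScript objects instead of full Mongoose documents, skipping hydration overhead on read-only queries.

```js
const mongoose = require('mongoose');
const Record = mongoose.model('Record', new mongoose.Schema({ name: String }));

async function findAll() {
  // Without .lean(): full Mongoose documents with getters/setters.
  // With .lean(): plain objects, noticeably faster for read-only lists.
  return Record.find({}).lean().exec();
}
```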
Question:
Where else should I look to improve the database latency? How can a simple query take so long? Otherwise, why is my server response time up to 4 s when the expectation is about 200 ms?
Hey, you can try hosting your server and database in the same region. I think the network is creating overhead in this case. If the server and the database are in the same region, they are on the same network, which reduces the latency significantly. There is a diagram on AWS for this.
I had a problem like yours with an app that I developed during my master's degree. I had to put a Node.js API online to present it in the classroom, and I realized that every call to the API took a lot of time to respond. One of the problems was the school network, because of the firewalls. The place where I hosted the server, heroku.com, was adding some delay as well. What I did was use Redis ( https://redis.io/ ) to improve performance; Heroku was also giving me some problems because the requests were HTTP and not HTTPS.
Test by running the app and data on your localhost and check the performance. If you don't have any issue there, check whether something is interfering with your requests, such as the place where you host your Node server.
Let me know if this helps or if you still have issues, so I can try to help you out better.
I had the same issue once with my Node.js code using the same stack (MongoDB, Node.js). I was getting slow responses from the API, and after spending a lot of time on it I found that my server was the real culprit. I moved from Heroku to an Amazon AWS EC2 instance and things started working amazingly fast. So probably your web server is the culprit.
To make sure MongoDB is not the culprit, write an API endpoint that just returns some JSON response without making any query to the database, like the sketch below.
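A minimal sketch of that diagnostic endpoint; the '/api/ping' route and port are hypothetical. If this endpoint is also slow, the latency is in the web server or network, not the database.

```js
// Returns static JSON without touching MongoDB, isolating server/network
// latency from database latency.
const express = require('express');
const app = express();

app.get('/api/ping', (req, res) => {
  res.json({ ok: true, time: Date.now() });
});

app.listen(3000, () => console.log('listening on 3000'));
```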

Simple message-passing Node.js server accepting only 4 requests at a time

We have a simple Express Node server deployed on Windows Server 2012 that receives GET requests with just 3 parameters. It does some minor processing on these parameters, keeps a very simple in-memory node-cache for some of the parameter combinations, interfaces with an external license server to fetch a license for the requesting user and sets it in a cookie, and then interfaces with some workers via a load balancer (running with zmq) to download some large files (in chunks, which it unzips, extracts, and writes to some directories) and display them to the user. On deploying these files, some other calls to the workers are initiated as well.
The Node server does not talk to any database or disk. It simply waits for responses from the load balancer running on other machines (these are long operations, typically taking 2-3 minutes to respond). So, essentially, the computation and database interactions happen on other machines. The Node server is only a simple message-passing/handshaking server that waits for responses in event handlers, initiates other requests, and renders the response.
We are not using the 'cluster' module or nginx at the moment. With a bare-bones Node server, is it possible to accept and process at least 16 requests simultaneously? Pages such as this one http://adrianmejia.com/blog/2016/03/23/how-to-scale-a-nodejs-app-based-on-number-of-users/ mention that a simple Node server can handle only 2-9 requests at a time. But even with our bare-bones implementation, no more than 4 requests are accepted at a time.
Is using the cluster module or nginx necessary even in this case? How do we scale this application for a few hundred users to begin with?
An Express server can handle many more than 9 requests at a time, especially if it isn't talking to a database.
The article you're referring to assumes some database access on each request and static assets served via Node itself rather than a CDN, all of it running on a single CPU with 1 GB of RAM. That's a database and web server all running on a single core with minimal RAM.
There really are no hard numbers on this sort of thing; you build it and see how it performs. If it doesn't perform well enough, put a reverse proxy like nginx or haproxy in front of it to do load balancing.
However, based on your problem, if you really are hitting a bottleneck where only 4 connections are possible at a time, it sounds like you're keeping those connections open far too long and blocking others. Better to have Node kick off those long-running processes, close the connections, and then have those servers call back somehow when they're done. A minimal sketch of that pattern follows.
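A minimal sketch of the kick-off-and-callback pattern suggested above; the endpoint names and the in-memory job store are hypothetical. The request is acknowledged immediately with a job ID instead of holding the connection open for 2-3 minutes, and the worker reports back when done.

```js
const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> { status, result }

// Accept the request, start the long-running work, and return at once.
app.get('/download', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'pending', result: null });
  startWorkerJob(jobId, req.query); // fire-and-forget
  res.status(202).json({ jobId }); // client polls /status/:jobId
});

// The worker (or load balancer) calls back here when the job finishes.
app.post('/callback/:jobId', (req, res) => {
  jobs.set(req.params.jobId, { status: 'done', result: req.body });
  res.sendStatus(200);
});

app.get('/status/:jobId', (req, res) => {
  res.json(jobs.get(req.params.jobId) || { status: 'unknown' });
});

function startWorkerJob(jobId, params) {
  // Placeholder: forward the request to the zmq load balancer here.
}

app.listen(3000);
```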

Improving response time in an AngularJS web app using MongoLab, Node.js and Express

I am developing an Angular web app that receives its data from a Node.js/Express API.
This API uses Mongoose to connect to MongoLab (the free account).
When receiving data, I experience a response time of >500 ms for small data sets (1.5 kB) and >1 s for "large" data sets (hundreds of kB).
This is already clearly too much, and I am afraid it will get even worse as my DB grows.
The current process is as follow:
Client goes to mysite.com/discover
Server sends the Angular app
Client makes an AJAX request to mysite.com/api/collections
Server connects to MongoLab and receives the data
Server sends the data back to the client
This process is very fast in local development (local Node, local MongoDB), under 20 ms, but takes far longer online. I investigated where the time was going and found two equal contributions:
API response time
MongoLab response time
The MongoDB query takes no time (<1ms).
The Question
What are my options for reducing this response time? Is it possible to store the data locally and use MongoLab as a "copy" (which would remove the MongoLab latency in most cases)? If so, would you suggest temporary disk storage, a MongoDB replica, ...?
What I tried
I migrated my MongoLab DB to match the physical location of my server (a VM on DigitalOcean); it improved things by about 50 ms, not much more.
Thanks a lot
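A minimal sketch of the "local copy" idea from the question; the TTL value and the 'Collection' model are assumptions. The Node server keeps the collection in memory and only hits MongoLab when the cache is stale, so most requests skip the MongoLab round trip entirely.

```js
const express = require('express');
const mongoose = require('mongoose');
const app = express();

const Collection = mongoose.model('Collection', new mongoose.Schema({}, { strict: false }));

let cache = { data: null, fetchedAt: 0 };
const TTL_MS = 60 * 1000; // serve cached data for up to one minute

app.get('/api/collections', async (req, res) => {
  if (cache.data && Date.now() - cache.fetchedAt < TTL_MS) {
    return res.json(cache.data); // served from memory, no MongoLab latency
  }
  const data = await Collection.find({}).lean().exec();
  cache = { data, fetchedAt: Date.now() };
  res.json(data);
});
```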
