How to decrease response time in Node.js with RDS Postgres

My service responds in about 250ms with 100 users, but when I increased the number of users the response time grew to between 600ms and 2s. I want a 200ms response with 1000 users (or 1000 requests per second).
I have deployed my app on AWS EBS, and the database I am using is RDS Postgres.
On every request I save a log entry for the user to the database.
What can I do to decrease my response time?
Someone told me I should use Redis: save the users' data in it, and after 10 records flush them from Redis to the database. Is this a good option, or is there another option I can use to achieve my goal?
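The buffering idea from the question is workable. Below is a minimal sketch of it, assuming ioredis and pg are installed; the user_logs(user_id, message) table, the Redis key name, and the batch size of 10 are all illustrative, not prescribed:

const Redis = require('ioredis');
const { Pool } = require('pg');

const redis = new Redis();             // defaults to localhost:6379
const pool = new Pool();               // reads the standard PG* environment variables

const BUFFER_KEY = 'user_log_buffer';  // hypothetical key name
const BATCH_SIZE = 10;

// Called on every request: push the log entry onto a Redis list instead of
// issuing one INSERT against Postgres per request.
async function saveUserLog(log) {
  const size = await redis.rpush(BUFFER_KEY, JSON.stringify(log));
  if (size >= BATCH_SIZE) await flushLogs();
}

// Drain the buffer atomically and write all entries in a single INSERT.
async function flushLogs() {
  const [[, raw]] = await redis.multi().lrange(BUFFER_KEY, 0, -1).del(BUFFER_KEY).exec();
  if (!raw || raw.length === 0) return;
  const logs = raw.map((s) => JSON.parse(s));

  const placeholders = [];
  const params = [];
  logs.forEach((log, i) => {
    placeholders.push(`($${2 * i + 1}, $${2 * i + 2})`);
    params.push(log.userId, log.message);
  });
  await pool.query(
    `INSERT INTO user_logs (user_id, message) VALUES ${placeholders.join(', ')}`,
    params
  );
}

Be aware of the trade-off: entries sitting in Redis are lost if the process crashes before a flush, which is often acceptable for logs but worth deciding explicitly. A cheaper first step is to stop awaiting the log INSERT before sending the response, so the write happens off the request's critical path; measure that before adding Redis.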

Related

MongoDB Atlas server - slow return

So I understand how some queries can take a while, and how querying the same information many times can just eat up RAM.
I am wondering: is there a way to make the following query more friendly for real-time requests?
const LNowPlaying = require('mongoose').model('NowPlaying');
const query = LNowPlaying.findOne({ history: [y] }).sort({ _id: -1 });
We have iOS and Android apps that request this information every second, which takes a toll on MongoDB Atlas.
We are wondering if there is a way in Node.js to cache the data that is returned for at least 30 seconds, and then fetch the new now-playing data once it has changed.
(NOTE: we have a listener script that listens for song metadata changes and updates NowPlaying for every listener.)
MongoDB will cache queried data in memory where possible, but queries as frequent as those mentioned may still put too much load on the database.
You could use Redis, Memcached, or even an in-memory cache on the Node.js side to hold the query results for a time. The listener script referenced could invalidate the cache each time a song's metadata is updated, to ensure clients get the most up-to-date data. One example of an agnostic cache client for Node.js is catbox.
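A minimal sketch of the in-process variant, using the 30-second TTL the question suggests; getCached, invalidateNowPlaying, and the cache key are illustrative names:

const cache = new Map();
const TTL_MS = 30 * 1000;

// Return the cached value if it is still fresh; otherwise run the fetch
// function and remember the result with a timestamp.
async function getCached(key, fetch) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.value;
  const value = await fetch();
  cache.set(key, { value, at: Date.now() });
  return value;
}

// In the request handler, wrapping the query from the question:
// const doc = await getCached('nowPlaying', () =>
//   LNowPlaying.findOne({ history: [y] }).sort({ _id: -1 }).exec()
// );

// The listener script can invalidate eagerly when metadata changes, so
// clients never wait out the full TTL for fresh data.
function invalidateNowPlaying() {
  cache.delete('nowPlaying');
}

Note that an in-process Map only helps a single Node instance; Redis or Memcached gives you the same pattern shared across instances.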

node-postgres connection performance improvement

I am writing my backend using Node.js (Express), and my database is PostgreSQL. I am using node-postgres (pg) to connect to the Postgres database.
Currently I am using the pg.Pool concept, so that I have clients connected and ready to serve requests. What I observe is that node-postgres takes more than 2 seconds to connect to the DB, and hence the response time is quite long.
The node-postgres documentation says the initial connection takes only 20-30 milliseconds, but I see 2-3 seconds to establish a connection. I load-tested my app at around 1000 requests/sec, and the average response time is quite high due to the initial connection time. I have only a single SELECT query from which I build the response; the response processing time is very small, and only connecting and fetching data from the DB takes significant time.
I tried all the ways of releasing a client back to the pool after receiving the response, etc. Now I'm using pool.query, which takes care of connecting as well as releasing the client back to the pool once the task is done.
Is there any alternative to node-postgres that can provide better performance for DB operations?
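For reference, a minimal sketch of the pool.query pattern described above, with explicit pool settings; the numbers and the users table are illustrative. A 2-3 second connect time usually points at creating a new Pool per request instead of one shared, module-level pool, or at network/TLS latency to the database, rather than at node-postgres itself:

const { Pool } = require('pg');

// Create ONE pool for the whole process and reuse it across all requests.
const pool = new Pool({
  max: 20,                       // cap concurrent connections to Postgres
  idleTimeoutMillis: 30000,      // close clients that sit idle for 30s
  connectionTimeoutMillis: 2000, // fail fast instead of queueing forever
});

// pool.query checks out a client, runs the query, and releases the client.
async function getUser(id) {
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return rows[0];
}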

Azure Offline DataSync - insert multiple records in a single request

I am using the Azure offline data sync framework to sync mobile app data to the server, but I am seeing a huge performance hit. The time taken to sync 2000 records is around 25 minutes. On further analysis, I found that each HTTP request takes around 800ms during PushAsync.
Can someone help me send multiple records as part of a single HTTP request during the PushAsync operation?

Should I cache data on my server or just rely on MongoDB

I have a website which runs on Heroku, and I am using MongoDB Atlas as my database. I have tested the Mongo connection speeds and found they are around 5ms to 20ms, depending on the data I am retrieving.
Note: both the Heroku app and Mongo Atlas are in the same AWS zone.
Now my question: I have a collection of around 10K records which my users query frequently. For this use case, should I cache those 10K records on the server, or should I leave it to MongoDB and live with the ~15ms overhead? What are your thoughts?
If it's just one MongoDB call, then I would say do not cache, and leave the caching to MongoDB. In a real-world scenario the average response time will be around 300ms to 900ms (based on my Pingdom results for my website), so when you compare the ~15ms overhead with the total response time it is relatively very small: you are saving about 15ms out of ~900ms.
So better to stay with MongoDB for cleaner code and easier maintenance.

MongoDB or Mongoose performance issue

I developed an application backed by Node.js/MongoDB, with a frontend in AngularJS 1.6.5. The application provides real-time analytics, and the data grows in size every single minute as new data comes in.
As the database size increases, queries are taking much longer to execute and sometimes even fail with a 404 error (Nginx server). The strange thing is that whenever I run the same query in the mongo shell on the server with the .explain('executionStats') method, it responds immediately with an execution time of 0 milliseconds.
So, as per the screenshots above, with a limit of 10 the mongo shell executes the query in no time, but in the browser, hitting the DB through Node and Mongoose, it took 1 minute and still couldn't return the result (maybe Nginx returns the 404 after a specific time).
But when I then tried the mongo shell without setting any limit, it returned all 6,331,522 records in less than 4 seconds.
I have no clue what the issue is exactly. Any help would be appreciated.
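One likely explanation, offered as an assumption: the shell only fetches and prints the first batch of results (and explain() transfers no documents at all), while Mongoose pulls every matching document over the wire and hydrates each one into a full model instance. A minimal sketch of the knobs worth checking on the Node side; the 10-record limit mirrors the shell test, the rest is illustrative:

// Model is the same Mongoose model used in the slow route.
async function getRecent(Model) {
  return Model.find()
    .sort({ _id: -1 })  // make sure an index backs this sort
    .limit(10)          // apply the same limit the shell test used
    .lean()             // skip hydration into full Mongoose documents
    .maxTimeMS(5000)    // fail fast instead of letting Nginx time out
    .exec();
}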
