PostgreSQL IPC: MessageQueueSend delaying queries from Node.js backend

I am testing PostgreSQL with a Node.js backend server, using the pg npm module to query the database. The issue I am having is that when I run a particular query directly on the Postgres table with the query tool in pgAdmin 4, the data is fetched within 5 seconds. But when the same query is requested from the backend through my Node.js server, the work is split between parallel workers and a client backend showing the wait event IPC: MessageQueueSend, and it runs for almost 17 minutes before returning the data.

I can't understand why the same query is fast in the query tool but delayed when it comes from my server. Is there a way to raise the priority of queries coming from the backend so they run the way they do in pgAdmin? I also noticed that when I check pg_stat_activity there is an application_name value for the query when it is run from the query tool, but when the same query comes from the Node.js server the application_name is null. I don't understand why this is; I have been searching every community for an answer for the past 5 days, and there is no question or answer for this. Any help will be appreciated. Thanks in advance.
I tried running the query from the backend, but it is split across parallel worker processes (IPC) and the result comes back after 17 minutes; the same query takes only 5 seconds to return a result in the pgAdmin query tool.
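For reference, here is a minimal sketch (the connection details and table/column names are placeholders, assuming the standard node-postgres Pool) of how the backend can tag its connections with application_name, so its sessions are identifiable in pg_stat_activity, and how to capture the plan the server-side session actually uses so it can be compared with the pgAdmin run:

```js
const { Pool } = require('pg');

const pool = new Pool({
  host: 'localhost',
  database: 'mydb',
  user: 'myuser',
  password: 'secret',
  // Shows up in pg_stat_activity.application_name instead of null.
  application_name: 'nodejs-backend',
});

async function explainSlowQuery() {
  // Wrap the slow query in EXPLAIN (ANALYZE, BUFFERS) to see the plan the
  // backend session really gets, including any parallel worker usage.
  const { rows } = await pool.query(
    'EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE some_column = $1',
    ['some_value']
  );
  rows.forEach((r) => console.log(r['QUERY PLAN']));
}

explainSlowQuery().catch(console.error).finally(() => pool.end());
```

Comparing that output with EXPLAIN (ANALYZE, BUFFERS) run in pgAdmin should show whether the two sessions are really getting different plans or different settings.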

Related

Fetch data from external API and populate database every minute

I would like to fetch data from an external API (with a limited number of requests) and populate my database. My concern is more about the architecture, language, and tools to use. I would like to get the big picture in terms of performance and good practice.
I made a cron job with Node.js and Express running every minute that populates my database, and it works. On the same server I created some routes to be called by the client.
What would be better than using cron in Node.js? I know I can also set up cron under Linux to call a script, whether it's Python or Node.js. But what is good practice, especially if I want several cron jobs instead of a single one?
Should I separate my cron job onto another instance so that it does not block any requests from clients? If my server is already busy retrieving data from the external API while someone calls a route on the same server, will that increase the latency?
Are there tools to monitor my tasks instead of relying on logs?
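For context, the in-process cron approach described above is roughly this minimal sketch (assuming the node-cron package; fetchAndStore is a hypothetical function standing in for the external API call and the database write):

```js
const cron = require('node-cron');
const { fetchAndStore } = require('./jobs'); // hypothetical module: API call + DB insert

// Run every minute; errors are caught so one failed run doesn't kill the process.
cron.schedule('* * * * *', async () => {
  try {
    await fetchAndStore();
  } catch (err) {
    console.error('scheduled fetch failed:', err);
  }
});
```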
As far as I know, Node.js handles a large volume of requests better than a few other servers, but if you are able to change the framework you can give https://bun.sh/ a chance.
You can also try multithreading in Node.js; it can be a more affordable and easy option.
https://www.digitalocean.com/community/tutorials/how-to-use-multithreading-in-node-js
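As a rough illustration of that suggestion, here is a minimal worker_threads sketch (built into Node.js, no extra dependency; worker.js is a hypothetical file holding the heavy work) that keeps CPU-bound processing off the main event loop:

```js
// main.js
const { Worker } = require('worker_threads');

function runInWorker(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: payload });
    worker.once('message', resolve); // result posted back from the worker
    worker.once('error', reject);
  });
}

// worker.js (separate file):
// const { parentPort, workerData } = require('worker_threads');
// parentPort.postMessage(doHeavyWork(workerData));
```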

Express/NodeJS + Mongoose App server response slow

Issue
I have an Express (Node.js) + MongoDB app with a server response load time of 4-7 seconds on average (slow).
I understand that the average server response time should be under 200 ms, as per Google PageSpeed tools.
This app fetches data from MongoDB asynchronously, but the round trips to the database are extremely slow, with each call averaging about 500 ms - 1 s. These calls are simple findAll calls retrieving fewer than 100 records.
Context
Mongoose version: 4.13.14
DB server's MongoDB version is 3.4.16
DB server is hosted on MongoDB Atlas M10 in AWS / Oregon (us-west-1)
Web server is hosted with now.sh in SFO1 (us-west-1)
Have created the recommended indexes as advised by MongoDB Atlas's Performance Advisor
Data fetching is perfectly fine in the local environment (local server + local DB); data is queried in a matter of a few ms
Mongoose logs for the affected page can be found in this gist
Mongo Server configuration
Mongo Atlas M10
2GB Ram
10 GB Storage
100 IOPS
Encrypted
Auto-expand storage
Attempted solutions:
I have checked my DB metrics and they look fine. There are also no slow queries; these are simple findAll queries. Performance Advisor on MongoDB Atlas reports nothing unusual.
The production application and database are both hosted in the same region.
I have already tried optimising the application layer of the query (Mongoose) by running .lean(), as sketched below.
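For reference, the .lean() optimisation looks roughly like this (a minimal sketch; Item is a hypothetical model standing in for the real one):

```js
const mongoose = require('mongoose');

// Hypothetical model; replace with the real schema/collection.
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

async function findAllItems() {
  // .lean() returns plain JS objects instead of full Mongoose documents,
  // skipping document hydration overhead on read-only queries.
  return Item.find({}).lean().exec();
}
```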
Question:
Where else should I look to improve the database latency? How can a simple query take so long? Otherwise, why is my server response time up to 4 s when the expectation is about 200 ms?
Hey, you can try hosting your server and database in the same region. I think the network is creating overhead in this case. If the server and the database are in the same region, they are on the same network, which will reduce the latency significantly. There is a diagram on AWS for this.
I had a problem like yours with an app I developed during my master's degree. I had to put a Node.js API online to present it in the classroom, and I realized that every call to the API took a lot of time to respond. One of the problems was the school network, because of its firewalls. The place where I hosted the server, heroku.com, was adding some delay as well. What I did was use Redis (https://redis.io/) to improve performance; Heroku was also giving me problems because the requests were HTTP and not HTTPS.
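For what it's worth, the Redis idea boils down to a cache-aside pattern like this minimal sketch (assuming the redis npm client v4 API; loadFromDb is a hypothetical function standing in for the slow query):

```js
const { createClient } = require('redis');

const redis = createClient({ url: 'redis://localhost:6379' });

async function getCached(key, loadFromDb) {
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit); // serve from cache

  const data = await loadFromDb(); // fall back to the slow query
  await redis.set(key, JSON.stringify(data), { EX: 60 }); // cache for 60 s
  return data;
}

// Note: redis.connect() must be awaited once at startup before using the client.
```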
Run a test with the app and the data on your localhost and check the performance. If you don't see any issue there, check whether anything is interfering with your requests, such as the place where you host your Node server.
Let me know if this helps or if you still have issues, so I can try to help you out better.
I had the same issue once with my Node.js code using the same stack (MongoDB, Node.js). I was getting slow responses from the API, and after spending a lot of time I found that my server was the real culprit. I then moved from Heroku to an Amazon AWS EC2 instance and things started working amazingly fast, so probably
your web server is the culprit.
To make sure MongoDB is not the culprit, write an API endpoint that just returns some JSON response without making any query to the database.
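A minimal sketch of such an endpoint (plain Express, no database access) could be:

```js
const express = require('express');
const app = express();

// Returns static JSON without touching the database, so the measured latency
// reflects only the web server and the network path.
app.get('/ping', (req, res) => {
  res.json({ ok: true, time: Date.now() });
});

app.listen(3000);
```

If this endpoint also takes seconds to respond, the database is not the bottleneck.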

Difference between rethinkdb query and API results

I am having a problem with retrieving values from my RethinkDB database and exposing them in my API.
Everything runs without errors, but I get different results when querying the database with Python than when calling the API.
I have created the database and tables and inserted data with REPL queries in Python.
My setup is like this:
- AWS ec2 (Ubuntu)
- Rethinkdb as database
- Node API: clone of https://github.com/yoonic/atlas
I have no clue why there is a difference or where to look next for debugging.
Any help to get me going is appreciated!
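One way to narrow it down is to run the exact same ReQL query from the Node driver and compare its output with the Python REPL result; a minimal sketch (assuming the rethinkdb npm driver and a hypothetical products table) could be:

```js
const r = require('rethinkdb');

async function dumpTable() {
  const conn = await r.connect({ host: 'localhost', port: 28015, db: 'test' });
  // Same query as in the Python REPL; compare the JSON output directly.
  const cursor = await r.table('products').run(conn);
  const rows = await cursor.toArray();
  console.log(JSON.stringify(rows, null, 2));
  await conn.close();
}

dumpTable().catch(console.error);
```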

Solr ECONNRESET during load tests with Node.js

I tried to create some load on our Solr server with a simple Node.js script that executes requests in a bluebird Promise.map. When I drive this up to about 1000 "parallel" requests, Solr starts to close connections and I get "ECONNRESET" errors in Node.js.
I'm surprised, since I would assume that Solr (or rather, Jetty) should be able to handle this number of requests. I don't see any indication of errors in the Solr log.
1.) Should Solr / Jetty be able to handle this?
2.) Would it be expected that "ECONNRESET" errors become more frequent if Solr has to process multiple heavy queries?
3.) If 1. and 2. shouldn't be an issue, are there any suggestions as to why this happens?
Thanks a lot!
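Independent of whether Jetty should cope with it, bluebird's Promise.map accepts a concurrency option, so the script can cap how many requests are actually in flight; a minimal sketch (with axios as an assumed HTTP client and placeholder Solr URL and query list) could look like:

```js
const Promise = require('bluebird');
const axios = require('axios');

const queries = [/* ...1000 query strings... */];

Promise.map(
  queries,
  (q) => axios.get('http://localhost:8983/solr/mycore/select', { params: { q } }),
  { concurrency: 50 } // only 50 requests in flight at any time
)
  .then((responses) => console.log(`done: ${responses.length} responses`))
  .catch(console.error);
```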

EMFILE error on bulk data insert

I'm developing a LoopBack application that gets data from Oracle using the oracledb npm module and converts it to JSON format to be stored in MongoDB.
MongoDB is accessed using "loopback-connector-mongodb".
The data to be stored spans around 100 collections, corresponding to about 100 tables in Oracle.
I'm pushing the data row by row, for the entire collection list, from the Node server in my local application to another server application on a remote machine, using an HTTP request through a remote method call.
During the data write operation, the application on the remote machine stops, throwing an "EMFILE" error.
I googled it, and some references showed that it is caused by hitting the maximum number of open files/sockets. Hence I tried disconnecting the DataSource after each request, but I'm still getting the same error.
Please help me with this!
Thanks.
If you are making an HTTP request for each row of data and aren't throttling or otherwise controlling the order of those requests, it is possible you are simply making too many requests at once because of Node's async I/O model.
For example, making those calls in a simple for loop would actually result in all of them being made in parallel.
If this is the case, you might want to consider using something like the async module, which includes some utilities for throttling the parallelism.
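A minimal sketch of that throttling with the async module (postRow is a hypothetical function that sends one row to the remote LoopBack app and calls back when done) could be:

```js
const async = require('async');

// rows: the data read from Oracle; at most 10 HTTP requests are in flight at once.
async.eachLimit(rows, 10, postRow, (err) => {
  if (err) return console.error('bulk insert failed:', err);
  console.log('all rows sent');
});
```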
Don't forget Oracle Database 12.1.0.2 has JSON support. Maybe you don't need to move the data in the first place?
See JSON in Oracle Database. To quote the manual:
'Oracle Database supports JavaScript Object Notation (JSON) data natively with relational database features, including transactions, indexing, declarative querying, and views.'
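If you go that route, a sketch of reading JSON straight out of Oracle from Node (assuming the oracledb module and a hypothetical orders table whose doc column stores JSON text) could look like:

```js
const oracledb = require('oracledb');

async function readJson() {
  const conn = await oracledb.getConnection({
    user: 'app',
    password: 'secret',
    connectString: 'dbhost/orclpdb',
  });
  // JSON_VALUE extracts scalar fields from JSON stored in a column
  // (available from Oracle Database 12.1.0.2 onward).
  const result = await conn.execute(
    `SELECT JSON_VALUE(doc, '$.customer.name') AS customer FROM orders`
  );
  console.log(result.rows);
  await conn.close();
}

readJson().catch(console.error);
```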
