AWS RDS seems to only process 3 requests at a time

I've got a Laravel service that loads a React page, which fires off around 30+ axios calls after loading. When I look at the browser's network tab, it looks like only 3 of the calls are being processed at a time.
I'm testing this by connecting to the AWS RDS instance from my local environment. I tried using a db.t3.medium and a db.t3.large with no noticeable change.
The application has multiple database connections. Each request uses all three connections to gather the required data. All of the requests execute the exact same query against one database, and then each request executes a query on a different table in the second database.
Is there a reason why AWS isn't processing all of my requests simultaneously?

You aren't looking at the right performance indicator; you are looking at your browser's network console. Your browser limits the number of requests it can make to the same host simultaneously.
You can find more information here: Max parallel http connections in a browser?
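If you want the concurrency under your own control instead of leaving it to the browser's queue, here is a minimal sketch of batching the axios calls yourself (the URL list and batch size are made up for illustration):

import axios from "axios";

// Fire requests in fixed-size batches so only a few are in flight at once.
async function fetchInBatches(urls, batchSize = 6) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize).map((url) => axios.get(url));
    // Wait for the whole batch before starting the next one.
    results.push(...(await Promise.all(batch)));
  }
  return results;
}

fetchInBatches(["/api/a", "/api/b", "/api/c"]).then((responses) =>
  console.log(responses.length, "responses received")
);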

Related

Is there any way to improve service availability over the ping test?

To check service availability we have added a ping test, but it does not check the actual core functionality of the application. It just pings the server and returns the response.
Is there any way we can check that the service's core functionality is working, beyond the ping test?
In most cases you need to check:
Database
APIs
Servers
A ping test generally just tests the servers.
The most comprehensive way to test the backend is to make an API endpoint which reads a value from the database (without caching); this way you test all three main cores.
BUT this approach is heavy on the backend, especially if you have a lot of users (for example, if there are 100K users on your app at the same moment, there will be 100K connections to the DB and 100K API requests/responses, which could make the server unavailable for other users).
The way I overcome this is the following:
There is a very small public file on the server (not on a CDN) that holds the last time/date the backend was checked and found functional.
Every user that opens the app reads this file first.
If the app cannot read the file, then the servers are down for sure.
If the app can read the file, it checks whether (current time - last check time) > 1 minute; if so, it calls an API, CheckBackend, which checks everything and updates the small file.
This method ensures that at most one full check is done per minute, which is not that heavy on the server (see the sketch below).
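A minimal sketch of that scheme with Express (the file name, route names, and db.ping() helper are illustrative, not from the original answer):

const express = require("express");
const fs = require("fs/promises");

const app = express();
const CHECK_FILE = "public/last-check.json"; // the small public file with the last check time

// Placeholder for a real uncached database read, e.g. SELECT 1.
const db = { ping: async () => true };

app.use(express.static("public")); // clients read last-check.json directly from here

app.get("/check-backend", async (req, res) => {
  try {
    const { lastCheck } = JSON.parse(await fs.readFile(CHECK_FILE, "utf8"));
    if (Date.now() - lastCheck < 60 * 1000) {
      return res.json({ ok: true, cached: true }); // checked within the last minute
    }
  } catch {
    // file missing or unreadable: fall through and run a full check
  }
  await db.ping(); // full check of DB + API in one round trip
  await fs.writeFile(CHECK_FILE, JSON.stringify({ lastCheck: Date.now() }));
  res.json({ ok: true, cached: false });
});

app.listen(3000);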
Usually, applications listen on specific ports. Instead of ping, you can make a telnet request to IP:Port, like this:
telnet test.netbeez.net 20011
Also, you can visit here for more information: https://netbeez.net/blog/telnet-to-test-connectivity-to-tcp/
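If you want to script the same check instead of running telnet by hand, here is a small sketch using Node's built-in net module (host and port copied from the example above):

const net = require("net");

const socket = net.createConnection({ host: "test.netbeez.net", port: 20011, timeout: 5000 });

socket.on("connect", () => {
  console.log("port is reachable");
  socket.end();
});
socket.on("timeout", () => {
  console.log("connection timed out");
  socket.destroy();
});
socket.on("error", (err) => console.log("port is not reachable:", err.message));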

Alternative to GraphQL long polling on an Express server for a large request?

Objective
I need to show a big table of data in my React web app frontend.
My backend is an Express server with a GraphQL layer and a few "normal" endpoints.
My server gets data from various sources, including an external API, which is the data source for my current task.
My server has a database that I can use freely. I cannot directly access the external API from my front end.
The data all comes from the external API I mentioned. In fact, it comes from multiple similar calls to the same endpoint with many different IDs. Each of those individual calls takes a while to return but doesn't risk timing out.
Current Solution
My naive implementation: I do one GraphQL query in which the resolver makes all the API calls to the external service in parallel and waits for them all to complete using Promise.all(). It then returns a big array containing all the data I need, and my server returns that data to the frontend.
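Roughly, the resolver looks like this (fetchFromExternalApi, the URL, and the schema names are placeholders, not the real code):

// Placeholder for one slow call to the external API.
async function fetchFromExternalApi(id) {
  const res = await fetch(`https://external-api.example.com/items/${id}`);
  return res.json();
}

const resolvers = {
  Query: {
    bigTableData: async (_parent, { ids }) => {
      // Fire every external call in parallel, but the response cannot go
      // out until the slowest call has finished.
      return Promise.all(ids.map((id) => fetchFromExternalApi(id)));
    },
  },
};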
Problem With Current Solution
Unfortunately, this sometimes leaves my frontend hanging for too long and it times out (takes longer than 2 minutes).
Proposed Solution
Is there a better way than manually implementing long polling in GraphQL?
This is my main plan for a solution at the moment:
Frontend sends a request to my server
Server returns a 200 and starts hitting the external API, and sets a flag in the database
Server stores the result of each API call in the database as it completes
Meanwhile, the frontend shows a loading screen and keeps making the same GraphQL query for an entity like MyBigTableData, which tells me how many of the external API calls have returned
When they've all returned, the next time I ask for MyBigTableData, the server will send back all the data (a rough sketch of this flow is below).
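A rough sketch of what the server side of that plan could look like (the in-memory job store and all names are hypothetical; fetchFromExternalApi is the placeholder from the sketch above):

// In-memory stand-in for the database table that tracks the job.
const job = { total: 0, done: 0, rows: [] };

const resolvers = {
  Mutation: {
    startBigTableJob: (_parent, { ids }) => {
      job.total = ids.length;
      job.done = 0;
      job.rows = [];
      // Kick off the external calls without awaiting them; each one stores
      // its result and bumps the counter as it completes.
      ids.forEach(async (id) => {
        const row = await fetchFromExternalApi(id);
        job.rows.push(row);
        job.done += 1;
      });
      return true; // respond immediately; the frontend starts polling
    },
  },
  Query: {
    myBigTableData: () => ({
      progress: job.total ? job.done / job.total : 0,
      // Only hand back the rows once every call has returned.
      rows: job.total > 0 && job.done === job.total ? job.rows : null,
    }),
  },
};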
Question
Is there a better alternative to GraphQL long polling on an Express server for this large request that I have to do?
An alternative that comes to mind is to not use GraphQL and instead use a standard HTTP endpoint, but I'm not sure that really makes much difference.
I also see that HTTP/2 has multiplexing which could be relevant. My server currently runs HTTP/1.1 and upgrading is something of an unknown to me.
I see here that Keep-Alive, which sounds like it could be relevant, is unusable in Safari, which is bad, as many of my users use Safari to access the frontend.
I can't use WebSockets because of technical constraints, and I don't want to set a ridiculously long timeout on my client either (I'm not sure that's even possible).
I discovered that Apollo's GraphQL client has polling built in: https://www.apollographql.com/docs/react/data/queries/#polling
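With Apollo Client that looks roughly like this (the query, renderTable, and LoadingScreen are placeholders matching the sketch above):

import { useQuery, gql } from "@apollo/client";

const MY_BIG_TABLE_DATA = gql`
  query MyBigTableData {
    myBigTableData {
      progress
      rows
    }
  }
`;

function BigTable() {
  // Re-run the query every 2 seconds until the rows arrive, then stop.
  const { data, stopPolling } = useQuery(MY_BIG_TABLE_DATA, { pollInterval: 2000 });
  const rows = data && data.myBigTableData.rows;
  if (rows) {
    stopPolling();
    return renderTable(rows); // placeholder renderer
  }
  return <LoadingScreen />; // placeholder component
}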
In the end, I made a REST polling system.

Prevent DDOS on websocket server nodejs

I have an app which lets you keep your notes in a single place; it's realtime between all the devices you are logged in on. I am using a Node.js websocket server. It was working fine, but recently I found out someone was sending a huge number of requests to my websocket server. He sent a large amount of data through the websockets to my MongoDB, and the data was sent just for the purpose of taking the app down (useless junk data that just contained 'aaaaa').
What I want is to prevent clients that make more than 10 requests per minute from using the websockets.
As mentioned in the comments, it's better to go with services like CloudFlare, but for your specific use case (implementing it directly on the server) you should look at ways to rate-limit the requests.
Here is an example of a library to rate-limit websockets in Node:
https://www.npmjs.com/package/ws-rate-limit
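If you'd rather hand-roll it than pull in a package, here is a minimal sketch of the same idea with the ws library (the window and limit are hard-coded to your 10 requests per minute; this is an illustration, not how ws-rate-limit itself works):

const WebSocket = require("ws");

const wss = new WebSocket.Server({ port: 8080 });
const counters = new Map(); // ip -> messages seen in the current window

setInterval(() => counters.clear(), 60 * 1000); // reset the window every minute

wss.on("connection", (ws, req) => {
  const ip = req.socket.remoteAddress;
  ws.on("message", (data) => {
    const count = (counters.get(ip) || 0) + 1;
    counters.set(ip, count);
    if (count > 10) {
      ws.close(1008, "rate limit exceeded"); // 1008 = policy violation
      return;
    }
    // ...normal message handling, e.g. saving the note to MongoDB...
  });
});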

Simple message passing Nodejs server accepting only 4 requests at a time

We have a simple Express node server deployed on Windows Server 2012 that receives GET requests with just 3 parameters. It does some minor processing on these parameters, has a very simple in-memory node-cache for caching some of the parameter combinations, and interfaces with an external license server to fetch a license for the requesting user and set it in a cookie. After that, it interfaces with some workers via a load balancer (running with zmq) to download some large files (in chunks), unzip and extract them, write them to some directories, and display them to the user. On deploying these files, some other calls to the workers are initiated as well.
The node server does not talk to any database or disk. It simply waits for responses from the load balancer running on other machines (these are long operations, typically taking 2-3 minutes to send a response). So, essentially, the computation and database interactions happen on other machines. The node server is only a simple message passing/handshaking server that waits for responses in event handlers, initiates other requests, and renders the response.
We are not using a 'cluster' module or nginx at the moment. With a bare-bones node server, is it possible to accept and process at least 16 requests simultaneously? Pages such as this one http://adrianmejia.com/blog/2016/03/23/how-to-scale-a-nodejs-app-based-on-number-of-users/ mention that a simple node server can handle only 2-9 requests at a time. But even with our bare-bones implementation, no more than 4 requests are accepted at a time.
Is using a cluster module or nginx necessary even in this case? How can we scale this application to a few hundred users to begin with?
An Express server can handle many more than 9 requests at a time, especially if it isn't talking to a database.
The article you're referring to assumes some database access on each request and serving static assets via node itself, rather than a CDN. All of this taking place on a single CPU with 1GB of RAM. That's a database and web server all running on a single core with minimal RAM.
There really are no hard numbers on this sort of thing; you build it and see how it performs. If it doesn't perform well enough, put a reverse proxy in front of it, like nginx or haproxy, to do load balancing.
However, based on your problem, if you really are running into a bottleneck where only 4 connections are possible at a time, it sounds like you're keeping those connections open way too long and blocking others. Better to have node kick off those long-running processes, close the connections, and then have those servers call back somehow when they're done.
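A rough sketch of that kick-off-and-callback pattern (the job store, URLs, and the startWorkerJob helper are all hypothetical):

const express = require("express");
const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> { status, result }

app.get("/process", (req, res) => {
  const jobId = Date.now().toString(36);
  jobs.set(jobId, { status: "pending" });
  // Hand the work to the workers with a callback URL, then return at once
  // instead of holding the connection open for the 2-3 minute job.
  startWorkerJob(req.query, `http://node-server/jobs/${jobId}/done`); // hypothetical helper
  res.status(202).json({ jobId });
});

// The worker machines call back here when the long operation finishes.
app.post("/jobs/:id/done", (req, res) => {
  jobs.set(req.params.id, { status: "done", result: req.body });
  res.sendStatus(200);
});

// Clients poll this until the job is done.
app.get("/jobs/:id", (req, res) => {
  res.json(jobs.get(req.params.id) || { status: "unknown" });
});

app.listen(3000);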

Load test a Backbone App

I've got an NGinx/Node/Express3/Socket.io/Redis/Backbone/Backbone.Marionette app that proxies requests to a PHP/MySQL REST API. I need to load test the entire stack as a whole.
My app takes advantage of static asset caching with NGinx and clustering with node/express, and Socket.io is multi-core enabled using Redis. All that's to say, I've gone through a lot of trouble to try and make sure it can stand up to the load.
I hit it with 50,000 users in 10 seconds using blitz.io and it didn't even blink... which concerned me, because I wanted to see it crash, or at least breathe a little heavy. But 50k was the max you could throw at it with that tool, indicating to me that they expect you not to reasonably be able to, or need to, handle more than that. Which is when I realized it wasn't actually incurring the load I was expecting, because the load is initiated after the page loads, when the Backbone app starts up, kicks off the socket connection, and requests the data from the correct REST API endpoint (on a different server).
So, here's my question:
How can I load test the entire app as a whole? I need the load test to tax the server in the same way that the clients actually will, which means:
Request the single page Backbone app from my NGinx/Node/Express server
Kick off requests for the static assets from NGinx (simulating what the browser would do)
Kick off requests to the REST API (PHP/MySQL running on a different server)
Create the connection to the Socket.io service (running on NGinx/Node/Express, utilizing Redis to handle multi-core junk)
If the testing tool uses a browser-like environment to load the page, parsing the JS and running it, everything will be copacetic (the NGinx/Node/Express server will get hit, and so will the PHP/MySQL server). If not, the testing tool will need to simulate this by firing off at least a dozen different kinds of requests nearly simultaneously; otherwise it's like stress testing a door by looking at it 10,000 times (that is to say, it's pointless).
I need to ensure my app can handle 1,000 users hitting it in under a minute all loading the same page.
You should learn to use Apache JMeter: http://jmeter.apache.org/
You can perform stress tests with it; see this tutorial: https://www.youtube.com/watch?v=8NLeq-QxkSw
As you said, "I need the load test to tax the server in the same way that the clients actually will."
That means the test is agnostic to the technology you are using.
I highly recommend JMeter; it is widely used, and you can integrate it with Jenkins and do a lot of cool stuff with it.
