Meaning of 104 error on Locust - IIS

I am confused because I got the following error, which means the client and/or server closed the connection. I am using Locust; the server responds stably and response times are not high, but sometimes I get this error from IIS.
When this error occurs, does my number of users decrease by one or stay the same? How can I check the number of connected users? What kind of problems will this error cause?
ConnectionError(ProtocolError('Connection aborted.', error(104, 'Connection reset by peer')),)

Related

Node.js app disconnects from MongoDB randomly

I have a Node.js/Express/MongoDB/Mongoose app hosted on AWS Elastic Beanstalk.
The problem is that the Elastic Beanstalk health degrades randomly (at no specific times). That happens because any request that requires database interaction results in the following in the logs:
*1360931 upstream timed out (110: Connection timed out) while reading response header from upstream
This happens no matter how much data I try to load; it happens even with the smallest amount of data. It can last from a minute up to 20 minutes, then it works again on its own. It is completely random.
I can force it to work immediately by restarting the environment (I connect to MongoDB using a connection string on app startup).
Meanwhile, requests that don't require database interaction work 100% of the time.
The thing is, while database queries aren't working, I can connect to the same database from localhost and database requests work like a charm; they are even really fast.
What is even stranger is that I have 4 other identical apps with the same setup, and this situation doesn't occur with any of them; only this app faces this problem!
What is the problem here?
The above error usually means that your server closed the connection due to a short timeout, but your application is not aware of it. You may need to check your connection string and adjust the timeouts, for example:
MONGO_URI=mongodb://user:password@127.0.0.1:27017/dbname?keepAlive=true&poolSize=30&autoReconnect=true&socketTimeoutMS=360000&connectTimeoutMS=360000
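The same options can also be passed to Mongoose programmatically. A minimal sketch, assuming Mongoose 5.x (the option names mirror the connection string above and may differ in newer driver versions):

const mongoose = require('mongoose');

mongoose.connect('mongodb://user:password@127.0.0.1:27017/dbname', {
  keepAlive: true,          // send TCP keep-alives so idle connections are not dropped
  poolSize: 30,             // number of sockets kept open to the server
  socketTimeoutMS: 360000,  // how long an inactive socket may stay open
  connectTimeoutMS: 360000, // how long to wait for the initial connection
  useNewUrlParser: true,
  useUnifiedTopology: true,
})
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.error('MongoDB connection error:', err));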

How do I fix Connect Document error in Tinylicious server?

Sometimes when I restart my Tinylicious server, I receive the following error many times in the server logs.
Connect Document error: {} {"messageMetaData":{"documentId":"1608426861167","tenantId":"tinylicious"},"label":"winston","timestamp":"2020-12-20T21:20:50.591Z"}
What exactly does this error mean and how can I fix it?
This error is created when we fail to make a Socket.IO connection. You can see our code to create the connection here and the error message you're seeing here.
I don't believe this error will cause you significant issues, but it shouldn't be there. There's an issue on the project's GitHub that you can add to if you're looking for more information.

Error "close (transport close)" on Socket client side

In my Express/Socket.IO app (which is running behind an HAProxy server), I am using sticky sessions (cookie based) to route requests to the same worker. I have a total of 16 processes running (8 per machine, 2 machines). Socket session data is stored in the Redis adapter.
The problem I have is that when an event is fired from the server, the client can't receive it. Instead, it keeps throwing disconnection errors every few seconds (4-5).
Update: the client will only receive the event if the transport was open when the event was fired, but the transport keeps getting closed almost instantly and then restarting.
Can someone please suggest something on this?
Finally, I found the solution. It was the timeout client setting, which was set too low in the HAProxy config. Increasing it fixed the issue.
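For reference, a minimal sketch of where that setting lives in haproxy.cfg (the values shown are illustrative, not the original configuration; for WebSocket traffic the tunnel timeout matters as well):

defaults
    mode http
    timeout connect 5s
    timeout client  60s    # how long HAProxy tolerates client inactivity before closing
    timeout server  60s
    timeout tunnel  3600s  # applies once a WebSocket/Socket.IO connection is upgraded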

How to detect a connection failure in Node.js?

I use net.connect to make a socket connection, and I wonder how to detect when the connection has failed.
It seems this doesn't work:
const net = require('net');
// this returns a net.Socket and automatically starts connecting
var client = net.connect({ port: 22000, host: '10.123.9.163' });
// this does not seem to trigger an 'error' event even if the connection fails
client.on('error', (err) => { console.log('something wrong', err.message); });
// but now an 'error' event is emitted, reasonably
client.write('hello');
When I run this piece of code, the connection should fail, and it does indeed fail, because when I write some data an error occurs. But I cannot detect the connection failure itself. How can I do that?
=====Ready to close======
God damn it, I think I have just made a mistake. In fact the connection succeeded, but due to some security policy the server closed the connection; I found that out by doing a telnet. After trying another port which should definitely fail, the 'error' event was emitted and everything went as expected. So I am going to close this question to avoid misleading other people, and thank you guys for helping me :)
The easiest and most portable way is to simply implement a 'ping-pong' check where both sides send some kind of 'ping' request every so often. If n outstanding ping requests go unanswered after some period of time, then consider the connection dead. This kind of algorithm is basically what OpenSSH uses for example (although it's not enabled by default).
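A minimal sketch of that idea on top of a plain net socket (the 'ping'/'pong' messages, the 5-second interval, and the 3-miss limit are illustrative choices, not a standard protocol; the peer is assumed to answer each 'ping' with a 'pong'):

const net = require('net');

const client = net.connect({ port: 22000, host: '10.123.9.163' });
let outstandingPings = 0;
const MAX_OUTSTANDING = 3;     // consider the peer dead after this many unanswered pings
const PING_INTERVAL_MS = 5000; // how often to ping

const timer = setInterval(() => {
  if (outstandingPings >= MAX_OUTSTANDING) {
    console.log('peer considered dead, destroying socket');
    clearInterval(timer);
    client.destroy();
    return;
  }
  outstandingPings++;
  client.write('ping\n');
}, PING_INTERVAL_MS);

client.on('data', (chunk) => {
  if (chunk.toString().includes('pong')) outstandingPings = 0; // peer is alive
});
client.on('error', (err) => console.log('socket error:', err.message));
client.on('close', () => clearInterval(timer));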

Unexplained Node.js 504s

We're running Node (v0.10.38) with Express (4.0.0), proxied through nginx (1.2.1), which usually works great. Recently, however, we switched to a new server setup. Now, roughly 30 minutes after starting up the server, it starts returning 504s (Gateway Timeout). Accessing Node directly from the server (bypassing nginx) also times out.
Every so often we got a series of ETIMEDOUT errors from Redis, but connecting to the Redis server from the command line on the same machine works fine. Furthermore, the server started returning 504s even before the Redis errors came up. After updating our Redis middleware (connect-redis) to the newest version, those errors stopped, but the 504s still occurred. However, after disabling the connection to Redis in our code for 10 hours, no 504s occurred. We've tried sending a Redis ping periodically to prevent the error, believing that to be the cause, but the 504s continue. When not connecting to Redis, the server doesn't return 504s, so it is likely tied to Redis in some way. Anything else we can try?
Sorry if there's not much to work with; we don't have much either, and we are eager to solve this issue as soon as possible. If any more specifics are needed, I can update the question. Thank you.
We still don't know the root cause, but we ended up fixing this by pinging Redis every minute so that the connection wouldn't get killed.
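A minimal sketch of that keep-alive, assuming the classic node_redis client (the host, port, and 60-second interval are illustrative; the answer only says "every minute"):

const redis = require('redis');

const client = redis.createClient(6379, '127.0.0.1');

// Ping every minute so an idle connection is not silently killed.
setInterval(() => {
  client.ping((err, reply) => {
    if (err) {
      console.error('Redis ping failed:', err.message);
    }
    // reply is 'PONG' when the connection is healthy
  });
}, 60 * 1000);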

Resources