Encountered a very weird issue.
I have two VMs, running CentOS Linux.
The server side has a REST API (using non-POCO sockets), and one of the endpoints responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 KB, 240 KB, or 288 KB.
Same code, same environment, running on the server VM: good.
On the client VM, using Python to make the REST call: good.
It ONLY fails when I run the same known-good code on the client VM.
When the message gets truncated, the HTTP status code still returns 200.
On the server side, I log the response message I send every time, and everything looks normal.
I have tried a whole bunch of things, like:
setting the socket timeout and the receiveResponse timeout to an hour
waiting 2 seconds after sending the request, before calling receive
setting the receive buffer big enough
trying a whole bunch of approaches to make sure the receive stream is empty and there is no more data
It just does not work.
Has anyone had a similar issue? I've started pulling my hair out... Please talk to me, anything... before I go bald.
I'm currently working on a microservice architecture where one particular microservice has a specific mechanism:
Receiving a request saying it needs some data
Sending status 202 Accepted to the client
Generating the data and saving it to a Redis instance
Receiving a request to see if the data is ready
If the data is not ready in the Redis instance: sending status 102 to the client
If the data is ready in the Redis instance: sending it back
The first point works fine with this kind of code:
res.sendStatus(202)
processData(req)
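To make the whole mechanism concrete, here is a simplified sketch of the two endpoints (the ioredis client, route names, and key scheme are placeholders, not my exact code):

const express = require('express');
const Redis = require('ioredis');

const app = express();
app.use(express.json());
const redis = new Redis(); // assumes a local Redis; placeholder for the real instance

// Acknowledge immediately, then generate and save to Redis.
app.post('/data/:id', (req, res) => {
  res.sendStatus(202);                     // 202 Accepted, sent right away
  processData(req.params.id, req.body)     // work continues after the response
    .catch(console.error);
});

// The client polls until the key exists.
app.get('/data/:id', async (req, res) => {
  const value = await redis.get(`data:${req.params.id}`);
  if (value === null) res.sendStatus(102); // not ready yet
  else res.type('json').send(value);       // ready: send it back
});

async function processData(id, payload) {
  const result = await generate(payload);
  await redis.set(`data:${id}`, JSON.stringify(result));
}

// Stand-in for the real, slow generation step.
async function generate(payload) {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  return { input: payload, done: true };
}

app.listen(8080);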
But for the second point, the behavior differs locally and when hosted on Cloud Run.
Locally, the second request is not handled while the first one's processing has not finished, which I presumed was normal from a threading perspective.
Is there something that could make Express still handle the second request while the response to the first has been sent but its processing has not ended?
But considering that Google Cloud Run is based on instances and auto-scaling, I thought: well, the first instance is locked because the processing has not ended? No problem! A new one will come up and handle the second request, which will then check the key's status in the Redis instance.
It seems that I was wrong! When I make the call to check the status of the data, if the data is not yet ready, Cloud Run sends me back this error (502 Bad Gateway):
upstream connect error or disconnect/reset before headers. reset reason: protocol error
However, I never set any res status to 502, so it seems that either Cloud Run or Express sends this itself.
My only fallback would be to split my Cloud Run instance into a Cloud Function plus a Cloud Run service: the Cloud Run service would trigger the processing in a Cloud Function. I'm pretty short on time, so if there is no other option I will do that, but I'd hope to manage without introducing a new Cloud Function.
Do you have any explanation for why it doesn't work, either locally or on Cloud Run?
My own hypotheses don't convince me, and I haven't found the truth:
Maybe a client can't make 2 requests at the same time: this doesn't seem logical
Maybe Express can't handle several requests at the same time: this doesn't seem logical to me either
Any clues that seem more plausible?
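For completeness, here is the kind of minimal reproduction I have in mind (hypothetical handlers; a synchronous loop stands in for the data generation):

const express = require('express');
const app = express();

// If the post-response work is CPU-bound and synchronous, it blocks Node's
// single event loop even though the 202 was already sent, so no other
// request can be handled until the loop is free again.
app.post('/blocking', (req, res) => {
  res.sendStatus(202);
  const end = Date.now() + 10000;
  while (Date.now() < end) {} // CPU-bound work: the event loop is stuck
});

// If the work is asynchronous (I/O, timers), the event loop stays free and
// concurrent requests are served normally.
app.post('/non-blocking', (req, res) => {
  res.sendStatus(202);
  setTimeout(() => console.log('done generating'), 10000);
});

app.get('/ping', (req, res) => res.send('pong'));

app.listen(3000);
// While POST /blocking runs, GET /ping stalls for ~10 s; during
// POST /non-blocking it answers immediately.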
I have this script where I'm taking a large dataset and calling a remote API via request-promise, using a POST method. If I make the requests individually, they work just fine. However, if I loop through a sample set of 200 records using forEach and async/await, only about 6-15 of the requests come back with a status of 200; the others return a 500 error.
I've worked with the owner of the API, and their logs only show the 200 requests. So I don't think Node is actually sending out the ones that come back as 500.
Has anyone run into this, and/or know how I can get around this?
To my knowledge, there's no code in node.js that automatically generates a 500 HTTP response for you, so those 500 responses are apparently coming from the target server's network. You could look at a network trace on your server machine to know for sure.
If they are not in the target server logs, then it's probably coming from some defense mechanism deployed in front of their server to stop misuse or overuse of their server (such as rate limiting from one source) and/or to protect its ability to respond to a meaningful number of requests (proxy, firewall, load balancer, etc...). It could even be part of a configuration in the hosting facility.
You will likely need to find out how many simultaneous requests the target server will accept without error and then modify your code to never send more than that number of requests at once. They could also be measuring requests/sec, so it might not only be an in-flight count but also the rate at which requests are sent.
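For illustration, here is a minimal way to cap the number of in-flight requests (the mapWithConcurrency helper and the per-record call are just a sketch, not a specific library's API):

// Runs `worker` over `items` with at most `limit` calls in flight at once.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const i = next++; // each runner claims the next unclaimed index
      results[i] = await worker(items[i]);
    }
  }
  // Start `limit` runners that pull from the shared index until it's exhausted.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
  return results;
}

// Example usage with request-promise, no more than 5 POSTs at a time:
// const responses = await mapWithConcurrency(records, 5, (record) =>
//   rp.post({ uri: 'https://api.example.com/records', body: record, json: true }));

If errors persist, lower the limit or add a small delay between batches so you stay under any requests-per-second threshold as well.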
I want all requests to my server to get a response within 2 seconds.
If my server has an issue (for example, it's turned off), the user should get an error response after 2 seconds.
The situation now is that if there is an issue with my server, the user's browser tries to connect for a long time. I don't want this.
Currently I am not using any load balancer or CDN.
Sometimes my server goes down. I don't want my users to wait forever for a response while the browser hangs.
I think a load-balancing service or a CDN could help.
What I want is that after 2 seconds, the service in front of my server returns a default error message.
Which service can handle this for me?
I checked out CloudFront and CloudFlare, and didn't find anything like that.
More info:
1. Caching cannot help, because my server returns different results for every request.
2. I cannot use async code.
Thank you.
You can't configure a 2-second timeout in CloudFront; however, you can configure it to return a custom error page (which you can host anywhere outside your server) if the origin is not responding properly.
Take a look here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
Moreover, these error responses are cached (you can specify for how long), so subsequent users will get the error page right away.
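For example, if you define the distribution with the AWS CDK (v2, JavaScript), the custom error response looks roughly like this; the origin host, error page path, and TTL are placeholders:

const cdk = require('aws-cdk-lib');
const cloudfront = require('aws-cdk-lib/aws-cloudfront');
const origins = require('aws-cdk-lib/aws-cloudfront-origins');

const app = new cdk.App();
const stack = new cdk.Stack(app, 'EdgeStack');

new cloudfront.Distribution(stack, 'Distribution', {
  // Placeholder for your actual server.
  defaultBehavior: { origin: new origins.HttpOrigin('origin.example.com') },
  errorResponses: [
    {
      httpStatus: 504,                 // origin timed out
      responseHttpStatus: 503,         // what the user receives instead
      responsePagePath: '/error.html', // static page served via the distribution
      ttl: cdk.Duration.seconds(10),   // how long the error response is cached
    },
  ],
});

The same settings are available in the console as custom error responses, per the documentation linked above.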
I'm trying to implement WebSocket-type functionality using only plain HTTP. So there's a connection over GET that receives data from the server, and a connection over POST that sends data to the server. Both connections send/receive streaming data and use Content-Length = 1 GB, Content-Type = application/octet-stream. The two connections remain open until the server sends an "end" command over the GET connection.
Without ARR, things work (the back-end server is not IIS).
I added ARR to the setup and did all the configuration steps (including response buffer threshold = 0 and max request size = 2 GB). Hitting normal URLs and submitting forms from the browser through ARR works perfectly.
However, the POST connection for sending data to the server does not work. The client sends the data, but the back-end server does not receive anything (verified using Wireshark). It hangs for a while; if I terminate the client, an ARR failure trace log is generated showing a 400 error response, which seems to be because the actual amount of data sent was not 1 GB.
The GET connection works. (If I point the POST connection directly at the back-end server, things work.)
So my question is: is it possible to send streaming data in a POST request via ARR? Any suggestions on how to achieve it?
Thanks
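For reference, the client side of the POST stream looks roughly like this (a Node.js sketch; host, path, and payload are placeholders, and it uses chunked transfer encoding instead of the declared 1 GB Content-Length):

const http = require('http');

const req = http.request({
  host: 'arr.example.com', // placeholder for the ARR front end
  path: '/stream-up',      // placeholder endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/octet-stream' },
});

// Send a chunk every second; the connection stays open until req.end().
let sent = 0;
const timer = setInterval(() => {
  req.write(Buffer.from('payload-chunk\n'));
  if (++sent === 10) { // stop after 10 chunks for the sketch
    clearInterval(timer);
    req.end();
  }
}, 1000);

req.on('response', (res) => console.log('status:', res.statusCode));
req.on('error', (err) => { clearInterval(timer); console.error(err); });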
I have an iPhone app using ASIHTTPRequest. The server code is on Heroku, in Node.js.
From time to time, a single request is sent from the iPhone app (only one trace), but it is received twice on the Heroku app (I can see the same request twice in the Heroku logs).
At first I thought the request was sent twice because of an error in the first attempt, but that's not the case, as both requests (the one I need and the second one I don't) are performed on the server side.
Any idea?
Are you starting the queue with accurate progress turned on? If so, ASIHTTPRequest makes one request (a HEAD) to get the total size of the data to be downloaded, then it makes the real request. Hope that helps.
If that's not the case, try setting the persistent connection to NO, like so:
[asiRequest setShouldAttemptPersistentConnection:NO];
From my understanding, the latest version of ASIHTTPRequest defaults the persistent connection to NO. You can read more here:
https://github.com/pokeb/asi-http-request/issues/94