OK, I have a WAMP installation with CakePHP under Windows XP 64-bit. I am using a WebSocket PHP plugin with the latest Node (0.8.17) and Socket.IO version 0.9.13. None of my co-workers seem to know what the problem is, and I have been stuck for two weeks. I was able to narrow down the problem, but I have no clue how to fix it.
After my CakePHP plugin makes the request to a Socket.IO server, I can capture the authorization handshake request. However, according to the Socket.IO protocol, the response body should contain the handshake ID, the heartbeat interval, the timeout interval, etc. Sometimes I get the proper response, but the majority of the time (around 90%) I get a null body in the response, even though the headers return 200 OK, which throws an error in my application. Is there a way I can get consistent results? I am more than happy to post debug information so you can see what I am talking about. I read somewhere that it might be a gzip compression problem, but I believe that was fixed in a Socket.IO update.
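For reference, a successful handshake response body under the Socket.IO 0.9 protocol should be a single colon-separated line like the following (the ID and values here are made up for illustration):
4d4f185e96a7b:15:10:websocket,xhr-polling
that is, the handshake ID, the heartbeat interval in seconds, the close timeout in seconds, and the supported transports. In the failing cases I get the 200 headers but none of this body.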
Any help would be much appreciated!!!
The problem was with line 80 of the mgcrea/cake_websocket plugin. For some reason, although it was issuing the correct request, it was receiving an intermittent response. Overriding it solved the problem.
I have built a simple Python/Flask app for sending automatic messages in Slack and Telegram after receiving a POST request in the form of:
response = requests.post(
    url='https://my-cool-endpoint.a.run.app/my-app/api/v1.0/',
    json={'message': msg_body, 'urgency': urgency, 'app_name': app_name},
    auth=(username, password),
)
or an equivalent curl request. It works well on localhost, as well as in a containerized application. However, after deploying it to Cloud Run, the requests keep failing with the following 503 error:
POST 503 663 B 10.1 s python-requests/2.24.0 The request failed because either the HTTP response was malformed or connection to the instance had an error.
Does it have anything to do with a Flask timeout or something like that? I really don't understand what is happening, because the response doesn't (and shouldn't) take more than a few seconds (usually less than 5 s).
Thank you all.
--EDIT
Problem solved after thinking about AhmetB's reply. I found that I was setting the host to the public IP address of the SQL instance, which does not work once the app is deployed to Cloud Run. For it to work, you must replace host with unix_socket and set it to the instance's socket path.
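For anyone hitting the same issue, here is a minimal sketch of the working connection, assuming PyMySQL and a hypothetical instance connection name (the real socket path has the form /cloudsql/PROJECT:REGION:INSTANCE):
import pymysql

# Connect through the Cloud SQL unix socket instead of host=<public IP>.
# The instance connection name below is hypothetical.
connection = pymysql.connect(
    unix_socket='/cloudsql/my-project:us-central1:my-instance',
    user='my-user',
    password='my-password',
    db='my-db',
)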
Thank you all! This question is closed.
Encountered a very weird issue.
I have two VMs, running CentOS Linux.
The server side has a REST API (using a non-POCO socket), and one of the APIs responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 KB, 240 KB, or 288 KB.
The same code, in the same environment, running on the server side: good.
On the client VM, using Python to make the REST call: good.
It ONLY fails if I use the same (working) code on the client VM.
When the message gets truncated, the HTTP status code still returns 200.
On the server side, I logged the response message that I sent every time. Everything looks normal.
I have tried a whole bunch of things, like:
setting the socket timeout and receiveResponse timeout to an hour
waiting for 2 seconds after sending the request but before calling receive
setting the receive buffer large enough
trying a whole bunch of approaches to make sure the receive stream is empty, with no more data
It just does not work.
Does anyone have a similar issue? I've started pulling my hair out... Please talk to me, anything... before I go bald.
I have built a Node.js app for establishing keep-alive, event-stream POST connections with an external server. I send 400 of them and keep acting on received data using the data event from the request Node package. I also listen to the end, response, and error events.
When I run the application from localhost, everything works perfectly according to plan. However, when I push it to OpenShift, only the first 5 requests work as intended; the rest just... disappear. I don't get any error, any response, nor an end event. I tried sending the requests with some delay between them, I tried looking for information about a maximum number of requests, and I debugged it thoroughly; nothing worked. Does anybody have an idea, based on this description of the problem, how to make all 400 requests work (or an answer as to why they won't)?
I found the solution to the problem myself. It turned out that the Request.js library couldn't establish more than 5 connections due to the default maxSockets property of the http module's global agent, which is set to 5 in Node.js 0.10. Although I had not been requiring the http package directly, the Request.js library used it. All I had to do to fix it was put the following code in the main application module:
var http = require('http');
http.globalAgent.maxSockets = 500; //Or any other number you want
By the way, the default value was already changed to Infinity in Node.js 0.12.
I have an API application built with Node and Express.js, using Apache as a reverse proxy with keep-alive enabled.
Some requests (specifically POST/PUT) end up hanging for 2 minutes due to the default 2-minute socket timeout. It doesn't happen always, but often. As soon as the timeout is hit, the client gets the response and continues sending other requests.
It seems to be due to Keep-Alive, although I'm not 100% sure.
Adding the header:
res.set('Connection', 'close');
makes the problem go away, which is why I think this is related to keep-alive.
I've been researching the issue for 2 days with no success.
Is it worth setting the header and accepting the consequences, or is there any other solution/explanation to this behaviour?
It turns out this was all caused by a "204 - No Content" response on DELETE requests sent before the PUT/POST requests; presumably, since a 204 response must not carry a message body, writing one confuses the keep-alive connection for the requests that follow. Changing from
res.send(data)
To
res.status(204).end()
Fixed the issue for me.
I am running a Node.js/Socket.IO server.
I am testing with IE 9.
When I open the dev tools in IE 9, many of the Socket.IO requests display their result as "Pending...", although I can see they have returned the result I want.
I am not sure if this is default behavior or a bad thing that might make my browser slow.
Any help is appreciated.
If you're doing long polling as your socket transport (xhr, jsonp, etc.), having pending HTTP requests is expected. The socket client opens an HTTP GET to the server, and the server keeps it open until there is data for the socket or the HTTP interval expires. When either of those happens, the client re-opens a GET and starts listening again. So in practice there should always be one pending HTTP request. That said, I see that there are multiple in your screenshot. I'm curious if it's a tools issue. Have you tried your test above in Chrome or Firefox? Do you get the same result? Is socket communication working as expected apart from what's being displayed there?
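To make the polling loop concrete, here is a minimal sketch of the pattern in Python (the /poll endpoint and handle() are hypothetical; a real Socket.IO client does this internally):
import requests

def handle(data):
    print('received:', data)  # stand-in for real message processing

while True:
    try:
        # This GET sits "pending" until the server has data for the
        # socket or its polling interval expires.
        response = requests.get('https://example.com/poll', timeout=60)
    except requests.exceptions.Timeout:
        continue  # nothing arrived in this window; re-open the poll
    if response.ok and response.text:
        handle(response.text)
    # The loop re-opens the GET immediately, so one request is always pending.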