Request module throwing 403 without trying - node.js

I have a really weird problem.
I have two node.js servers running Express, call them A and B. I use the request module to send requests from A to B. Sometimes, request just throws a 403 at me, "Forbidden file type or location", without even sending a request to B.
I have multiple servers running the same code, and only one of them has this issue; the others send the request and show the response properly.
I ran a tcpdump: no entries for traffic between A and B.

This was caused by an HTTP_PROXY setting I had in my systemd configuration.
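For illustration, a systemd drop-in along these lines (the unit name and path are assumptions) clears the proxy variables so request connects directly:

```ini
# /etc/systemd/system/myapp.service.d/no-proxy.conf  (illustrative unit name)
[Service]
# request honors HTTP_PROXY/HTTPS_PROXY; a proxy can answer 403 itself,
# so no packet ever reaches server B (hence the empty tcpdump)
Environment="HTTP_PROXY="
Environment="HTTPS_PROXY="
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing, run systemctl daemon-reload and restart the service.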

Related

Error 503 on Cloud Run for a simple messaging application

I have built a simple Python/Flask app for sending automatic messages in Slack and Telegram after receiving a post request in the form of:
response = requests.post(url='https://my-cool-endpoint.a.run.app/my-app/api/v1.0/',
                         json={'message': msg_body, 'urgency': urgency, 'app_name': app_name},
                         auth=(username, password))
or with a similar curl request. It works well on localhost, as well as in a containerized application. However, after deploying it to Cloud Run, the requests keep resulting in the following 503 error:
POST 503 663 B 10.1 s python-requests/2.24.0 The request failed because either the HTTP response was malformed or connection to the instance had an error.
Does it have anything to do with a Flask timeout or something like that? I really don't understand what is happening, because the response doesn't (and shouldn't) take more than a few seconds (usually less than 5 s).
Thank you all.
--EDIT
Problem solved after thinking about AhmetB's reply. I found that I was setting the host to the public IP address of the SQL instance, and that does not work when you deploy to Cloud Run. For it to work, you must replace host with unix_socket and set its path.
Thank you all! This question is closed.
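To illustrate the shape of that fix: on Cloud Run you connect through the Cloud SQL socket mounted under /cloudsql/&lt;connection-name&gt;, not the instance's public IP. The original app is Python/Flask, where this means passing unix_socket to the MySQL connector; this Node sketch only mirrors the same decision, and all names in it are illustrative:

```javascript
// Sketch: pick a Cloud SQL connection target depending on environment.
// The config shape and 'app' user are assumptions, not the asker's code.
function buildDbConfig({ onCloudRun, instanceConnectionName, publicIp }) {
  if (onCloudRun) {
    // Cloud Run mounts the Cloud SQL socket under /cloudsql/<connection-name>
    return { socketPath: `/cloudsql/${instanceConnectionName}`, user: 'app' };
  }
  // Local or containerized runs can still reach the instance's public IP
  return { host: publicIp, port: 3306, user: 'app' };
}
```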

ForEach Bulk Post Requests are failing

I have a script where I'm taking a large dataset and calling a remote API with a POST method, using request-promise. If I make the requests individually, they work just fine. However, if I loop through a sample set of 200 records using forEach and async/await, only about 6-15 of the requests come back with a status of 200; the others return a 500 error.
I've worked with the owner of the API, and their logs only show the 200 requests. So I don't think Node is actually sending out the ones that come back as 500.
Has anyone run into this, and/or know how I can get around this?
To my knowledge, there's no code in node.js that automatically makes a 500 http response for you. Those 500 responses are apparently coming from the target server's network. You could look at a network trace on your server machine to see for sure.
If they are not in the target server logs, then it's probably coming from some defense mechanism deployed in front of their server to stop misuse or overuse of their server (such as rate limiting from one source) and/or to protect its ability to respond to a meaningful number of requests (proxy, firewall, load balancer, etc...). It could even be part of a configuration in the hosting facility.
You will likely need to find out how many simultaneous requests the target server will accept without error, and then modify your code to never send more than that number at once. They could also be measuring requests/sec, so it might not only be an in-flight count but also the rate at which requests are sent.
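One hedged sketch of such a cap, in plain Node (the limit value and task shape are assumptions; swap task() for your request-promise POST call):

```javascript
// Run async tasks with bounded concurrency instead of firing all 200 at once.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (synchronous, so safe)
      results[i] = await tasks[i]();
    }
  }
  // Start at most `limit` workers; each pulls tasks until none remain
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```

This keeps at most `limit` requests in flight; to respect a requests/sec limit as well, you could additionally delay inside the worker loop.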

Requests being doubled if Tomcat is slow to respond

We are working with the following stack:
A Node/Express middleware running behind Nginx communicates with an Apache server, which proxies the requests to a Tomcat instance located on another server. When we request an operation that takes more than 15 seconds to complete, another identical request is sent. There is evidently a 15-second retry policy somewhere.
So far I have been unable to detect exactly what is doing this, and my Google searches have also been fruitless. My question is whether anyone has experience with something like this, and whether it could be Node, Nginx, or Apache that is sending the second request.
Any suggestions on where the double requests are coming from and what property I need to adjust to turn them off would be greatly appreciated.
The solution was to set the socket timeout property in Apache's mod_jk to 0.
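For reference, that property lives in mod_jk's workers.properties; a minimal sketch, assuming a worker named tomcat1 (the name is illustrative):

```
# workers.properties -- worker name is illustrative
# socket_timeout=0 disables the socket timeout, so mod_jk no longer
# abandons the connection and retries the request after ~15 seconds
worker.tomcat1.socket_timeout=0
```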

HTTP request using a PLC

I am trying to make a GET/POST request to a multi-purpose modem (web interface and GSM interface) using a PLC (Programmable Logic Controller).
I have been trying to send string data to the modem through the TCP library, currently with Schneider SoMachine. Every time I make a request, I receive an Error 400 Bad Request. I believe my program is correct, since I can receive an error response from the modem, but I am not sure about the request I need to make in order to receive a positive OK response from the controller.
I have tried making the following requests and all returned with an Error 400 bad request.
GET https://192.168.2.1
GET https://192.168.2.1/api/login?username=admin&password=admin
I have also tried the above without the GET statement and with POST statements as well.
The above requests were sent with carriage return and newline characters at the end.
I would really appreciate if someone could help out with the request type that has to be made in order to get that response.
As far as I know, accessing a PLC through AJAX is not a routine operation. If you need to, you can try LECPServer, an open-source middleware. It can expose PLC node addresses for reading and writing through HTTP POST.
https://github.com/xeden3/LECPServer
Your requests are malformed (that's why you get the 400 response).
It should look like:
GET /path/to/resource/index.html HTTP/1.0
The server (192.168.2.1) and the transport (http vs https) have already been taken care of by the connection. All you're trying to do is tell the device what you want to do. In this case you want to access the login page with your credentials. You also need to specify which version of the protocol to use.
GET /api/login?username=admin&password=admin HTTP/1.0
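To make the wire format concrete, here is a sketch (in Node, purely for illustration; the PLC would send the same bytes over its TCP connection) of assembling the raw request. The Host and Connection headers are assumptions that most embedded servers tolerate; the \r\n line endings and the blank line after the headers are mandatory:

```javascript
// Build the raw HTTP/1.0 request the device should receive on the socket.
function buildRequest(path) {
  return [
    `GET ${path} HTTP/1.0`, // the method must be uppercase
    'Host: 192.168.2.1',    // optional in HTTP/1.0, required in 1.1
    'Connection: close',
    '',                     // blank line terminates the header block
    '',
  ].join('\r\n');
}
```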

Sending multiple http requests on Openshift, finalized only few

I have built a node.js app that establishes keep-alive, event-stream POST connections with an external server. I send 400 of them and act on received data using the data event from the request Node package. I also listen to the end, response, and error events.
When I run the application from localhost, everything works perfectly according to plan. However, when I push it to OpenShift, only the first 5 requests work as intended; the rest just... disappear. I don't get any error, any response, nor end. I tried sending the requests with some delay between them, I tried looking for information about a maximum number of requests, and I debugged it thoroughly. Nothing worked. Does anybody have an idea, based on this description, how to make all 400 requests work (or why they won't)?
I found the solution to the problem myself. It turned out that the request library couldn't establish more than 5 connections because of the default agent.maxSockets property of the http module, which was set to 5 in Node.js 0.10. Although I had not required the http package directly, the request library used it internally. All I had to do to fix it was put the following code in the main application module:
var http = require('http');
http.globalAgent.maxSockets = 500; //Or any other number you want
By the way, the default value was changed to Infinity in Node.js 0.12.
