I have two Node.js files. One sends a POST request (using axios) from an AWS EC2 instance and one receives it (using Express) on my PC. For some reason, the Node.js server on my PC isn't receiving the POST requests.
I have this code for my receiver:
// assumes `app` is an Express app with a JSON body parser registered
app.post('/', function (req, res) {
    console.log(req.body)
    res.end("Request received!")
})
And nothing is logged.
The code is probably not the issue: in all the other cases the receiver logs the request body and the sender logs the response, and the receiving server handles it correctly (storing it in MongoDB).
The cases when it works are:
Sending from my PC, Receiving in my PC
Sending from AWS, Receiving in AWS (different instances)
Sending from my PC, Receiving in AWS
I thought it might be blocked by one of the two ends, so I used security groups to open all the ports (inbound and outbound) on the AWS instances, and I added rules to the firewall on my computer to open the needed ports, but it still doesn't work.
I also added these headers to the axios request:
const headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "X-Requested-With",
    "Content-Type": "application/json"
}
axios.post('http://(myip):3001', {arr: inf}, {headers: headers}).then((res)=>{...
but it still doesn't work. Any suggestions?
EDIT: about 20 seconds after I send the request, the sending Node process times out with ETIMEDOUT.
You mentioned three cases:
Sending from my PC, Receiving in my PC - fine
Sending from AWS, Receiving in AWS (different instances) - fine
Sending from my PC, Receiving in AWS - fine, as you must be using the public IP of the EC2 instance to connect from your PC
But sending from EC2 to your PC won't work unless your PC has a public IP. How would a request from the EC2 instance reach your PC?
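If you just need this to work for development, one option (my own suggestion, not something from the question or the answer above) is to give the local Express server a temporary public URL with a tunnelling tool such as ngrok and point the EC2 sender at that instead of the PC's address:

// Hypothetical setup: `ngrok http 3001` is running on the PC and has printed a
// public forwarding URL. The URL and payload below are placeholders.
const axios = require('axios');

const TUNNEL_URL = 'https://your-tunnel-id.ngrok.io';

axios.post(TUNNEL_URL, { arr: [1, 2, 3] }, {
    headers: { 'Content-Type': 'application/json' }
}).then((res) => {
    console.log(res.data); // should print "Request received!"
});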
I am attempting to perform a token request to my identity server by posting the details to my existing service. Upon reaching the POST call, the service stalls for anywhere up to a minute (longest recorded: 58 seconds).
I have tried testing on both native hardware (Mint Cinnamon 19.1) and a virtual machine (Ubuntu 18.04) with the same result. The problem does not occur on Windows or Mac.
Thinking it may have been some of the wrapped functions slowing things down, I have separated them out with some log lines. All lines up to and including
Submitting payload via POST
are printed almost instantly.
console.log('Sending token request ');
console.log('Fetching string map');
const stringMap = request.toStringMap();
console.log('Beginning stringification');
const payload = this.utils.stringify(stringMap);
console.log('Submitting payload via POST');
const response = await axios.post(
    configuration.tokenEndpoint,
    payload,
    {
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded'
        },
        responseType: 'json'
    });
console.log('Resolving \'performTokenRequest\'');
On Windows and Mac this request is sent and resolved within 300ms, which is expected. However, on Linux (tested on Mint 19.1 native and an Ubuntu 18.04 VM), the request stalls for up to one minute before it is even sent to the service (checked by attaching a debugger on the server side). The timing seems arbitrary: sometimes it's very long, sometimes only ten or so seconds.
Once the message reaches the auth service it is processed and answered within 100ms or so, and the rest of the code continues at a normal speed.
I cannot figure out why this would only occur on Linux and not anything else. I have tried searching the issues list for axios but I can't see anything. I was wondering if it were an OS setup problem, or something I can configure within axios that I've forgotten.
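For what it's worth, one experiment that can help isolate a Linux-only stall like this (this is an assumption on my part, not a confirmed diagnosis) is to force IPv4 lookups via custom agents, in case the delay comes from slow or failing IPv6/AAAA resolution before the request is ever sent. A drop-in variant of the axios call above:

const http = require('http');
const https = require('https');
const axios = require('axios');

// Agent options are forwarded to the socket, so `family: 4` restricts DNS
// resolution to IPv4. Purely an experiment to rule out IPv6 lookup delays.
const httpAgent = new http.Agent({ family: 4, keepAlive: true });
const httpsAgent = new https.Agent({ family: 4, keepAlive: true });

const response = await axios.post(
    configuration.tokenEndpoint,
    payload,
    {
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        responseType: 'json',
        httpAgent,   // used when the endpoint is http://
        httpsAgent   // used when the endpoint is https://
    });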
I have a Node.js WebSocket AWS Lambda endpoint (API Gateway) set up, and it connects and can echo messages back. During the initial connection I save the endpoint and connection_id to a database, and that gets saved just fine. If I open a browser client and connect to the WebSocket endpoint, I can connect successfully and send a message from the browser successfully - I have code to echo the message back, and it works.
Now, in another Node.js Lambda, one that provides a REST endpoint, I have code that loads the connection_id from the database and does this:
// 'connection' is loaded successfully from the DB (I log it and see the right values)
let api = new AWSSDK.ApiGatewayManagementApi({
    apiVersion: '2018-11-29',
    endpoint: connection.endpoint
});

await api.postToConnection({
    ConnectionId: connection.connection_id,
    Data: JSON.stringify({ message: 'Hello World' })
}).promise();
However, the code in the REST endpoint (code above) always gets a 410 error in the postToConnection. I know the connection is still active, since I can connect to it and ping it in a browser client just prior to testing the REST API above.
Is it not possible to post to a websocket connection from a non-websocket lambda?
I think you should use this to send messages from the backend to your clients.
To send a callback message to the client, use:
POST
https://{api-id}.execute-api.us-east-1.amazonaws.com/{stage}/@connections/{connection_id}
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-how-to-call-websocket-api-connections.html
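For reference, here is a rough sketch of posting from a separate Lambda using the management endpoint form above. The field names connection.domain_name and connection.stage are assumptions; they would come from event.requestContext in your $connect handler. The endpoint passed to ApiGatewayManagementApi has to be this HTTPS form including the stage, not the wss:// URL.

const AWS = require('aws-sdk');

// Assumed fields saved during $connect:
//   connection.domain_name -> event.requestContext.domainName
//   connection.stage       -> event.requestContext.stage
const endpoint = `https://${connection.domain_name}/${connection.stage}`;

const api = new AWS.ApiGatewayManagementApi({
    apiVersion: '2018-11-29',
    endpoint
});

await api.postToConnection({
    ConnectionId: connection.connection_id,
    Data: JSON.stringify({ message: 'Hello World' })
}).promise();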
I have an IoT (Arduino) device that sends an HTTP POST request every 6 seconds to my Node.js server.
The data from the device is sent in JSON format.
At some point, the device stops sending data (let's say the GSM module got turned off or maybe the device got turned off for some reason) and I need to determine when that happens and output something like: 'device status: offline'
I use Express to handle the server functionality and I can easily handle any POST/GET request from/to the IoT device, but how do I determine when it's no longer sending POST requests?
I tried to check whether the IP address of the IoT device (it uses a GSM module with a SIM card to send data to my Node.js server) is reachable, but I'm new to Node, and since I use body-parser the request body is automatically parsed to JSON, so I'm not sure how to determine whether the IP is reachable.
setDeviceStatus() is a function that updates the device status (online/offline) in a cache file/table in my database
app.post('/floraData', (req, res, next) => {
    var jsonObj = req.body;
    var deviceStatusTimer = setInterval(function () {
        if (jsonObj == null) {
            setDeviceStatus(0);
            console.log("device offline");
        } else {
            setDeviceStatus(1);
            console.log("device is online");
        }
    }, 10000);
    next();
});
WebSocket would be an easier approach. As @Gonzalo.- pointed out in the comment, you can send a ping request from the server to health-check the IoT device.
However, WebSocket has its limitations: it is hard to scale out if you have a lot of IoT devices. If that is the case, building a stateless application, like what you are doing now, will be more suitable in the long run.
To answer your question: instead of using the IP to health-check the IoT device (multiple devices can share the same IP), the device can send a device_id (e.g. MAC address, custom UUID, etc.) in the POST request. Whenever the server receives the POST request, it should store the device_id, for example in Redis, together with the last_online timestamp.
Depending on your use case, you can then build another application (e.g. Lambda + CloudWatch) to check Redis periodically for any devices whose last_online timestamp is more than X minutes old.
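A minimal sketch of that last_online idea, assuming the device includes a device_id field in its JSON payload and using an in-memory Map in place of Redis. setDeviceStatus is the asker's existing helper, given a hypothetical extra device id argument here:

const express = require('express');
const app = express();
app.use(express.json());

const lastSeen = new Map();            // device_id -> last_online timestamp
const OFFLINE_AFTER_MS = 20 * 1000;    // ~3 missed 6-second reports

app.post('/floraData', (req, res) => {
    const { device_id } = req.body;    // assumed field sent by the Arduino
    lastSeen.set(device_id, Date.now());
    setDeviceStatus(device_id, 1);     // mark online (hypothetical signature)
    res.sendStatus(200);
});

// Periodic sweep: any device not heard from recently is marked offline
setInterval(() => {
    const now = Date.now();
    for (const [id, ts] of lastSeen) {
        if (now - ts > OFFLINE_AFTER_MS) {
            setDeviceStatus(id, 0);    // mark offline
            console.log(`device ${id} offline`);
        }
    }
}, 10000);

In the Redis variant, the Map write becomes a key like device:<id>:last_online, and the sweep can live in the separate Lambda + CloudWatch schedule mentioned above.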
We are currently building a desktop application in node-webkit and we need to send HTTP requests to a remote server. For this we decided to use request, an HTTP wrapper module for Node.
This works fine on all but one of our machines. The code for the download looks a bit like this:
var options = {
    url: url
};

request.post(options, function (error, response, body) {
    if (!error && response.statusCode == 200) {
        cb && cb(null, body);
    }
}).on('error', function (err) {
    // errors are currently swallowed here
}).pipe(writeStream);
So with this, the result we get on my machine is as follows: on our network the proxy server is 172.24.8.14 and my address is 172.24.9.130. Node sent the request through the proxy server, which contacted the target server. The result sent back is a 301, which is expected.
...And on the other machine:
This time Node attempted to send the request directly to the target server. This resulted in the proxy blocking the request completely.
The strange thing is that we do not specify a proxy in our code; however, the requests do seem to go through the proxy... but not on the other machine.
Is there some reason for this? How is Node detecting the proxy and routing the request through it?
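One thing worth ruling out on both machines (an aside, not the eventual answer below): request will pick up a proxy from the HTTP_PROXY / HTTPS_PROXY / NO_PROXY environment variables, and it also accepts an explicit proxy option, so you can pin the behaviour instead of relying on whatever the environment provides:

var request = require('request');

// Explicitly route through the corporate proxy instead of relying on
// environment variables. The port here is an assumption; use your proxy's.
request.post({
    url: url,
    proxy: 'http://172.24.8.14:8080'
}, function (error, response, body) {
    // ...
});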
The reason for this turned out to be that our network was using an NTLM proxy, which required the ISA client to be running on our machines, but it was not running on the other machine. Installing the ISA client on that machine allowed traffic to go through the proxy as normal.
I am trying to make a simple GET request in Node.js, and the request takes 3-5 seconds to resolve, whereas the same request in a browser or REST client takes ~400ms. The server to which I am making the request is controlled by our server team, but before I bother them with request/resource monitoring I wanted to ping the community to see if there are any "hey, check this setting first" kinds of tips you could offer.
The code essentially forwards incoming requests to our server:
const http = require('http');

http.createServer(function (req, res) {
    http.request({
        host: "our.private.host",
        port: 8080,
        path: req.url,
        headers: req.headers
    }, function () {
        // upstream response body is ignored; we just reply once its headers arrive
        res.end("DONE: " + Date.now());
    }).end();
}).listen(8001);
I open my browser and type in the following URL:
http://localhost:8001/path/to/some/resource
... which gets forwarded on to the final destination:
http://our.private.host:8080/path/to/some/resource
Everything is working fine and I am getting the response I want, but it takes 3-5 seconds to resolve. If I paste the final destination URL directly in the browser or a REST client, it resolves quickly. I don't know much about our server, unfortunately - but I am looking more for node tips at this point. Note, the request pool isn't maxed out as I am only making 1 request at a time from my local machine.
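As a side note (not offered as the cause of the delay), the handler above never relays or even drains the upstream response body; a more conventional pass-through shape streams it back to the caller, which also frees the pooled socket promptly:

const http = require('http');

http.createServer(function (req, res) {
    const upstream = http.request({
        host: "our.private.host",
        port: 8080,
        path: req.url,
        method: req.method,
        headers: req.headers
    }, function (upstreamRes) {
        res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(res);   // stream the body back and drain the socket
    });

    req.pipe(upstream);          // forward any request body as well
}).listen(8001);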
The first step is to gather some info on where the request is spending its time by looking at the exact timing of the network activity on your Node server. You can do that with a tool that watches all network activity. I personally use Fiddler, but I know that WireShark is popular too.
Once that tool is installed and active, you can see how long each of these steps in your request takes:
DNS request to resolve the target IP address
Time to connect to the target server
Time to send the HTTP request
Time to receive the HTTP response
Time to send the response back to the original requester
Understanding which of these operations is much longer than expected will give you an idea where to look further for the problem.
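If installing a packet sniffer isn't convenient, a rough in-process breakdown is also possible (a sketch of the same idea, not a replacement for Fiddler/WireShark) by listening to the socket events Node exposes on the outgoing request:

const http = require('http');

const start = Date.now();
const mark = (label) => console.log(`${label}: +${Date.now() - start}ms`);

const probe = http.request({
    host: 'our.private.host',          // same target as above
    port: 8080,
    path: '/path/to/some/resource'
}, (res) => {
    mark('response headers received');
    res.on('end', () => mark('response body fully received'));
    res.resume();                      // drain the body so 'end' fires
});

probe.on('socket', (socket) => {
    socket.on('lookup', () => mark('DNS resolved'));
    socket.on('connect', () => mark('TCP connected'));
});

probe.end(() => mark('request written'));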
FYI, there are pre-built tools such as nginx that can do this type of proxying by just setting some values in a configuration file without any custom coding.