Request Timeout while uploading image - node.js

I am developing a web application with a Go web server and a React frontend served by Node.js. I have two issues when uploading large images (currently testing with a 2.9 MB file). First, the browser reports a request timeout within 10 seconds, even though the upload completes and the image is saved to the database. Second, the request is duplicated, so it is saved to the database twice. I have searched Stack Overflow, but none of the suggestions seem to work.
First Option
Here is the code using an ajax call, i.e. fetch from isomorphic-fetch, following the suggestion to implement a timeout wrapper at https://github.com/github/fetch/issues/175:
static addEvent(events) {
  let config = {
    method: 'POST',
    body: events
  };
  function timeout(ms, promise) {
    return new Promise(function (resolve, reject) {
      setTimeout(function () {
        reject(new Error("timeout"));
      }, ms);
      promise.then(resolve, reject);
    });
  }
  return timeout(120000, fetch(`${SERVER_HOSTNAME}:${SERVER_PORT}/event`, config))
    .then(function (response) {
      if (response.status >= 400) {
        return {
          "error": "Bad Response from Server"
        };
      } else if (response.ok) {
        browserHistory.push({
          pathname: '/events'
        });
      }
    });
}
The request timeout still occurs within 10 seconds.
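For reference, a variant of this wrapper that also cancels the in-flight request via AbortController (supported by modern fetch implementations; the function name and defaults here are illustrative, not from the original code):

```javascript
// Reject AND abort the underlying request when the deadline passes.
// Assumes a fetch implementation that honors the `signal` option.
function fetchWithTimeout(url, options = {}, ms = 120000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return fetch(url, { ...options, signal: controller.signal })
    .finally(() => clearTimeout(timer)); // avoid a dangling timer
}
```

Unlike the plain Promise wrapper, this stops the upload itself instead of leaving it running after the wrapper rejects.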
Second Option
I tried a different node module for the ajax call, axios, since it has a timeout option, but this didn't fix the timeout issue either.
Third Option
I tried setting read and write timeouts on the server side, similar to https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/:
server := &http.Server{
    Addr:         ":9292",
    Handler:      router,
    ReadTimeout:  180 * time.Second,
    WriteTimeout: 180 * time.Second,
}
Again I get a request timeout on the browser side within 10 seconds.
What should I do to fix this, or can you point out where I made a mistake?

Related

Firebase Functions timeout when querying AWS RDS PostgreSQL database

I am trying to query an Amazon RDS database from a Firebase Node.js Cloud Function. I built the query and can successfully run the code locally using firebase functions:shell. However, when I deploy the function and call it from client-side JS on my site, I receive errors on both the client and server side.
Client-side:
Error: internal
Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Fetch API cannot load https://us-central1-*****.cloudfunctions.net/query due to access control checks.
Failed to load resource: Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Server-side:
Function execution took 60004 ms, finished with status: 'timeout'
I believe the issue has two parts:
CORS
pool.query() is async
I have looked at multiple questions for a CORS solution, here and here for example, but none of the solutions have worked for me. In regards to pool.query() being async, I believe I am handling it correctly; however, neither the result nor an error is printed to the server's logs.
Below is all the relevant code from my project.
Client-side:
var queryRDS = firebase.functions().httpsCallable('query');
queryRDS({
  query: document.getElementById("search-input").value
})
  .then(function (result) {
    if (result) {
      console.log(result);
    }
  })
  .catch(function (error) {
    console.log(error);
  });
Server-side:
const functions = require('firebase-functions');
const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  host: '*****.*****.us-west-2.rds.amazonaws.com',
  database: '*****',
  password: '*****',
  port: 5432
});

exports.query = functions.https.onCall((data, context) => {
  // This is not my real query, I just changed it for the
  // simplicity of this question
  var query = "Select * FROM table";
  pool.query(query)
    .then(result_set => {
      console.log(result_set);
      return result_set;
    }).catch(err => {
      console.log(err);
      return err;
    });
});
I know everything works up until pool.query(); based on my logs it seems that the .then() and the .catch() are never reached, and the returns never reach the client side.
Update:
I increased the timeout of the Firebase Functions from 60s to 120s and changed my server function code by adding a return statement before pool.query():
return pool.query(query)
  .then(result_set => {
    console.log(result_set);
    return result_set;
  }).catch(err => {
    console.log("Failed to execute query: " + err);
    return err;
  });
I now get an error message reading Failed to execute query: Error: connect ETIMEDOUT **.***.***.***:5432 with the IP address being my AWS RDS database. It seems this might have been the underlying problem all along, but I am not sure why the RDS is giving me a timeout.
CORS should be handled automatically by the onCall handler. The error message about CORS is likely inaccurate, and a result of the function timing out, as the server-side error shows.
That being said, according to the Cloud Functions documentation on function timeouts, the default timeout for Cloud Functions is 60 seconds, which corresponds to the ~60000 ms in your error message. This means that 1 minute is not enough for your function to execute such a query, which makes sense considering that the function is accessing an external provider, the Amazon RDS database.
To fix it, you will have to redeploy your function with a flag that sets the function execution timeout, as follows:
gcloud functions deploy FUNCTION_NAME --timeout=TIMEOUT
The value of TIMEOUT can be anything up to 540, the maximum number of seconds Cloud Functions allows before timing out (9 minutes).
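Alternatively, the timeout can be raised in code when defining the callable, via runWith() (a sketch assuming the firebase-functions v1 API; the query body is elided):

```javascript
const functions = require('firebase-functions');

// timeoutSeconds can be set up to the 540-second maximum
exports.query = functions
  .runWith({ timeoutSeconds: 540 })
  .https.onCall((data, context) => {
    // ... run the pool.query from the question here ...
  });
```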
NOTE: This could also be mitigated by deploying your function to the location closest to where your Amazon RDS database is hosted. You can check this link for the locations available for Cloud Functions, and you can use --region=REGION on the deploy command to specify the region.

Angular 5 weird behavior on https calls

We are using Angular 5 and Node.js for our application, and we are facing weird behavior in production. We send a request using HttpClient from Angular, and when the response is delayed, the request is retried every 2 minutes as a default behavior. Once the request gets a response, the retrying stops. We have a subscribe method nested inside another subscribe.
this._inspectionPageService.transferCheck(this.org)
  .subscribe(status => {
    if (status === true) {
      this._inspectionPageService.saveReceipts(this.genId, this.receiptData)
        .subscribe(results => {
          this.receiptInfo = results;
        });
    } else {
      this._sharedServices.throwMessage('error', 'Failed.Please try Again');
    }
  });
Here the saveReceipts call takes more than 2 minutes to complete.
We are not using any retry method, and our interceptors do not have any retry logic either. This occurs strangely.
When we checked the log, we found multiple entries two minutes apart. We tried to replicate this issue in a test instance but were unable to reproduce it.
Does anybody have an idea about this?

axios get request Error: Request failed with status code 504

Here is my code that makes an HTTP GET request to an API endpoint from one of the services running on Amazon Fargate. The API is powered by Amazon API Gateway and Lambda. This is a private API used within the VPC, and I have also set up the API Gateway VPC endpoint to facilitate this. I have received this error only once; all subsequent calls to the API were successful.
My suspicion is that the Lambda was not warm, and that resulted in a timeout. I am going to try setting a timeout in the axios code. Any suggestions are welcome.
async getItems(): Promise<any> {
  try {
    let url = `https://vpce-[id].execute-api.ap-southeast-2.vpce.amazonaws.com/prod/items`;
    const response = await axios.get(url, {
      headers: {
        'Authorization': `Bearer ${token}`,
        'x-apigw-api-id': `[api-id]`
      }
    });
    return response.data;
  } catch (error) {
    console.log(error);
    throw error;
  }
}
Turns out my Lambda was timing out after the configured 30 seconds. I could increase the Lambda timeout, but the maximum configurable timeout for API Gateway is 30 seconds.
It has only happened once, and I believe it's because of a Lambda cold start. As a workaround, I am taking the retry approach: the API request will be retried 3 times.
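The retry approach can be sketched with a small helper (the helper name, retry count, and backoff delays are illustrative, not part of the original code):

```javascript
// Retry an async operation up to `retries` times, with a linear
// backoff between attempts; rethrow the last error if all fail.
async function withRetries(fn, retries = 3, delayMs = 1000) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise(resolve => setTimeout(resolve, delayMs * attempt));
      }
    }
  }
  throw lastError;
}

// Usage with the getItems call above (a sketch):
// const response = await withRetries(() => axios.get(url, { timeout: 5000 }));
```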

Getting net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 when consuming event-stream using EventSource in ReactJs

I have a very simple node service exposing an endpoint intended to use a Server-Sent Events (SSE) connection, and a very basic ReactJs client consuming it via EventSource.onmessage.
Firstly, when I set a breakpoint in updateAmountState (Chrome DevTools), I never see it hit.
Secondly, I am getting net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 (OK). According to https://github.com/aspnet/KestrelHttpServer/issues/1858, "ERR_INCOMPLETE_CHUNKED_ENCODING in chrome usually means that an uncaught exception was thrown from the application in the middle of writing to the response body". I then checked the server side for errors. I set breakpoints in a few places in server.js, in both setTimeout(() => {... blocks, and I see them run periodically, whereas I would expect each line to run only once. So it seems the frontend is permanently retrying the call to the backend and hitting some error.
The whole application, both the ReactJs frontend and the NodeJs server, can be found at https://github.com/jimisdrpc/hello-pocker-coins.
backend:
const http = require("http");

http
  .createServer((request, response) => {
    console.log("Requested url: " + request.url);
    if (request.url.toLowerCase() === "/coins") {
      response.writeHead(200, {
        Connection: "keep-alive",
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache"
      });
      setTimeout(() => {
        response.write('data: {"player": "Player1", "amount": "90"}');
        response.write("\n\n");
      }, 3000);
      setTimeout(() => {
        response.write('data: {"player": "Player2", "amount": "95"}');
        response.write("\n\n");
      }, 6000);
    } else {
      response.writeHead(404);
      response.end();
    }
  })
  .listen(5000, () => {
    console.log("Server running at http://127.0.0.1:5000/");
  });
frontend:
import React, { Component } from "react";
import ReactTable from "react-table";
import "react-table/react-table.css";
import { getInitialCoinsData } from "./DataProvider";

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      data: getInitialCoinsData()
    };
    this.columns = [
      {
        Header: "Player",
        accessor: "player"
      },
      {
        Header: "Amount",
        accessor: "amount"
      }
    ];
    this.eventSource = new EventSource("coins");
  }

  componentDidMount() {
    this.eventSource.onmessage = e =>
      this.updateAmountState(JSON.parse(e.data));
  }

  updateAmountState(amountState) {
    let newData = this.state.data.map(item => {
      if (item.amount === amountState.amount) {
        item.state = amountState.state;
      }
      return item;
    });
    this.setState(Object.assign({}, { data: newData }));
  }

  render() {
    return (
      <div className="App">
        <ReactTable data={this.state.data} columns={this.columns} />
      </div>
    );
  }
}

export default App;
The exception I can see in Chrome:
So my straight question is: why am I getting ERR_INCOMPLETE_CHUNKED_ENCODING 200? Am I missing something in the backend or in the frontend?
Some tips that may help:
Why do I see a websocket in pending status, since I am not using websockets at all? I know the basic difference (a websocket is two-way, from front to back and from back to front, and is a different protocol, while SSE runs over HTTP and is only back to front), but it is not my intention to use websockets at all. (see the blue line in the screenshot below)
Why do I see eventsource requests with 0 bytes and 236 bytes, both failed? I understand that eventsource is exactly what I am trying to use when I wrote "this.eventSource = new EventSource("coins");". (see the red line in the screenshot below)
Strangely, at least to me, sometimes when I kill the server I can see the updateAmountState method invoked.
If I call localhost:5000/coins in the browser, I can see the server answer the request (both JSON strings). Can I assume that I coded the server properly and the error is exclusively in the frontend?
Here are the answers to your questions.
The websocket you see running is not related to the code you have posted here. It may be related to another NPM package that you are using in your app. You might be able to figure out where it is coming from by looking at the headers in the network request.
The most likely cause of the eventsource requests failing is that they are timing out. The Chrome browser will kill an inactive stream after two minutes of inactivity. If you want to keep it alive, you need to add some code to send something from the server to the browser at least once every two minutes. Just to be safe, it is probably best to send something every minute. An example of what you need is below. It should do what you need if you add it after your second setTimeout in your server code.
const intervalId = setInterval(() => {
  response.write(`data: keep connection alive\n\n`);
}, 60 * 1000);

request.on('close', () => {
  // Make sure to clean up after yourself when the connection is closed
  clearInterval(intervalId);
});
I'm not sure why you are sometimes seeing the updateAmountState method being invoked. If you are not seeing it consistently, it's probably not a major concern, but it might help to clean up the setTimeouts in the case that the server stops before they complete. You can do this by declaring them as variables and then passing the variable names to a clearTimeout in a close event handler similar to what I did for the interval in the example in #2 above.
Your code is set up properly, and the error you are seeing is due to the Chrome browser timeouts. Use something like the code in answer #2 above if you want to stop the errors from happening.
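The setTimeout cleanup described in #3 might look like this, wrapped in a helper for clarity (the function name is illustrative; request and response are Node's http handler objects):

```javascript
// Schedule the two SSE writes from the original server, but keep the
// timer handles so they can be cancelled if the client disconnects.
function streamCoins(request, response) {
  const timer1 = setTimeout(() => {
    response.write('data: {"player": "Player1", "amount": "90"}\n\n');
  }, 3000);
  const timer2 = setTimeout(() => {
    response.write('data: {"player": "Player2", "amount": "95"}\n\n');
  }, 6000);

  request.on("close", () => {
    // Stop the pending writes once the connection is gone
    clearTimeout(timer1);
    clearTimeout(timer2);
  });
}
```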
I'm not a Node.js expert myself, but it looks like you are missing "'Connection': 'keep-alive'" and a "\n" after that, i.e.:
response.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  'Connection': 'keep-alive'
});
response.write('\n');
see https://jasonbutz.info/2018/08/server-sent-events-with-node/. Hope it works!

Sometimes not receiving success or error response when saving Backbone model

When saving a model to a Node.js endpoint, I'm not getting a success or error response every time, particularly on the first save and then sometimes on other attempts. The Node.js server sends a success response every time, and if I use a Chrome REST client it works every time.
var mailchimpModel = new MailchimpModel();
var data = {
  "email": $('#email').val()
};

mailchimpModel.save(data, {
  success: function (model, response) {
    console.log("success");
    console.log(response);
  },
  error: function (model, response) {
    console.log("error");
  }
});
What I have found is that the Node.js server receives 2 requests when it's failing:
OPTIONS /api/mailchimp 200
POST /api/mailchimp 200
and I only get a success response if I submit the request again straight afterwards.
It's possible your model is failing client-side validation. To check, try:
console.log(mailchimpModel.save(data));
If the value is false, your model is failing client-side validation (usually defined in a validate function on the model). You can check the errors with:
console.log(mailchimpModel.validationError);
OK, I found that I needed to handle the OPTIONS method on the server; using the solution in this post worked for me:
https://stackoverflow.com/a/13148080/10644