I have a route that takes about a minute to run before a response is sent back (this is on purpose). The problem I ran into was that when a user logged out after making the request (via Ajax) but before the response was returned, the user's session would be re-initialized when the response was finally sent, and stored back into Redis. This was leaving my Redis instance with a lot of stale entries that weren't useful anymore.
My solution was to listen for the req.close event, and when that happened, set a variable that prevented any response from being sent back at all (basically don't call next or res.end). This fixed my problem, but I'm wondering if it is bad to have an unresolved request. I don't actually care what the response is when the request is cancelled, since the user has navigated away from the page that made the request.
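A minimal sketch of what I'm doing, assuming Express (doLongWork is just a stand-in for the slow operation):

app.get('/slow-report', function(req, res, next) {
    var clientGone = false;

    // fires if the connection is closed before a response is sent
    req.on('close', function() {
        clientGone = true;
    });

    doLongWork().then(function(result) {
        if (clientGone) return;   // user is gone: send nothing, don't touch the session
        res.json(result);
    }).catch(function(err) {
        if (clientGone) return;
        next(err);
    });
});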
I have a backend application with insert/update endpoints. They mostly go like this:
Check and validate the input
Submit the input to db
Return status 200/201 with the Location header set and a status message in the body
Would it be OK to perform the 2nd step without await so that the response can be returned faster? The returned status would be 202, which means the request is still being processed. The chance of the 2nd step throwing an error is extremely low, and if it does, there is a bug somewhere that doesn't concern the end user anyway, hence no need to return such an error to the user.
Would this work? Even if this works, would it be a good practice?
It's OK to not wait for the database call to succeed before doing other things (like sending back a response) if that's how you want your app to work, you're sure that's the right design, and you've thought through what happens (from the end user's point of view) if that database call you didn't wait for fails.
BUT, it's not OK to ignore a rejected promise that might come back from the database call, because unhandled promise rejections should not happen in a nodejs server. So, if you're not going to use await, then you probably need a .catch() to catch and at least log the error.
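As a minimal sketch (assuming Express; validate and saveToDb are just placeholders for your own validation and database code), that looks something like this:

app.post('/items', function(req, res) {
    // 1. check and validate the input (placeholder)
    var input = validate(req.body);
    if (!input.ok) {
        return res.status(400).json({error: input.message});
    }

    // 2. submit to the db without awaiting it, but don't leave the rejection unhandled
    saveToDb(input.value).catch(function(err) {
        console.error("background insert failed", err);
    });

    // 3. respond immediately with 202 Accepted
    res.status(202).json({status: "accepted"});
});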
Would this work?
Yes. There's nothing in the language or in nodejs stopping you from sending a response before the database call has completed. It's more about whether that's the appropriate way to design your response handler.
Even if this works, would it be a good practice?
It's not a widely recommended practice because the usual sequence is this:
Check and validate the input
Submit the input to db
On success, return status 200/201 with the Location header set and a status message in the body
On error, return an appropriate error status (based on the type of error) and a status message.
I'm not saying that you can never deviate from this sequence, but doing so would generally be the exception, not the rule.
Keep in mind that data validation does not necessarily catch everything that might cause the database to generate an error. For example, you could be creating a new user on your site and the email address is perfectly valid according to your validation, but then the database rejects it because it's not unique (there's already a user with that email address).
And, the database itself could be having problems, causing an error, and the user should be informed that the transaction they were trying to submit to the database did not happen, even though the error was not directly caused by the user.
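For comparison, here's a sketch of the usual sequence (assuming Express, async/await, and a MongoDB-style duplicate key error code of 11000; validate and createUser are placeholders):

app.post('/users', async function(req, res) {
    var input = validate(req.body);
    if (!input.ok) {
        return res.status(400).json({error: input.message});
    }
    try {
        var user = await createUser(input.value);   // awaited this time
        res.status(201).location('/users/' + user.id).json({status: "created"});
    } catch (err) {
        if (err.code === 11000) {
            // duplicate key, e.g. email address already in use
            return res.status(409).json({error: "email already in use"});
        }
        res.status(500).json({error: "database error"});
    }
});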
I have a route where a user can post information about an external webpage they want to save, like the page title, URL, and image URL for the page, similar to Pinterest.
When my server gets this information (title, URL, and image URL), it makes a GET request to the external site for the image. When the image data comes back, the server crops it, stores it on AWS, saves the link in a new Mongo document along with the other web page info, and sends the response with this document back to the client.
As my production web app grows, these image requests sometimes cause an H13 error on Heroku. Even with a 6-second timeout set on the image request, I'm still getting a "connection closed without response" error in production, maybe because other parts of the request are slow too, or because it's a particularly busy moment on the site.
I feel like fetching the image during the request and waiting for it to load before sending a response to the client is maybe the wrong way to go about this. Is there a way to handle this request that will work better as my web app scales?
Not sure if this question belongs on Stack Overflow because maybe it's opinion-based, but I'm not sure whether there is a standard way to go about this and I'm just a newb.
I have a web app that needs to process a big file before it can run. This can take around 5 seconds, so in the meantime I want to show the user that the file is processing, or better yet, send information about how much time is left. This is the first page, and I cannot call res.render twice, so how do I do this?
var fileinarray = require('./readfile');
app.get('/', function(req, res){
    var dataready = fileinarray;
    res.render('index', {data: dataready});
});
So how can I do this? I read a little about socket.io, but I don't know how to use it in my case.
Thank you for your help
If you are loading a page (which it looks like) with this request, then you can't show progress with the way you have it structured because the browser is waiting to download your page and thus can't show anything in that window until you render the page for it. But, you want to show progress before the page gets there.
So, you have to restructure the way things work. First off, in order to show progress, the browser has to have a working page that you can show progress in. That means the browser can't be waiting for the page to arrive. The most straightforward way I can think of to do this is to render a shell page initially that your server returns immediately (no waiting for the original content). In that shell page, you have code to show progress and you initiate a request to your server to fetch the long running data. There are several ways this could be done using Ajax and webSockets. I'll outline some combinations:
All with Ajax
Browser requests the / page and your server renders the shell page
Meanwhile, after rendering the page, the server starts the long running process of fetching the desired data.
Rendered inside the shell page is a Javascript variable that contains a transaction ID
Meanwhile, client-side Javascript can regularly send Ajax requests to the server to check on the progress of the given transaction id (say every 10 seconds). The server returns whatever useful progress info and client-side Javascript displays the progress info.
At some point the server is done with the data and one of the regular Ajax requests that was checking on progress returns with all the data. The client-side Javascript then inserts this data into the current page and the operation is complete (see the sketch after this list).
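Here's a compressed sketch of that Ajax flow (assuming Express on the server and fetch in the browser; the jobs store, buildData, showProgress, and insertData are all placeholders):

// server
var crypto = require('crypto');
var jobs = {};    // in-memory progress store, keyed by transaction id

app.get('/', function(req, res) {
    var id = crypto.randomBytes(8).toString('hex');
    jobs[id] = {done: false, progress: 0, data: null};
    res.render('shell', {transactionId: id});    // shell page goes out immediately

    // long running work continues after the page has been sent
    buildData(function(progress, data) {
        jobs[id].progress = progress;
        if (data) { jobs[id].done = true; jobs[id].data = data; }
    });
});

app.get('/progress/:id', function(req, res) {
    res.json(jobs[req.params.id] || {error: "unknown id"});
});

// client (inside the shell page; transactionId was rendered into the page)
var timer = setInterval(function() {
    fetch('/progress/' + transactionId).then(function(r) { return r.json(); }).then(function(job) {
        showProgress(job.progress);
        if (job.done) {
            clearInterval(timer);
            insertData(job.data);
        }
    });
}, 10000);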
Mostly with WebSocket
Browser requests the / page and your server renders the shell page
Client-side code inside the shell page makes a webSocket or socket.io connection to the server and sends an initial request for the data that belongs with this page.
The server receives the webSocket connection and the initial request for the data and starts the long running process of fetching the data.
As the server has meaningful progress information, it sends that to the client over the webSocket/socket.io connection and when the client receives that information, it renders appropriate progress in the page.
At some point the server is done fetching the data and sends the client a message containing the final data. The client-side Javascript receives this data and inserts it into the page.
The client can then close the webSocket/socket.io connection (a socket.io sketch follows this list).
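And a minimal socket.io sketch of the same idea (buildData, showProgress, and insertData are placeholders, and the event names are made up for the example):

// server (server is the http.Server your Express app is listening on)
var io = require('socket.io')(server);

io.on('connection', function(socket) {
    socket.on('getData', function() {
        buildData(
            function(percent) { socket.emit('progress', {percent: percent}); },  // progress updates
            function(data) { socket.emit('done', data); }                        // final data
        );
    });
});

// client (inside the shell page, with the socket.io client script loaded)
var socket = io();
socket.emit('getData');
socket.on('progress', function(msg) { showProgress(msg.percent); });
socket.on('done', function(data) { insertData(data); socket.close(); });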
I'm making a POST request to a nodejs server that contains just a file name in the body. Upon receiving the request, node uploads the corresponding local file to an Amazon S3 bucket. The files can sometimes take a while to upload, and sometimes the request times out. How can I lengthen or prevent the timeout from happening?
You can send the response back to the browser while you still continue to work on the upload. You don't have to wait until the upload is done before finishing the response. For example, you can do:
res.end("done"); // or put whatever HTML you want in the response
And, that will end the post response and the browser will be happy. Meanwhile, your server continues to finish the upload.
The upside to this is that the browser and user go on their merry way to the next thing they want to do without waiting for the server to finish its business. The only downside I'm aware of is that if the upload fails, you don't have a simple means of informing the browser that it failed (because the response is already done).
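A sketch of that (assuming Express; uploadToS3 is a placeholder that returns a promise):

app.post('/upload', function(req, res) {
    var fileName = req.body.fileName;

    // respond right away so the browser isn't left waiting
    res.status(202).send("upload started");

    // keep working after the response; log failures since we can no longer report them
    uploadToS3(fileName).catch(function(err) {
        console.error("S3 upload failed for", fileName, err);
    });
});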
The timeout itself is controlled by the browser. It is not something that you can directly control from the server. If you are continually sending data to the browser, then it will likely not time out. So, as long as the upload is continuing, you could do something like this:
res.write(" ");
every 15 seconds or so as sort of a keep-alive that keeps the browser happy and keeps it from timing out. You'd probably set an interval timer for 15 seconds to do the small send of data and then when the upload finishes you would stop the timer.
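A sketch of that keep-alive approach (uploadToS3 is the same placeholder as above):

app.post('/upload', function(req, res) {
    // send a space every 15 seconds so the connection never sits idle
    var keepAlive = setInterval(function() {
        res.write(" ");
    }, 15000);

    uploadToS3(req.body.fileName).then(function() {
        clearInterval(keepAlive);
        res.end("done");
    }).catch(function(err) {
        clearInterval(keepAlive);
        res.end("failed: " + err.message);
    });
});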
If you control the Javascript in the browser that makes the post, then you can set the timeout value on an Ajax call from the Javascript in the browser. When you create an XMLHttpRequest object in the browser, it has a property called timeout which you can set for asynchronous ajax calls. See MDN for more info.
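For example (the /upload URL and payload here are just placeholders):

var xhr = new XMLHttpRequest();
xhr.open("POST", "/upload", true);          // asynchronous request
xhr.timeout = 5 * 60 * 1000;                // allow up to 5 minutes before the browser gives up
xhr.ontimeout = function() { console.error("upload request timed out"); };
xhr.onload = function() { console.log("server said:", xhr.responseText); };
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(JSON.stringify({fileName: "photo.jpg"}));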
I get this question a lot =/
But I only know how to answer it at a very high level.
From the minute a user enters a URL and hits enter, what happens on the client and server side, and how do requests/responses work? How does the server interact with CGI/interpreters?
It would be helpful too if you could direct me to a URL that has this information in detail, or if you can answer it.
When I describe this to people I always feel like they're looking for specifics and I'm not giving enough detail.
Thanks!
Client initiates communication (usually an HTTP GET request)
Server receives the request headers and parses the URL contained within.
Server does a lookup to see whether the URL matches anything in a local folder on disk. If the web server handles virtual servers, like Microsoft IIS does, it will determine which folder to search after retrieving the "www.domain.com" part from the request's Host header.
If the web document (HTML file) is found, the server sends it back as the response along with an HTTP status code (e.g. 200 meaning "found, this request went well", whereas 404 means "didn't find that file").
The client (browser) receives the response and can now display it as it wants. If it contains a rendering engine, it will parse the markup (HTML tags or whatever the language is) and display it accordingly.
This is also called "stateless", as the server closes communication with the client after the client has received everything from the response stream.
Therefore the server cannot know whether the client is still connected or whether it's coming back later. Many servers do provide a session object, using cookies or similar, to track whether it's the same client sending the next request, and if so, to allow more "intelligent" server responses, such as seeking, transactions, and logins.
How does the internet work?
HTTP Made Really Easy
The Canonical Document: RFC 2616
The client sends request headers to the server (finds the IP via DNS).
The server software (e.g. Apache) calls CGI if it needs to and prepares the response.
It sends headers back as well as the content.
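If you want to see the request/response cycle concretely, here's a small node.js sketch that plays the client's role and prints what comes back (example.com is just a stand-in URL):

var https = require('https');

// the client opens a connection and sends a GET request with headers
https.get('https://example.com/', function(res) {
    // the server answers with a status code and response headers
    console.log('status:', res.statusCode);                      // e.g. 200 or 404
    console.log('content-type:', res.headers['content-type']);

    // the body (the HTML document) streams back, then the exchange is over
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() { console.log('received', body.length, 'bytes of HTML'); });
});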