I'd like to know how NodeJS processes multiple GET requests from different users/browsers when an event is emitted to return the results. I'd like to think of it as each time a user executes the GET request, a new session is started for that user.
For example, if I have this GET request:
var tester = require('./tester-class');

app.get('/triggerEv', async function(req, res, next) {
  // Start the data processing
  tester.startProcessing('some-data');

  // tester has event emitters that are triggered when processing is complete (success or fail)
  tester.on('success', function(data) {
    return res.send('success');
  });

  tester.on('fail', function(data) {
    return res.send('fail');
  });
});
What I'm thinking is that if I open a browser and run this GET request, passing some-data and starting processing, and then open another browser to execute this GET request with different data (to simulate multiple users accessing it at the same time), it will overwrite the previous startProcessing call and rerun it with the new data.
So if multiple users execute this GET request at the same time, would it handle each user separately, as if they were different and independent sessions, and then return when there's a response for each user's session? Or will it do as I mentioned above (in which case I will have to somehow manage separate sessions for each user that triggers this GET request)?
I want to make it so that each user who executes this GET request doesn't interfere with other users who also execute this GET request at the same time, and that the correct response is returned for each user based on their own data sent to the startProcessing function.
Thanks, I hope I'm making sense. Will clarify if not.
If you're sharing the global tester object among different requests, then the 2nd request will interfere with the first request. Since all incoming requests use the same global environment in node.js, the usual model is that any request that may be "in-flight" for a while needs to create its own resources and keep them for itself. Then, if some other request arrives while the first one is still waiting for something to complete, it will also create its own resources and the two will not conflict.
The server environment does not have a concept of "sessions" in the way you're using the term. There is no separate server-session or server state that each request lives in other than the request and response objects that are created for each incoming request. This is not like PHP - there is not a whole new interpreter state for each request.
I want to make it so that each user who executes this GET request doesn't interfere with other users who also execute this GET request at the same time, and that the correct response is returned for each user based on their own data sent to the startProcessing function.
Then, don't share any resources between requests and don't use any objects that have global state. I don't know what your tester is, but one way to keep multiple requests separate from each other is to just make a new tester object for each request so they can each use it to their heart's content without any conflict.
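For illustration, here is a minimal sketch of that per-request approach, assuming tester-class can export a constructor (or a factory function) rather than a shared singleton; the names below are placeholders, not the actual module's API:
var Tester = require('./tester-class');

app.get('/triggerEv', async function(req, res, next) {
  var tester = new Tester(); // a fresh instance per request, so no shared state between users

  tester.on('success', function(data) {
    res.send('success');
  });

  tester.on('fail', function(data) {
    res.send('fail');
  });

  // Listeners are attached before processing starts so no event can be missed
  tester.startProcessing('some-data');
});
Attaching the listeners before calling startProcessing also guards against the case where the tester emits its result synchronously.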
Related
My nestjs + React app has a Google oauth flow loosely based on this process. One thing that the Google library tries to help with is to take a refresh_token (that you've likely stored in your app's db) and use it to automatically retrieve a new access_token if the old one is expired. When it does this refresh, it emits a 'tokens' signal, and in my code I need something like
oauth2Client.on('tokens', async (tokens) => {
  if (tokens.refresh_token) {
    // store the refresh_token in your secure persistent database
    console.log(tokens.refresh_token);
  }
  console.log(tokens.access_token);
});
It appears that the Google library intentionally does not let you proactively make a call to do the token refresh. The refresh happens automatically when you've set a refresh_token on the oauth2 client object and use that client object to next make any Google API call where the previous access_token has expired.
What I'm finding tricky is that when the above listener runs, I ideally would be able to get the 'current user' whose initial client session is what led to this server code path running. I can certainly create a chain of events like
1. User is logged into my app on the client
2. User does something on the frontend
3. A call to the server is made that has @UseGuards(AuthGuard()) and where I can get the user from the @Req
4. The above controller calls some additional functions, one of which can use the oauth2 client to make any random Google API call
5. If the random Google API call caused a token refresh, it would run the listener quoted above.
...but then, when #5 happens, is there any way to get the user detected in #3? Perhaps put another way, is there any way to 'inject' more info when that signal is emitted (given it's emitted in the Google library, not my code), or is there a way for the listener to pull the user from some kind of context?
(In case it matters, the emitter looks like this)
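Leaving the library's own emitter aside, one pattern worth sketching here echoes the earlier answer's advice about per-request resources: build the oauth2 client per user/request so that the 'tokens' listener closes over the user it belongs to. This is my own rough sketch under that assumption; saveRefreshToken, user.refreshToken and the client credential constants are hypothetical names, not from the post:
const { google } = require('googleapis');

function makeClientForUser(user) {
  const client = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URI);
  client.setCredentials({ refresh_token: user.refreshToken }); // the token previously stored for this user

  client.on('tokens', async (tokens) => {
    if (tokens.refresh_token) {
      // the closure already knows which user this refresh belongs to
      await saveRefreshToken(user.id, tokens.refresh_token);
    }
  });

  return client;
}
Because the client is created inside the request's own code path, the listener never has to look the user up from any shared context.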
What I want to do
I'm trying to intercept a third party website's fetch events and modify its request body in a Chrome extension. Modifying the request body is not allowed by the chrome.webRequest.onBeforeRequest event handler. But it looks like regular service workers do have the ability to listen for and modify fetch events and manually respond to the request using my own response, which means I should be able to intercept the request, modify the body, send the modified request to the API, and then respond to the original request with my modified request's response.
The problem
It looks like neither of these event handlers ever gets triggered, despite plenty of fetch events being fired by the website, as I can see in the Network panel.
// background.js
self.onfetch = (event) => console.log(event.request); // never shows
// or
self.addEventListener("fetch", (event) => {
console.log(event.request); // never shows
});
I can verify that the service worker is running by seeing other console.logs appear in the service worker console, both top-level logs and logs triggered by the "install" event:
// background.js
console.log(self); // works
self.addEventListener("install", (event) => {
console.log(event); // works
});
Hypothesis
Do the fetch event handlers not get triggered because extension service workers are not allowed access to them for security reasons? That would make sense; I just haven't seen this documented anywhere explicitly, so it would be good to know whether this is indeed a platform limitation or whether I'm doing something wrong.
Alternate solutions?
If this is indeed a limitation of the extensions platform, is there any other way I can use a Chrome extension to modify request bodies on a third party website?
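One alternative that is sometimes used (my own suggestion, not something from the question) is to inject a script into the page's main world, e.g. from a content script, and wrap window.fetch there, since the page's own fetch calls can be intercepted in page context. A rough sketch; the URL filter and the extra field are purely illustrative:
// injected.js - runs in the page's main world, e.g. added via a <script> tag by a content script
const originalFetch = window.fetch;

window.fetch = async function (input, init) {
  init = init || {};
  const url = typeof input === 'string' ? input : input.url;

  // Only touch the requests you care about; this endpoint is hypothetical
  if (url.includes('/api/target-endpoint') && typeof init.body === 'string') {
    const body = JSON.parse(init.body);
    body.extraField = 'added-by-extension'; // the modification you want to make
    init = Object.assign({}, init, { body: JSON.stringify(body) });
  }

  return originalFetch.call(window, input, init);
};
This only covers fetch calls made with a URL string (or Request) plus an init object carrying a string body; requests constructed differently would need extra handling.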
In an app that I was working on, I encountered a "headers already sent" error when I tested using concurrent and parallel requests.
Ultimately I resolved the problem using !response.headersSent, but my question is: why am I forced to use it? Is node caching similar requests and reusing them for the next repeated call?
if (request.headers.accept == "application/json") {
  if (!response.headersSent) {
    response.writeHead(200, {'Content-Type': 'application/json'});
  }
  response.end(JSON.stringify({result: {authToken: data.authToken}}));
}
Edit
var express = require('express');
var app = express();
var server = app.listen(process.env.PORT || 3000, function () {
  console.log('Example app listening at http://%s:%s', server.address().address, server.address().port);
});
Edit 2:
Another problem: while testing with mocha and superagent, if I send another request through Postman on the side while the tests are in progress, one of the tests in mocha ends with a timeout error. I'm taking these steps to ensure the code is production ready for simultaneous, parallel requests. Please advise on what measures I can take to ensure node/my code works under stress.
Edit 3:
app.use(function(request, response, next) {
  request.id = Math.random();
  next();
});
OK, in an attempt to capture what solved this for you via all our conversation in the comments, I'll summarize here:
The "headers already sent" error is nearly always caused by improper async handling that makes the code call methods on the response object in the wrong sequence. The most common case is non-async code that ends the request and then an async operation that completes some time later and tries to use the response again (but there are other ways to misuse it too).
Each request and response object is uniquely created at the time each individual HTTP request arrives at the node/express server. They are not cached or reused.
Because of asynchronous operations in the processing of a request, there may be more than one request/response object in use at any given time. Code that is processing these must not store these objects in any sort of single global variable, because multiple requests can be in the middle of processing at once. Because node is single threaded, code will only be running for one request at any given moment, but as soon as that code hits an async operation (and thus has nothing to do until the async operation is done), another request can start running. So multiple requests can easily be "in flight" at the same time.
If you have a system where you need to keep track of multiple requests at once, you can coin a request id and attach it to each new request. One way to do that is with a few lines of express middleware that is early in the middleware stack that just adds a unique id property to each new request.
One simple way of coining a unique id is to just use a monotonically increasing counter.
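As a concrete illustration of those last two points, a tiny sketch of such middleware using a counter instead of the Math.random() shown in the question:
// Tag every incoming request with a unique, monotonically increasing id
var requestCounter = 0;

app.use(function (request, response, next) {
  requestCounter += 1;
  request.id = requestCounter; // e.g. include request.id in logs to tell concurrent requests apart
  next();
});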
There are external resources (accessing available inventories through an API) that can only be accessed one thread at a time.
My problems are:
The NodeJS server handles requests concurrently, so we might have multiple requests at the same time trying to reserve inventories.
If I hit the inventory API concurrently, then it will return duplicate available inventories
Therefore, I need to make sure that I am hitting the inventory API one thread at a time
There is no way for me to change the inventory API (legacy), therefore I must find a way to synchronize my nodejs server.
Note:
There is only one nodejs server, running one process, so I only need to synchronize the requests within that server
Low traffic server running on express.js
I'd use something like the async module's queue and set its concurrency parameter to 1. That way, you can put as many tasks in the queue as you need to run, but they'll only run one at a time.
The queue would look something like:
var async = require('async');

var inventoryQueue = async.queue(function(task, callback) {
  // use the values in "task" to call your inventory API here
  // pass your results to "callback" when you're done
}, 1);
Then, to make an inventory API request, you'd do something like:
var inventoryRequestData = { /* data you need to make your request; product id, etc. */ };

inventoryQueue.push(inventoryRequestData, function(err, results) {
  // this will be called with your results
});
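For completeness, a hedged sketch of how that push call might sit inside an Express route handler; the route path and field name are assumptions, not from the question:
// Every reservation request goes through the queue, so the legacy inventory API
// only ever sees one call at a time.
app.post('/reserve/:productId', function(req, res) {
  inventoryQueue.push({ productId: req.params.productId }, function(err, results) {
    if (err) {
      return res.status(500).json({ error: err.message });
    }
    res.json(results);
  });
});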
I have a setup where a node server acts as a proxy server to serve images.
For example, for an image "test1.jpg", the exact same image can be fetched from 3 external sources, let's say:
a. www.abc.com/test1.jpg
b. www.def.com/test1.jpg
c. www.ghi.com/test1.jpg
When the nodejs server gets a request for "test1.jpg" it first gets a list of external URLs from a DB. Now amongst these external resources, at least one is always behind a CDN and is "expected" to respond faster and hence is a preferred source for the image.
My question is: which of the two methods below is the correct way to achieve this (or is there another method)?
1. Fire http requests (using mikeal's request client module) for all the URLs at the same time. Get their promise objects and, whichever source responds first, send that image back to the user (it can be any of the three sources, not necessarily the preferred source behind the CDN - but that doesn't matter since the image is exactly the same). The disadvantage that I see is that for every image we hit 3 sources. Also, the promises for the http requests can still get fulfilled after the response from the first successful source has been sent out.
2. Fire http requests one at a time, starting with the most preferred image, wait for it to fail (i.e. a 404 on the image) and then proceed to the next preferred image. We have fewer HTTP requests but more wait time for the user.
Some pseudo code
Method 1
while (imagePreferences.length > 0) {
  var url = imagePreferences.splice(0, 1);
  getImage(url).then(function() {
    sendImage();
  }, function(err) {
    console.log(err);
  });
}
Method 2
if (imageUrls.length > 0) {
  var url = imageUrls.splice(0, 1);
  getImage(url).then(function(imageResp) {
    sendImageResp();
  }, function(err) {
    getNextImage(); // recurse over this
  });
}
This is just pseudo code. I am new to nodejs. Any help/views would be appreciated.
I prefer the 1st option; CDNs are designed to receive massive numbers of requests. Your code is perfectly fine sending HTTP requests to multiple sources in parallel.
In case you want to stop the other requests after successfully receiving the first image, you can use async.detect: https://github.com/caolan/async#detect
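To flesh out option 1 a little (this is my own rough sketch, not code from the question): resolve with whichever source succeeds first and only fail once every source has failed, assuming getImage(url) returns a promise that rejects on a 404 or network error:
// "First success wins" race over all image sources
function getFirstImage(imageUrls) {
  return new Promise(function(resolve, reject) {
    var failures = 0;
    imageUrls.forEach(function(url) {
      getImage(url).then(function(imageResp) {
        resolve(imageResp); // the first successful response wins; later resolutions are ignored
      }, function(err) {
        failures += 1;
        if (failures === imageUrls.length) {
          reject(new Error('All image sources failed'));
        }
      });
    });
  });
}

// Usage: race all sources and send whichever image arrives first
getFirstImage(imageUrls).then(sendImageResp, function(err) {
  console.log(err);
});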