Is there a standard pattern for verifying an async request is still needed? - multithreading

In mobile apps we can't (or shouldn't) make network requests on the main thread. We normally get the result of a request back via a callback or a closure that is executed on the main thread when the result is available. Since the user may have moved on, or the result may no longer be needed (for example, it may be an old request arriving out of order), we need to check, based on the current state of the app, whether the action in the callback or closure should actually be executed.
In the case of iOS and Swift I am planning on using closures, so I am thinking of doing something like this for every request I make.
Assume I have a method that looks something like this:
func makeRequest(identifier: String, handler: @escaping (_ ident: String, _ result: ResultObject) -> Void) {
    // ... perform the request off the main thread ...
    // ... then, back on the main thread, call:
    handler(identifier, result)
}
In addition to the handler that will be called when the result is available, I will pass in the value of an identifier, which in turn will be passed to the handler when it is called. The closure will also capture a reference to the identifier when the request is created, so it will be able to get the value that the reference holds at the time the handler is actually called. So it would look something like this, where ident is the value that commandIdentifier had when the request was made, and commandIdentifier inside the closure will be its value when the closure is actually executed.
commandIdentifier = "some unique identifier"
makeRequest(identifier: commandIdentifier) { ident, result in
    if commandIdentifier == ident {
        // do something
    } else {
        // do something else
    }
}
I don't think there is anything special here, so my question is this:
Is this a general pattern, and if so where can I find any documentation on it?
I am particularly interested in whether there is a general way of creating the identifier and how to relate its reference back to the main thread.
Also, if I am totally wrong and this is not a good approach, I would like to hear that as well.

I've used almost exactly that approach before. I use an integer identifier, and increment it when issuing a new request. That way if the pending request is superseded by a new one you can just drop the stale response on the floor.
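A rough sketch of that counter idea in Swift, reusing the makeRequest signature from the question and assuming everything that touches the counter runs on the main thread (loadResults and latestRequestID are made-up names):

var latestRequestID = 0

func loadResults() {
    latestRequestID += 1

    makeRequest(identifier: String(latestRequestID)) { ident, result in
        // If another request was issued in the meantime, latestRequestID has moved on
        // and this response is stale, so just drop it on the floor.
        guard Int(ident) == latestRequestID else { return }
        // do something with `result`
    }
}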

Related

What is the order of execution of the same-type hooks in fastify?

In Fastify.js you have at least two ways to register hooks: globally (via fastify.addHook()) or as a property inside the route declaration. In the example below I'm trying to use fastify-multer to handle file uploading, but the maximum number of files must be limited by a setting associated with a "room". As the app has many rooms, most of the requests contain a reference to a room, and every time a request comes in it is augmented with the room settings by the preHandler hook.
import fastify from 'fastify'
import multer from 'fastify-multer'

const server = fastify()
server.register(multer.contentParser)

// For all requests containing the room ID, fetch the room options from the database
server.addHook('preHandler', async (request, reply) => {
  if (request.body.roomID) {
    const roomOptions = await getRoomOptions(request.body.roomID)
    if (roomOptions) {
      request.body.room = roomOptions
    } else {
      // handle an error if the room doesn't exist
    }
  }
})

server.post('/post', {
  // Limit the maximum number of files to be uploaded based on room options
  preHandler: upload.array('files', request.body.room.maxFiles)
})
In order for this setup to work, the global hook must always be executed before the file upload hook. How can I guarantee that?
Summary: As @Manuel Spigolon said:
How can I guarantee that? The framework does it
Now we can take Manuel's word for it (SPOILER ALERT: they are absolutely correct), or we can prove how this works by looking in the source code on GitHub.
The first thing to keep in mind is that arrays in JavaScript remain ordered according to the way elements are pushed into them, but don't take my word for it. That is all explained here if you want to dive a little deeper into the evidence. If that were not true, everything below wouldn't matter and you could just stop reading now.
How addHook works
Now that we have established that arrays maintain their order, let's look at how the addHook code is executed. We can start by looking at the default export of fastify in the fastify.js file located in the root directory. In this object, if you scroll down, you'll see the addHook property defined. When we look into the addHook function implementation, we can see that it calls this[kHooks].add.
When we go back to see what the kHooks property is, we see that it is a new Hooks(). When we take a look at the add method on the Hooks object, we can see that it just validates the hook that is being added and then pushes it to the array property on the Hooks object with the matching hook name. This shows that hooks will always be stored in the order in which add was called for them.
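As a simplified illustration of that storage scheme (this is just the shape of it, not the actual Fastify source):

class Hooks {
  constructor () {
    // one plain array per lifecycle hook name
    this.onRequest = []
    this.preHandler = []
    this.onSend = []
    // ...and so on for the other lifecycle hooks
  }

  add (name, fn) {
    // validate, then push: the array order is exactly the order of the add() calls
    if (typeof fn !== 'function') throw new TypeError('The hook callback must be a function')
    this[name].push(fn)
  }
}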
How fastify.route adds hooks
I hope you're still following to this point, because that only proves the order of the addHook calls within the respective array on the Hooks object. The next question is how these interact with the calls to the fastify.(get | post | route | ...) functions. We can walk through the fastify.get function, but they are all pretty much the same (you can do the same exercise with any of them). Looking at the get function, we see that the implementation just calls the router.prepareRoute function. When you look into the prepareRoute implementation, you see that this function returns a call to the route function. In the route function there is a section where the hooks are set up. It looks like this:
for (const hook of lifecycleHooks) {
  const toSet = this[kHooks][hook]
    .concat(opts[hook] || [])
    .map(h => h.bind(this))
  context[hook] = toSet.length ? toSet : null
}
What this does is go through every lifecycle hook and, for that hook, concatenate the hooks registered on the Fastify instance (this) with the hooks in the route options (opts[hook]), binding them all to the Fastify instance (this). This shows that the hooks in the route options are always added after the addHook handlers.
How Fastify executes hooks
This is not everything we need though. Now we know the order in which the hooks are stored, but how exactly are they executed? For that we can look at the hookRunner function in the hooks.js file. We see that this function acts as a sort of recursive loop that keeps running as long as the handlers do not error. It first creates a variable i to keep track of the handler function it is currently on, then tries to execute that handler and increments the function tracker (i).
If the handler fails (handleReject), it runs a callback function and does not call the next function to continue. If the handler succeeds (handleResolve), it just runs the next function to try the same process on the following handler (functions[i++]) in the functions set.
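A rough sketch of that flow (simplified, not the actual Fastify implementation): walk the ordered list of hooks, only advancing to the next one when the current hook succeeds, and bail out on the first error.

function hookRunner (functions, request, reply, done) {
  let i = 0

  function next (err) {
    // Stop on the first error, or when every hook has run
    if (err || i === functions.length) {
      done(err, request, reply)
      return
    }

    let result
    try {
      result = functions[i++](request, reply, next)
    } catch (caught) {
      done(caught, request, reply)
      return
    }

    // Async hooks return a promise: continue on resolve, abort on reject
    if (result && typeof result.then === 'function') {
      result.then(() => next(), next)
    }
  }

  next()
}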
Why does this matter
This proves that the hook handlers are called in the order that they were pushed into the ordered collection. In other words:
How can I guarantee that? The framework does it
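To see that guarantee in practice, here is a tiny, self-contained example (assuming a recent Fastify version); the hook registered with addHook always runs before the preHandler supplied in the route options:

const fastify = require('fastify')()

fastify.addHook('preHandler', async (request, reply) => {
  console.log('1: instance-level preHandler (addHook)')
})

fastify.post('/post', {
  preHandler: async (request, reply) => {
    console.log('2: route-level preHandler (route options)')
  }
}, async (request, reply) => {
  return { ok: true }
})

fastify.listen({ port: 3000 })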

Global authentication/authorization in Rocket based on a header

I know I can use a Request guard. However, if I have a REST API with hundreds of handlers, not only would it be annoying to have to add an extra function parameter to all of them, but it also scares me a bit that it would be easy to miss adding such a parameter here or there and thereby create a security hole. That's why I'd like to know if there is a way to do such a validation globally.
The documentation on Fairings mentions they can be used for global security policies:
As a general rule of thumb, only globally applicable actions should be implemented via fairings. For instance, you should not use a fairing to implement authentication or authorization (preferring to use a request guard instead) unless the authentication or authorization applies to the entire application. On the other hand, you should use a fairing to record timing and/or usage statistics or to implement global security policies.
But at the same time the docs on the on_request() callback say this:
A request callback can modify the request at will and Data::peek() into the incoming data. It may not, however, abort or respond directly to the request; these issues are better handled via request guards or via response callbacks.
So how am I supposed to return an error to the user in the case of an invalid token for example?
OK, I think I found a way...
First we create a "dummy" handler like this:
#[put("/errHnd", format = "json")]
fn err_handler() -> ApiResult {
// Here simply return an error
}
Then we attach a fairing like this:
rocket::custom(cfg)
    .attach(AdHoc::on_request("OnReq", |req, _| {
        // Here we validate the token and if it's not OK,
        // forward the request to our "dummy" handler:
        let u = Origin::parse("/errHnd").unwrap();
        req.set_uri(u);
        req.set_method(Method::Put);
    }))
    .mount("/", routes![err_handler, ...
I'm not sure that's the best way to do it, but I tested it and it seems to work. I'm open to other suggestions.
P.S. It may also be worth mentioning that if we wanted to have an exception, so as to skip the validation in the fairing, say, based on the URL, we could simply add something like this in it:
if req.uri().path() == "/let-me-in-please" {
    return;
}

I can't see the difference between the PUT and PATCH methods

I just want to like a quote, or dislike it if I already liked the quote. So first I find the quote, then I check whether I already liked it; if not, I like it, otherwise I dislike it.
I have a router like below
router.put('/:quoteId', isAuth, quotesController.likeQuote);
And likeQuote method is like below
// Quote is assumed to be the Mongoose model for quotes (the require path is a guess)
const Quote = require('../models/quote');

module.exports.likeQuote = (req, res, next) => {
  const quoteId = req.params.quoteId;
  const userId = req.userId;
  Quote.findById(quoteId)
    .then((quote) => {
      if (quote.likes.indexOf(userId) == -1) {
        quote.likes.push(userId);
      } else {
        quote.likes.pull(userId);
      }
      return quote.save();
    })
    .then((updatedQuote) => {
      res.status(201).json({ message: 'You liked the post!' });
    })
    .catch((err) => {
      err.statusCode = 500;
      next(err);
    });
};
But my question is, I just want to know how PUT and PATCH work. I think we should send all the fields with PUT but not with PATCH, but in my case I don't even send any fields and both work just fine. How does this happen?
The actual REST API methods (PUT, PATCH, ...) do not have any built-in limitations; the logic you choose to write is what defines this. Now, you're asking about "best practices", and whenever you ask about that you will get many different answers. I'll explain my view.
PUT: the essence of PUT is to replace the existing object completely. That's why people are telling you to send the entire object: when you use PUT, what's expected is a complete swap.
PATCH: the essence of PATCH is to update the existing resource, which in your case is what you're looking for; here you just send the fields required for the update.
Now, is it wrong if you write PUT to be an update and not a complete swap? I would argue it is not. As long as you keep consistent logic throughout your app, you can build your own "best practices" that suit your needs.
Now, you did tag this question as Mongo related, so I would like to introduce you to the concept of pipelined updates (for Mongo v4.2+), where you can execute your logic in a single update.
Mongo Playground
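For illustration, a hedged sketch of what such a pipelined update could look like for this exact use case, assuming the same Quote collection with a likes array of user ids (quoteId and userId are the variables from the handler above):

Quote.updateOne({ _id: quoteId }, [
  {
    $set: {
      likes: {
        $cond: [
          { $in: [userId, "$likes"] },
          { $setDifference: ["$likes", [userId]] },  // already liked -> remove the like
          { $concatArrays: ["$likes", [userId]] }    // not liked yet -> add the like
        ]
      }
    }
  }
])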
i just want to know how PUT and PATCH works?
An important distinction to understand is that we don't have a standard for how PUT and PATCH work; that's an implementation detail, and it is deliberately hidden behind the "uniform interface".
What we do have is standardized semantics, an agreement about what PUT and PATCH mean.
(This is further complicated by people not being familiar with the standard, and therefore misinterpretations of the meaning are common.)
If the implementation of the request handler doesn't match the semantics of the request, that's OK... but if something goes expensively wrong as a result, it's the fault of the implementation.
PUT and PATCH are both method-tokens that indicate that we are trying to modify the resource identified by the target-uri. In particular, we use those method-tokens when we are trying to make the server's representation of the resource match the representation on the client.
For example, imagine editing a web page. We GET /home.html, change the TITLE element in our copy, and we want to save our changes to the server. How do we do that in HTTP?
One answer is that we send a copy of home.html (with our changes) back to the server, so that the server can save it. That's PUT.
Another answer is that we diff our copy and the server's copy, and send the server a patch document that describes the changes the server should make to its copy. That's PATCH.
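As a hypothetical illustration of the difference, assuming a JSON representation of a quote resource at a made-up URL (the fields and media types are only for the example):

// PUT: the client sends a complete replacement representation of the resource
await fetch('/quotes/42', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: 'To be, or not to be', likes: ['user1', 'user2'] })
})

// PATCH: the client sends only a description of the changes to apply,
// here as a JSON Patch document (RFC 6902)
await fetch('/quotes/42', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json-patch+json' },
  body: JSON.stringify([
    { op: 'add', path: '/likes/-', value: 'user3' }
  ])
})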
router.put('/:quoteId', isAuth, quotesController.likeQuote);
What this invocation is doing is configuring the framework, so that requests with the PUT method token and a target-uri that matches "/:quoteId" are delegated to the likeQuote method.
And at this level, the framework assumes that you know what you are doing - there's no attempt to verify that "likeQuote" implements PUT semantics. To ensure that the implementation and the request semantics match, you are going to need to do some work (inspect the code, test, etc).
in my case i don't even send any fields and both work just fine.
Right - because the framework assumes that you know what you are doing, and your current implementation doesn't try to access or interpret the body of the HTTP request.
Note: that's a big hint that the request handler is not actually implementing PUT/PATCH semantics (how could the server possibly make its copy of the quote look like the client's if it doesn't look at the information the client provided?).
It is okay to use POST; assuming that your implementation does what you intend, you should not be using methods with remote authoring semantics, because remote authoring is not what you are doing. This same implementation attached to a POST route would be fine.
As is, things are broken - you have a mismatch between the request semantics and the handler implementation. Under controlled conditions, you are likely to get away with it. It's entirely possible that you are only going to be invoking this code under controlled conditions.
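If you follow that advice, the only change on the routing side would be registering the same handler under POST instead of PUT, e.g.:

router.post('/:quoteId', isAuth, quotesController.likeQuote);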

Node js - overall structure of a program

Hope you are well.
I need your help to understand how to logically organize a program in Node.js to avoid repetition of code, given its asynchronous nature (as a beginner...). Let's take an example to make it easier to explain.
One has some data in a Mongo database (let's say a list of names). This list of names can be accessed thanks to the function readData as below:
function readData(criteriaRead, callback) {
    mongodb.stuff(..)
    callback('data read on mongodb')
}
I have two actions in my program: one is to print out the list of name, the other is to check if a name is in the list.
For the first case, it's simple, I just need to have a function like this
function printout(data) {console.log(data)}
and to do this
readData(criteriaRead,printout)
In the second case, let's say I have a function like this
function checkIfInIt(array, dataToCheck) { /* stuff to check */ console.log(results) }
Now, I have an issue because if I do readData(criteriaRead, checkIfInIt) it won't work, as checkIfInIt requires two parameters.
I would need a function like this
function readDataBis(criteriaRead, dataToCheck, callback) {
    mongodb.stuff(..)
    callback('data read on Mongodb', dataToCheck)
}
and then readDataBis(criteriaRead,dataToCheck,checkIfInIt) would work but I have a huge repetition in my code.
How to avoid that?
There are several solutions for this type of issue, but here's an easy one for your case
Declare your function with the three parameters as such
function readData(callback, criteriaRead, dataToCheck) { ...
Inside, check if dataToCheck is undefined, and continue with the flow of the second function you had if that's the case. (Otherwise just do the read function)
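A minimal sketch of that idea, reusing the names from the question (the database call is still just a placeholder):

function readData(callback, criteriaRead, dataToCheck) {
    // mongodb.stuff(..) would use criteriaRead here and produce the data that was read
    const data = ['alice', 'bob'] // placeholder for the list of names read from Mongo

    if (dataToCheck === undefined) {
        callback(data)               // first use case: just print the list
    } else {
        callback(data, dataToCheck)  // second use case: check whether the name is in the list
    }
}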
Call them like so
readData(callback, criteriaRead); // Third parameter missing, will be undefined
readData(callback, criteriaRead, dataToCheck);
You could also pass in an object for your parameters like this, if it would make it simpler
function readData(callback, params) { ...
And call like this
readData(callback, { criteriaRead: criteriaRead, dataToCheck: dataToCheck });

javascript variable value in asynchronous function called simultaneously by multiple clients

I am putting up a web server in Node.js;
in particular I am developing a module for orders management.
The module is wrapped inside an anonymous function:
(function(){})();
In the "insertOrder" function I declare the variable order like this:
var order = {
    user_id: '',
    address_id: '',
    payed: false,
    accepted: false,
    shipped: false
};
Then it gets populated with the values "returned" from the asynchronous functions I am calling that interact with the database.
This application is going to be used simultaneously by multiple clients.
Now, assuming that two users want to make an order, is the variable going to be re-initialized to the starting object every time the function gets called, overwriting the changes made during the first execution? Or is a new context going to be spawned every time a client makes a call to the server?
I know this is not the case for Node.js but I still can't figure this one out.
I.e., is the variable value of the previous invocation going to be kept somehow and used until the end of the first function call, or lost as soon as the function gets called again?
Thank you very much.
EDIT: further explanation of the problem.
The user_id is going to be used to retrieve the address that the order is going to be shipped to. A wrong user_id is going to result in the item being shipped to the wrong address.
If var order = { ... } is inside the insertOrder function, then every time the insertOrder function is called order will be reinitialized. The scope is isolated, so there should not be any mingling of local variables even in an asynchronous situation.
jsFiddle
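For illustration, a small sketch with a hypothetical insertOrder shows that each call gets its own order object, even when the asynchronous work finishes later:

function insertOrder(userId, done) {
    var order = {             // re-created on every call, local to that call
        user_id: userId,
        payed: false
    };

    setTimeout(function () {  // stands in for the asynchronous database work
        order.payed = true;
        done(order);
    }, 100);
}

insertOrder('user-A', function (o) { console.log(o.user_id); }); // "user-A"
insertOrder('user-B', function (o) { console.log(o.user_id); }); // "user-B"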
