Global authentication/authorization in Rocket based on a header

I know I can use a request guard. However, if I have a REST API with hundreds of handlers, not only would it be annoying to add an extra function parameter to all of them, but it also scares me that it would be easy to miss adding such a parameter here or there and thereby create a security hole. That's why I'd like to know if there is a way to do such validation globally.
The documentation on Fairings mentions they can be used for global security policies:
As a general rule of thumb, only globally applicable actions should be implemented via fairings. For instance, you should not use a fairing to implement authentication or authorization (preferring to use a request guard instead) unless the authentication or authorization applies to the entire application. On the other hand, you should use a fairing to record timing and/or usage statistics or to implement global security policies.
But at the same time the docs on the on_request() callback say this:
A request callback can modify the request at will and Data::peek() into the incoming data. It may not, however, abort or respond directly to the request; these issues are better handled via request guards or via response callbacks.
So how am I supposed to return an error to the user in the case of an invalid token for example?

OK, I think I found a way...
First we create a "dummy" handler like this:
#[put("/errHnd", format = "json")]
fn err_handler() -> ApiResult {
// Here simply return an error
}
Then we attach a fairing like this:
use rocket::fairing::AdHoc;
use rocket::http::{uri::Origin, Method};

rocket::custom(cfg)
    .attach(AdHoc::on_request("OnReq", |req, _| {
        // Here we validate the token and, if it's not OK,
        // forward the request to our "dummy" handler:
        let u = Origin::parse("/errHnd").unwrap();
        req.set_uri(u);
        req.set_method(Method::Put);
    }))
    .mount("/", routes![err_handler, ...
I'm not sure that's the best way to do it, but I tested it and it seems to work. I'm open to other suggestions.
P.S. It may also be worth mentioning that if we wanted an exception, so as to skip the validation in the fairing for, say, a particular URL, we could simply add something like this to it:
if req.uri().path() == "/let-me-in-please" {
    return;
}

Related

I can't see the difference between the PUT and PATCH methods

I just want to like a quote, or dislike it if I have already liked it. So first I find the quote, and then I check whether I have already liked it; if not, I like it, otherwise I dislike it.
I have a router like below
router.put('/:quoteId', isAuth, quotesController.likeQuote);
And likeQuote method is like below
module.exports.likeQuote = (req, res, next) => {
  const quoteId = req.params.quoteId;
  const userId = req.userId;
  Quote.findById(quoteId)
    .then((quote) => {
      if (quote.likes.indexOf(userId) == -1) {
        quote.likes.push(userId);
      } else {
        quote.likes.pull(userId);
      }
      return quote.save();
    })
    .then((updatedQuote) => {
      res.status(201).json({ message: 'You liked the post!' });
    })
    .catch((err) => {
      err.statusCode = 500;
      next(err);
    });
};
But my question is: I just want to know how PUT and PATCH work. I think we are supposed to send all the fields with PUT but not with PATCH, yet in my case I don't send any fields at all and both work just fine. How does this happen?
The actual REST API methods (PUT, PATCH, ...) do not have any limitations; the logic you choose to write is what defines the behavior. Now, you're asking about "best practices", and whenever you ask about that you will get many different answers. I'll explain my view.
PUT: the essence of PUT is to replace the existing object completely. That's why people are telling you to send the entire object: when you use PUT, what's expected is a complete swap.
PATCH: the essence of PATCH is to update the existing resource, which in your case is what you're looking for; here you just send the fields required for the update.
Now, is it wrong to write PUT as an update and not a complete swap? I would argue it is not. As long as you keep consistent logic throughout your app, you can build your own "best practices" that suit your needs.
Now, you did tag this question as Mongo-related, so I would like to introduce you to the concept of pipelined updates (for MongoDB v4.2+), where you can execute your logic in a single update.
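For illustration, here is a hedged sketch of that single-update toggle using an aggregation-pipeline update (assuming the same Quote model and MongoDB 4.2+; the $cond expression flips the membership of userId in the likes array):

// Toggle userId in quote.likes in one round trip (MongoDB 4.2+).
Quote.updateOne({ _id: quoteId }, [
  {
    $set: {
      likes: {
        $cond: [
          { $in: [userId, '$likes'] },
          { $setDifference: ['$likes', [userId]] }, // already liked: remove it
          { $concatArrays: ['$likes', [userId]] }   // not yet liked: append it
        ]
      }
    }
  }
])
  .then(() => res.status(200).json({ message: 'Like toggled.' }))
  .catch(next);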
i just want to know how PUT and PATCH works?
An important distinction to understand is that we don't have a standard for how PUT and PATCH work; that's an implementation detail, and is deliberately hidden behind the "uniform interface".
What we do have is standardized semantics, an agreement about what PUT and PATCH mean.
(This is further complicated by people not being familiar with the standard, and therefore misinterpretations of the meaning are common.)
If the implementation of the request handler doesn't match the semantics of the request, that's OK... but if something goes expensively wrong as a result, it's the fault of the implementation.
PUT and PATCH are both method-tokens that indicate that we are trying to modify the resource identified by the target-uri. In particular, we use those method-tokens when we are trying to make the server's representation of the resource match the representation on the client.
For example, imagine editing a web page. We GET /home.html, change the TITLE element in our copy, and we want to save our changes to the server. How do we do that in HTTP?
One answer is that we send a copy of home.html (with our changes) back to the server, so that the server can save it. That's PUT.
Another answer is that we diff our copy and the server's copy, and send the server a patch-document that describes the changes the server should make to its copy. That's PATCH.
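Concretely, the two requests might look something like this on the wire (a sketch; the PATCH body uses a made-up diff format, since in practice client and server must agree on a patch media type):

PUT /home.html HTTP/1.1
Content-Type: text/html

<html>... the client's complete copy, including the edited TITLE ...</html>

PATCH /home.html HTTP/1.1
Content-Type: application/example-diff

- <title>Old title</title>
+ <title>New title</title>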
router.put('/:quoteId', isAuth, quotesController.likeQuote);
What this invocation is doing is configuring the framework, so that requests with the PUT method token and a target-uri that matches "/:quoteId" are delegated to the likeQuote method.
And at this level, the framework assumes that you know what you are doing - there's no attempt to verify that "likeQuote" implements PUT semantics. To ensure that the implementation and the request semantics match, you are going to need to do some work (inspect the code, test, etc).
in my case i don't even send any fields and both work just fine.
Right - because the framework assumes that you know what you are doing, and your current implementation doesn't try to access or interpret the body of the HTTP request.
Note: that's a big hint that the request handler is not actually implementing PUT/PATCH semantics (how could the server possibly make its copy of the quote look like the client's if it doesn't look at the information the client provided?).
It would be okay to use POST: since your implementation is not doing remote authoring, you should not be using methods with remote-authoring semantics. This same implementation attached to a POST route would be fine.
As is, things are broken - you have a mismatch between the request semantics and the handler implementation. Under controlled conditions, you are likely to get away with it. It's entirely possible that you are only going to be invoking this code under controlled conditions.
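Following that suggestion, the fix could be as small as re-registering the same handler under POST (the /likes sub-path is just an illustrative choice, not from the question):

// Same handler; the method token now matches what the handler actually does.
router.post('/:quoteId/likes', isAuth, quotesController.likeQuote);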

Using ZMQ_XPUB_MANUAL with zeromq.js

I am trying to implement a pub/sub broker with ZeroMQ where it is possible to restrict clients from subscribing to prefixes they are not allowed to subscribe to. I found a tutorial that tries to achieve a similar thing using the ZMQ_XPUB_MANUAL option. With zeromq.js it is possible to set this option:
import * as zmq from "zeromq";
// ...
const socket = new zmq.XPublisher({ manual: true });
After setting this option I am able to receive the subscription messages by calling .receive() on this socket:
const [msg] = await socket.receive();
But I have no idea how to accept this subscription. Usually this is done by calling setsockopt() with ZMQ_SUBSCRIBE, but I don't know how to do this with zeromq.js.
Is there a way to call setSockOpt with zeromq.js or is there another way to accept a subscription?
Edit
I tried user3666197's suggestion to call setSockOpt directly, but I could not figure out how to do so. Instead, I took another look at the sources and found this: https://github.com/zeromq/zeromq.js/blob/master/src/native.ts#L617
It seems that setSockOpt is exposed to the TypeScript side as protected methods of the Socket class. To try this out, I created my own class that inherits from XPublisher and exposes an acceptSubscription method:
class CustomPublisher extends zmq.XPublisher {
  constructor(options?: zmq.SocketOptions<zmq.XPublisher>) {
    super(options);
  }

  public acceptSubscription(subscription: string | null): void {
    // ZMQ_SUBSCRIBE has a value of 6; reference:
    // https://github.com/zeromq/libzmq/blob/master/include/zmq.h#L310
    this.setStringOption(6, subscription);
  }
}
This works like a charm! But do not forget to strip the first byte of the subscription messages, otherwise your client won't receive any messages since the prefix won't match.
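For illustration, a minimal sketch of the accept loop under these assumptions (the first byte of a subscription message is 1 for subscribe, 0 for unsubscribe; the bind address and the isAllowed policy are hypothetical):

const pub = new CustomPublisher({ manual: true });
await pub.bind('tcp://127.0.0.1:3000');

// Hypothetical policy: only allow subscriptions under the "public." prefix.
const isAllowed = (topic: string): boolean => topic.startsWith('public.');

while (true) {
  const [msg] = await pub.receive();
  // Strip the first byte (the subscribe/unsubscribe flag) to get the topic;
  // otherwise the prefix won't match and clients receive nothing.
  const topic = msg.subarray(1).toString();
  if (msg[0] === 1 && isAllowed(topic)) {
    pub.acceptSubscription(topic);
  }
}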
Q : "Is there a way to call setSockOpt() with zeromq.js or is there another way to accept a subscription?"
So, let me first mention Somdoron to be, out of doubts & for ages, a master of the ZeroMQ tooling.
Next comes the issue. The GitHub-sources, I was able to review atm, seem to me, that permit the ZMQ_XPUB-Socket-archetypes to process the native API ZMQ_XPUB_MANUAL settings ( re-dressed into manual-property, an idiomatic shift ), yet present no method (so far visible for me) to actually permit user to meet the native API explicit protocol of:
ZMQ_XPUB_MANUAL: change the subscription handling to manual...with manual mode subscription requests are not added to the subscription list. To add subscription the user need to call setsockopt() with ZMQ_SUBSCRIBE on XPUB socket./__ from ZeroMQ native API v.4.3.2 documentation __/
Trying to blind-call the Socket-inherited .SetSockOpt() method may prove me wrong, yet if successful, it may be a way to inject the { ZMQ_SUBSCRIBE | ZMQ_UNSUBSCRIBE } subscription-management steps into the XPUB-instance currently having been switched into the ZMQ_XPUB_MANUAL-mode.
Please test it, and if it fails to work via this super-class inherited method, the shortest remedy would be to claim that collision/conceptual-shortcomings directly to the zeromq.js maintainers ( it might be a W.I.P. item, deeper in their actual v6+ refactoring backlog, so my fingers are crossed for either case ).

NestJS: Controller function with @UploadedFile or String as a parameter

I am using NestJS (version 6.5, with Express platform) and I need to handle a request with a property that can either be a File or a String.
Here is the code I currently have, but I don't find a clean way to implement this.
MyAwesomeController
@Post()
@UseInterceptors(FileInterceptor('source'))
async handle(@UploadedFile() source, @Body() myDto: MyDto): Promise<any> {
  // do things...
}
Am I missing something obvious or am I supposed to write my own interceptor to handle this case?
Design-wise, is this bad?
Based on the fact you're designing a REST API:
It depends on what use case(s) you want to achieve: is your client-side flow designed to be performed in 2 steps or not?
Can the string and file params both be passed at the same time, or is there only one of the two on each call? (e.g. if you want to update a file and its name, or some other non-Multer-related attributes)
When you pass a string as a parameter to your endpoint call, is a file resource created / updated / deleted? Or maybe not at all?
Depending on the answers and the flow you have in mind, you should either split the handling of the two cases into two independent endpoints, or it may make sense to handle both parameters at the same time.
If only one of the params can be passed at a time, I'd say go for two independent endpoints; you'll benefit in both maintainability and code readability.
If both params can be passed at the same time and they relate to the same resource, then it could make sense to handle both of them at once.
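For illustration, a minimal sketch of the two-endpoint split (the route names, controller, and DTO shape are assumptions, not from the question):

import { Body, Controller, Post, UploadedFile, UseInterceptors } from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';

class MyDto {
  source?: string; // set when the client sends a string instead of a file
}

@Controller('sources')
export class SourcesController {
  // Case 1: the client uploads a file (multipart/form-data).
  @Post('upload')
  @UseInterceptors(FileInterceptor('source'))
  async handleFile(@UploadedFile() source, @Body() dto: MyDto): Promise<any> {
    // ...handle the uploaded file
  }

  // Case 2: the client sends a plain string (e.g. a URL) as JSON.
  @Post('reference')
  async handleString(@Body() dto: MyDto): Promise<any> {
    // ...handle dto.source as a string
  }
}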
Hope this helps, don't hesitate to comment ;)

Is there a standard pattern for verifying an async request is still needed?

In mobile apps we can't (or should not) make network requests on the main thread. We normally get the result of a request back via a callback or a closure that is executed on the main thread when the result is available. Since the user may have moved on, or the result may no longer be needed (for example, an old request arriving out of order), we need to check that the action in the callback or closure should actually be executed, based on the current state of the app.
In the case of iOS and swift I am planning on using closures so I am thinking of doing something like this for every request I make.
Assume I have a method that looks something like this:
func makeRequest(identifier: String, handler: @escaping (_ ident: String, _ result: ResultObject) -> Void) {
    ...
    ...
    handler(identifier, result)
}
In addition to the handler that will be called when the result is available, I pass in the value of an identifier, which in turn is passed to the handler when it is called. The closure captures a reference to the identifier when the request is created, so it will be able to read the value that the reference holds at the time the handler is actually called. It would look something like this, where ident is the value commandIdentifier had when the request was made, and commandIdentifier inside the closure is its value when the closure is actually executed:
commandIdentifer = "some unique identifier"
makeRequest(commandIdentifer) { ident, result in
if commandIdentifier == ident {
// do something
} else {
// do something else
}
}
I don't think there is anything special here, so my question is this:
Is this a general pattern, and if so where can I find any documentation on it?
I am particularly interested if there is some general way of creating the identifier and how to relate its reference in the main thread.
Also, if I am totally wrong and this is not a good approach, I would like to hear that as well.
I've used almost exactly that approach before. I use an integer identifier, and increment it when issuing a new request. That way if the pending request is superseded by a new one you can just drop the stale response on the floor.
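The pattern itself is language-agnostic; here is a minimal sketch of that integer-token variant (in TypeScript for brevity; fetchResults and render are hypothetical stand-ins for the async call and the UI update):

declare function fetchResults(query: string): Promise<string[]>; // assumed async call
declare function render(results: string[]): void;                // assumed UI hook

// Monotonic token: only the latest request's response is acted upon.
let latestRequestId = 0;

async function search(query: string): Promise<void> {
  const requestId = ++latestRequestId; // capture this request's token
  const results = await fetchResults(query);
  if (requestId !== latestRequestId) {
    return; // a newer request was issued meanwhile: drop the stale response
  }
  render(results);
}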

sails.js Use session param in model

This is an extension of this question.
In my models, every one requires a companyId to be set on creation, and every query needs to be filtered by the same session-held companyId.
With sails.js, I have read and understand that the session is not available in the model unless I inject it from the controller; however, this would require me to fill all my controllers/actions with something very, very repetitive. Unfortunate.
I like sails.js and want to make the switch, but can anyone describe to me a better way? I'm hoping I have just missed something.
So, if I understand you correctly, you want to avoid lots of code like this in your controllers:
SomeModel.create({companyId: req.session.companyId, ...})
SomeModel.find({companyId: req.session.companyId, ...})
Fair enough. Maybe you're concerned that companyId will be renamed in the future, or needs to be further processed. The simplest solution, if you're using custom controller actions, would be to make class methods for your models that accept the request as an argument:
SomeModel.doCreate(req, ...);
SomeModel.doFind(req, ...);
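For illustration, a sketch of what those class methods might look like in a v0.10-style model definition (a sketch under assumed conventions; the attribute and callback shapes are not from the question):

// api/models/SomeModel.js
module.exports = {
  attributes: {
    companyId: 'string'
    // ...other attributes
  },

  // Class method: stamp the session's companyId onto every create.
  doCreate: function (req, values, cb) {
    values.companyId = req.session.companyId;
    SomeModel.create(values).exec(cb);
  },

  // Class method: scope every find to the session's companyId.
  doFind: function (req, criteria, cb) {
    criteria.companyId = req.session.companyId;
    SomeModel.find(criteria).exec(cb);
  }
};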
On the other hand, if you're on v0.10.x and you can use blueprints for some CRUD actions, you will benefit from the ability to override the blueprints with your own code, so that all of your creates and finds automatically use the companyId from the session.
If you're coming from a non-Node background, this might all induce some head-scratching. "Why can't you just make the session available everywhere?" you might ask. "LIKE THEY DO IN PHP!"
The reason is that PHP is stateless--every request that comes in gets essentially a fresh copy of the app, with nothing in memory being shared between requests. This means that any global variables will be valid for the life of a single request only. That wonderful $_SESSION hash is yours and yours alone, and once the request is processed, it disappears.
Contrast this with Node apps, which essentially run in a single process. Any global variables you set would be shared between every request that comes in, and since requests are handled asynchronously, there's no guarantee that one request will finish before another starts. So a scenario like this could easily occur:
1. Request A comes in.
2. Sails acquires the session for Request A and stores it in the global $_SESSION object.
3. Request A calls SomeModel.find(), which calls out to a database asynchronously.
4. While the database does its magic, Request A surrenders its control of the Node thread.
5. Request B comes in.
6. Sails acquires the session for Request B and stores it in the global $_SESSION object.
7. Request B surrenders its control of the thread to do some other asynchronous call.
8. Request A comes back with the result of its database call, and reads something from the $_SESSION object.
You can see the issue here--Request A now has the wrong session data. This is the reason why the session object lives inside the request object, and why it needs to be passed around to any code that wants to use it. Trying too hard to circumvent this will inevitably lead to trouble.
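To make the hazard concrete, here is a deliberately broken sketch (Express-style; app, db, and the module-level variable are all illustrative):

// BROKEN: a module-level "global session" shared by all requests.
var currentSession = null;

app.use(function (req, res, next) {
  currentSession = req.session; // request B will overwrite request A's value
  next();
});

app.get('/whoami', function (req, res) {
  db.query('SELECT ...', function (err, rows) {
    // By the time this callback runs, currentSession may belong to a
    // *different* request -- exactly the interleaving described above.
    res.send(currentSession.companyId);
  });
});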
The best option I can think of is to take advantage of JS and make some globally accessible functions.
But it's gonna have a code smell :(
I prefer to make a policy that adds the companyId to the body params, like this:
// Needs to be logged in
module.exports = function (req, res, next) {
  sails.log.verbose('[Policy.insertCompanyId() called] ' + __filename);

  if (req.session && req.session.companyId) {
    req.body.companyId = req.session.companyId;
    // or something like: AuthService.getCompanyId(req.session);
    return next();
  }

  var err = 'Missing companyId';
  // log ...
  return res.redirect(307, '/');
};
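To apply it everywhere, the policy can then be mapped onto all actions in config/policies.js (a sketch; isAuthenticated is a hypothetical companion policy):

// config/policies.js
module.exports.policies = {
  '*': ['isAuthenticated', 'insertCompanyId']
};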
