How to write Rust Tower middleware for authentication in a REST API

TL;DR: I need an example from someone who knows the Rust/Swagger generated server code on how to write a middleware layer for authentication.
I've been working on a REST API built from an OpenApi spec (Swagger-file) with
Rust, and I want to implement some authentication with a middleware layer.
I have a spec file that lists a number of services, all secured with an
API key. Whenever a service is called, I would like to peek at the HTTP request,
before running the actual service.
I want to validate the API key, and possibly do some back-channel server calls.
If the API key checks out, I want to run the actual service code, but if it doesn't,
I want to return a custom HTTP response.
The Rust generator uses a Hyper server, and generates some middleware in the form
of Tower services. I thought it would be a picnic to replace the "AllowAll"
middleware with one of my own making, but so far, I have had no success.
I think I understand how Tower services are supposed to work, but the ones generated
by the Swagger generator are a little more complex, to say the least. They take the
form of a two-step process, where one middleware generates another service and wraps
it, which in turn wraps the previously added middleware.
Long story short, I need some help from more experienced Rustaceans :)
For the sake of simplicity, let's say I have a spec, middleware.yaml like this:
openapi: 3.0.3
info:
  title: Middleware Test
  version: 0.0.1
paths:
  /test:
    get:
      operationId: test
      responses:
        "200":
          description: Ok
      security:
        - api_key: []
components:
  securitySchemes:
    api_key:
      type: apiKey
      name: api_key
      in: header
I generate a server project using the openapi-generator-cli tool:
npx --yes @openapitools/openapi-generator-cli generate \
  --generate-alias-as-model \
  --api-package middleware \
  --package-name middleware \
  -g rust-server \
  -i middleware.yaml
This generates a library crate that I can then use in my own code. For the sake
of simplicity, we can look at the example server, generated by the tool
(in examples/server/server.rs). The interesting part is the function that
creates the Hyper server:
pub async fn create(addr: &str) {
    let addr = addr.parse().expect("Failed to parse bind address");
    let server = Server::new();
    // Innermost layer: the API implementation itself.
    let service = middleware::server::MakeService::new(server);
    // Authentication layer: this is the one I want to replace.
    let service = swagger::auth::MakeAllowAllAuthenticator::new(service, "cosmo");
    // Outermost layer: builds the per-request context.
    let service = middleware::server::context::MakeAddContext::<_, EmptyContext>::new(service);
    hyper::server::Server::bind(&addr).serve(service).await.unwrap()
}
It creates an implementation of the API (Server::new()), and wraps it in three services.
My problem is the combination of the two-tiered wrappers (i.e. MakeAllowAllAuthenticator
is a wrapper for an internal AllowAllAuthenticator, which in turn wraps the previous
MakeService, which wraps a Service) and the Rust-Swagger concept of the context, which is
not a plain struct but a set of trait bounds that hides a seemingly dynamic implementation.
Together, these create a very complex set of trait bounds to satisfy.
So I'm hoping I'm not the only one using the Rust/Swagger generator, and that someone can
point me to an example of a middleware implementation in the expected two-service format
(MakeFoo/Foo) that inspects the raw HTTP request, and possibly returns an early result
instead of proceeding with the chain.
Like I said, the short term purpose is to plug in my authentication code, but in the long term
I would like to clean up my code by moving some things I do in every service method into the
context that is built by the create method.
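To make the shape of what I'm after concrete, here is my best attempt so far at the two-step pattern: a minimal, untested sketch against hyper's Service trait (0.14, which is what the generator targets as far as I know). The names MakeApiKeyAuthenticator and ApiKeyAuthenticator are my own, the expected key is hard-coded, and I have ignored the generated context plumbing entirely; as far as I can tell the generated services actually operate on (Request<Body>, C) pairs, which is probably exactly the part that makes the real trait bounds hard.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use hyper::service::Service;
use hyper::{Body, Request, Response, StatusCode};

/// Outer "maker": wraps an inner MakeService and produces an
/// ApiKeyAuthenticator around every service the inner maker creates.
pub struct MakeApiKeyAuthenticator<T> {
    inner: T,
}

impl<T> MakeApiKeyAuthenticator<T> {
    pub fn new(inner: T) -> Self {
        Self { inner }
    }
}

impl<T, Target> Service<Target> for MakeApiKeyAuthenticator<T>
where
    T: Service<Target>,
    T::Future: Send + 'static,
{
    type Response = ApiKeyAuthenticator<T::Response>;
    type Error = T::Error;
    type Future =
        Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, target: Target) -> Self::Future {
        let fut = self.inner.call(target);
        Box::pin(async move { fut.await.map(ApiKeyAuthenticator::new) })
    }
}

/// Inner per-connection service: validates the api_key header and either
/// delegates to the wrapped service or returns an early response.
pub struct ApiKeyAuthenticator<S> {
    inner: S,
}

impl<S> ApiKeyAuthenticator<S> {
    pub fn new(inner: S) -> Self {
        Self { inner }
    }
}

impl<S> Service<Request<Body>> for ApiKeyAuthenticator<S>
where
    S: Service<Request<Body>, Response = Response<Body>>,
    S::Future: Send + 'static,
{
    type Response = Response<Body>;
    type Error = S::Error;
    type Future =
        Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, req: Request<Body>) -> Self::Future {
        // Hard-coded placeholder check; real validation (and any
        // back-channel server calls) would go here.
        let authorized = req
            .headers()
            .get("api_key")
            .map(|v| v.as_bytes() == b"expected-key")
            .unwrap_or(false);

        if authorized {
            // Key checks out: proceed with the wrapped service.
            Box::pin(self.inner.call(req))
        } else {
            // Short-circuit: return a custom response without ever
            // calling the wrapped service.
            let resp = Response::builder()
                .status(StatusCode::UNAUTHORIZED)
                .body(Body::from("invalid api_key"))
                .expect("static response should build");
            Box::pin(async move { Ok(resp) })
        }
    }
}

If something like this could be made to satisfy the generated bounds, it would slot into the create function where MakeAllowAllAuthenticator::new currently sits.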

Related

Node.js express app architecture with testing

Creating a new project with an auto-testing feature.
It uses basic Express.
The question is how to organize the code in order to be able to test it properly (with mocha).
Almost every controller needs access to the database in order to fetch some data to proceed. But while testing, reaching the actual database is unwanted.
There are two ways, as I see it:
Stubbing a function which intends to read/write from/to the database.
Building two separate controller builders, one of which will be used from the endpoints, and the other from tests.
Like so:
let myController = new TargetController(AuthService, DatabaseService, ...);
myController.targetMethod();

let myTestController = new TargetController(FakeAuthService, FakeDatabaseService, ...);
myTestController.targetMethod(); // Uses fake services that don't have any remote connection functionality

Every property passed is set to a private variable inside the constructor of the controller. And by using this private variable, we don't need to care about what type of call it is: a test or a production one.
Is that a good approach, or should it be remade?
Alright, it's considered to be good practice, as it is actually the dependency injection pattern.

How to handle provider-like objects in Actix-Web

I have a validate endpoint, which takes in a JWT that's POSTed to it, and that works fine. Currently set up like this in the application setup:
let server = HttpServer::new(|| {
    App::new()
        .wrap(Logger::default())
        .route("/ping", web::get().to(health_check))
        .route("/validate", web::post().to(validate))
})
I'm now looking to provide some JWKs, which I've done via a provider-style setup, where the calling code can just call get_key and the provider should handle caching and refreshing that cache automatically every X minutes so that I don't have to call the endpoint that provides the JWKs on each request. However, obviously that will only work if I can maintain the same instance of the provider object.
What would be the best way of doing this? Could I create an instance of the provider at the same level as the server creation code and pass in the results of provider.get_key through the app_data method that actix provides? Or perhaps do the same via middleware somehow?
I've tried passing the entire provider instance through the app_data method but can't get this to work (I think because my struct can't implement Copy due to containing a Vec), so I'm trying to find alternate methods of doing it!
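For what it's worth, app_data should not require Copy or Clone on the provider itself: web::Data wraps its value in an Arc, and cloning the Data handle only bumps a reference count. A minimal sketch, assuming actix-web 4 and a hypothetical KeyProvider standing in for the JWK provider:

use std::sync::Mutex;

use actix_web::{web, App, HttpResponse, HttpServer};

// Hypothetical stand-in for the JWK provider described above.
struct KeyProvider {
    // Interior mutability lets the cache be refreshed behind the Arc.
    keys: Mutex<Vec<String>>,
}

impl KeyProvider {
    fn get_key(&self) -> Option<String> {
        self.keys.lock().unwrap().first().cloned()
    }
}

async fn validate(provider: web::Data<KeyProvider>) -> HttpResponse {
    match provider.get_key() {
        Some(_key) => HttpResponse::Ok().finish(),
        None => HttpResponse::ServiceUnavailable().body("no keys cached"),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Build the provider once, outside the factory closure, so all
    // workers share one instance; web::Data is an Arc internally.
    let provider = web::Data::new(KeyProvider {
        keys: Mutex::new(vec!["initial-key".to_string()]),
    });

    HttpServer::new(move || {
        App::new()
            .app_data(provider.clone())
            .route("/validate", web::post().to(validate))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

A background refresh task spawned at startup can hold another clone of the same Data handle and update the Mutex-guarded cache on an interval, so handlers never need to fetch the JWKs per request.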

Node typescript library environment specific configuration

I am new to Node and TypeScript. I am working on developing a Node library that reaches out to another REST API to get and post data. This library is consumed by any UI application to send and receive data from the API service. Now my question is: how do I maintain environment-specific configuration within the library? For example:
Consumer calls GET /user
The user endpoint on the consumer side calls a method in the library to get the data
But if the consumer is calling the user endpoint in the test environment, I want the library to hit the following API URL:
for test http://api.test.userinformation.company.com/user
for beta http://api.beta.userinformation.company.com/user
As far as I understand, the library is just a reference and runs within the consumer application. The library can certainly get the environment from the consumer, but I do not want the consumer having to specify the full URL that needs to be hit, since that would be the responsibility of the library to figure out.
Note: the URL is not the only problem; I can solve that with an environment switch within the library. I also have some client secrets based on environments, which I can neither store in the code nor check in to source control.
Additional Information
(as per jfriend00's request in comments)
My library has a LibExecutionEngine class and one method in it, which is the entry point of the library:
export class LibExecutionEngine implements ExecutionEngine {
  constructor(
    private environment: Environments,
    private trailLoader: TrailLoader
  ) {}

  async GetUserInfo(
    userId: string,
    userGroupVersion: string
  ): Promise<UserInfo> {
    // Fixed: the original snippet called this.userLoader, but the
    // constructor only declares trailLoader.
    return this.trailLoader.loadUserInfo(userId, userGroupVersion)
  }
}

export interface ExecutionEngine {
  GetUserInfo(userId: string, userGroupVersion: string): Promise<UserInfo>
}
The consumer starts using the library by creating an instance of LibExecutionEngine and then calling GetUserInfo, for example. As you can see, the constructor for the class accepts an environment. Once I have the environment in the library, I need to somehow load the values for the keys API Url, APIClientId, and APIClientSecret from within the constructor. I know of a few ways to do this:
Option 1
I could do something like this._configLoader.SetConfigVariables(environment), where configLoader.ts is a class that loads the specific configuration values from files ({environment}.json). But this would mean I maintain the above-mentioned URL variables and the respective clientid and clientsecret in a JSON file, which I should not be checking in to source control.
Option 2
I could use the dotenv npm package and create one .env file where I define the three keys, with the values stored in the deployment configuration. That works perfectly for an independently deployable application, but this is a library and doesn't run by itself in any environment.
Option 3
Accept a configuration object from the consumer, which means that the consumer of the library provides the URL, clientId, and clientSecret based on the environment for the library to access. But why should the responsibility of maintaining the necessary variables for the library be put on the consumer?
Please suggest on how best to implement this.
So, I think I got some clarity. Let's call my library L, the consuming app C1, and the API that the library calls out to for user info A. All are internal applications in our org and have an OAuth setup to be able to communicate; our infosec team provides those client IDs and secrets to individual applications. So my clarity here is: C1 would request its own clientid and clientsecret to hit A's URL, and C1 would then pass the three config values to the library, which the library uses to communicate with A. The same applies for some C2 in the future.
Which would mean that L somehow needs to accept a full configuration object, with all required config values, from its consumers C1, C2, etc.
Yes, that sounds like the proper approach. The library is just some code doing what it's told. It's the client in this case that has to fetch the clientid and clientsecret from the infosec team, maintain them, and keep them safe, and the client also has the URL that goes with them. So the client passes all of this into your library, ideally just once per instance, and you then keep it in your instance data for the duration of that instance.

Independent NPM library that validates request based on swagger file

We are building APIs using Swagger, AWS API Gateway, and Lambda functions with NodeJS. The API Gateway will do the request validation; however, as per the design, the Lambda functions need to re-validate the request object as an API Gateway Proxy Request Event. This makes sense, as in theory we can reuse the Lambda functions by invoking them via another event source (e.g. SNS).
Therefore we need a NodeJS tool which can validate the request (not only the body but also params, etc.) based on the Swagger spec - exactly what swagger-tools and a few other tools (e.g. swagger-request-validator) are doing, but not as middleware.
I did some searching but could not find one; I also looked into the swagger-tools source code, and reckon its validation component was written in a way that cannot easily be used separately.
Any suggestion is welcome. Thanks in advance.
You can use swagger-model-validator.
var Validator = require('swagger-model-validator');
var swaggerFile = require("./swagger.json");

const validator = new Validator(swaggerFile);
console.log(validator.validate({
    name: 'meg'
}, swaggerFile.definitions.Pet, swaggerFile.definitions, true).GetErrorMessages())
This outputs:
[ 'photoUrls is a required field' ]
validator.validate returns an object, so you can also check if the returned object contains anything under the errors attribute. It should be as simple as
if (validator.validate({
    name: 'meg'
}, swaggerFile.definitions.Pet, swaggerFile.definitions, true).errors) {
    // do something with the error
}
I have used Swagger's sample JSON for this answer.

sails: disable `blueprints actions` in production, since it creates a huge security footprint?

Getting acquainted with Sails for Node.
One thing I need to get used to is the 'automagic' way in which routes for controller methods are set up using blueprints.
For example, from the docs, if action blueprints are enabled (which they are by default), GET, POST, PUT, and DELETE routes will be generated for every one of a controller's actions.
E.g. from the docs, when you've got the controller method EmailController.send, the following routes are created:
* `EmailController.send`
* :::::::::::::::::::::::::::::::::::::::::::::::::::::::
* `GET /email/send/:id?`
* `POST /email/send/:id?`
* `PUT /email/send/:id?`
* `DELETE /email/send/:id?`
The docs specifically state: actions are enabled by default, and are OK for production; however, you must take great care not to inadvertently expose unsafe controller logic to GET requests.
Normally I would write a controller method for ONE specific HTTP verb (e.g. POST). That's clearly not compatible with this automagic wiring, since these methods would be exposed on GETs (and PUTs and DELETEs) as well, which would leave a huge security footprint imho.
So: what's the practical use of enabling these actions? To me, it seems like a huge security risk. On the other hand, I can (theoretically) imagine writing all controller methods with conditional logic to discriminate between HTTP verbs, but for most controller methods this just doesn't make sense.
So help me out: What's the advantage of working with these actions which Sails seems to try to nudge me towards? Or is it just a way to get going quickly, but really not meant for production?
Thanks for wrapping my head around this.
Action blueprints automatically create routes to all the available controller methods. I personally turn them off and do my routing manually.
RESTful blueprints automatically generate the controller methods themselves, which would then have routes created for them by the action blueprints. I believe these are the REST defaults:
* GET /boat/:id? -> BoatController.find
* POST /boat -> BoatController.create
* PUT /boat/:id -> BoatController.update
* DELETE /boat/:id -> BoatController.destroy