I would like to secure a LoopBack-based app using SAML 2.0 and OneLogin. I believe I should use the loopback-component-passport and passport-saml modules to achieve this. However, I'm really struggling to find any good documentation for my use case; the provided sample seems outdated and not very accurate. Do you have any useful pointers or advice that would help me get started?
Thanks
SAML authentication in LoopBack is poorly documented, but supported. Reading the source code of passport-configurator shows that the following configuration in providers.json will work:
"saml": {
"name": "saml",
"authScheme" : "saml",
"module": "passport-saml",
"callbackURL": "",
"entryPoint": "",
"issuer": "",
"audience": "",
"certPath": "",
"privateCertPath": "",
"decryptionPvkPath": "",
...
}
Here the ellipsis stands for any additional options supported by the passport-saml provider. Note that no special processing is performed on these options; so, for instance, you will need to pass certPath, privateCertPath, etc. as the certificate/key contents (strings) rather than as paths to files.
See how passport is configured using these properties here.
I don't think there is a clear explanation of this in LoopBack's docs, so what I would do is figure out how to configure the SAML provider in providers.json correctly in order to generate the right Passport auth strategy (in your case, follow the passport-saml docs to figure out the exact parameters you need to pass).
LoopBack uses the loopback-component-passport module to read the providers and create the Passport strategies. You can dig into this file to see exactly how they do it.
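For context, here is a minimal sketch of how loopback-component-passport is typically wired up in the server boot script, following the module's documented boilerplate (the model names assume LoopBack's built-in defaults; adjust them to your own models):

const loopback = require('loopback');
const PassportConfigurator = require('loopback-component-passport').PassportConfigurator;

const app = loopback();
const passportConfigurator = new PassportConfigurator(app);

// Load the provider definitions, including the "saml" entry shown above
const providers = require('./providers.json');

passportConfigurator.init();
passportConfigurator.setupModels({
  userModel: app.models.User,
  userIdentityModel: app.models.UserIdentity,
  userCredentialModel: app.models.UserCredential
});

// Each entry in providers.json becomes a Passport strategy
Object.keys(providers).forEach(function (name) {
  const config = providers[name];
  config.session = config.session !== false;
  passportConfigurator.configureProvider(name, config);
});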
We have an Angular application which calls multiple APIs. Currently we are only interested in injecting the B2C access token for two of the APIs and want to avoid it for the others.
Our APIs are as follows:
https://testdomain.com/onprem/proxy/handler/api/account/someendpoint
https://testdomain.com/onprem/proxy/handler/something/api/account/someendpoint
https://testdomain.com/onprem/something/api/account/endpoint
https://testdomain.com/cloud/api/app1/something/endpoint
Since the structure of the API calls is not uniform, our current implementation is as below:
export const protectedResourceMap: [string, string[]][] = [
  ['/cloud/api/app1/account/gettestaccount', ['scope1']],
  ['/cloud/api/app2/account/getanotheraccount', ['scope2']],
  ['/onprem/*/*/*/*/*', null],
  ['/onprem/*/*/*/*/*/*', null],
  ['/onprem/*/*/*/*', null]
];
So the MSAL Interceptor matches req.url against the protectedResourceMap using minimatch.
In the above example I specified 3 different patterns with scopes: null, so getScopesForEndpoint() will be called 5 times, even though 3 of those 5 calls are unwanted.
Could someone please suggest a better approach to adding URLs to protectedResourceMap with scopes: null, so that I can reduce the calls to getScopesForEndpoint() and improve the performance of the front-end app?
Thanks in advance.
getScopesForEndpoint will always be called, even before the change to minimatch. We can look at improving getScopesForEndpoint so that it does not call minimatch when null is set, if there are concerns about its performance in this scenario.
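If the goal is simply to keep the map small, one option worth trying - this is only a sketch, and it assumes the interceptor's minimatch call keeps minimatch's default globstar behavior, where '**' matches any number of path segments - is to collapse the three onprem patterns into a single entry:

export const protectedResourceMap: [string, string[]][] = [
  ['/cloud/api/app1/account/gettestaccount', ['scope1']],
  ['/cloud/api/app2/account/getanotheraccount', ['scope2']],
  // '**' spans any number of path segments, so this one pattern can
  // stand in for the three segment-count-specific onprem patterns
  ['/onprem/**', null]
];

Whether this actually reduces the number of getScopesForEndpoint() calls depends on the MSAL version in use, but it at least shrinks the list of patterns that has to be matched per request.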
I have a working ordinary Hapi application that I'm planning to migrate to Swagger. I installed swagger-node using the official instructions, and chose Hapi when executing 'swagger project create'. However, I'm now confused because there seem to be several libraries for integrating swagger-node and hapi:
hapi-swagger: the most popular one
hapi-swaggered: somewhat popular
swagger-hapi: unpopular and not that active but used by the official Swagger Node.js library (i.e. swagger-node) as default for Hapi projects
I thought swagger-hapi was the "official" approach, until I tried to find information on how to do various configurations on Hapi routes (e.g. authorization, scoping, etc.). It also seems that the approaches are fundamentally different: swagger-hapi takes the Swagger definition as input and generates the routes automatically, whereas hapi-swagger and hapi-swaggered take a similar approach to each other, only generating Swagger API documentation from plain old Hapi route definitions.
Considering the amount of contributors and the number of downloads, hapi-swagger seems to be the way to go, but I'm unsure on how to proceed. Is there an "official" Swagger way to set up Hapi, and if there is, how do I set up authentication (preferably by using hapi-auth-jwt2, or other similar JWT solution) and authorization?
EDIT: I also found swaggerize-hapi, which seems to be maintained by PayPal's open source kraken.js team, which indicates that it might have some kind of corporate backing (always a good thing). swaggerize-hapi seems to be very similar to hapi-swagger, although the latter seems to provide more out-of-the-box functionality (mainly Swagger Editor).
Edit: Point 3 from your question, and understanding what swagger-hapi actually does, is very important. It does not directly serve the swagger-ui HTML. It is not intended to, but it enables the whole swagger idea (which the projects in points 1 and 2 actually reverse a bit). Please see below.
It turns out that when you are using swagger-node and swagger-hapi you do not need any of the other packages you mentioned, except for using swagger-ui directly (which all the others use anyway - they wrap it in their dependencies).
I want to share my understanding so far of this hapi/swagger puzzle; hopefully the 8 hours I spent can help others as well.
Libraries like hapi-swaggered, hapi-swaggered-ui, and also hapi-swagger all follow the same approach, which might be described like this:
You document your API while you are defining your routes
They sit somewhat apart from the main idea of swagger-node and the boilerplate hello_world project created with swagger-cli, which you mentioned you use.
While swagger-node and swagger-hapi (NOTE that it is different from hapi-swagger) say:
You define all your API documentation and routes in a single centralized place - swagger.yaml
and then you just focus on writing controller logic. The boilerplate project provided with swagger-cli already exposes this centralized swagger.yaml as JSON through the /swagger endpoint.
Now, because the swagger-ui project, which all the above packages use for showing the UI, is just a bunch of static HTML, you have two options in order to use it:
1) self-host this static HTML from within your app
2) host it on a separate web app, or even load the index.html directly from the file system.
In both cases you just need to feed swagger-ui with your swagger JSON - which, as said above, is already exposed by the /swagger endpoint.
The only caveat if you choose option 2) is that you need to enable CORS for that endpoint, which happens to be very easy: just change your default.yaml to also make use of the cors bagpipe. Please see this thread for how to do this.
As #Kitanotori said above, I also don't see the point of documenting the code programmatically. The idea of describing everything in one place and making both the code and the documentation engine understand it is great.
We have used Inert, Vision, hapi-swagger.
server.ts
import * as Inert from '@hapi/inert';
import * as Vision from '@hapi/vision';
import Swagger from './plugins/swagger';
...
...
// hapi server setup
...
const plugins: any[] = [Inert, Vision, Swagger];
await server.register(plugins);
...
// other setup
./plugins/swagger
import * as HapiSwagger from 'hapi-swagger';
import * as Package from '../../package.json';
const swaggerOptions: HapiSwagger.RegisterOptions = {
  info: {
    title: 'Some title',
    version: Package.version
  }
};
export default {
  plugin: HapiSwagger,
  options: swaggerOptions
};
We are using Inert, Vision and hapi-swagger to build and host swagger documentation.
We load those plugins in exactly this order, do not configure Inert or Vision and only set basic properties like title in the hapi-swagger config.
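One detail worth adding: hapi-swagger only picks up routes that carry the documentation tag (by default 'api'), so with the setup above a documented route would look roughly like this (the path and handler are placeholders):

server.route({
  method: 'GET',
  path: '/accounts/{id}',
  options: {
    description: 'Get an account by id',
    tags: ['api'], // hapi-swagger only documents routes tagged with 'api'
    handler: (request, h) => {
      return { id: request.params.id };
    }
  }
});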
I'm trying to programmatically create tasks/bugs on Maniphest: https://www.phacility.com/phabricator/maniphest/
but I can't quite seem to find a RESTful API that can do this.
Am I totally missing something, or does one not currently exist?
Conduit (https://secure.phabricator.com/book/phabricator/article/conduit/) should work for you. There is a method called createtask (looks like https://secure.phabricator.com/conduit/method/maniphest.createtask/) that is exactly what you are looking for.
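For a rough idea of what that call looks like over plain HTTP (the host and token below are placeholders, and I'm assuming the usual Conduit convention of passing simple parameters as form fields and list-valued parameters as JSON-encoded strings):

// Hypothetical sketch: create a Maniphest task via Conduit over HTTP
async function createTask() {
  const params = new URLSearchParams();
  params.set('api.token', 'api-xxxxxxxxxxxx'); // Settings -> Conduit API Tokens
  params.set('title', 'Task created via Conduit');
  params.set('description', 'Created programmatically');
  // List-valued fields such as ccPHIDs go in as JSON-encoded strings
  params.set('ccPHIDs', JSON.stringify(['PHID-USER-xxxx']));

  const res = await fetch('https://phabricator.example.com/api/maniphest.createtask', {
    method: 'POST',
    body: params
  });
  const json = await res.json();
  if (json.error_info) throw new Error(json.error_info);
  return json.result; // the newly created task
}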
You can access Phabricator's API to create tasks, query user info, etc.
Here is a demo of accessing the HTTPS API with Postman.
Note: for multi-value fields such as "ccPHIDs", you should use the format shown in the image.
In the Conduit UI test console, however, you have to use JSON format, like this: ["PHID-PROJ-xxx3", "PHID-PROJ-xx12"]
Phabricator's API is the worst API set I have used so far. So sick...
In WSO2 Enterprise Store 1.0.0 there is a lack of security in some respects.
For example, several public files contain sensitive data such as the location and clear-text password of keystores:
/store/config/publisher.json
/publisher/config/publisher.json
I'm still trying to figure out why these data are needed on the client side...
Is there any configuration setting to solve this issue?
You can solve this issue by adding the following URL mapping to the jaggery.conf inside both the publisher and store apps:
{
  "url": "/config/*",
  "path": "/"
}
Can anyone give a good comparison between:
https://github.com/ciaranj/connect-auth
and https://github.com/bnoguchi/everyauth
These seem to be the only options for Express/Connect.
I'm the author of everyauth.
I wrote everyauth because I found connect-auth lacking in terms of:
Easy and powerful configuration
To give a particular example, one area where it was lacking in configurability was setting Facebook's "scope" security settings dynamically.
Good debugging support
I found connect-auth not so straightforward to debug in terms of pinpointing the part of the auth process that was failing. This is mostly due to the way that connect-auth sets up its nested callbacks.
With everyauth, I tried to create a system that solved the issues I ran into with connect-auth.
On configuration - every everyauth auth module is defined as a series of steps and configurable parameters. As a result, you have surgery-like precision over what parameters (e.g., hostname, callback url, etc.) or steps (e.g., getAccessToken, addToSession) you want altered. With connect-auth, if you want to change one thing besides the few built in configurable parameters each auth strategy provides, you have to re-write the entire this.authenticate function that defines the logic for all of that strategy. In other words, it has less precision of configurability than everyauth. In other cases, you cannot use connect-auth, as is -- for example, achieving dynamic facebook "scope" configurability - i.e., asking users for more facebook permissions progressively as they get to portions of your app that require more permissions.
In addition to configurable steps and parameters, you can also take advantage of everyauth's auth module inheritance. All modules inherit prototypically from the everymodule auth module. All OAuth2-based modules inherit from the oauth2 module. Let's say you want all oauth2 modules to behave differently. All you need to do is modify the oauth2 auth module, and all oauth2-based modules will inherit that new behavior from it.
On debugging - everyauth, in my opinion, has better debugging visibility because of the very fact that it segments each module explicitly into the steps of which it is composed. By setting
everyauth.debug = true;
you get output of what steps in the authentication process have completed and which ones have failed. You also have granular control over how long each step in each auth strategy has before it times out.
On extensibility - I designed everyauth to maximize code re-use. As mentioned before, all modules inherit prototypically from the everymodule auth module. This means that you can achieve very modular systems while at the same time having fine grained control in terms of over-riding a specific step from an ancestor module. To see how easy it is to extend everyauth with your own auth module, just take a look at any of the specific auth modules in everyauth's source.
On readability - I find everyauth source easier to read in terms of what is going on. With connect-auth, I found myself jumping around the several files to understand under what contexts (i.e., during what steps, in everyauth parlance) each nested callback configured by a strategy was running. With everyauth, you know exactly what piece of logic is associated with which context (aka step). For instance, here is the code describing what happens when an oauth2 provider redirects back to your service:
.get('callbackPath',
  'the callback path that the 3rd party OAuth provider redirects to after an OAuth authorization result - e.g., "/auth/facebook/callback"')
  .step('getCode')
    .description('retrieves a verifier code from the url query')
    .accepts('req res')
    .promises('code')
    .canBreakTo('authCallbackErrorSteps')
  .step('getAccessToken')
    .accepts('code')
    .promises('accessToken extra')
  .step('fetchOAuthUser')
    .accepts('accessToken')
    .promises('oauthUser')
  .step('getSession')
    .accepts('req')
    .promises('session')
  .step('findOrCreateUser')
    .accepts('session accessToken extra oauthUser')
    .promises('user')
  .step('compile')
    .accepts('accessToken extra oauthUser user')
    .promises('auth')
  .step('addToSession')
    .accepts('session auth')
    .promises(null)
  .step('sendResponse')
    .accepts('res')
    .promises(null)
Without needing to explain how this works, it's pretty straightforward to see what an oauth2 strategy does. Want to know what getCode does? The source again is very straightforward:
.getCode( function (req, res) {
  var parsedUrl = url.parse(req.url, true);
  if (this._authCallbackDidErr(req)) {
    return this.breakTo('authCallbackErrorSteps', req, res);
  }
  return parsedUrl.query && parsedUrl.query.code;
})
Compare this all to connect-auth's facebook code, which, for me at least, took more time than I thought it should have to figure out what is getting executed when. This is mostly because of the way the code is spread across files and because of the use of a single authenticate method and nested callbacks to define authentication logic (everyauth uses promises to break the logic down into easily digestible steps):
that.authenticate= function(request, response, callback) {
  //todo: makw the call timeout ....
  var parsedUrl= url.parse(request.url, true);
  var self= this;
  if( parsedUrl.query && parsedUrl.query.code ) {
    my._oAuth.getOAuthAccessToken(parsedUrl.query && parsedUrl.query.code ,
      {redirect_uri: my._redirectUri}, function( error, access_token, refresh_token ){
        if( error ) callback(error)
        else {
          request.session["access_token"]= access_token;
          if( refresh_token ) request.session["refresh_token"]= refresh_token;
          my._oAuth.getProtectedResource("https://graph.facebook.com/me", request.session["access_token"], function (error, data, response) {
            if( error ) {
              self.fail(callback);
            }else {
              self.success(JSON.parse(data), callback)
            }
          })
        }
      });
  }
  else {
    request.session['facebook_redirect_url']= request.url;
    var redirectUrl= my._oAuth.getAuthorizeUrl({redirect_uri : my._redirectUri, scope: my.scope })
    self.redirect(response, redirectUrl, callback);
  }
}
A few other differences:
everyauth supports traditional password-based authentication. connect-auth, as of this writing, does not.
The foursquare and google support in connect-auth is based on the older OAuth 1.0a spec. everyauth's foursquare and google modules are based on those providers' implementations of the newer OAuth2 spec.
everyauth has OpenID support.
everyauth's documentation is much more comprehensive than connect-auth's.
Finally, to comment on Nathan's answer which was unsure about everyauth support for multiple providers at the same time, everyauth does support multiple, concurrent providers. The README on everyauth's github page provides instructions about how to use everyauth to achieve this.
To conclude, I think the choice of library depends on the developer. I, and others, find everyauth more powerful from our point of view. As Nathan's answer illustrates, others find connect-auth more attuned to their preferences. Whatever your choice, I hope this write-up about why I wrote everyauth helps you in your decision.
Both libraries are pretty close in feature set, especially in terms of supported providers. connect-auth provides out-of-the-box support for building your own OAuth providers, so that could well help you out if you're going to need that sort of thing.
The main thing I've noted between the two is that I find connect-auth much cleaner in the way it creates and accepts the middleware; you only have to look at the amount of pre-configuration required for the middleware in everyauth to see that it's going to get messy.
Another thing that isn't clear is whether everyauth supports multiple providers at the same time; with connect-auth, it seems possible/more straightforward, though I’ve yet to try this myself.
Thought I'd mention that there is now a new player in town called PassportJS, which offers a lot of the same benefits as everyauth; however, authentication providers are supplied as separate npm modules, so you opt in rather than include everything.
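To illustrate the opt-in model: each strategy ships as its own npm package, so you only pull in what you use. A minimal sketch with passport and passport-local (the verify logic is a placeholder):

import express from 'express';
import passport from 'passport';
import { Strategy as LocalStrategy } from 'passport-local';

// Register only the strategies you actually install
passport.use(new LocalStrategy((username, password, done) => {
  // Placeholder verify function; look the user up however you like
  if (username === 'demo' && password === 'secret') {
    return done(null, { id: 1, username });
  }
  return done(null, false);
}));

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(passport.initialize());

app.post('/login',
  passport.authenticate('local', { session: false }),
  (req, res) => res.json(req.user)
);

app.listen(3000);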
I spent a morning playing with everyauth and mongoose-auth only to find that the examples were broken and the projects were dead. Here are the commit histories:
https://github.com/jaredhanson/passport/commits/master - June 5th (it's June 18th)
https://github.com/ciaranj/connect-auth/commits/master - April 18th (2 months ago)
https://github.com/bnoguchi/mongoose-auth/commits/master - Feb 2012
Here's a Google Trends comparison of everyauth, connect-auth, and passportjs:
http://www.google.com/trends/explore?q=passportjs%2C+connect-auth%2C+everyauth#q=passportjs%2C%20connect-auth%2C%20everyauth&cmpt=q
My experience with each:
everyauth is far more configurable. For example, I wish to handle my own sessions; with everyauth's modularity and introspection, that is a straightforward task. On the other hand, I have found everyauth filled with minor bugs and incomplete and/or incorrect documentation. For me, each authentication strategy has required its own understanding and troubleshooting.
passport might be a good bet if you're doing everything "by the book". But any deviation could make life very difficult very quickly, making it, for me, a non-starter.