Invalid state on Azure, but working locally - Node.js

I have an Azure Active Directory tenant that I wish to authenticate with from my Node.js application running on an Azure App Service instance. I'm using passportjs and passport-azure-ad to do this.
Locally everything works fine. I can authenticate with the Azure AD tenant and it returns back to my page correctly. However on Azure it fails with the error:
authentication failed due to: In collectInfoFromReq: invalid state received in the request
My configuration is exactly the same (apart from redirectUrl), as I'm using the same tenant for local testing as in Azure, yet it still fails. I've set up the proper reply URLs, and the authentication returns to my application.
Here is my config:
{
  identityMetadata: `https://login.microsoftonline.com/${tenantId}/.well-known/openid-configuration`,
  clientID: `${clientId}`,
  responseType: 'id_token',
  responseMode: 'form_post',
  redirectUrl: 'https://localhost:3000/auth/oidc/return',
  allowHttpForRedirectUrl: false,
  scope: [ 'openid' ],
  isB2C: false,
  passReqToCallback: true,
  loggingLevel: 'info'
}
I'm using the OIDCStrategy.
My authentication middleware:
passport.authenticate('azuread-openidconnect', {
  response: res,
  failureRedirect: '/auth/error',
  customState: '/'
});
I've compared the encoded state on the authorize request vs the returned response, and they differ in the same way locally as well as on Azure, yet Azure is the only one complaining. Examples of how the states differ:
Azure:
Request state: CUSTOMEwAuZcY7VypgbKQlwlUHwyO18lnzaYGt%20
Response state: CUSTOMEwAuZcY7VypgbKQlwlUHwyO18lnzaYGt
localhost:
Request state: CUSTOMTAYOz2pBQt332oKkJDGqRKs_wAo90Pny%2F
Response state: CUSTOMTAYOz2pBQt332oKkJDGqRKs_wAo90Pny/
I've also tried removing customState completely yet it still fails.
Anyone know what's going on here? Am I configuring it incorrectly?
Edit: It appears that this may not be an issue with passport-azure-ad. I'm not sure yet, but some debugging revealed that there is no set-cookie header on the login request to my app. The session is created, but no cookie is set, so the returning response is unable to look up the session info (including the state) and compare it. The result is that it reports an invalid state since it's unable to retrieve data from the session.

Turns out the problem was that the session was never properly created, so there was no state for passport-azure-ad to compare. The reason was that I had configured express-session to use secure session cookies, under the assumption that since I was connecting through the https://...azurewebsites.net address the connection was secure. That is not technically the case, though.
Azure runs a load balancer in front of the Web Application effectively proxying connections from the outside to my app. This proxy is where the secure connection is terminated and then traffic is routed unencrypted to my application.
Browser -(HTTPS)> Load balancer -(HTTP)> Application
The result is that Node did not report the connection as secure unless I set the configuration option trust proxy:
app.set('trust proxy', true);
When this option is set, Express will check the X-Forwarded-Proto header to determine which protocol was used to connect to the proxy server (in this case the load balancer). This header contains either http or https depending on the connection protocol.
For Azure, though, this is still not sufficient. The Azure load balancer does not set the X-Forwarded-Proto header either; instead it uses x-arr-ssl. This is not a big problem, though, as iisnode (the runtime I'm using to run Node on IIS in Azure) has an option called enableXFF that will populate the X-Forwarded-Proto header based on the external protocol of the connection. Setting both these options enables express-session to set the secure cookie, keeping the session stored and allowing passport-azure-ad to store and compare state information on authentication.
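For reference, here is a minimal sketch of how these pieces might fit together (the session secret and cookie options are placeholders; enableXFF itself is set in web.config, not in code):

const express = require('express');
const session = require('express-session');

const app = express();

// Trust the Azure load balancer so Express derives req.secure
// from the X-Forwarded-Proto header instead of the raw socket.
app.set('trust proxy', true);

app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true // only send the session cookie over HTTPS connections
  }
}));

// Note: in web.config, iisnode also needs enableXFF="true" on the <iisnode>
// element so that X-Forwarded-Proto reflects the original HTTPS connection
// (the Azure load balancer itself only sets x-arr-ssl).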
PS: Big thanks to Scott Smith's blog and its comments for providing the answer:
http://scottksmith.com/blog/2014/08/22/using-secure-cookies-in-node-on-azure/

This is a known encoding issue with the passport-azure-ad module. See:
"State" gets encoded and causes "collectInfoFromReq: invalid state received" #309
"invalid state received in the request" causing infinite loop on Login #247
You could upgrade the module to v3.0.7 or a newer version to fix it.
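For example, with npm (assuming the dependency is managed through package.json; any release at or above v3.0.7 should include the fix):

npm install passport-azure-ad@latest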

Related

How can I configure Identity Server to correctly validate JWT tokens after an Azure App Slot Switch?

I have an ASP.NET Core hosted API which uses IdentityServer and issues JwtBearer tokens to my desktop-based client. This is generally working, and I can log in and use the application as expected.
However, when using Azure Deployment Slots I run into problems due to the Issuer validation.
When swapping slots, Azure doesn't swap the running code, but rather just swaps the pointing URLs so that a warmed-up running app is ready to serve requests immediately. However, the IdentityServer implementation seems to keep a reference to the OLD slot URL and use this as part of the issuer validation.
This means that once the slots are swapped, not only are all the clients effectively logged out (which is bad enough), but when the client logs in again the token that the IdentityServer supplies isn't even valid, because it has the wrong URI for the issuer.
This results in the error:
Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler:
IdentityServerJwtBearer was not authenticated. Failure message:
IDX10205: Issuer validation failed. Issuer:
'https://my-app-name.azurewebsites.net'. Did not match:
validationParameters.ValidIssuer:
'https://my-app-name-deployment.azurewebsites.net' or
validationParameters.ValidIssuers: 'null'.
I have tried to disable Issuer Validation by doing this in the setup:
services.AddAuthentication(options =>
{
})
.AddCookie()
.AddJwtBearer(options =>
{
    options.TokenValidationParameters.ValidateIssuer = false;
})
.AddIdentityServerJwt();
However, this doesn't seem to make any difference. Am I setting this option in the wrong place? Are there other settings that also need to be configured to bypass this check?
I have also tried setting the list of ValidIssuers to include both 'https://my-app-name.azurewebsites.net' and 'https://my-app-name-deployment.azurewebsites.net' in the hopes that either one would be accepted and allow tokens to be validated by either slot, but again this seems to make no difference.
Alternatively, is there a way to prevent IdentityServer caching the issuer URL, or a way to flush that cache without restarting the application? Currently, once the slots are swapped, the only way I can get the desktop application to access the API is to restart the API application and then log in again afterwards (just logging out and logging in still results in a token that the server cannot validate, even though the server just issued it).
I feel like I must be missing something glaringly obvious, but I can't see what it is...
To configure the IdentityServer JWT Bearer you can use a configure call:
services.Configure<JwtBearerOptions>(IdentityServerJwtConstants.IdentityServerJwtBearerScheme,
    options =>
    {
        options.TokenValidationParameters.ValidateIssuer = false;
    });
This way the ValidateIssuer flag is set on the IdentityServerJwt scheme, rather than on the new JwtBearer scheme that the original code set up, which was entirely separate from the IdentityServer one.
Once the configuration is applied to the correct scheme, it is also possible to use the ValidIssuers array to include the Azure slot URLs that should be accepted.
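For illustration, a rough sketch of that variant based on the same Configure call, with the two slot host names taken from the error message above as placeholders:

services.Configure<JwtBearerOptions>(IdentityServerJwtConstants.IdentityServerJwtBearerScheme,
    options =>
    {
        // Accept tokens whose issuer is either the production or the deployment slot URL.
        options.TokenValidationParameters.ValidIssuers = new[]
        {
            "https://my-app-name.azurewebsites.net",
            "https://my-app-name-deployment.azurewebsites.net"
        };
    });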

What is the best way to call an authenticated HTTP Cloud Function from Node JS app deployed in GCP?

We have an authenticated HTTP cloud function (CF). The endpoint for this CF is public but because it is authenticated, it requires a valid identity token (id_token) to be added to the Authorization header.
We have another Node.js application that is deployed in the same Google Cloud. What we want is to call the CF from the Node application, for which we will need a valid id_token.
The GCP documentation for authentication is too generic and does not cover this kind of scenario.
So what is the best way to achieve this?
Note
Like every Google Kubernetes deployment, the Node application has a service account attached to it, which already has Cloud Functions Invoker access.
Follow Up
Before posting the question here I had already followed the same approach as @guillaume mentioned in his answer.
In my current code, I am hitting the metadata server from the Node.js application to get an id_token, and then I am sending the id_token in an Authorization: 'Bearer [id_token]' header on the CF HTTP request.
However, I am getting a 403 Forbidden when I do that, and I am not sure why.
I can verify the id_token fetched from the metadata server with the following endpoint.
https://www.googleapis.com/oauth2/v1/tokeninfo?id_token=[id_token]
It's a valid one.
It has the following fields (decoding the id_token at https://jwt.io/ shows the same fields in the payload):
{
  "issued_to": "XXX",
  "audience": "[CLOUD_FUNTION_URL]",
  "user_id": "XXX",
  "expires_in": 3570,
  "issuer": "https://accounts.google.com",
  "issued_at": 1610010647
}
There is no service account email field!
You have what you need in the documentation but I agree, it's not clear. It's named function-to-function authentication.
In fact, because the metadata server is available on every compute element on Google Cloud, you can reuse this solution almost everywhere (you can't generate an id_token on Cloud Build; I wrote an article with a workaround for this).
That article also provides a great workaround for local testing (because you don't have a metadata server on your computer!).
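As a rough sketch of that approach in Node.js (axios is assumed as the HTTP client and the function URL is a placeholder; this only works where a metadata server is available, e.g. on GKE, not on a local machine):

const axios = require('axios');

// Placeholder: the full HTTPS trigger URL of the authenticated Cloud Function.
const CLOUD_FUNCTION_URL = 'https://REGION-PROJECT.cloudfunctions.net/myFunction';

async function callCloudFunction() {
  // Ask the metadata server for an identity token whose audience is the function URL.
  const tokenResponse = await axios.get(
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity',
    {
      params: { audience: CLOUD_FUNCTION_URL },
      headers: { 'Metadata-Flavor': 'Google' }
    }
  );
  const idToken = tokenResponse.data;

  // Call the authenticated Cloud Function with the id_token in the Authorization header.
  const response = await axios.get(CLOUD_FUNCTION_URL, {
    headers: { Authorization: `Bearer ${idToken}` }
  });
  return response.data;
}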

TAI for MS Azure with Websphere Application Server v9 CWWSS8017E: Authentication Error

I'm trying to configure SAML between MS Azure AD and a WebSphere v9 CF11 server that's sitting in AWS, but it is not recognizing the TAI setup.
I've followed all the steps here: https://www.ibm.com/support/knowledgecenter/en/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/tsec_enable_saml_sp_sso.html and here https://www.ibm.com/support/knowledgecenter/en/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/twbs_configuresamlssopartners.html
I've installed the SAMLSA app in WebSphere, imported the metadata file provided by my Azure admin, and imported the certificate as well. I've set up the ACSTrustAssociationInterceptor interceptor and put in (what I thought was) the right sso_1.sp.acsUrl and other settings for the server.
The SystemOut logs show that the ACSTrustAssociationInterceptor is loading:
SECJ0121I: Trust Association Init class com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor loaded successfully
but the version is null:
SECJ0122I: Trust Association Init Interceptor signature:
After setting it all up as above, when I go to the URL it just shows:
Error 403: AuthenticationFailed
And the log has errors about a missing cookie:
SECJ0126E: Trust Association failed during validation. The exception is com.ibm.websphere.security.WebTrustAssociationFailedException: CWWSS8017E: Authentication Error: Single-Sign-on cookie is not present or could not be verified. Please login to the SAML Identity Provider, and try again.
It's like it's never "intercepted" to be passed on; it just fails, and no network traffic goes to the AD server.
When going to the URL it should redirect me to the MS login page and then back to the app, but it doesn't.
It sounds like you might be missing an sso_1.sp.login.error.page property definition. Without that property, the expectation is that the user will go to the IdP to initiate the sign-on; if you define the property and set its value to your IdP's login page, then the 403 you're getting (as a result of being unauthenticated) will end up redirecting you over to the IdP to initiate the sign-on process from there.
More info here in the "bookmark style" description: https://www.ibm.com/support/knowledgecenter/en/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/cwbs_samlssosummary.html
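For illustration, the custom TAI property might look something like the following, where the value is a placeholder for your Azure AD SAML sign-in URL (your Azure admin can confirm the exact endpoint):

sso_1.sp.login.error.page=https://login.microsoftonline.com/<tenantId>/saml2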

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
     |
     -- Lambda
I also have a custom domain name set on API Gateway. (UPDATE: I used the default API Gateway link and got the same problem, so I don't think this is a custom domain issue.)
When one service in the ECS cluster calls another service via API Gateway, I get:
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I notice when I start a local server that calls the API gateway I get a similar error:
{
"error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:
const response = await axios({
  method: req.method,
  url,
  baseURL,
  params: req.params,
  query: req.query,
  data: body || req.body,
  headers: req.headers,
  httpsAgent: new https.Agent({
    rejectUnauthorized: false // <<=== HERE!
  })
})
I get this instead ...
{
"message": "Forbidden"
}
When I call the underlying API Gateway URL directly in Postman it works. Somehow it reminds me of CORS, where the server seems to be blocking my server (either localhost or ECS/ELB) from accessing my API Gateway.
It may be quite confusing, so here is a summary of what I tried:
In the existing setup, services inside ECS may call one another via API Gateway. When that happens, it fails because of the HTTPS error.
To resolve it, I set rejectUnauthorized: false, but API gateway returns HTTP 403
When running on localhost, the error is similar
I tried calling ELB instead of API gateway, it works ...
There are various workarounds which introduce security implications instead of providing a proper solution. In order to fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com. to the DNS (this entry might already exist) and also to an SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console and ACM. The point is that with the current certificate, that alternate (internal!) host name will never match the certificate, which can only cover a single IP; therefore it's much more an infrastructural problem than a code problem.
Reviewing it once again: instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even a self-signed certificate is possible in this case, because it would never be accessed by any external client.
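As a hedged sketch of the client side of that idea in Node: if a service ever needs to call an internal endpoint secured with that self-signed or private-CA certificate, it can trust the CA explicitly instead of disabling verification (the CA file path and URL below are placeholders):

const fs = require('fs');
const https = require('https');
const axios = require('axios');

// Trust the internal CA (or the self-signed certificate) used for the
// internal hop, rather than setting rejectUnauthorized: false.
const httpsAgent = new https.Agent({
  ca: fs.readFileSync('/path/to/internal-ca.pem') // placeholder path
});

async function callInternalService() {
  const response = await axios.get(
    'https://someid.ap-southeast-1.elb.amazonaws.com/resource', // placeholder URL
    { httpsAgent }
  );
  return response.data;
}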
Concerning that HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps with setting up end-to-end encryption; only the public-facing interface of the API Gateway needs a CA-issued certificate, while for any internal communication a self-signed certificate should suffice.
This article is about the difference between ELB and ALB. It might be worth considering whether the most suitable load balancer for the given scenario has been chosen; in case no content-based routing is required, cutting down on unneeded complexity might help, as it would eliminate the need to define routing rules, which you should also review if sticking with the ALB. The question only shows the basic scenario and some code which fails, but not the routing rules.

401 Authorization Required integrating Hyperledger Composer REST API from Webapp

Introduction
I have a Hyperledger environment running in secure mode, set up by following this link: https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html
It works fine if I authenticate as specified in the document (hitting http://mydomain:3000/auth/github directly from the browser) and then access the REST API from http://mydomain:3000/explorer; I can authorize as various participants (i.e., issuing identities, adding them to the wallet, and setting one as default at a time) and can see the assets as per the .acl file.
Issue
But I started facing problems when I began integrating the REST APIs from my web application rather than directly from the browser. As a first step from my web app, I called http://mydomain:3000/auth/github to authenticate and then started calling the other APIs (transaction/list, etc.), but I always get:
Error 401: 'Authorization Required'
What I have tried
I gave my web application URL as the 'Redirect URL' in the environment variable for Hyperledger. Upon successful authentication (calling http://mydomain:3000/auth/github) it redirected to my web app's home page, but afterwards accessing the REST APIs (from the web app) again throws the 'Authorization Required' error.
Environment variable as below:
export COMPOSER_PROVIDERS='{
  "github": {
    "provider": "github",
    "module": "passport-github",
    "clientID": "CLIENT_ID",
    "clientSecret": "CLIENT_SECRET",
    "authPath": "/auth/github",
    "callbackURL": "/auth/github/callback",
    "successRedirect": "http://localhost:8080/home.html",
    "failureRedirect": "/"
  }
}'
I also incorporated the passport-github2 mechanism in my web application (i.e., registered my app with GitHub OAuth) and, upon successful login to my web application, called http://mydomain:3000/auth/github to authenticate to the blockchain, but that did not work out either.
I have a few questions:
Is it feasible to call the secured Hyperledger REST APIs from another web application?
If yes, how do I do it? I don't find that information in the Hyperledger Composer documentation.
Have been trying this for a week now and have no answers. Any help would be greatly appreciated. Please let me know if anything is unclear. Thanks.
I commented about this problem on one of the existing Hyperledger GitHub issues (link below), and I want to share the solution that worked for me.
https://github.com/hyperledger/composer/issues/142
Solution, as mentioned by user sstone1:
Since the REST server is on a different port number to your web application, you need to specify an additional option to your HTTP client to pass the cookies to the REST server. Using the Angular HTTP client, you add the withCredentials flag, for example:
via Angular:
this.http.get('http://mydomain:3000/api/MyAsset', { withCredentials: true })
via jQuery AJAX:
$.ajax({
  url: 'http://mydomain:3000/api/MyAsset',
  xhrFields: {
    withCredentials: true
  },
  headers: {
    ...
  }
})
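If the web app uses the Fetch API or plain axios instead of Angular's HttpClient or jQuery, the equivalent flag would be (a sketch, not from the original answer):

// Fetch API (in the browser): include cookies on the cross-origin request.
fetch('http://mydomain:3000/api/MyAsset', { credentials: 'include' })
  .then(res => res.json())
  .then(assets => console.log(assets));

// axios: the same withCredentials flag applies.
axios.get('http://mydomain:3000/api/MyAsset', { withCredentials: true })
  .then(response => console.log(response.data));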
