Confidential Rest-Api w/ Permissions - Always 403s - What Am I Doing Wrong? - node.js

I've tried for many hours now and seem to have hit a wall. Any advice/help would be appreciated.
Goal: I want to authorize the Express REST API (example client-id: "my-rest-api") routes (example resource: "WeatherForecast") across the various HTTP methods, mapped to client scopes (e.g. "create"/"read"/"update"/"delete"). I want to control those permissions through policies (for example, "Read - WeatherForecast - Permission" is granted if the policy "Admin Group Only" (user belongs to the admin group) is satisfied).
The REST API will not log users in; that is done from the front end talking directly to Keycloak, and the resulting token is then used to call the REST API.
Environment:
Keycloak 15.1.1 running in its own container, port 8080, on docker locally (w/ shared network with rest-api)
"my-rest-api": Nodejs 16.14.x w/ express 4.17.x server running on its own container on docker locally. Using keycloak-connect 15.1.1 and express-session 1.17.2.
Currently hitting "my-rest-api" through postman following this guide: https://keepgrowing.in/tools/kecloak-in-docker-7-how-to-authorize-requests-via-postman/
What Happens: I can log in from the Keycloak login page through Postman and get an access token. However, when I hit any endpoint that uses keycloak.protect() or keycloak.enforcer() (with or without specifying resource permissions) I can't get through. In the code below, the DELETE endpoint returns 200 plus the HTML of the Keycloak login page in Postman, and the GET returns 403 plus "Access Denied".
Current State of Realm
The test user (whom I log in with in Postman) has the group "Admin".
Client "my-rest-api" with access-type: Confidential with Authorization enabled.
Authorization set up:
Policy Enforcement Mode: Enforcing, Decision Strategy: Unanimous
"WeatherForecast" resource with uri "/api/WeatherForecast" and create/read/update/delete client scopes applied.
"Only Admins Policy" for anyone in group admin. Logic positive.
Permission for each of the client scopes for "WeatherForecast" resource with "Only Admins Policy" selected, Decision Strategy: "Affirmative".
Current State of Nodejs Code:
import express from 'express';
import bodyParser from 'body-parser';
import session from "express-session";
import KeycloakConnect from 'keycloak-connect';

const app = express();
app.use(bodyParser.json());

const memoryStore = new session.MemoryStore();
app.use(session({
  secret: 'some secret',
  resave: false,
  saveUninitialized: true,
  store: memoryStore
}));

const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
};

const keycloak = new KeycloakConnect({ store: memoryStore }, kcConfig);
app.use(keycloak.middleware({
  logout: '/logout',
  admin: '/',
}));

app.get('/api/WeatherForecast', keycloak.enforcer(['WeatherForecast:read'], { resource_server_id: "my-rest-api" }), function (req, res) {
  res.json("GET worked")
});

app.delete('/api/WeatherForecast', keycloak.protect(), function (req, res) {
  res.json("DELETE worked")
});

app.listen(8081, () => {
  console.log(`server running on port 8081`);
});
A Few Other Things Tried:
I tried calling the RPT endpoint with curl using the token obtained from Postman, got the RPT back perfectly fine, and saw the permissions as expected (a Node.js equivalent is sketched after this list).
I tried calling keycloak.checkPermissions({permissions: [{id: "WeatherForecast", scopes: ["read"]}]}, req).then(grant => res.json(grant.access_token)); from inside an unsecured endpoint and got "Connection refused 127.0.0.1:8080".
I tried disabling Policy Enforcement Mode just to see; still got Access Denied/403.
I tried using keycloak.json config instead of object method above - same exact results either way.
I tried openid-client (from another tutorial) and also got connection refused issues.
I've tried using the Docker host IP, host.docker.internal, the container name, etc. to no avail (even though I don't think that is the issue, as I obviously can hit the auth service and get the initial access token).
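For reference, a minimal Node.js sketch of that RPT request (assumptions on my part: Node 18+ for the global fetch, the realm/client names above, and a hypothetical ACCESS_TOKEN environment variable holding the token obtained through Postman):

  // Sketch only: exchange a user access token for an RPT via Keycloak's UMA token endpoint.
  const params = new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:uma-ticket',
    audience: 'my-rest-api',
  });

  fetch('http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Authorization': `Bearer ${process.env.ACCESS_TOKEN}`, // ACCESS_TOKEN is a placeholder
    },
    body: params,
  })
    .then((res) => res.json())
    .then((rpt) => console.log(rpt)) // response contains an access_token carrying the granted permissions
    .catch((err) => console.error(err));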
I really want to use Keycloak and I feel like my team is so close to being able to do so but need some assistance getting past this part. Thank you!
------------------- END ORIGINAL QUESTION ------------------------
EDIT/UPDATE #1:
Alright, so a couple more hours sunk into this. I decided to read through every line of the keycloak-connect library that the request hits and debug as it goes. I found it fails inside keycloak-connect/middleware/auth-utils/grant-manager.js on the last line of checkPermissions. No error is displayed and there is no catch block to debug on; chasing the rabbit hole down further, I found it occurs in the fetch method that uses http with these options:
'{"protocol":"http:","slashes":true,"auth":null,"host":"localhost:8080","port":"8080","hostname":"localhost","hash":null,"search":null,"query":null,"pathname":"/auth/realms/my-realm/protocol/openid-connect/token","path":"/auth/realms/my-realm/protocol/openid-connect/token","href":"http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token","headers":{"Content-Type":"application/x-www-form-urlencoded","X-Client":"keycloak-nodejs-connect","Authorization":"Basic YW(etc...)Z2dP","Content-Length":1498},"method":"POST"}'
It does not appear to get into the callback of that fetch/http wrapper. I added NODE_DEBUG=http to my startup command and was able to find the swallowed error, which puts me back at the starting line:
HTTP 31: SOCKET ERROR: connect ECONNREFUSED 127.0.0.1:8080 Error: connect ECONNREFUSED 127.0.0.1:8080
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
I then saw something that I thought might be related to my Docker network setup (Keycloak and Spring Boot web app in dockerized environment) and tried changing the hostname DNS so I could use something other than localhost, but it didn't work either (I even added it to the redirect URI, etc.).
UPDATE #2:
Alright, so I got the keycloak.protect() (pure authentication) endpoint working now. Reading through the keycloak-connect lib code I found more options, and it seems that adding "realmPublicKey" to the Keycloak config object when instantiating keycloak-connect fixed that one. Still no luck yet on the authorization keycloak.enforcer() side.
const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
  realmPublicKey: "MIIBIjANBgk (...etc) uQIDAQAB",
};

So my team finally figured it out; the resolution was a two-part process:
Followed the instructions in answers to similar Stack Overflow questions, such as: https://stackoverflow.com/a/51878212/5117487
Rough steps in case that link is ever broken (a config sketch follows these steps):
Add a hosts entry for 127.0.0.1 keycloak (where 'keycloak' is the name of your Docker container for Keycloak; I changed my docker-compose to specify the container name to make it a little more fool-proof).
Change keycloak-connect config authServerUrl setting to be: 'http://keycloak:8080/auth/' instead of 'http://localhost:8080/auth/'
Postman OAuth 2.0 token request Auth URL and Access Token URL changed to use the now-updated hosts entry:
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/auth" -> "http://keycloak:8080/auth/realms/abra/protocol/openid-connect/auth"
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/token" -> "http://keycloak:8080/auth/realms/abra/protocol/openid-connect/token"

Related

Auth0 & Express-openid-connect /callback not responding

I have a simple node express application running on port 3002 (because 3000 is already used). For logging users in I use Auth0 and the Express-openid-connect package.
Every time I try to log in I get stuck at a blank page called "Submit This Form", which never stops loading. The logs in Auth0 always show a successful login. I can, however, access routes that are not login-protected without a problem.
The /callback route throws the following error:
Error: Request aborted
at IncomingMessage.<anonymous> (/workspace/node_modules/formidable/lib/incoming_form.js:122:19)
at IncomingMessage.emit (events.js:400:28)
at abortIncoming (_http_server.js:569:9)
at socketOnEnd (_http_server.js:585:5)
at Socket.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
but I think that's what it is supposed to do when a request is aborted, right?
My Auth0 setup should be right since it doesn't result in an error.
My Express-openid-connect middleware looks like this:
require('dotenv').config()
const { auth } = require('express-openid-connect')

module.exports = auth({
  authRequired: false,
  issuerBaseURL: process.env.AUTH0_ISSUER_URL,
  baseURL: process.env.BASE_URL,
  clientID: process.env.AUTH0_CLIENT_ID,
  secret: process.env.AUTH0_SECRET
})
and the env variables:
BASE_URL=http://localhost:3002
AUTH0_ISSUER_URL=...
AUTH0_CLIENT_ID=...
AUTH0_SECRET=H+F/0yW/i4X11EDzAFBZE2iaUTy4jBMo3gBWwXRkoY8W3DJ+E24tnt8Q5y+rF7QO
AUTH0_SECRET is just a random string but I tried it with the client secret provided by Auth0 and it changed nothing.
AUTH0_ISSUER_URL is the correct url since every request shows up in the logs.
AUTH0_CLIENT_ID is the correct client id since the logs show the correct application.
If I manually go to localhost:3002/callback I get the following expected error:
BadRequestError: state missing from the response
at /workspace/node_modules/express-openid-connect/middleware/auth.js:121:31
at processTicksAndRejections (internal/process/task_queues.js:95:5)
This means that my /callback route should be reachable and working.
However, I can manually copy the parameters and then everything works, but this process isn't useful for production.
I even tried to submit the form manually over the console
document.forms[0].submit()
but it returned undefined and the page didn't stop loading once again.
So for me there is something wrong with the submit() function of that form.
My understanding is that Express OpenID Connect wraps and handles most of this so that developers don't have trouble setting up the protocol flow. Meaning, you should not need to access /callback yourself. auth from Express OpenID Connect, once added to the Express app, provides the app with /login, /callback and /logout routes. You can find more here: https://auth0.com/docs/quickstart/webapp/express.
There are two points in your question, as I see it, which should be discussed.
The logs in Auth0 always show a successful login. I can, however, access routes that are not login-protected without a problem.
You can access your routes because you are logged in. How can you log out? You need auth0Logout: true in the configuration object for the auth function (see the sketch below). The logout route is provided under the hood by the Express OpenID Connect auth method: https://auth0.com/docs/quickstart/webapp/express/01-login#logout
You should not call /callback yourself; it is part of the flow provided by the library (Express OpenID Connect). You can download their sample project and try it yourself: https://github.com/auth0-samples/auth0-express-webapp-sample/tree/master/01-Login. You can see there that the /callback route is not added by the sample project's code. The same goes for the /login and /logout routes.
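A minimal sketch of what this answer describes (assumptions: an Express app on port 3002 and the same env variables as in the question, with auth0Logout added per the first point):

  // Sketch only: express-openid-connect registers /login, /logout and /callback for you.
  const express = require('express');
  const { auth, requiresAuth } = require('express-openid-connect');

  const app = express();

  app.use(auth({
    authRequired: false,
    auth0Logout: true, // lets the provided /logout route also log the user out of Auth0
    issuerBaseURL: process.env.AUTH0_ISSUER_URL,
    baseURL: process.env.BASE_URL,
    clientID: process.env.AUTH0_CLIENT_ID,
    secret: process.env.AUTH0_SECRET
  }));

  // Example protected route; /login, /logout and /callback need no route code of your own.
  app.get('/profile', requiresAuth(), (req, res) => {
    res.json(req.oidc.user); // user claims populated by the middleware
  });

  app.listen(3002);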
Screenshots from running the sample project in the link (not reproduced here): when login is clicked the sequence is /login, /callback, and then /; the network tab shows more detail.

nodejs keycloak-connect-graphql not working while in docker

I have hit an issue that I have struggled to figure out for the last little while regarding docker. Here is a shortened version of the story
In my development environment (everything running on localhost) the code works great: whenever I add my authorization token to my headers, keycloak-connect detects it and works as usual. The problem occurs when I use it in Docker and add custom network interfaces.
When the Docker container boots up, I have it connect to Keycloak via http://keycloak:8080/auth and generate the realm. That works fine, so I know the Keycloak network is up and running as expected. I am able to remote into the container, communicate with Keycloak, get a token, etc.
In my GraphQL application, I use keycloak-connect-graphql to set the context of the GraphQL application, which in turn uses keycloak-connect to set up all the headers. The problem is that keycloak-connect-graphql tells me there is no header. If I simply print out the request, I can clearly see that a token is being passed in; it's just that, for some reason, keycloak-connect-graphql/keycloak-connect does not want to set it because I am using a network other than localhost.
I was actually able to side-step this problem in production by setting the Keycloak URL to the public URL (https://keycloak.DOMAINNAME.com), which makes absolutely no sense to me because http://keycloak:8080/auth does not work. I have dug through the keycloak-connect and keycloak-connect-graphql code and could not find anything related to a CORS issue or anything else suspicious. If anyone has any ideas please let me know. This bug has been driving me crazy.
Code snippet:
keycloak configuration (app.config)
keycloak: {
  realm: process.env.KEYCLOAK_REALM,
  'auth-server-url': 'http://keycloak:8080/auth',
  'ssl-required': 'none',
  resource: process.env.KEYCLOAK_RESOURCE,
  'public-client': true,
  'use-resource-role-mappings': true,
  'confidential-port': 0,
},
configurekeycloak.js
function configureKeycloak(app, graphqlPath) {
  const keycloakConfig = require('../config/app.config').keycloak;
  const memoryStore = new session.MemoryStore();

  app.use(
    session({
      secret:
        process.env.SESSION_SECRET_STRING || 'this should be a long secret',
      resave: false,
      saveUninitialized: true,
      store: memoryStore,
    }),
  );

  const keycloak = new Keycloak(
    {
      store: memoryStore,
    },
    keycloakConfig,
  );

  // Install general keycloak middleware
  app.use(
    keycloak.middleware({
      admin: graphqlPath,
    }),
  );

  // Protect the main route for all graphql services
  // Disable unauthenticated access
  app.use(graphqlPath, keycloak.middleware());

  return { keycloak };
}
index.js
// perform the standard keycloak-connect middleware setup on our app
const { keycloak } = configureKeycloak(app, graphqlPath);

// Ensure entire GraphQL Api can only be accessed by authenticated users
app.use(playgroundPath, keycloak.protect());

const server = new ApolloServer({
  gateway,
  // uploads: false,
  // Apollo Graph Manager (previously known as Apollo Engine)
  // When enabled and an `ENGINE_API_KEY` is set in the environment,
  // provides metrics, schema management and trace reporting.
  engine: false,
  // Subscriptions are unsupported but planned for a future Gateway version.
  subscriptions: false,
  // Disable default playground
  playground: true,
  context: ({ req }) => {
    return {
      kauth: new KeycloakContext({ req }),
    };
  },
  // Tracing must be enabled for this plugin to add the headers
  tracing: true,
  // Register the plugin
  plugins: [ServerTimingPlugin],
});
Edit:
Setting network_mode to host fixes the issue, but I much prefer using Docker's integrated networks rather than network_mode: host, especially for production.
Edit 2:
I was able to reproduce the error in my local development environment (without shelling into Docker and then debugging from there). I was able to do this by using the IP address provided by Docker, which yields the same results. I have narrowed the issue down to keycloak-connect, which for some reason does not want to connect to Keycloak. Still not sure why.

Keycloak always redirecting to login page

I'm using a Keycloak instance to log in on the frontend and secure the backend API. After deployment on a Linux machine on AWS I faced an issue: I'm constantly redirected to the login page when accessing the API with a JWT token. Locally it's working fine.
My client is a confidential client. I'm using the client_id and client_secret to authorize the token call. The JWT token is valid and successfully generated.
My implementation of the API uses Express.js and the keycloak-nodejs-connect library:
keycloakConfig = {
  serverUrl: 'https://keycloak.myserver.com/auth',
  realm: 'examplerealm',
  clientId: 'ui-client'
};
public init() {
  if (this.keycloak) {
    console.warn("Trying to init Keycloak again!");
    return this.keycloak;
  }
  else {
    console.log("Initializing Keycloak...");
    const memoryStore = new session.MemoryStore();
    // @ts-ignore
    this.keycloak = new Keycloak({ store: memoryStore }, this.keycloakConfig);
    return this.keycloak;
  }
}
I could imagine that it depends on the current HTTPS setting. My Node.js API provides endpoints for HTTP and HTTPS (locally with a self-signed certificate). On the server where Keycloak is running, I added a Let's Encrypt certificate with certbot and everything looks fine in the browser.
Keycloak is started with the jboss/keycloak Docker container.
I'm curious to figure out my current issue and help is very much appreciated. Let me know if I missed adding any necessary information.
Thanks in advance.
Dominik
I found a solution for this.
First, I updated to the latest version of keycloak-connect. They provided a new major version 12 and it seems there was a change to the configuration.
Second, there was an issue with the configuration. I dug into the current config object and figured out that it should look like this (a wiring sketch follows the snippet):
keycloakConfig = {
  realm: 'test-realm',
  authServerUrl: 'https://myurl/auth/',
  realmPublicKey: 'key'
};
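For reference, a minimal sketch of wiring that config shape into keycloak-connect (assumptions: express-session is used as in the question, and realmPublicKey holds the realm's RS256 public key rather than the placeholder shown):

  // Sketch only: instantiating keycloak-connect with the corrected config shape.
  const session = require('express-session');
  const Keycloak = require('keycloak-connect');

  const keycloakConfig = {
    realm: 'test-realm',
    authServerUrl: 'https://myurl/auth/',
    realmPublicKey: 'key' // the realm's public key, copied from the Keycloak admin console
  };

  const memoryStore = new session.MemoryStore();
  const keycloak = new Keycloak({ store: memoryStore }, keycloakConfig);
  // keycloak.protect() can then be applied to routes; bearer tokens are verified against realmPublicKey.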

Not able to set cookie from the express app hosted on Heroku

I have hosted both frontend and backend in Heroku.
Frontend - xxxxxx.herokuapp.com (react app)
Backend - yyyyyy.herokuapp.com (express)
I'm trying to implement Google authentication. After getting the token from Google OAuth2, I'm trying to set the id_token and user details in the cookie through the express app.
Below is the piece of code that I have in the backend,
authRouter.get('/token', async (req, res) => {
  try {
    const result = await getToken(String(req.query.code))
    const { id_token, userId, name, exp } = result;
    const cookieConfig = { domain: '.herokuapp.com', expires: new Date(exp * 1000), secure: true }
    res.status(201)
      .cookie('auth_token', id_token, {
        httpOnly: true,
        ...cookieConfig
      })
      .cookie('user_id', userId, cookieConfig)
      .cookie('user_name', name, cookieConfig)
      .send("Login succeeded")
  } catch (err) {
    res.status(401).send("Login failed");
  }
});
It works perfectly for me locally but not on Heroku.
These are the domains I already tried: .herokuapp.com and herokuapp.com. I also tried without specifying the domain field at all.
I can see the Set-Cookie details in the response headers, but the /token endpoint fails without returning any status code and I can't see the cookies set in the Application tab.
Please see the images below (not reproduced here): no status code is shown but the request is marked as failed, and the cookie information appears in the response headers but is not available when I check via the Application tab.
What am I missing here? Could someone help me?
Maybe you should try setting secure as follows (a fuller sketch follows the snippet):
secure: req.secure || req.headers['x-forwarded-proto'] === 'https'
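A minimal sketch of how that could look (note: app.set('trust proxy', 1) is an addition I'm assuming is needed so Express trusts Heroku's proxy and req.secure reflects the original protocol; the token value is a placeholder):

  // Sketch only: trust the Heroku proxy so req.secure works, then derive `secure` per request.
  const express = require('express');
  const app = express();
  app.set('trust proxy', 1);

  app.get('/token', (req, res) => {
    const cookieConfig = {
      secure: req.secure || req.headers['x-forwarded-proto'] === 'https',
      httpOnly: true
    };
    res.status(201)
      .cookie('auth_token', 'example-token', cookieConfig) // 'example-token' is a placeholder
      .send('Login succeeded');
  });

  app.listen(process.env.PORT || 3000);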
You are right, this should technically work.
Except that if it did work, this could lead to a massive security breach since anyone able to create a Heroku subdomain could generate a session cookie for all other subdomains.
It's not only a security issue for Heroku but also for any other service that lets you have a subdomain.
This is why a list of domains was created, and has been maintained since then, listing public domains where cookies should not be shared amongst subdomains. This list is used by browsers.
As you can imagine, the domain herokuapp.com is part of this list.
If you want to know more, this list is known as the Mozilla Foundation’s Public Suffix List.

Problems connecting front-end app with the server

I have an Angular app running on GitHub Pages at https://yourweatherapp.github.io/yourweatherapp.github.io which makes requests to a Node.js app that is running on a host.
Previously, I consulted other info on the net, like this: How to allow CORS?, but the solutions don't work for me.
I have configured the Node.js app to allow requests from this origin in this way:
const corsMiddleware = cors({
  origin: [process.env.URL, 'https://yourweatherapp.github.io/yourweatherapp.github.io/login']
})
app.use(corsMiddleware)
app.options('*', corsMiddleware)
But the browser doesn't allow the response to be received and the login to complete.
What am I doing wrong?
'https://yourweatherapp.github.io/yourweatherapp.github.io/login'
Look at the error message in the browser console. It will tell you the origin that doesn't have permission to read the data, and it won't be that.
Origins do not include paths. So that is not a valid origin.
It should be only https://yourweatherapp.github.io (a corrected config sketch follows).
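A minimal sketch of the corrected middleware (assumption on my part: credentials is only needed if the login sends cookies or auth headers cross-origin):

  // Sketch only: the origin must be scheme + host only, with no path.
  const express = require('express');
  const cors = require('cors');

  const app = express();
  const corsMiddleware = cors({
    origin: [process.env.URL, 'https://yourweatherapp.github.io'],
    credentials: true // assumption: only needed if the login relies on cookies/credentials
  });

  app.use(corsMiddleware);
  app.options('*', corsMiddleware);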
