Keycloak always redirecting to login page - node.js

I'm using a Keycloak instance to log in on the frontend and to secure the backend API. After deployment to a Linux machine on AWS I ran into an issue: I am constantly redirected to the login page when accessing the API with a JWT token. Locally it works fine.
My client is a confidential client. I'm using client_id and client_secret to authorize the token call. The JWT token is valid and successfully generated.
My implementation of the API uses Express and the keycloak-connect Node.js adapter:
keycloakConfig = {
  serverUrl: 'https://keycloak.myserver.com/auth',
  realm: 'examplerealm',
  clientId: 'ui-client'
};
public init() {
  if (this.keycloak) {
    console.warn("Trying to init Keycloak again!");
    return this.keycloak;
  } else {
    console.log("Initializing Keycloak...");
    const memoryStore = new session.MemoryStore();
    // @ts-ignore
    this.keycloak = new Keycloak({ store: memoryStore }, this.keycloakConfig);
    return this.keycloak;
  }
}
I suspect it depends on the current HTTPS setup. My Node.js API exposes both an HTTP and an HTTPS endpoint (locally with a self-signed certificate). On the server where Keycloak runs I added a Let's Encrypt certificate with certbot, and everything looks fine in the browser.
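For context, a minimal sketch of exposing the same Express app over both HTTP and HTTPS (the certificate paths are placeholders for the self-signed certificate):

import fs from 'fs';
import http from 'http';
import https from 'https';
import express from 'express';

const app = express();

// Plain HTTP endpoint
http.createServer(app).listen(8080);

// HTTPS endpoint backed by the (locally self-signed) certificate
https.createServer({
  key: fs.readFileSync('certs/server.key'),  // hypothetical path
  cert: fs.readFileSync('certs/server.crt'), // hypothetical path
}, app).listen(8443);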
Keycloak is started from the jboss/keycloak Docker container.
I'm curious to figure out my current issue, and help is very much appreciated. Let me know if I missed any necessary information.
Thanks in advance.
Dominik

I found a solution for this.
First, I updated to the latest version of keycloak-connect. They released a new major version 12, and it seems the configuration format changed.
Second, there was an issue with my configuration. I dug into the current config object and figured out that it should look like this:
keycloakConfig = {
  realm: 'test-realm',
  authServerUrl: 'https://myurl/auth/',
  realmPublicKey: 'key'
};
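Putting both fixes together, a minimal sketch of instantiating keycloak-connect v12+ with the new config shape (values are the same placeholders as above):

const session = require('express-session');
const Keycloak = require('keycloak-connect');

const memoryStore = new session.MemoryStore();
const keycloak = new Keycloak({ store: memoryStore }, {
  realm: 'test-realm',
  authServerUrl: 'https://myurl/auth/', // note: authServerUrl instead of the old serverUrl
  realmPublicKey: 'key'                 // placeholder for the realm public key
});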

Related

Confidential Rest-Api w/ Permissions - Always 403s - What Am I Doing Wrong?

I've tried for many hours now and seem to have hit a wall. Any advice/help would be appreciated.
Goal: I want to authorize the Express REST API (example client-id: "my-rest-api") routes (example resource: "WeatherForecast") across various HTTP methods mapped to client scopes (examples: "create"/"read"/"update"/"delete"). I want to control those permissions through policies (for example, "Read - WeatherForecast - Permission" is granted if the policy "Admin Group Only" (user belongs to the admin group) is satisfied).
The REST API will not log users in (that is done from the front end talking directly to Keycloak; clients then use that token to talk to the REST API).
Environment:
Keycloak 15.1.1 running in its own container, port 8080, on docker locally (w/ shared network with rest-api)
"my-rest-api": Nodejs 16.14.x w/ express 4.17.x server running on its own container on docker locally. Using keycloak-connect 15.1.1 and express-session 1.17.2.
Currently hitting "my-rest-api" through postman following this guide: https://keepgrowing.in/tools/kecloak-in-docker-7-how-to-authorize-requests-via-postman/
What Happens: I can log in from the Keycloak login page through Postman and get an access token. However, when I hit any endpoint that uses keycloak.protect() or keycloak.enforcer() (with or without specifying resource permissions) I can't get through. In the following code, the DELETE endpoint returns 200 plus the HTML of the Keycloak login page in Postman, and the GET returns 403 plus "Access Denied".
Current State of Realm
Test User (who I login with in Postman) has group "Admin".
Client "my-rest-api" with access-type: Confidential with Authorization enabled.
Authorization set up:
Policy Enforcement Mode: Enforcing, Decision Strategy: Unanimous
"WeatherForecast" resource with uri "/api/WeatherForecast" and create/read/update/delete client scopes applied.
"Only Admins Policy" for anyone in group admin. Logic positive.
Permission for each of the client scopes for "WeatherForecast" resource with "Only Admins Policy" selected, Decision Strategy: "Affirmative".
Current State of Nodejs Code:
import express from 'express';
import bodyParser from 'body-parser';
import session from "express-session";
import KeycloakConnect from 'keycloak-connect';

const app = express();
app.use(bodyParser.json());

const memoryStore = new session.MemoryStore();
app.use(session({
  secret: 'some secret',
  resave: false,
  saveUninitialized: true,
  store: memoryStore
}));

const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
};

const keycloak = new KeycloakConnect({ store: memoryStore }, kcConfig);
app.use(keycloak.middleware({
  logout: '/logout',
  admin: '/',
}));

app.get('/api/WeatherForecast', keycloak.enforcer(['WeatherForecast:read'], { resource_server_id: "my-rest-api" }), function (req, res) {
  res.json("GET worked");
});

app.delete('/api/WeatherForecast', keycloak.protect(), function (req, res) {
  res.json("DELETE worked");
});

app.listen(8081, () => {
  console.log(`server running on port 8081`);
});
A Few Other Things Tried:
I tried calling the RPT endpoint with curl, using the token obtained from Postman, and got the RPT back perfectly fine, with the permissions I expected (a sketch of that request follows this list).
I tried calling keycloak.checkPermissions({permissions: [{id: "WeatherForecast", scopes: ["read"]}]}, req).then(grant => res.json(grant.access_token)); from inside an unsecured endpoint and got "Connection refused 127.0.0.1:8080".
I tried disabling Policy Enforcement Mode just to see, and still got Access Denied/403.
I tried using a keycloak.json config instead of the object method above - same exact results either way.
I tried openid-client (from another tutorial) and also got connection refused issues.
I've tried using the docker host IP, host.docker.internal, the container name, etc. to no avail (even though I don't think this is the issue, since I obviously can hit the auth service and get the first access token).
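For reference, the RPT call mentioned above roughly looks like this sketch, using Keycloak's standard UMA token endpoint ($ACCESS_TOKEN stands in for the token obtained from Postman):

curl -X POST 'http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token' \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d 'grant_type=urn:ietf:params:oauth:grant-type:uma-ticket' \
  -d 'audience=my-rest-api'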
I really want to use Keycloak, and I feel like my team is so close to being able to do so, but we need some assistance getting past this part. Thank you!
------------------- END ORIGINAL QUESTION ------------------------
EDIT/UPDATE #1:
Alright, so a couple more hours sank into this. I decided to read through every line of the keycloak-connect library that it hits and debug as it goes. I found it fails inside keycloak-connect/middleware/auth-utils/grant-manager.js on the last line of checkPermissions. No error is displayed and there is no catch block to debug on. Chasing the rabbit hole down further, I found it occurs in the fetch method that uses http with these options:
'{"protocol":"http:","slashes":true,"auth":null,"host":"localhost:8080","port":"8080","hostname":"localhost","hash":null,"search":null,"query":null,"pathname":"/auth/realms/my-realm/protocol/openid-connect/token","path":"/auth/realms/my-realm/protocol/openid-connect/token","href":"http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token","headers":{"Content-Type":"application/x-www-form-urlencoded","X-Client":"keycloak-nodejs-connect","Authorization":"Basic YW(etc...)Z2dP","Content-Length":1498},"method":"POST"}'
It does not appear to get into the callback of that fetch/http wrapper. I added NODE_DEBUG=http to my startup command and found the swallowed error, which shows I am back at the starting line:
HTTP 31: SOCKET ERROR: connect ECONNREFUSED 127.0.0.1:8080 Error: connect ECONNREFUSED 127.0.0.1:8080
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
I then saw something that I thought might be related to my Docker network setup (Keycloak and Spring Boot web app in dockerized environment) and tried changing the hostname DNS so I could use something other than localhost, but it didn't work either (even after adding it to the redirect URI, etc.).
UPDATE #2:
Alright, so I got the keycloak.protect() (pure authentication) endpoint working now. Reading further through the keycloak-connect lib code, I found more options, and it seems that adding "realmPublicKey" to the Keycloak config object when instantiating keycloak-connect fixed that one. Still no luck on the authorization keycloak.enforcer() side.
const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
  realmPublicKey: "MIIBIjANBgk (...etc) uQIDAQAB",
};
So my team finally figured it out - the resolution was a two-part process:
Followed the instructions in answers to similar Stack Overflow questions, such as https://stackoverflow.com/a/51878212/5117487
Rough steps in case that link is ever broken:
Add a hosts entry for 127.0.0.1 keycloak (if 'keycloak' is the name of your Keycloak Docker container; I changed my docker-compose to specify the container name to make it a little more fool-proof)
Change the keycloak-connect config authServerUrl setting to 'http://keycloak:8080/auth/' instead of 'http://localhost:8080/auth/'
Change the Postman OAuth 2.0 token request Auth URL and Access Token URL to use the now-updated hosts entry:
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/auth" -> "http://keycloak:8080/auth/realms/abra/protocol/openid-connect/auth"
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/token" -> "http://keycloak:8080/auth/realms/abra/protocol/openid-connect/token"

HTTP cookies are not working for localhost subdomains

I have a node/express API that will create an HTTP cookie and pass it down to my React app for authentication. The setup was based on Ben Awad's JWT HTTP Cookie tutorial on YouTube, if you're familiar with it. Everything works great when I am running the website on my localhost (localhost:4444). The issue I am now running into is that my app uses subdomains for handling workspaces (similar to how JIRA or Monday.com uses a subdomain to specify a workspace/team). Whenever I run my app on a subdomain, the HTTP cookies stop working.
I've looked at a lot of threads regarding this issue and can't find a solution; no matter what I try, the cookie will not save to my browser. Here is what I have tried so far, with no luck:
I've tried specifying the domain on the cookie. Both with a . and without
I've updated my host file to use a domain as a mask for localhost. Something like myapp.com:4444 which points to localhost:4444
I tried some fancy configuration I found where I was able to hide the port as well, so myapp.com pointed to localhost:4444.
I've tried Chrome, Safari, and Firefox
I've made sure there were no CORS issues
I've played around with the security settings of the cookie.
I also set up an ngrok server so there was a published domain to run in the browser
None of these attempts have made a difference, so I am a bit lost about what to do at this point. The only other option is to deploy my app to a proper server and run my development off that, but I really don't want to do that; I should be able to develop from my local machine.
My cookie knowledge is a bit bare so maybe there is something obvious I am missing?
This is what my setup looks like right now:
On the API I have a route (/refresh_token) that creates a new cookie like so:
export const sendRefreshToken = (res: Response, token: string): void => {
  res.cookie('jid', token, {
    httpOnly: true,
    path: '/refresh_token',
  });
};
Then on the frontend it will essentially run this call on load:
fetch('http://localhost:3000/refresh_token', {
  credentials: 'include',
  method: 'POST'
}).then(async res => {
  const { accessToken } = await res.json();
  setState({ accessToken, workspaceId });
  setLoading(false);
});
It seems super simple, but everything just stops working on a subdomain. I am completely lost at this point. If you have any ideas, that would be great!
If httpOnly is true, the cookie won't be readable through client-side JS. For cookies to work on subdomains, set domain to the main domain (xyz.com).
An example on the back end:
res.cookie('refreshToken', refreshToken, {
  domain: authCookieDomain,
  path: '/',
  sameSite: 'None',
  secure: true,
  httpOnly: false,
  maxAge: cookieRefreshTokenMaxAgeMS
});
On the front end, add withCredentials: true to the axios options, or credentials: 'include' with fetch, and that should work.
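A minimal sketch of the front-end side under those assumptions (URLs are illustrative):

// fetch: send and accept cookies across subdomains
fetch('https://api.myapp.com/refresh_token', {
  method: 'POST',
  credentials: 'include'
}).then(res => res.json());

// axios equivalent (assumes axios is imported)
axios.post('https://api.myapp.com/refresh_token', null, { withCredentials: true });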

nodejs keycloak-connect-graphql not working while in docker

I have hit an issue that I have struggled to figure out for the last little while regarding docker. Here is a shortened version of the story
In my development environment (everything running on localhost), the code works great: whenever I add my authorization token to the headers, keycloak-connect detects it and works as usual. The problem occurs when I use it in Docker and add custom network interfaces.
When the Docker container boots up, I have it connect to Keycloak via http://keycloak:8080/auth and generate the realm. That works fine, so I know the Keycloak network is up and running as expected. I am able to remote into the container, communicate with Keycloak, get a token, etc.
In my GraphQL application, I use keycloak-connect-graphql to set the context of the GraphQL application, which in turn uses keycloak-connect to set up all the headers. The problem is that keycloak-connect-graphql tells me there is no header. If I simply print out the request, I can clearly see a token being passed in; for some reason keycloak-connect-graphql/keycloak-connect does not want to pick it up because I am using a network other than localhost.
I was actually able to side-step this problem in production by setting the Keycloak URL to the public URL (https://keycloak.DOMAINNAME.com), which makes absolutely no sense to me because http://keycloak:8080/auth does not work. I have dug through the keycloak-connect and keycloak-connect-graphql code and could not find anything pointing to a CORS issue or anything else suspicious. If anyone has any ideas, please let me know. This bug has been driving me crazy.
Code snippet:
keycloak configuration (app.config)
keycloak: {
  realm: process.env.KEYCLOAK_REALM,
  'auth-server-url': 'http://keycloak:8080/auth',
  'ssl-required': 'none',
  resource: process.env.KEYCLOAK_RESOURCE,
  'public-client': true,
  'use-resource-role-mappings': true,
  'confidential-port': 0,
},
configurekeycloak.js
function configureKeycloak(app, graphqlPath) {
  const keycloakConfig = require('../config/app.config').keycloak;
  const memoryStore = new session.MemoryStore();

  app.use(
    session({
      secret: process.env.SESSION_SECRET_STRING || 'this should be a long secret',
      resave: false,
      saveUninitialized: true,
      store: memoryStore,
    }),
  );

  const keycloak = new Keycloak(
    {
      store: memoryStore,
    },
    keycloakConfig,
  );

  // Install general keycloak middleware
  app.use(
    keycloak.middleware({
      admin: graphqlPath,
    }),
  );

  // Protect the main route for all graphql services
  // Disable unauthenticated access
  app.use(graphqlPath, keycloak.middleware());

  return { keycloak };
}
index.js
// perform the standard keycloak-connect middleware setup on our app
const { keycloak } = configureKeycloak(app, graphqlPath);

// Ensure entire GraphQL Api can only be accessed by authenticated users
app.use(playgroundPath, keycloak.protect());

const server = new ApolloServer({
  gateway,
  // uploads: false,
  // Apollo Graph Manager (previously known as Apollo Engine)
  // When enabled and an `ENGINE_API_KEY` is set in the environment,
  // provides metrics, schema management and trace reporting.
  engine: false,
  // Subscriptions are unsupported but planned for a future Gateway version.
  subscriptions: false,
  // Disable default playground
  playground: true,
  context: ({ req }) => {
    return {
      kauth: new KeycloakContext({ req }),
    };
  },
  // Tracing must be enabled for this plugin to add the headers
  tracing: true,
  // Register the plugin
  plugins: [ServerTimingPlugin],
});
Edit:
Setting network_mode to host fixes the issue, but I much prefer using Docker's integrated networks rather than network_mode host, especially in production.
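For illustration, that workaround looks roughly like this in docker-compose (the service name is illustrative):

services:
  graphql-api:
    build: .
    network_mode: host   # container shares the host network, so localhost:8080 reaches Keycloak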
Edit 2:
I was able to reproduce the error in my local development environment (without shelling into Docker and debugging from there) by using the IP address provided by Docker. This yields the same results. I have narrowed the issue down to keycloak-connect, which for some reason does not want to connect to Keycloak. Still not sure why.

Not able to set cookie from the express app hosted on Heroku

I have hosted both frontend and backend in Heroku.
Frontend - xxxxxx.herokuapp.com (react app)
Backend - yyyyyy.herokuapp.com (express)
I'm trying to implement Google authentication. After getting the token from Google OAuth2, I'm trying to set the id_token and user details in the cookie through the express app.
Below is the piece of code that I have in the backend,
authRouter.get('/token', async (req, res) => {
  try {
    const result = await getToken(String(req.query.code));
    const { id_token, userId, name, exp } = result;
    const cookieConfig = { domain: '.herokuapp.com', expires: new Date(exp * 1000), secure: true };
    res.status(201)
      .cookie('auth_token', id_token, {
        httpOnly: true,
        ...cookieConfig
      })
      .cookie('user_id', userId, cookieConfig)
      .cookie('user_name', name, cookieConfig)
      .send("Login succeeded");
  } catch (err) {
    res.status(401).send("Login failed");
  }
});
It works perfectly on my local machine, but it is not working on Heroku.
These are the domains I already tried: .herokuapp.com and herokuapp.com. I also tried without specifying the domain field at all.
I can see the Set-Cookie details in the response headers, but the /token endpoint fails without returning any status code, and I can't see the cookies in the browser's Application tab.
What am I missing here? Could someone help me?
Maybe you should try setting secure as:
secure: req.secure || req.headers['x-forwarded-proto'] === 'https'
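For req.secure to be true behind Heroku's TLS-terminating proxy, Express also has to be told to trust the proxy. A minimal sketch (the trust proxy setting is standard Express; the cookie call mirrors the question's code):

// Trust the first proxy hop so req.secure honors the X-Forwarded-Proto header
app.set('trust proxy', 1);

res.cookie('auth_token', id_token, {
  httpOnly: true,
  secure: req.secure || req.headers['x-forwarded-proto'] === 'https'
});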
You are right, this should technically work.
Except that if it did work, it would be a massive security breach, since anyone able to create a Heroku subdomain could set a session cookie for all the other subdomains.
It's not only a security issue for Heroku but for any other service that lets you have a subdomain.
This is why a list of public domains whose cookies must not be shared amongst subdomains has been created and maintained; browsers usually use this list.
As you can imagine, the domain herokuapp.com is part of this list.
If you want to know more, this list is known as the Mozilla Foundation's Public Suffix List.

redirect to another app with session token (jwt) in AngularJS and NodeJS

I have a startup module in AngularJS. This module is just for login and public information (login, prices, newsletter...). I have many roles, and for each role I have an app (an Angular module). I chose this architecture because each role has a complex module and it was impossible to put all the roles in one module.
So, for login, I use jsonwebtoken in Node like this:
var token = jwt.sign(user, config.secureToken, { expiresInMinutes: 20 * 5 });
res.json({ token: token, user: user });
It works perfectly; I can log into my app. After that, I have to offer a list of roles and redirect to the right module.
In Angular, I have an AuthHttp service that adds security headers (with the token) to call REST services with $http.
How can I redirect to 'mydomain:port/anotherModule' with $location or $http?
With this code in Node.js:
app.get('/secondModule', expressJwt({ secret: config.secureToken }), function (req, res) {
  res.render('restricted/secondModule/index.html');
});
Node.js sends HTML in the response and doesn't redirect...
And if I do this in my Angular controller:
location.href = route;
I get this result on the Node.js console:
Error: No Authorization header was found
I am not sure which libraries you are using, but the issue seems to be that you are losing the token because you navigate to an altogether new page.
Based on your auth library, you need to pass the token you get after auth from one page to the next.
The options here are to use browser sessionStorage or the query string to pass the token along, and to add it back to the HTTP header collection on the new page (module).
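A sketch of that hand-off using the query string (names and the storage key are illustrative):

// Before leaving the login module: carry the token in the target URL
location.href = '/secondModule?token=' + encodeURIComponent(token);

// On the new module's startup: read it back and store it for later requests
var params = new URLSearchParams(window.location.search);
sessionStorage.setItem('token_name', params.get('token'));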
This is an old post, but I recently took a long time to figure this out. I may be wrong, but I believe Node.js/Express can't read the token from session storage; you will need to pass the token via the request header using AngularJS.
This depends on the front end you are using. For me, it's AngularJS and I had to do something like this:
angular.module('AngularApp').factory('authFactory',
  function($window) { // the window object will be able to access the token
    var auth = {};
    auth.saveToken = function(token) {
      $window.localStorage['token_name'] = token; // saving the token
    };
    auth.getToken = function() {
      return $window.localStorage['token_name']; // retrieving the token
    };
    return auth;
  })
  .service('authInterceptor', function(authFactory) {
    // the returned object carries the retrieved token in an Authorization header
    return { headers: { Authorization: 'Bearer ' + authFactory.getToken() } };
  });
Then you just need to include 'authInterceptor' in all the $http calls that talk to the backend. This way, Node.js will be able to pick up the token.
You can see the Authorization field in the request headers if you use the Chrome developer tools and look at the Network tab. Hope this helps.
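For example, a hypothetical call that attaches those headers to a single request (the URL is illustrative):

// Injected, authInterceptor is { headers: { Authorization: 'Bearer <token>' } },
// which $http accepts directly as its config argument
$http.get('/secondModule/data', authInterceptor).then(function (response) {
  console.log(response.data);
});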
