nodejs keycloak-connect-graphql not working while in docker - node.js

I have hit an issue that I have struggled to figure out for a while now regarding Docker. Here is a shortened version of the story.
In my development environment (everything running on localhost), the code works great: whenever I add my authorization token to my headers, keycloak-connect detects it and works as usual. The problem occurs when I use it in Docker and add custom network interfaces.
When the docker container boots up, I have it connect to Keycloak via http://keycloak:8080/auth and generate the realm. That works fine, so I know the Keycloak network is up and running as expected. I am able to remote into the container, communicate with Keycloak, get a token, and so on.
In my GraphQL application, I use keycloak-connect-graphql to set the context of the GraphQL application, which in turn uses keycloak-connect to set up all the headers. The problem is that keycloak-connect-graphql tells me that there is no header. If I simply print out the request, I can clearly tell that there is a token being passed in; it's just that, for some reason, keycloak-connect-graphql/keycloak-connect does not want to set it because I am using a different network besides localhost.
I was actually able to side-step this problem in production by setting the Keycloak URL to the public URL (https://keycloak.DOMAINNAME.com), which makes absolutely no sense to me, because http://keycloak:8080/auth does not work. I have dug through the keycloak-connect and keycloak-connect-graphql code and could not find anything pointing to a CORS issue or anything else suspicious. If anyone has any ideas, please let me know. This bug has been driving me crazy.
Code snippet:
keycloak configuration (app.config)
keycloak: {
  realm: process.env.KEYCLOAK_REALM,
  'auth-server-url': 'http://keycloak:8080/auth',
  'ssl-required': 'none',
  resource: process.env.KEYCLOAK_RESOURCE,
  'public-client': true,
  'use-resource-role-mappings': true,
  'confidential-port': 0,
},
configurekeycloak.js
function configureKeycloak(app, graphqlPath) {
  const keycloakConfig = require('../config/app.config').keycloak;
  const memoryStore = new session.MemoryStore();
  app.use(
    session({
      secret:
        process.env.SESSION_SECRET_STRING || 'this should be a long secret',
      resave: false,
      saveUninitialized: true,
      store: memoryStore,
    }),
  );
  const keycloak = new Keycloak(
    {
      store: memoryStore,
    },
    keycloakConfig,
  );
  // Install general keycloak middleware
  app.use(
    keycloak.middleware({
      admin: graphqlPath,
    }),
  );
  // Protect the main route for all graphql services
  // Disable unauthenticated access
  app.use(graphqlPath, keycloak.middleware());
  return { keycloak };
}
index.js
// perform the standard keycloak-connect middleware setup on our app
const { keycloak } = configureKeycloak(app, graphqlPath);
// Ensure the entire GraphQL API can only be accessed by authenticated users
app.use(playgroundPath, keycloak.protect());
const server = new ApolloServer({
  gateway,
  // uploads: false,
  // Apollo Graph Manager (previously known as Apollo Engine)
  // When enabled and an `ENGINE_API_KEY` is set in the environment,
  // provides metrics, schema management and trace reporting.
  engine: false,
  // Subscriptions are unsupported but planned for a future Gateway version.
  subscriptions: false,
  // Explicitly enable the default playground
  playground: true,
  context: ({ req }) => {
    return {
      kauth: new KeycloakContext({ req }),
    };
  },
  // Tracing must be enabled for this plugin to add the headers
  tracing: true,
  // Register the plugin
  plugins: [ServerTimingPlugin],
});
Edit:
Setting network_mode to host fixes the issue, but I would much prefer using Docker's integrated networks rather than network_mode: host, especially for production.
Edit 2:
I was able to reproduce the error in my local development environment (without shelling into Docker and debugging from there) by using the IP address provided by Docker. This yields the same results. I have narrowed the issue down to keycloak-connect, which for some reason does not want to connect to Keycloak. Still not sure why.
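As an editor's addition (not part of the original post), one way to narrow this down further is to resolve the token endpoint the same way keycloak-connect derives it from `auth-server-url` and request it directly from inside the container; the endpoint path below follows the legacy Keycloak `/auth` URL scheme used throughout this question:

```javascript
// Hypothetical debugging sketch: build the token endpoint from the same
// values the keycloak config uses, then compare a direct request against
// what the library sees from inside the container.
function tokenEndpoint(authServerUrl, realm) {
  // keycloak-connect appends /realms/<realm>/protocol/openid-connect/token
  return `${authServerUrl.replace(/\/+$/, '')}/realms/${realm}/protocol/openid-connect/token`;
}

const url = tokenEndpoint('http://keycloak:8080/auth', 'my-realm');
console.log(url);

// From inside the container, an ECONNREFUSED or ENOTFOUND on this URL
// points at Docker networking/DNS rather than at keycloak-connect itself:
// require('http').get(url, res => console.log(res.statusCode)).on('error', console.error);
```

If the direct request succeeds with the `keycloak` hostname but the library still fails, the library is likely resolving a different host (e.g. localhost) than the one configured.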

Related

Is it impossible to set a domain of localhost cookie remotely from a backend?

I've already checked out these two SO questions:
Can I use localhost as the domain when setting an HTTP cookie?
Setting a cookie from a remote domain for local development
But I don't want to edit my HOSTS file and setting a wildcard domain doesn't help me.
I've used node.js, but it should be programming language agnostic...
So my problem is the following:
I want to work on my Angular frontend on https://localhost:4200 (and possibly http://localhost:4200) and reach my backend from there. Obviously I have to implement CORS rules for that, hence I've implemented the following CORS rules in the Node.js backend:
const allowedOrigins = environment.header;
const origin = req.headers.origin;
if (allowedOrigins.includes(origin)) {
  res.setHeader('Access-Control-Allow-Origin', origin);
}
where allowedOrigins is an array that contains the following:
environment.header = ['https://localhost:4200', 'http://localhost:4200', 'http://test.example.org', 'https://test.example.org'];
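One detail worth checking here (editor's addition, not from the original question): when the request carries cookies, the response must echo a concrete origin in Access-Control-Allow-Origin (never `*`) and must also send Access-Control-Allow-Credentials. A minimal sketch of that rule:

```javascript
// Sketch: credentialed CORS needs both headers; a wildcard origin is rejected
// by browsers when cookies are involved.
function corsHeaders(origin, allowedOrigins) {
  const headers = {};
  if (allowedOrigins.includes(origin)) {
    headers['Access-Control-Allow-Origin'] = origin;       // concrete origin, not '*'
    headers['Access-Control-Allow-Credentials'] = 'true';  // required for cookies
  }
  return headers;
}

console.log(corsHeaders('https://localhost:4200', ['https://localhost:4200']));
```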
The problem at hand is that when I work on my Angular frontend locally, the browser does not send the cookie to the backend for some reason (maybe this kind of connection is simply not allowed by some RFC?), hence my JWT-checking mechanism instantly throws 403 Forbidden after logging in.
My JWT check function looks like this:
let orig;
if (req.headers.origin === 'https://localhost:4200' || req.headers.origin === 'http://localhost:4200')
  orig = 'localhost';
else
  orig = req.headers.origin;
res.cookie(
  'access_token', 'Bearer ' + token, {
    //domain: 'localhost',
    domain: orig,
    path: '/',
    expires: new Date(Date.now() + 900000), // cookie will be removed after 15 mins
    httpOnly: true // in production also add secure: true
  })
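A side note on origin checks of this shape (editor's addition): a condition written as `origin === A || B`, with a bare string as the second operand, is always truthy, because a non-empty string is truthy on its own. The comparison has to be repeated on both sides:

```javascript
// Pitfall demo: the bare string on the right of || makes the first check
// match every origin, not just the two localhost ones.
const buggy = (origin) => origin === 'https://localhost:4200' || 'http://localhost:4200';
const fixed = (origin) => origin === 'https://localhost:4200' || origin === 'http://localhost:4200';

console.log(Boolean(buggy('https://evil.example'))); // true (!)
console.log(fixed('https://evil.example'));          // false
```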
I need to do this to work on my Angular frontend locally and the backend has a connection to another server, which works only locally for now...
withCredentials is of course true (so the JWT cookie is sent along), so that's not the problem in my codebase.
UPDATE
OK, so I've figured out that req.headers.origin is usually undefined.
UPDATE 2
Changed req.headers.origin to req.headers.host, but it still doesn't work.
I needed to add the following properties to the res.cookie options for it to work:
sameSite: 'none', secure: true
Then I enabled third-party cookies in Incognito mode and it worked for me.
Case closed.
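Putting the fix together, the cookie options from this question would look roughly like the sketch below (editor's addition; the duration is the 15-minute value from the question). Note that `sameSite: 'none'` only works together with `secure: true`; browsers reject the combination otherwise:

```javascript
// Sketch of the final cookie options after adding sameSite/secure.
const cookieOptions = (domain) => ({
  domain,
  path: '/',
  expires: new Date(Date.now() + 900000), // cookie removed after 15 mins
  httpOnly: true,
  sameSite: 'none', // allow the cookie on cross-site requests
  secure: true,     // mandatory companion of sameSite: 'none'
});

console.log(cookieOptions('localhost'));
```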

createProxyMiddleware cannot connect http localhost to https remote gateway in react application (CRA)

My front end is a React application.
I have a gateway server which has recently been migrated from Spring Boot to Tomcat, with a secured protocol (https). This gateway server is responsible for authenticating me through OAuth, generating a token, and creating a session for the domain.
Before the move to https, my React application's connection worked seamlessly, but after the change I always get the error ERR_SSL_PROTOCOL_ERROR when my localhost hits the secured gateway URL.
To connect the local instance of React to the remote servers through the gateway, I have used the createProxyMiddleware package as follows:
module.exports = (app) => {
  app.use(/^\/(?!static).+/, createProxyMiddleware({
    target: target.url,
    changeOrigin: false,
    secure: false,
    toProxy: true,
    onProxyRes: (proxyResponse) => {
      if (proxyResponse.headers['set-cookie']) {
        const cookies = proxyResponse.headers['set-cookie'].map(cookie =>
          cookie.replace(/; secure/gi, '')
        );
        proxyResponse.headers['set-cookie'] = cookies;
      }
    }
  }));
};
This lives in a file called setupProxy.js, as mentioned in https://create-react-app.dev/docs/proxying-api-requests-in-development/ under setting a custom proxy, since I am dynamically setting the target URL according to an environment variable set by multiple npm run start variants, such as npm run start-server1, npm run start-server2, etc.
Now, whenever I use the respective start command to trigger the connection with the secured gateway URL, I am not served the required information; instead I am shown the error above.
I also looked for solutions for connecting http to https, where people have suggested changeOrigin, setting https=true in package.json, and using a certificate created on the local machine with pfx: fs.readFileSync(`${__dirname}/../all.p12`) under the target option, but nothing seems to be working.
Please suggest how this can be achieved using the createProxyMiddleware library.
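The "dynamic target per npm script" setup described above could be wired up along these lines (editor's sketch; the env var name and URLs are placeholders, not from the question):

```javascript
// Hypothetical sketch: pick the proxy target from an environment variable
// set by the npm start variant (e.g. GATEWAY_ENV=server2 npm start).
const targets = {
  server1: 'https://gateway-one.example.com',
  server2: 'https://gateway-two.example.com',
};

function resolveTarget(name) {
  // fall back to the first gateway when the variable is unset or unknown
  return { url: targets[name] || targets.server1 };
}

console.log(resolveTarget(process.env.GATEWAY_ENV).url);
```

The resolved object can then be passed as `target` to createProxyMiddleware in setupProxy.js.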

Confidential Rest-Api w/ Permissions - Always 403s - What Am I Doing Wrong?

I've tried for many hours now and seem to have hit a wall. Any advice/help would be appreciated.
Goal: I want to authorize the express rest-api (ex client-id: "my-rest-api") routes (example resource: "WeatherForecast") across various HTTP methods mapped to client scopes (examples: "create"/"read"/"update"/"delete"). I want to control those permissions through policies (for example, "Read - WeatherForecast - Permission" will be granted if the policy "Admin Group Only" (user belongs to the admin group) is satisfied).
The rest-api will not log users in (that will be done from the front end talking directly to Keycloak, and then users will use that token to talk to the rest-api).
Environment:
Keycloak 15.1.1 running in its own container, port 8080, on docker locally (w/ shared network with rest-api)
"my-rest-api": Nodejs 16.14.x w/ express 4.17.x server running on its own container on docker locally. Using keycloak-connect 15.1.1 and express-session 1.17.2.
Currently hitting "my-rest-api" through postman following this guide: https://keepgrowing.in/tools/kecloak-in-docker-7-how-to-authorize-requests-via-postman/
What happens: I can log in from the Keycloak login page through Postman and get an access token. However, when I hit any endpoint that uses keycloak.protect() or keycloak.enforcer() (with or without specifying resource permissions), I can't get through. In the following code, the DELETE endpoint returns 200 plus the HTML of the Keycloak login page in Postman, and the GET returns 403 plus "Access Denied".
Current State of Realm
Test User (who I login with in Postman) has group "Admin".
Client "my-rest-api" with access-type: Confidential with Authorization enabled.
Authorization set up:
Policy Enforcement Mode: Enforcing, Decision Strategy: Unanimous
"WeatherForecast" resource with uri "/api/WeatherForecast" and create/read/update/delete client scopes applied.
"Only Admins Policy" for anyone in group admin. Logic positive.
Permission for each of the client scopes for "WeatherForecast" resource with "Only Admins Policy" selected, Decision Strategy: "Affirmative".
Current State of Nodejs Code:
import express from 'express';
import bodyParser from 'body-parser';
import session from "express-session";
import KeycloakConnect from 'keycloak-connect';

const app = express();
app.use(bodyParser.json());

const memoryStore = new session.MemoryStore();
app.use(session({
  secret: 'some secret',
  resave: false,
  saveUninitialized: true,
  store: memoryStore
}));

const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
};

const keycloak = new KeycloakConnect({ store: memoryStore }, kcConfig);
app.use(keycloak.middleware({
  logout: '/logout',
  admin: '/',
}));

app.get('/api/WeatherForecast', keycloak.enforcer(['WeatherForecast:read'], { resource_server_id: "my-rest-api" }), function (req, res) {
  res.json("GET worked")
});

app.delete('/api/WeatherForecast', keycloak.protect(), function (req, res) {
  res.json("DELETE worked")
});

app.listen(8081, () => {
  console.log(`server running on port 8081`);
});
A Few Other Things Tried:
I tried calling RPT endpoint with curl using token gotten from postman and got the RPT token perfectly fine, saw permissions as expected.
I tried calling keycloak.checkPermissions({permissions: [{id: "WeatherForecast", scopes: ["read"]}]}, req).then(grant => res.json(grant.access_token)); from inside an unsecured endpoint and got "Connection refused 127.0.0.1:8080".
I tried just disabling Policy Enforcement Mode just to see, still got Access Denied/403.
I tried using keycloak.json config instead of object method above - same exact results either way.
I tried openid-client (from another tutorial) and also got connection-refused issues.
I've tried using docker host ip, host.docker.internal, the container name, etc. to no avail (even though I don't think it is an issue as I obviously can hit the auth service and get the first access token).
I really want to use Keycloak and I feel like my team is so close to being able to do so but need some assistance getting past this part. Thank you!
------------------- END ORIGINAL QUESTION ------------------------
EDIT/UPDATE #1:
Alright, so a couple more hours sunk into this. I decided to read through every line of the keycloak-connect library that it hits and debug as it goes. I found it fails inside keycloak-connect/middleware/auth-utils/grant-manager.js on the last line of checkPermissions. No error is displayed and there is no catch block to debug in. Chasing the rabbit hole down further, I was able to find that it occurs in the fetch method that uses http with these options:
'{"protocol":"http:","slashes":true,"auth":null,"host":"localhost:8080","port":"8080","hostname":"localhost","hash":null,"search":null,"query":null,"pathname":"/auth/realms/my-realm/protocol/openid-connect/token","path":"/auth/realms/my-realm/protocol/openid-connect/token","href":"http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token","headers":{"Content-Type":"application/x-www-form-urlencoded","X-Client":"keycloak-nodejs-connect","Authorization":"Basic YW(etc...)Z2dP","Content-Length":1498},"method":"POST"}'
It does not appear to get into the callback of that fetch/http wrapper. I added NODE_DEBUG=http to my startup command and was able to find that swallowed error, which shows I am back at the starting line:
HTTP 31: SOCKET ERROR: connect ECONNREFUSED 127.0.0.1:8080 Error: connect ECONNREFUSED 127.0.0.1:8080
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
I then saw something that I thought might be related due to my docker network setup (Keycloak and Spring Boot web app in dockerized environment) and tried changing the hostname DNS so I could use something other than localhost, but it didn't work either (even after adding it to the redirect URI, etc.).
UPDATE #2:
Alright, so I got the keycloak.protect() (pure authentication) endpoint working now. Reading through the keycloak-connect lib code, I found more options, and it seems that adding "realmPublicKey" to the keycloak config object when instantiating keycloak-connect fixed that one. Still no luck on the authorization keycloak.enforcer() side.
const kcConfig: any = {
  clientId: 'my-rest-api',
  bearerOnly: true,
  serverUrl: 'http://localhost:8080/auth',
  realm: 'my-realm',
  realmPublicKey: "MIIBIjANBgk (...etc) uQIDAQAB",
};
So my team finally figured it out - the resolution was a two part process:
Followed the instructions on similar issue stackoverflow question answers such as : https://stackoverflow.com/a/51878212/5117487
Rough steps in case that link is ever broken:
Add a hosts entry for 127.0.0.1 keycloak (if 'keycloak' is the name of your Keycloak docker container; I changed my docker-compose to specify the container name to make it a little more fool-proof)
Change the keycloak-connect config authServerUrl setting to 'http://keycloak:8080/auth/' instead of 'http://localhost:8080/auth/'
Postman OAuth 2.0 token request Auth URL and Access Token URL changed to use the now updated hosts entry:
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/auth" -> "http://keycloak:8080/auth/realms/abra/protocol/openid-connect/auth"
"http://localhost:8080/auth/realms/abra/protocol/openid-connect/token" ->
"http://keycloak:8080/auth/realms/abra/protocol/openid-connect/token"

HTTP cookies are not working for localhost subdomains

I have a node/express API that will create an HTTP cookie and pass it down to my React app for authentication. The setup is based on Ben Awad's JWT HTTP Cookie tutorial on YouTube, if you're familiar with it. Everything works great when I am running the website on my localhost (localhost:4444). The issue I am now running into is that my app now uses subdomains for handling workspaces (similar to how JIRA or Monday.com uses a subdomain to specify a workspace/team). Whenever I run my app on a subdomain, the HTTP cookies stop working.
I've looked at a lot of threads regarding this issue and can't find a solution, no matter what I try, the cookie will not save to my browser. Here are the current things I have tried so far with no luck:
I've tried specifying the domain on the cookie. Both with a . and without
I've updated my host file to use a domain as a mask for localhost. Something like myapp.com:4444 which points to localhost:4444
I tried some fancy configuration I found where I was able to hide the port as well, so myapp.com pointed to localhost:4444.
I've tried Chrome, Safari, and Firefox
I've made sure there were no CORS issues
I've played around with the security settings of the cookie.
I also set up a ngrok server so there was a published domain to run in the browser
None of these attempts have made a difference so I am a bit lost at what to do at this point. The only other thing I could do is deploy my app to a proper server and just run my development off that but I really really don't want to do that, I should be able to develop from my local machine I would think.
My cookie knowledge is a bit bare so maybe there is something obvious I am missing?
This is what my setup looks like right now:
On the API I have a route(/refresh_token) that will create a new express cookie like so:
export const sendRefreshToken = (res: Response, token: string): void => {
  res.cookie('jid', token, {
    httpOnly: true,
    path: '/refresh_token',
  });
};
Then on the frontend it will essentially run this call on load:
fetch('http://localhost:3000/refresh_token', {
  credentials: 'include',
  method: 'POST'
}).then(async res => {
  const { accessToken } = await res.json()
  setState({ accessToken, workspaceId })
  setLoading(false)
})
It seems super simple to do but everything just stops working when on a subdomain. I am completely lost at this point. If you any ideas, that would be great!
If httpOnly is true, the cookie won't be parsable through client-side JS. For working with cookies on subdomains, set domain to the main domain (xyz.com).
An example in the backend:
res.cookie('refreshToken', refreshToken, {
  domain: authCookieDomain,
  path: '/',
  sameSite: 'None',
  secure: true,
  httpOnly: false,
  maxAge: cookieRefreshTokenMaxAgeMS
});
And on the frontend, add withCredentials: true to the axios options, or credentials: 'include' with fetch, and that should work.

Keycloak always redirecting to login page

I'm using a Keycloak instance to log in on the frontend and to secure the backend API. After deployment on a Linux machine on AWS I faced an issue: I'm constantly getting redirected to the login page when accessing the API with a JWT token. Locally it's working fine.
My client is a confidential client. I'm using client_id and client_secret to authorize the token call. The JWT token is valid and successfully generated.
My implementation of the API works with Express and the keycloak-nodejs-connect library:
keycloakConfig = {
  serverUrl: 'https://keycloak.myserver.com/auth',
  realm: 'examplerealm',
  clientId: 'ui-client'
};

public init() {
  if (this.keycloak) {
    console.warn("Trying to init Keycloak again!");
    return this.keycloak;
  }
  else {
    console.log("Initializing Keycloak...");
    const memoryStore = new session.MemoryStore();
    // @ts-ignore
    this.keycloak = new Keycloak({ store: memoryStore }, this.keycloakConfig);
    return this.keycloak;
  }
}
I could imagine that it depends on the current https setting. My Node.js API provides an endpoint for http and one for https (locally with a self-signed certificate). On the server where Keycloak is running, I added a Let's Encrypt certificate with certbot, and everything looks fine in the browser.
Keycloak is started with the jboss/keycloak docker container.
I'm curious to figure out my current issue and help is very much appreciated :slight_smile: Let me know if I missed adding any necessary information.
Thanks in advance.
Dominik
I found a solution for this.
First, I updated to the latest version of keycloak-connect. They released a new major version 12, and it seems there was a change to the configuration.
Second, there was an issue with the configuration itself. I dug into the current config object and figured out that it should look like this:
keycloakConfig = {
  realm: 'test-realm',
  authServerUrl: 'https://myurl/auth/',
  realmPublicKey: 'key'
};
