My JHipster (v4.11.1) project uses a monolithic architecture with the OAuth 2.0 authentication type. I faced a couple of issues hosting it on a production server. These are the ones that are not fixed yet:
After I click the 'Sign in' link, I get redirected to the following URL:
https://my-domain-name/auth/realms/jhipster/protocol/openid-connect/auth?client_id=web_app&redirect_uri=http://my-docker-service-name/login&response_type=code&scope=openid%20profile%20email&state=hO2NCQ
The first issue: here I need my-docker-service-name to be my-domain-name (and to use https).
Note: at this point I see the Keycloak login page with the following error message: Invalid parameter: redirect_uri
If I change redirect_uri to my domain name manually, I see the Keycloak login page without the error. The next issue: after I enter my username/password, I get redirected to the following URL:
http://keycloak/auth/realms/jhipster/login-actions/authenticate?code=3MADiKg19-SL1L_lOmMEJv4w3kmGlF--0hyIDInKPm8&execution=07cacbc6-5b72-407e-9a0c-9a1b6447a7ff&client_id=web_app
As you can see, my second issue is that keycloak needs to be my-domain-name (and to use https).
Note: here, if I change the URL manually to my-domain-name, I see the login page with an invalid username/password error message.
Moreover, I have the same problem accessing the Keycloak administration console (it redirects to http://keycloak), and I can't see the login page (Invalid parameter: redirect_uri).
I can provide more information about my production configuration if needed. For instance, I use Nginx as a reverse proxy and for handling HTTPS requests. My Nginx instance is a Docker container that uses the default Docker network to find its upstreams (keycloak for the /auth path and my-app for the / path).
Even though I faced the above issues, so far I am very happy with the result, and I would like to thank the JHipster team, the Keycloak team and Matt Raible! :-) for making it possible for us to use these great frameworks together! Cheers!
First of all, section 8.3 (Setting Up a Load Balancer or Proxy) of the Keycloak documentation was absolutely helpful. I was able to get things somehow working, but I still feel things can be done better and in a more secure way!
I am not going to repeat the needed configuration on the Keycloak side, but I will rather provide some hints in case you are using Nginx as a reverse proxy.
Here is my nginx.conf, which includes the required configs:
```
upstream rock-app {
    server rock-app:8080;
}

upstream keycloak {
    server keycloak:9080;
}

server {
    listen 80;
    listen 443 ssl http2;
    ...
    add_header Strict-Transport-Security "max-age=86400; includeSubdomains; preload" always;
    ...

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://rock-app;
    }

    location /auth {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://keycloak;
    }
}
```
Note: the add_header Strict-Transport-Security ... part is important; it ensures users stay on the https protocol once they have entered the site over an https URL. Please correct me if I am wrong!?
Now if I visit the following URL:
https://my-example.com/auth/realms/jhipster/.well-known/openid-configuration
I see this response:
```
{
  "issuer": "http://my-example.com/auth/realms/jhipster",
  "authorization_endpoint": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/auth",
  "token_endpoint": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/token",
  "token_introspection_endpoint": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/token/introspect",
  "userinfo_endpoint": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/userinfo",
  "end_session_endpoint": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/logout",
  "jwks_uri": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/certs",
  "check_session_iframe": "http://my-example.com/auth/realms/jhipster/protocol/openid-connect/login-status-iframe.html",
  "grant_types_supported": ["authorization_code", "implicit", "refresh_token", "password", "client_credentials"],
  "response_types_supported": ["code", "none", "id_token", "token", "id_token token", "code id_token", "code token", "code id_token token"],
  "subject_types_supported": ["public", "pairwise"],
  "id_token_signing_alg_values_supported": ["RS256"],
  "userinfo_signing_alg_values_supported": ["RS256"],
  "request_object_signing_alg_values_supported": ["none", "RS256"],
  "response_modes_supported": ["query", "fragment", "form_post"],
  "registration_endpoint": "http://my-example.com/auth/realms/jhipster/clients-registrations/openid-connect",
  "token_endpoint_auth_methods_supported": ["private_key_jwt", "client_secret_basic", "client_secret_post"],
  "token_endpoint_auth_signing_alg_values_supported": ["RS256"],
  "claims_supported": ["sub", "iss", "auth_time", "name", "given_name", "family_name", "preferred_username", "email"],
  "claim_types_supported": ["normal"],
  "claims_parameter_supported": false,
  "scopes_supported": ["openid", "offline_access"],
  "request_parameter_supported": true,
  "request_uri_parameter_supported": true
}
```
As you may notice, http://my-example.com/... is shown instead of https://my-example.com/...
Therefore I had to change the following setting of my realm (jhipster-realm.json) from
"sslRequired": "external",
to
"sslRequired": "none",
I don't know if that is a bad thing, considering that (1) my browser never leaves https when I test the login workflow and (2) my Keycloak instance is not accessible through any public port.
Well, I am not going to accept my own answer as the accepted answer because, as I said earlier, I feel things can be done better and in a more secure way. Thanks!
Update
I've made the following changes to use the https protocol:
Dockerfile
```
FROM jboss/keycloak:3.4.1.Final
```
standalone.xml
```
<server name="default-server">
    ...
    <http-listener name="default" socket-binding="http" redirect-socket="proxy-https" proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}" certificate-forwarding="true" enable-http2="true"/>
    <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" proxy-address-forwarding="${env.PROXY_ADDRESS_FORWARDING}" certificate-forwarding="true" enable-http2="true"/>
    ...
<socket-binding-group ...
    <socket-binding name="proxy-https" port="443"/>
    ...
```
nginx.conf
```
upstream keycloak {
    server keycloak:9443;
}
...
server {
    listen 80;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2;
    ...
    location /auth {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://keycloak;
    }
    ...
}
```
jhipster-realm.json
```
...
"sslRequired": "external",
```
HTH, Thanks!
I will skip the usual rant on time spent, frustration, MS is stupid, etc.
I have tried to be as complete as possible.
We have 5 Azure App Services: 3 ASP.NET Core 5.0 apps and 2 Blazor Server apps, and we are using Azure AD B2B.
We had the first two or three working on Front Door and then discovered it does not support SignalR (websockets). Wait, I promised not to rant.
We switched to NGINX.
Below is the basic configuration (all https). It is verbose, and I checked each entry as I wrote this, hoping to find an error.
app1.azurewebsites.net
app2.azurewebsites.net
app3.azurewebsites.net
app4.azurewebsites.net
app5.azurewebsites.net
We need it to work like this:
domain.com/ - app1
domain.com/app2
domain.com/app3
domain.com/app4
domain.com/app5
The Redirect URIs in AD, the application configuration overrides, and appsettings.config are set to:
domain.com/signin-oidc
domain.com/app2/signin-oidc
domain.com/app3/signin-oidc
domain.com/app4/signin-oidc
domain.com/app5/signin-oidc
My current NGINX config is:
```
server {
    server_name domain.com;
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/mycert.cert;
    ssl_certificate_key /etc/nginx/ssl/mycert.prv;

    location /app2 {
        proxy_pass https://app2.azurewebsites.net/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header X-Real-Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass https://app1.azurewebsites.net/;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
The app services all have this code:
```
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<ForwardedHeadersOptions>(options =>
    {
        options.ForwardedHeaders =
            ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto |
            ForwardedHeaders.XForwardedHost;
    });
}
```
When I try domain.com in the browser, I get this error:
**AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '{ClientId Guid}'.**
When I inspect the request, it looks like this:
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={ClientId Guid}&redirect_uri=https://app1.azurewebsites.net/signin-oidc ...
This is true for all of the apps.
I am at a loss as to how to solve this. MS support was no help, even in a made-up non-NGINX scenario.
I hired an NGINX "expert" who got nowhere.
I have a call scheduled EOW with Okta, who "believe" they have a solution.
None of this is optimal, and it has wrecked hours of CI/CD work.
Has anyone made this work? If so how?
TIA
G
I believe this is a workaround, but have you tried changing the AD Redirect URIs to *.azurewebsites.net instead of domain.com/appN/signin-oidc?
I have my backend Node.js app running on port 23456.
I have my frontend Vue app running on port 8080.
When I start them and visit my domain, e.g. test.dev, the frontend is visible, but I'm not able to log in, and it feels like nothing reaches the DB at all.
The backend starts fine and the frontend starts fine; it just feels like they don't talk to each other since they are on different ports.
For days I have been reading about this, and it seems to be a CORS issue. I have tried to find the right config, but since I'm a noob at this, nothing has worked.
I'm currently running NGINX, and this is what my file (/etc/nginx/sites-available/test.dev.conf) looks like right now:
```
server {
    listen 80;
    server_name test.dev www.test.dev;
    return 301 https://test.dev$request_uri;
}

server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name test.dev www.test.dev;

    # Use the Let's Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/test.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.dev/privkey.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:23456;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }

    location / {
        add_header 'Access-Control-Allow-Origin' 'http://localhost:8080' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE' always;
        add_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin' always;
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
I can visit test.dev and see the frontend; however, clicking on login doesn't work.
It feels like it's not connecting to the backend (which is on another port) at all, and I don't know how to get this to work.
This is the error I'm getting in the console:
Any idea what my .conf file should look like?
Thanks in advance.
I have managed to find the solution somehow.
What I want to share is that the code posted above is correct.
The solution was that I changed the API base URL in the Vue frontend from
http://localhost:23456/api/v1
to
https://test.dev/api/v1
That's it. The login worked just fine and everything works fine.
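To keep the same code working both in local development and behind Nginx, one common pattern is to derive the API base URL from the environment. A minimal sketch (the helper name and the environment check are my own, not from the original post):

```javascript
// Hypothetical helper: in production the frontend calls the nginx-proxied
// /api path on the same origin (so no CORS is involved); in development it
// talks to the Node backend port directly.
function apiBase(env) {
  return env === 'production'
    ? 'https://test.dev/api/v1'        // routed through nginx's "location /api"
    : 'http://localhost:23456/api/v1'; // straight to the backend
}

console.log(apiBase(process.env.NODE_ENV));
```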
Thanks for all help!
I have made my Node.js app, hosted it on a DigitalOcean server, and connected it to a domain name, and all works fine. But when I try to add an SSL certificate (using the https module instead of http), it doesn't work. Here is the code:
```
var sslopt = {
    key: fs.readFileSync('./ssl/server.key'),
    cert: fs.readFileSync('./ssl/server.crt'),
    ca: [fs.readFileSync('./ssl/ca1.crt'), fs.readFileSync('./ssl/ca2.crt')]
};

var server = https.createServer(sslopt, function (req, res) {
    ...
});

server.listen(8001, function (err) {
    ...
});
```
My Node.js app runs fine, but when I try to access it, I just see a 502 Bad Gateway error, and no requests are sent to my Node.js app. When I opened my nginx error log, I saw these errors:
```
-date- -time- [error] 18116#18116: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: -ip-, server: -server-, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: -host-
```
But the strangest thing is that if I access it over the https protocol on port 8001 (https://{domainname}.com:8001), I can see my app working fine, though the connection is not marked as secure.
I just can't understand what I'm doing wrong...
P.S.
My nginx config file:
```
server {
    listen *:443;
    listen *:80;
    server_name {myhostname};

    access_log /var/log/nginx/qt.access.log;
    error_log /var/log/nginx/qt.error.log;

    root /srv/qt;
    index index.html index.htm index.php;

    # Headers to pass to the proxied server.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_cache_bypass $http_upgrade;
    proxy_http_version 1.1;
    proxy_redirect off;

    # Go to the next upstream if the server is down.
    proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
    proxy_connect_timeout 5s;

    # Gateway timeout.
    proxy_read_timeout 20s;
    proxy_send_timeout 20s;

    # Buffer settings.
    proxy_buffers 8 32k;
    proxy_buffer_size 64k;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
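One detail worth double-checking in a setup like this (my observation, not from the original post): the location block proxies to http://127.0.0.1:8001, while the Node process above now speaks TLS on 8001, so nginx is sending plain HTTP to an HTTPS socket. That kind of scheme mismatch typically shows up exactly as a connection reset while reading the response header. A sketch of the two usual ways to line the schemes up:

```nginx
# Option 1 (the common pattern): terminate TLS in nginx only, and run the
# Node app as a plain http.createServer on 8001, keeping:
#     proxy_pass http://127.0.0.1:8001;

# Option 2: keep the Node https server and let nginx proxy over TLS:
location / {
    proxy_pass https://127.0.0.1:8001;
}
```

Either way, the scheme in proxy_pass has to match what is actually listening on the port.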
Recognizing that this is an old post, hopefully this will still help someone out. I just figured out what seems to be the same issue, and it was proxy-server related.
My Node.js server worked fine without SSL but gave the 502, without the request ever reaching the server, when I used https.
```
var https = require('https');
var fs = require('fs');
// ... Express app setup elided ...

// =============================================================================
// START THE SERVER
var port = process.env.PORT || 8080; // set our port
https.createServer({
    key: fs.readFileSync('./certs/private.key'),
    cert: fs.readFileSync('./certs/cert.crt')
}, app).listen(port);
//app.listen(port);
console.log('API Active on port ' + port);
```
My client and server are behind a pfSense firewall using a Squid3 proxy server.
pfSense controls the DNS for api.mydomain.com and routes the traffic to my dev server when I make a call to api.mydomain.com:8080/myroute. I have a valid SSL/TLS cert for api.mydomain.com.
With the app.listen(port) line uncommented, everything worked great.
With the https.createServer(...) line uncommented, I got the 502 error and traffic never reached the server.
To fix:
1. In pfSense, click Services -> Squid Proxy Server.
2. Click on the ACLs configuration page.
3. At the bottom of the page, find the section called Squid Allowed Ports.
4. In the field For ACL SSL Ports, enter the port your application is using (in my case 8080).
For me, rainbows appeared and bluebirds did sing.
I am trying to set up nginx as a reverse proxy server in front of several IIS web servers that authenticate using Basic authentication.
(Note: this is not the same as nginx providing the auth using a password file; it should just be marshalling everything between the browser and the server.)
It's working, kind of, but I'm getting repeatedly prompted for auth by every single resource (image/CSS etc.) on a page.
```
upstream my_iis_server {
    server 192.168.1.10;
}

server {
    listen 1.1.1.1:80;
    server_name www.example.com;

    ## send request back to my iis server ##
    location / {
        proxy_pass http://my_iis_server;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass_header Authorization;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
This exact situation took me forever to figure out, but OSS is like that, I guess. This post is a year old, so maybe the original poster figured it out, or gave up?
Anyway, the problem for me at least was caused by a few things:
IIS expects the realm string to be the same as what it sent to Nginx, but if your Nginx server_name is listening on a different address than the upstream, then the server-side WWW-Authenticate header is not going to be what IIS was expecting, and it gets ignored.
The built-in headers module doesn't clear the other WWW-Authenticate headers, particularly the problematic WWW-Authenticate: Negotiate. Using the headers-more module clears the old headers and adds whatever you tell it to.
After this, I was able to finally push Sharepoint 2010 through Nginx.
Thanks stackoverflow.
```
server {
    listen 80;
    server_name your.site.com;

    location / {
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_pass_header Authorization;  # this didn't work for me
        more_set_input_headers 'Authorization: $http_authorization';
        proxy_set_header Accept-Encoding "";
        proxy_pass https://sharepoint/;
        proxy_redirect default;
        # This is what worked for me, but you need the headers-more mod
        more_set_headers -s 401 'WWW-Authenticate: Basic realm="intranet.example.com"';
    }
}
```
I had the same symptoms with nginx/1.10.3. I have a service secured with basic authentication and nginx as a reverse proxy between the clients and the server. The requirement was that nginx pass the authorization through.
The first request to the server did pass the Authorization header through. The second request simply dropped this header, which meant the client was only able to make one request per session.
This was somehow related to cookies. If I cleared the browser cookies, the cycle repeated: the client was able to authenticate, but just for the first request. Closing the browser had the same effect.
The solution for me was to change the upstream server from https to http, using:
```
proxy_pass http://$upstream;
```
instead of:
```
proxy_pass https://$upstream;
```
I'm using Node.js behind an NGINX server; this is my Nginx configuration:
```
upstream example.it {
    server 127.0.0.1:8000;
}

server {
    server_name www.example.it;

    location / {
        proxy_pass http://example.it;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
All works well: the requests are correctly forwarded from nginx to Node, BUT I saw a problem in the log file generated by Express.js.
The problem is that ALL THE REQUESTS are logged as coming from 127.0.0.1. Why?
I don't see the remote hosts (the real IP of whoever made the request).
Thanks
Assuming you're using Express 3.0, a better way to do this is through the trust proxy setting.
From the Express documentation:
trust proxy Enables reverse proxy support, disabled by default
In order to use it:
```
app.set('trust proxy', true);
app.use(express.logger('default'));
```
This has the added advantage of working correctly both when you're using a proxy and when you're not (for example, in a development environment).
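For intuition, what trust proxy effectively changes is where req.ip comes from: the left-most X-Forwarded-For entry instead of the socket peer. A dependency-free sketch of that logic (the helper is hypothetical, not Express source):

```javascript
// Mimics what 'trust proxy' does for req.ip: prefer the left-most
// X-Forwarded-For entry set by the proxy, fall back to the socket address.
function clientIp(headers, socketAddr) {
  var xff = headers['x-forwarded-for'];
  if (!xff) return socketAddr;
  return xff.split(',')[0].trim();
}

console.log(clientIp({ 'x-forwarded-for': '203.0.113.7, 127.0.0.1' }, '127.0.0.1')); // → 203.0.113.7
console.log(clientIp({}, '127.0.0.1'));                                              // → 127.0.0.1
```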
That's correct: as far as Express can tell, nginx is the remote host. You need to specify a custom log format to log the X-Forwarded-For header; see the Connect logger documentation.
```
app.use(express.logger(':req[X-Forwarded-For] - - [:date] ":method :url HTTP/:http-version" :status :res[content-length] ":referrer" ":user-agent"'));
```