I am trying to get IdentityServer 4 running in a two-node ARR setup. I have other two-node web applications configured, but IdentityServer doesn't want to play nice. The servers are set up for HTTPS only. When I ran it as a single site everything was fine and all requests were https://... But in the ARR setup the requests start like:
https://identityserver.local/.well-known/openid-configuration
http://identityserver.local/connect/authorize?client_id=....
The second request results in a 404. When I have it as a regular single site, that second request is:
https://identityserver.local/connect/authorize?client_id=....
Why is it http instead of https when running with ARR?
The solution for this one was two-step:
First, I fixed the forwarded headers:
// Trust the X-Forwarded-Proto header from the ARR proxy so the app sees the original https scheme
// (the forwarded-headers middleware, e.g. app.UseForwardedHeaders(), must also be in the pipeline).
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedProto;
});
Next, configure data protection so that the encryption keys are shared by different instances of the app.
services.AddDataProtection()
    .SetApplicationName("MyAspNetCoreSample")
    .PersistKeysToFileSystem(new DirectoryInfo(@"path\to\shared\folder"));
Hope this helps someone.
I have an ASP.NET Core 3.1 web app that I run on my local development machine. The app runs successfully, and I can also successfully execute requests against it via Postman. Now I'm trying to run a test from a Node.js app that uses Axios to load one of the web pages. The request looks like this:
const result = await axios.get('https://localhost:5001/');
When this request runs, I receive the following error:
Error: unable to verify the first certificate
...
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
...
The fact that I can a) load the url in my browser and b) run the request from Postman leads me to believe there is a config issue with my Node app. I don't know if it's an issue with a) my axios request or b) some app configuration. Oddly, I receive the same error if I try to execute my request against http://localhost:5000/ (i.e. not over HTTPS).
I'm unsure how to resolve this issue though. How do I execute a request via Axios against a web app running on localhost?
You'll need to tell Axios/Node what signing authorities to trust (your browser and Postman already have those set up).
You do that by configuring the https agent in Axios; have a look at this answer for an example: How to configure axios to use SSL certificate?
And here are instructions on how to get the bundle from the browser (you'll probably need to use a p7b/pfx or get all certs in the chain): https://medium.com/@menakajain/export-download-ssl-certificate-from-server-site-url-bcfc41ea46a2
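For a concrete idea of what that looks like, here is a minimal sketch, assuming you've exported the certificate chain to a file (the name localhost-ca.pem below is just a placeholder):
// Minimal sketch: point axios at the CA / certificate chain you exported.
const fs = require('fs');
const https = require('https');
const axios = require('axios');

const agent = new https.Agent({
    ca: fs.readFileSync('localhost-ca.pem') // the exported chain / CA bundle
});

async function loadPage() {
    const result = await axios.get('https://localhost:5001/', { httpsAgent: agent });
    console.log(result.status);
}

loadPage();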
I have an Azure Web App running on Linux and I want to set up a rewrite rule for my application: when users visit example.com, I want to redirect them to www.example.com.
On Windows it's easy to do this with web.config, but how is it done on Linux?
On Linux there is no Apache or Nginx installed, so I'm wondering how the application is being served at all.
How can I get this done? Do I set up a rewrite rule on either Apache or Nginx to affect my Azure Web App running on Linux?
In theory, your needs can be met in two ways.
For more details, you can read this blog.
You can also download the sample code (ASP.NET Core URL Rewriting Sample) to test it.
The first method:
Re-Writing a URL
Here's how to handle a rewrite or redirect operation in app.Use() middleware:
app.Use(async (context, next) =>
{
    var url = context.Request.Path.Value;

    // Redirect to an external URL
    if (url.Contains("/home/privacy")) // you could also use an exact match
    {
        context.Response.Redirect("https://markdownmonster.west-wind.com");
        return; // short circuit the rest of the pipeline
    }

    await next();
});
The second method:
The ASP.NET Core Rewrite Middleware Module
Both methods are officially documented. You will need to deploy to Azure to actually test them. If you have any questions, you can also raise a support ticket for help.
I'm creating a fullstack web-app with an API as backend, and I'm hosting it on a DigitalOcean server.
The front-end (React) runs on one port (3000) and the backend (an Express server exposing a RESTful API) on another (3001).
I would like to be able to communicate with both of them from a single domain.
For example:
https://example.com/ => redirect to the front-end
https://example.com/a-specific-page => redirect to a specific page of the front-end
https://api.example.com/ => redirect to the backend API
https://api.example.com/login => redirect to the login part of the API
How can I do this?
I've already tried some things:
redirect the subdomain from my provider (ovh.com) => this is not the way
create a third Node.js server at the root on port 80 and redirect manually, but I don't think it's a good approach because I would have to handle every form of the domain name (www.mydomain.com / mydomain.com / http:/ etc...) and use concurrently to run them all together
I don't really want to put the frontend and the backend in the same running server (same port)
I'm quite new to managing servers, so I don't know much, sorry.
Thanks for the help.
PS: I'm French, so sorry for the bad English :)
Hello, since I don't have that much information I will stay general.
What you want is a reverse proxy. You can use http-proxy-middleware: https://www.npmjs.com/package/http-proxy-middleware.
Let's say you run your frontend on http://example.com:3000 and your backend on http://example.com:3001.
Now let's say you want http://example.com:3001/42-is-the-answer to point to http://example.com:3000 (you can add a path if you want).
The only thing to do would be to use a proxy on the server instance of example.com:3001, like so:
const express = require("express");
const proxy = require("http-proxy-middleware"); // v0.x API; v1+ exports { createProxyMiddleware } instead

const app = express();
....

app.use(proxy("/42-is-the-answer", {
    target: "http://example.com:3000/"
}));
Now if you access http://example.com:3001/42-is-the-answer, the request will be proxied to http://example.com:3000.
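If you'd rather expose a single public port and keep the API under the same domain (closer to what you described), a rough sketch along the same lines, assuming the server on port 3000 is the only one reachable from outside, could be:
// Sketch only: expose port 3000 publicly and forward /api/* to the backend on 3001.
const express = require("express");
const proxy = require("http-proxy-middleware");

const app = express();

// anything under /api goes to the Express API, with the /api prefix stripped
app.use(proxy("/api", {
    target: "http://localhost:3001",
    pathRewrite: { "^/api": "" }
}));

// ...serve the React build / the rest of the front-end here...

app.listen(3000);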
I hope this helps.
I have an Angular Universal app set up. I do POST requests on the server side using localhost to pre-render my app, and this works fine.
An example working URL would be http://localhost:8000/api/get-info.
I've now put the app into production on an external URL (Apache server). I'm also using SSL.
Now when I try to do a POST request on the server side to pre-render my app, I get back a response with status: 0, url: null (I'm assuming this means the connection was refused).
An example non-working URL would be https://mywebsite.com/api/get-info.
What really stumps me is that when the app loads on the client, all HTTPS requests start working. So the problem is that I cannot get the Express server to send POST requests to my external URL.
I've tested a POST request on the server side to a different website (Twitter), and that seems to work fine as well. So I'm not entirely sure where I've gone wrong.
I already have CORS set to '*' as well.
Try using
http://localhost:8000/api/get-info
in production as well. Since your Angular app is rendered on the same server your API is running on, using localhost should work just fine. It doesn't matter that you are on an external URL.
I do something similar (it's a GET but that shouldn't matter) with my translations:
if (this.isServer) {
    translateLoader.setUrl('http://localhost:4000/assets/localization/');
} else {
    translateLoader.setUrl('assets/localization/');
}
It works locally and in production (both server and client).
I just spent two days on this problem myself. Please take a look at my comment on https://github.com/angular/universal/issues/856#issuecomment-426254727.
Basically, I added a conditional check in Angular to see whether the app is running in the browser or on the server (rendered by Angular Universal), and I switch my API endpoint accordingly: the actual IP over https in the browser, localhost over http on the server. Also, in my Nginx config, I only redirect incoming requests from the browser to https, by checking whether the server_name is localhost.
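As a rough sketch of that check (isPlatformServer is Angular's API; the platformId injection, port, and domain below are just example placeholders):
// Sketch of the endpoint switch described above; isPlatformServer comes from @angular/common,
// platformId would be injected via PLATFORM_ID, and the URLs are just examples.
import { isPlatformServer } from '@angular/common';

export function apiBaseUrl(platformId) {
    return isPlatformServer(platformId)
        ? 'http://localhost:8000/api'    // server-side rendering calls the local API over http
        : 'https://mywebsite.com/api';   // the browser uses the public HTTPS origin
}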
Hope it helps!
I'm working on an application which delivers push content to a group of web applications hosted on different domains. I'm using Sails.js and Socket.io, and structured it like this:
The client script, running in the browser of each web application's users, is something like:
socket.on('customEvent', function (message) {
    // do something with the message when the event fires
});
And then, on the server, the 'customEvent' event is emitted when needed, and it works (e.g. in the onConnect event: sails.io.emit('customEvent', {message ...})).
But I'm facing a problem when it comes to handling authorization. The first approach I tried was cookie-based auth, as explained here (by changing the api/config/sockets.js function authorizeAttemptedSocketConnection), but it isn't a proper solution for production and it isn't supported in browsers with a more restrictive cookie policy (due to their default prohibition of third-party cookies).
My question is: how to implement a proper cross-browser and cross-domain authorization mechanism using sails.js, that can be supported in socket.io's authorization process?
======
More details:
I also tried adding a login with a well-known OAuth provider (e.g. Facebook), using this example as a base. I've got the Passport session, but I'm still unable to authenticate from the client script (it only works in pages hosted by my Node app directly).
A JSONP request to obtain the session might be a solution, but it didn't work well in Safari. Plus I prefer the user to be authenticated in one of the web apps, rather than in my node application directly.
I believe your routes must handle CORS, mate. Example:
// e.g. in config/routes.js
'/auth/logout': {
    controller: 'AuthController',
    action: 'logout',
    cors: '*'
},
Of course, you can specify the list of origins you are accepting (and then replace the '*').
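If you'd rather not repeat that on every route, a global setting along these lines should also work (just a sketch; the exact keys depend on your Sails version and the origins below are placeholders):
// config/cors.js -- rough sketch; key names vary slightly between Sails versions
module.exports.cors = {
    allRoutes: true,                                                    // apply CORS to every route
    origin: 'https://app-one.example.com,https://app-two.example.com',  // the client web apps' domains
    credentials: true                                                   // allow cookies on cross-origin requests
};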
Worth mentioning that you must specify where socket.io has to connect to (front-end JS):
socket = io.connect(API.url);
For any common HTTP GET/PUT/POST/DELETE request, make sure your AJAX calls are sent with credentials (the cookie). For example, with Angular:
$httpProvider.defaults.withCredentials = true;
Let me know how it goes.