I have an ASP.NET Core 2.2.1 app hosted in IIS.
How do I enforce HTTPS except when the request is from/to localhost?
To apply UseHttpsRedirection conditionally, you could try MapWhen like below:
app.MapWhen(context =>
{
    // Request.Path does not contain the host name, so check Request.Host instead,
    // and only branch into HTTPS redirection when the request is NOT on localhost.
    var host = context.Request.Host.Host;
    return !host.Contains("localhost") && !context.Request.IsHttps;
}, subapp =>
{
    subapp.UseHttpsRedirection();
});
In order to properly build the URLs in my XML sitemaps and RSS feeds, I want to determine whether the page is currently being served over http or https, so that it also works locally in development.
export default function handler(req, res) {
  const host = req.headers.host;
  const proto = req.connection.encrypted ? "https" : "http";
  // construct url for xml sitemaps
}
With the above code, however, even on Vercel it still shows as being served over http. I would expect it to be https there. Is there a better way to figure out http vs https?
Because Next.js API routes on Vercel run behind a proxy that terminates TLS, the protocol the handler sees is http.
By changing the code to the following, I was able to check which protocol the proxy received the request on.
const proto = req.headers["x-forwarded-proto"];
However, this breaks in development, where you are not running behind a proxy, and in other deployment setups that might not involve a proxy either. To support both use cases, I eventually ended up with the following code.
const proto =
  req.headers["x-forwarded-proto"] ||
  (req.connection.encrypted ? "https" : "http");
Whenever the x-forwarded-proto header is not present (undefined), we fall back to req.connection.encrypted to determine whether we are serving over http or https.
Now it works on localhost as well as in a Vercel deployment.
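For the original goal of building absolute URLs for the sitemap, here is a minimal sketch of how this detection can be used; the handler file name and the XML payload are illustrative, not part of the original answer.
// pages/api/sitemap.js (illustrative file name)
export default function handler(req, res) {
  const host = req.headers.host;
  // Prefer the proxy's x-forwarded-proto header, fall back to the socket state
  const proto =
    req.headers["x-forwarded-proto"] ||
    (req.connection.encrypted ? "https" : "http");
  const baseUrl = `${proto}://${host}`;

  // Use baseUrl for every <loc> entry in the sitemap
  res.setHeader("Content-Type", "application/xml");
  res.end(
    `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>${baseUrl}/</loc></url>
</urlset>`
  );
}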
My solution:
import { GetServerSideProps } from "next";

export const getServerSideProps: GetServerSideProps = async (context: any) => {
  // Read the protocol from the referer header of the incoming request
  // (note: the referer header can be missing on direct navigation)
  const reqUrl = context.req.headers["referer"];
  const url = new URL(reqUrl);
  console.log(url.protocol); // "http:" or "https:"

  // Fetch data from the API using the detected origin
  const origin = `${url.protocol}//${url.host}`;
  const res = await fetch(`${origin}/api/projets`);
  const data = await res.json();

  // Pass data to the page via props
  return { props: { data } };
};
We are building a multi-tenant solution with Node.js/Express for the back end and Vue.js/Nuxt for the front end. Each tenant will get their own subdomain, like x.mysite.com, y.mysite.com, etc.
How can we make both our back end and front end read the subdomain name and share it with each other?
I understand that in the Vue client we can read the subdomain using window.location, but I think that's too late. Is there a better way? And what about the Node/Express setup? How do we get the subdomain info there?
Note that the Node/Express server is primarily an API to interface with the database and handle authentication.
Any help or insight to put us on the right path is appreciated.
I'm doing something similar in my app. My solution looks something like this...
Front end: In router.vue, I check the subdomain using window.location.host to decide which routes to return. There are 3 options:
no subdomain loads the original routes (mysite.com)
portal subdomain loads the portal routes (portal.mysite.com)
any other subdomain loads the routes for the custom client subdomain, which can be anything and is dynamic
My routes for situation #3 look like this:
import HostedSiteHomePage from 'pages/hostedsite/hosted-site-home'

export const hostedSiteRoutes = [
  { path: '*', component: HostedSiteHomePage }
]
The asterisk means that any unmatched route will fall back to it.
In your fallback page (or any page), you will want this (beforeMount is the important part here):
beforeMount: function () {
  var host = window.location.host
  this.subdomain = host.split('.')[0]
  // if the first segment is 'www', the tenant subdomain is the next segment
  if (this.subdomain === 'www') this.subdomain = host.split('.')[1]
  this.fetchSiteContent()
},
methods: {
  fetchSiteContent() {
    if (!this.subdomain || this.subdomain === 'www') {
      this.siteContentLoaded = true
      this.errorLoadingSite = true
      return
    }
    // send the subdomain to the server and get back a configuration object
    http.get('/Site/LoadSite', { params: { site: this.subdomain } }).then((result) => {
      if (result && result.data && result.data.success === true) {
        this.siteContent = result.data.content
      } else {
        this.errorLoadingSite = true
      }
      this.siteContentLoaded = true
    }).catch((err) => {
      console.log("Error loading " + this.subdomain + "'s site", err)
      this.errorLoadingSite = true
      this.siteContentLoaded = false
    })
  },
}
I store a configuration object as JSON in the database for each subdomain and return it to the client side when the subdomain matches, then update the site to match the information/options in the config object.
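On the Node/Express side, a minimal sketch of the matching endpoint could look like the following; the route name mirrors the client call above, and loadSiteConfig is a hypothetical database lookup, not something from my actual app.
const express = require("express");
const app = express();

app.get("/Site/LoadSite", async (req, res) => {
  // The client sends the subdomain as a query parameter; as a fallback,
  // derive it from the Host header (e.g. "x.mysite.com" -> "x").
  const fromHost = (req.headers.host || "").split(".")[0];
  const subdomain = req.query.site || fromHost;

  // loadSiteConfig is a hypothetical lookup of the per-tenant config JSON
  const content = await loadSiteConfig(subdomain);
  if (!content) {
    return res.json({ success: false });
  }
  res.json({ success: true, content });
});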
Here is my router.vue
These domain names are supported:
mysite.com (loads main/home routes)
portal.mysite.com (loads routes specific to the portal)
x.mysite.com (loads routes that support dynamic subdomain, fetches config from server)
y.mysite.com (loads routes that support dynamic subdomain, fetches config from server)
localhost:5000 (loads main/home routes)
portal.localhost:5000 (loads routes specific to the portal)
x.localhost:5000 (loads routes that support dynamic subdomain, fetches config from server)
y.localhost:5000 (loads routes that support dynamic subdomain, fetches config from server)
import Vue from 'vue'
import VueRouter from 'vue-router'
// 3 different routes objects in routes.vue
import { portalRoutes, homeRoutes, hostedSiteRoutes } from './routes'

Vue.use(VueRouter);

function getRoutes() {
  let routes;
  var host = window.location.host
  var subdomain = host.split('.')[0]
  if (subdomain === 'www') subdomain = host.split('.')[1]
  console.log("Subdomain: ", subdomain)
  // check for localhost to work in dev environment
  // another viable alternative is to override /etc/hosts
  if (subdomain === 'mysite' || subdomain.includes('localhost')) {
    routes = homeRoutes
  } else if (subdomain === 'portal') {
    routes = portalRoutes
  } else {
    routes = hostedSiteRoutes
  }
  return routes;
}

let router = new VueRouter({
  mode: 'history',
  routes: getRoutes()
})

export default router
As you can see, I have 3 different sets of routes, one of which supports dynamic subdomains. I send a GET request to the server once I load the dynamic subdomain page and fetch a configuration object that tells the front end what that site should look like.
I have a Node.js REST API hosted on localhost, and a Node.js web app that consumes it. This app, too, is running on localhost. Everything was working fine, but after a restart the web app just could not connect to the REST API anymore. I am running Windows 10.
I tested the REST API with Postman and also with the browser; it worked. There is no issue with the REST API.
Tried changing the port numbers - same result.
I ran Wireshark to see the difference between requests made from the browser and from the Node.js web app. Below is the screenshot: the first two lines are when the Node.js app made the request, and the next two are from the browser.
I am not able to understand what is wrong here. I tried with a standalone Node.js script, and that failed too. Below is the script I used.
var request = require('request');

var u = "xxx";
var p = "xxx";
// Build the Basic auth header (Buffer.from replaces the deprecated new Buffer())
var auth = "Basic " + Buffer.from(u + ":" + p).toString("base64");

var username = "qqqq";
var password = "eeee";

var options = {
    url: 'http://localhost:4001/api/v1/transaction',
    headers: {
        "Authorization": auth
    },
};

console.log(options.url);

request.get(options, function (error, response, body) {
    //console.log(options);
    //console.log(response);
    // response is undefined when the connection fails, so guard before reading it
    console.log(response && response.statusCode);
    console.log(body);
    if (!error && response.statusCode == 200) {
        var userObj = JSON.parse(body);
        console.log(userObj);
    } else {
        console.log("---- Error ----");
        console.log(error);
    }
});
I have found the problem and I am posting the answer in the hope that someone will find it useful.
My hint came from Wireshark (screenshot in the question): all successful requests went to [::1], not localhost or 127.0.0.1. After the reboot of the Windows 10 machine, the REST API Node.js app was no longer serving on the IPv4 localhost but on the IPv6 localhost. There was absolutely no issue with the code.
Instead of using localhost in the URL in the consuming web app, I changed it to [::1] and it started to work.
.....
.....
var options = {
    // url : 'http://localhost:4001/api/v1/transaction',
    // replaced localhost with [::1]
    url: 'http://[::1]:4001/api/v1/transaction',
    headers: {
        "Authorization": auth
    },
};
.....
.....
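Alternatively, if you control the REST API, you can sidestep the ambiguity by binding it explicitly to the IPv4 loopback address. A minimal sketch, assuming a plain Express app and the port from the question:
const express = require("express");
const app = express();

// Binding explicitly to 127.0.0.1 keeps the API on the IPv4 loopback,
// regardless of how the OS resolves "localhost" after a reboot.
app.listen(4001, "127.0.0.1", () => {
  console.log("API listening on http://127.0.0.1:4001");
});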
Scenario:
I have an express.js server which serves variations of the same static landing page based on where req.headers.host says the user is coming from - think sort of like A/B testing.
GET tulip.flower.com serves pages/flower.com/tulip.html
GET rose.flower.com serves pages/flower.com/rose.html
At the same time, this one IP is also responsible for:
GET potato.vegetable.com serving pages/vegetable.com/potato.html
It's important that these pages are served FAST, so they are precompiled and optimized in all sorts of ways.
The server now needs to:
Provide separate certificates for *.vegetables.com, *.fruits.com, *.rocks.net
Optionally provide no certificate for *.flowers.com
Offer HTTP/2
The problem is that HTTP/2 mandates a certificate, and there are now multiple certificates in play.
It appears that it's possible to use multiple certificates on one Node.js (and presumably by extension Express.js) server, but is it possible to combine it with a module like spdy, and if so, how?
Instead of hacking Node, would it be smarter to hand off the task of sorting out HTTP/2 and SSL to nginx? Or should a caching network like Imperva or Akamai handle this?
You can also use tls.createSecureContext; nginx is not necessary.
My example is here:
const fs = require("fs");
const https = require("https");
const tls = require("tls");

const certs = {
    "localhost": {
        key: "./certs/localhost.key",
        cert: "./certs/localhost.crt",
    },
    "example.com": {
        key: "./certs/example.key",
        cert: "./certs/example.cert",
        ca: "./certs/example.ca",
    },
};

function getSecureContexts(certs) {
    if (!certs || Object.keys(certs).length === 0) {
        throw new Error("No certificates were found.");
    }
    const certsToReturn = {};
    for (const serverName of Object.keys(certs)) {
        const appCert = certs[serverName];
        certsToReturn[serverName] = tls.createSecureContext({
            key: fs.readFileSync(appCert.key),
            cert: fs.readFileSync(appCert.cert),
            // If the 'ca' option is not given, then Node.js will use its default CAs
            ca: appCert.ca ? sslCADecode(
                fs.readFileSync(appCert.ca, "utf8"),
            ) : null,
        });
    }
    return certsToReturn;
}

// If the CA bundle contains more than one certificate, split it into an array
function sslCADecode(source) {
    if (!source || typeof (source) !== "string") {
        return [];
    }
    return source.split(/-----END CERTIFICATE-----[\s\n]+-----BEGIN CERTIFICATE-----/)
        .map((value, index, array) => {
            if (index) {
                value = "-----BEGIN CERTIFICATE-----" + value;
            }
            if (index !== array.length - 1) {
                value = value + "-----END CERTIFICATE-----";
            }
            value = value.replace(/^\n+/, "").replace(/\n+$/, "");
            return value;
        });
}

const secureContexts = getSecureContexts(certs);

const options = {
    // A function that will be called if the client supports the SNI TLS extension.
    SNICallback: (servername, cb) => {
        const ctx = secureContexts[servername];
        if (!ctx) {
            console.log(`No SSL certificate found for host: ${servername}`);
        } else {
            console.log(`SSL certificate found and assigned to ${servername}`);
        }
        if (cb) {
            cb(null, ctx);
        } else {
            return ctx;
        }
    },
};

const httpsServer = https.createServer(options, (req, res) => { console.log(req, res); });

httpsServer.listen(443, function () {
    console.log("Listening https on port: 443");
});
If you want to test it:
edit /etc/hosts and add the record 127.0.0.1 example.com
open a browser with the URL https://example.com:443
Nginx can handle SSL termination nicely, and this offloads SSL processing from your application servers.
If you have a secure private network between your nginx and application servers, I recommend offloading SSL via an nginx reverse proxy: nginx listens on SSL (certificates are managed on the nginx servers) and reverse proxies requests to the application servers over plain HTTP, so the application servers don't need certificates, SSL config, or the SSL processing burden.
If you don't have a secure private network between your nginx and application servers, you can still use nginx as a reverse proxy by configuring the upstreams as SSL, but you will lose the offloading benefits.
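On the application-server side of such a setup, the app usually needs to know the original protocol for redirects and absolute URLs. A minimal Express sketch, assuming nginx sets the standard X-Forwarded-* headers:
const express = require("express");
const app = express();

// Trust the X-Forwarded-* headers set by the nginx reverse proxy so that
// req.protocol and req.secure reflect the original client connection.
app.set("trust proxy", true);

app.get("/", (req, res) => {
  res.send(`Original protocol: ${req.protocol}`);
});

app.listen(8080);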
CDNs can do this too. They are basically a reverse proxy plus caching, so I don't see a problem there.
Let's Encrypt w/ Greenlock Express v3
I'm the author of Greenlock Express, which is Let's Encrypt for Node.js, Express, etc., and this use case is exactly what I made it for.
The basic setup looks like this:
require("greenlock-express")
.init(function getConfig() {
return {
package: require("./package.json")
manager: 'greenlock-manager-fs',
cluster: false,
configFile: '~/.config/greenlock/manager.json'
};
})
.serve(httpsWorker);
function httpsWorker(server) {
// Works with any Node app (Express, etc)
var app = require("./my-express-app.js");
// See, all normal stuff here
app.get("/hello", function(req, res) {
res.end("Hello, Encrypted World!");
});
// Serves on 80 and 443
// Get's SSL certificates magically!
server.serveApp(app);
}
It also works with node cluster so that you can take advantage of multiple cores.
It uses SNICallback to dynamically add certificates on the fly.
Site Management
The default manager plugin uses files on the file system, but there's great documentation on how to build your own.
Just to get started, the file-based plugin uses a config file that looks like this:
~/.config/greenlock/manager.json:
{
    "subscriberEmail": "letsencrypt-test@therootcompany.com",
    "agreeToTerms": true,
    "sites": [
        {
            "subject": "example.com",
            "altnames": ["example.com", "www.example.com"]
        }
    ]
}
Very Extensible
I can't post all the possible options here, but it's very small and simple to start with, and very easy to scale out with advanced options as you need them.
I'm having trouble trying to use http-proxy to route to localhost.
I'm using iisnode, but it's not working from a console app either.
If "target" is set to google, for example, it works; it also works with the local:9000 server created in this snippet, but it doesn't work with sites running in my local IIS.
Any ideas?
UPDATE: the code snippet posted now works for me, though there is still a lot of work missing.
var port = process.env.PORT;
var http = require('http'),
    httpProxy = require('http-proxy'),
    url = require('url');

// http server
var proxy = new httpProxy.createServer({});

var httpServer = http.createServer(function (req, res) {
    // req.path does not exist on a plain http request object; use req.url
    console.log('request received: ' + req.url);
    var target = 'http://myapp';
    if (!req.url.toString().startsWith('/')) {
        target = target + '/';
    }
    target = target + req.url;
    console.log('routing request to ' + target);
    var urlObj = url.parse(req.url);
    req.headers['host'] = urlObj.host;
    req.headers['url'] = urlObj.href;
    proxy.web(req, res, {
        host: urlObj.host,
        target: target,
        enable: { xforward: true }
    });
});

httpServer.listen(port);

// Polyfills for older Node versions without String#endsWith / String#startsWith
String.prototype.endsWith = function (s) {
    return this.length >= s.length && this.substr(this.length - s.length) == s;
};

String.prototype.startsWith = function (str) {
    return this.indexOf(str) == 0;
};
I tested a simple node http-proxy outside iisnode, and it works fine routing requests to any site: localhost, IIS, or the web.
But in iisnode, I can't make it work properly either. The request headers are not exactly the same, and since we're running on a named pipe instead of a TCP port, it's difficult to find what is causing the bad routing.
For instance, if we change req.url to '/', then it sends the request to the same domain the node app is running on, instead of the target domain.
A possible solution would be to use an IIS reverse proxy (using ARR and URL Rewrite) to forward requests to your node proxy app running standalone on a TCP port.
I have tested it and it works fine. I can give a config example if needed.
Update: Here's how I made it work using IIS reverse proxy and a node proxy app. It doesn't use iisnode though:
A simple node proxyServer.js app:
var http = require('http')
var httpProxy = require('http-proxy')
var proxy = httpProxy.createProxyServer({})

var server = http.createServer(function (req, res) {
  console.log(req.url)
  proxy.web(req, res, { target: 'http://nodejs.org' })
}).listen(3000)
Running it using node.exe proxyServer.js
In IIS, we need to activate the reverse proxy in Application Request Routing (ARR): on the IIS server node, open ARR, go to Proxy settings..., and enable the proxy.
And then I create a dedicated website node.mydomain.com and a rewrite rule for this website in IIS:
<rule name="ReverseProxyInboundNodeProxy" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
        <add input="{CACHE_URL}" pattern="^(https?)://" />
    </conditions>
    <action type="Rewrite" url="{C:1}://localhost:3000/{R:1}" />
</rule>
It's kind of a "double proxy" solution, but it works. I still have issues making a node proxy server run through iisnode. Of course, you need ARR and URL Rewrite installed in IIS.