I'm starting work with Nginx to reverse proxy my app for internet access from outside my customer's network.
I managed to make it work, limiting the URLs that need to be exposed, etc., but one thing is still missing to finish my work.
I want to limit user access based on the username. But instead of creating an if for every user I want to block, I would like to use a wildcard, because all the users I want to block end with a specific string: #saspw
Sample of my /etc/nginx/conf.d/reverseproxy.conf:
server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($remote_user = '*#saspw') {
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
In the $remote_user if, I would like all users whose username ends in #saspw to get a 401 error (which I will change to a 404 later).
It only works if I put the whole username, like joe#saspw. Using a wildcard (*, ?) does not work.
Is there a way to make $remote_user match wildcards like that?
Thank you,
Joe
Use the nginx map module:
map $remote_user $is_denied {
    default 0;
    "~.*#saspw" 1;
}

server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($is_denied) {
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
It lets you use regexes. Note that map must be placed outside the server directive.
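Since the usernames you want to block all end with #saspw, you can also anchor the regex with $ so it only matches at the end of the username, and return the 404 you mentioned instead of 401. A minimal variant of the map above:

map $remote_user $is_denied {
    default 0;
    "~#saspw$" 1; # $ anchors the match to the end of the username
}

With the same if ($is_denied) block, changing return 401; to return 404; then gives you the behaviour you planned.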
Short background: If we go back in time to about 2006-ish: We (i.e., my company) used a Java client app embedded in the browser that connected via port 443 to a C program backend running on port 8068 on an in-house server. At the time the Java app was first developed, port 443 was the only port that we knew would not be blocked by our customers that used the software (ease of installation, and possibly the customers' in-house staff didn't have the power or knowledge to control their internal firewall).
Fast-forward to 2016, and I'm hired to help develop a NodeJS/Javascript version of that Java app. The Java app continues to be used during development of its replacement, but whoops - we learn that browsers will drop support for embedded Java in the near future. So we switch to Java Web Start, so that the customers can continue to download the app, and it still connects to the in-house server with its port 443->8068 routing.
2017 rolls around and don't you know, we can't use the upcoming JS web app with HTTPS/SSL and the Java app at the same time, 'cause they use the same port. "Ok, let's use NGINX to solve the problem." But due to in-house politics, customer needs, and a turnover of web-developer staff, we never get around to truly making that work.
So here we are in 2020, ready to deploy the new web version of the client software, and the whole 443 mess rears its ugly head again.
Essentially I am looking to allow (for the time being) the Java app to continue using 443, but now need to let the web app use HTTPS too. Back in 2017/2018 we Googled ways to let them cohabit through NGINX, but we never really got them to work properly, or the examples and tutorials were incomplete or confusing. It seemed like we needed to either use streaming along the lines of https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ , or look at the incoming HTTPS header and do an 'if (https) { route to nodeJS server } else { assume it must be the java app and route to port 8068 }'-sort of arrangement inside the NGINX config file.
Past Googled links appear to no longer exist, so if anyone knows of an NGINX configuration that allows an HTTPS website to hand off to a non-SSL application that still needs to use 443, I would greatly appreciate it. And any docs and/or tutorials that point us in the right direction would be helpful too. Thanks in advance!
You can do this using the ssl_preread option. Basically, this option gives access to the variable $ssl_preread_protocol, which contains the protocol negotiated on the SSL port. If no valid protocol was detected, the variable will be empty.
Using these parameters, you could use the following configuration for your environment:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server __your_node_js_server_ip__:443;
    }

    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
In your case, this configuration will pass the connection directly to your nodejs and java backend servers, so nodejs will need to negotiate the SSL itself. You can hand this work to NGiNX using another server context, like:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 444 ssl;
        __your_ssl_cert_configurations_here__

        location / {
            proxy_pass http://__your_nodejs_server_ip__:80;
        }
    }
}
You'll need NGiNX version 1.15.2 or later for this configuration to work, compiled with the ngx_stream_ssl_preread_module module (it requires the --with-stream_ssl_preread_module configure parameter, because this module is not built by default). Running nginx -V shows the configure arguments your binary was built with, so you can check for it there.
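One caveat with the map above: it only sends TLSv1.2 handshakes to nodejs, so a client that negotiates TLS 1.3 would fall through to the java backend. If your NGiNX build and clients support TLS 1.3, you may want to map that value too; a minimal sketch:

map $ssl_preread_protocol $upstream {
    default java;
    "TLSv1.2" nodejs;
    "TLSv1.3" nodejs;
}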
Source: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
I'm building a smart greenhouse for a university project, and we must follow a microservices architecture and use CherryPy. My proposed solution is to have different microservices handling the different telemetry metrics, using nginx, docker and docker-compose.
I'm using nginx to reverse proxy my front-end and all the microservices, but how do I handle the URIs? Can nginx handle URIs with <ids>?
CherryPy doesn't seem to provide horizontal scaling; the RESTful-style dispatcher it provides seems to support only a monolithic approach.
My current NGINX:
server {
    listen 80;

    location / {
        proxy_pass http://web:80;
    }

    location /api/v1/moisture {
        proxy_pass http://moisture:5001;
    }

    location /api/v1/light {
        proxy_pass http://light:5002; # assuming the light service runs in its own container
    }
}
My API should look something like /api/v1/greenhouse/<id>/moisture, where moisture can be any telemetry metric I measure, like humidity or light.
The objective is for nginx to send a request to /api/v1/greenhouse/<id>/moisture to the moisture service and a request to /api/v1/greenhouse/<id>/humidity to the humidity service, since CherryPy does not provide a solution.
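nginx can route such URIs with regex locations, and a named capture even makes the <id> available as a variable you can forward to the backend. A minimal sketch, assuming the moisture service from the config above and a hypothetical humidity service at humidity:5002:

server {
    listen 80;

    # ~ makes the location a regex match; (?<gh_id>\d+) captures the greenhouse id
    location ~ ^/api/v1/greenhouse/(?<gh_id>\d+)/moisture {
        proxy_set_header X-Greenhouse-Id $gh_id; # pass the captured id to the backend as a header
        proxy_pass http://moisture:5001;
    }

    location ~ ^/api/v1/greenhouse/(?<gh_id>\d+)/humidity {
        proxy_set_header X-Greenhouse-Id $gh_id;
        proxy_pass http://humidity:5002; # hypothetical humidity service
    }
}

Note that with a regex location, proxy_pass must not contain a URI part, so the full original path (including the id) is passed through to the service unchanged.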
I have a large set of routes in a Node JS application I'm trying to scale to multiple CPU cores (via NodeJS clusters).
The plan I had in mind was to have different workers handling a different set of express.js routes. For example:
/api/ requests handled by WorkerA
/admin/ handled by WorkerB
/blog/ handled by WorkerC
etc
Simply using a conditional with the worker ID is not sufficient, since requests can still land at the wrong worker. Also, the processes all run on the same port, so I can't just match & proxy_pass on the URL from inside nginx.
At this point, I'm thinking about swapping out the cluster routing (from master to worker) to match on the URL and route to the correct worker instead of just using the built-in round-robin approach. But this seems a bit hacky, and I'm wondering if anyone else has solved this or might have any other ideas.
My solution was to run multiple express apps listening on different ports, and to set up an Nginx server in front to proxy the requests.
Say you have three express apps, each one handling a specific set of routes and listening on a separate port (8081, 8082, 8083); of course, they should run in cluster mode:
// API app used to handle /api routing
apiApp.listen(8081);

// Admin app used to handle /admin routing
adminApp.listen(8082);

// Blog app used to handle /blog routing
blogApp.listen(8083);
And configure the Nginx server to proxy the requests:
server {
    # let the nginx server run on a public port
    listen 80;

    location /api {
        proxy_pass http://127.0.0.1:8081;
    }

    location /admin {
        proxy_pass http://127.0.0.1:8082;
    }

    location /blog {
        proxy_pass http://127.0.0.1:8083;
    }
}
proxy_pass simply tells nginx to forward requests to /api to the server listening on 8081. You can check the full documentation of the directive in the nginx docs (ngx_http_proxy_module).
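One detail worth knowing: with the config above, the backend on 8081 receives the full path, /api/.... If you would rather have the prefix stripped, giving proxy_pass a URI part makes nginx replace the matched location prefix with it; a small sketch:

location /api/ {
    # the trailing / on proxy_pass replaces the matched /api/ prefix,
    # so a request for /api/users reaches the backend as /users
    proxy_pass http://127.0.0.1:8081/;
}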
I'm currently running two StrongLoop LoopBack apps (Node.js apps) on a single server, on different ports. Both apps were created using slc lb project and slc lb model from the command line.
Is it possible to run these apps on a single port with different paths and/or subdomains? If it is, how do I do that on a Linux machine?
Example:
http://api.server.com:3000/app1/ for first app.
http://api.server.com:3000/app2/ for second app.
thanks.
Since LoopBack applications are regular Express applications, you can mount them on a path of the master app.
var loopback = require('loopback');

var app1 = require('path/to/app1');
var app2 = require('path/to/app2');

var root = loopback(); // or express();
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
The obvious drawback is high runtime coupling between app1 and app2 - whenever you are upgrading either of them, you have to restart the whole server (i.e. both of them). Also a fatal failure in one app brings down the whole server.
The solution presented by @fiskeben is more robust, since each app is isolated.
On the other hand, my solution is probably easier to manage (you have only one Node process instead of nginx + per-app Node processes) and also allows you to configure middleware shared by both apps.
var express = require('express');
var loopback = require('loopback');

var root = loopback();
root.use(express.logger()); // Express 3.x bundled middleware; Express 4+ moved this to the standalone morgan package
// etc.
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
You would need some sort of proxy in front of your server, for example nginx. nginx will listen to a port (say, 80) and redirect incoming requests to other servers on the machine based on some rules you define (hostname, path, headers, etc).
I'm no expert on nginx but I would configure it something like this:
server {
    listen 80;
    server_name api.server.com;

    location /app1 {
        proxy_pass http://localhost:3000;
    }

    location /app2 {
        proxy_pass http://localhost:3001;
    }
}
nginx also supports passing query strings, paths and everything else, but I'll leave it up to you to put the pieces together :)
Look at the proxy server documentation for nginx.
Very simply, I'm currently using Express's vhost method to route requests to the appropriate script given a domain name. I really like this approach, since it means I don't need separate node.js instances listening on separate ports for each virtual host script, and I also don't need a process for each virtual host. However, there is a glaring flaw for me in using this method: anything in the vhost server runs with root privileges, not merely the privileges of the user whose script it is. I now need to find some way of sandboxing, or otherwise running the vhost server as the user it belongs to. Needless to say, I can't have lower-privileged users on the server with root access.
TL;DR: What method exists by which I can route requests to different domain names' associated apps without needing to designate ports that the apps would have to know about, while still preventing the author of a script from having access beyond their own user account?
In my apps, I use Bouncy:
var bouncy = require( "bouncy" );

var server = bouncy(function( req, res, bounce ) {
    var port;
    var subdomain = req.headers.host.split( "." )[ 0 ];

    switch ( subdomain ) {
        case "xyz":
            port = 4002;
            break;
        default:
            port = 4001;
            break;
    }

    bounce( port );
});

server.listen( 4000 );
This way, you may have various apps listening on different ports and in different processes. They will all be proxied to work under port 4000 as well, so:
xyz.localhost:4002 = xyz.localhost:4000
localhost:4001 = localhost:4000
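As an alternative, if you would rather not keep a Node process in the proxying role, the same host-based fan-out can be done with nginx server blocks; a rough sketch mirroring the Bouncy example above, assuming the apps listen on 4002 and 4001:

server {
    listen 4000;
    server_name xyz.localhost; # requests for this host go to the app on 4002

    location / {
        proxy_pass http://127.0.0.1:4002;
    }
}

server {
    listen 4000 default_server; # everything else falls through to 4001

    location / {
        proxy_pass http://127.0.0.1:4001;
    }
}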
I hope it helps ;)