NGINX Reverse Proxy for microservices built with CherryPy - python-3.x

I'm building a smart greenhouse for a university project; we must follow a microservices architecture and use CherryPy. My proposed solution is to have different microservices handle the different telemetry readings, using nginx, Docker and docker-compose.
I'm using nginx to reverse-proxy my front-end and all the microservices, but how do I handle the URIs? Can nginx handle URIs containing <id> segments?
CherryPy doesn't seem to provide horizontal scaling; the RESTful-style dispatcher it ships with seems to support only a monolithic approach.
My current NGINX:
server {
    listen 80;

    location / {
        proxy_pass http://web:80;
    }

    location /api/v1/moisture {
        proxy_pass http://moisture:5001;
    }

    location /api/v1/light {
        proxy_pass http://light:5001;
    }
}
My API should look something like this: /api/v1/greenhouse/<id>/moisture, where moisture can be any telemetry value I measure, such as humidity or light.
The objective was for nginx to send a request for /api/v1/greenhouse/<id>/moisture to the moisture service and a request for /api/v1/greenhouse/<id>/humidity to the humidity service, since CherryPy does not provide a solution.
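For reference, nginx can route such URIs with regex locations; here is a minimal sketch (the humidity service name and port 5002 are assumptions for illustration, not part of the setup above):

location ~ ^/api/v1/greenhouse/[^/]+/moisture {
    proxy_pass http://moisture:5001;
}

location ~ ^/api/v1/greenhouse/[^/]+/humidity {
    # hypothetical humidity service; adjust the name/port to your compose file
    proxy_pass http://humidity:5002;
}

Since proxy_pass in a regex location carries no URI part, nginx forwards the original request URI unchanged, so each CherryPy service still receives the <id> segment and can dispatch on it.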

Related

How to get SSL to play nice with non-ssl protocol data

Short background: if we go back in time to about 2006-ish, we (i.e. my company) used a Java client app embedded in the browser that connected via port 443 to a C program backend running on port 8068 on an in-house server. At the time the Java app was first developed, port 443 was the only port that we knew would not be blocked by the customers that used the software (ease of installation, and possibly the customers' in-house staff didn't have the power or knowledge to control their internal firewall).
Fast-forward to 2016, and I'm hired to help develop a NodeJS/Javascript version of that Java app. The Java app continues to be used during development of its replacement, but whoops - we learn that browsers will drop support for embedded Java in the near future. So we switch to Java Web Start, so that the customers can continue to download the app and it still connects to the in-house server with its port 443->8068 routing.
2017 rolls around and, don't you know, we can't use the upcoming JS web app with HTTPS/SSL and the Java app at the same time, 'cause they use the same port. "Ok, let's use NGINX to solve the problem." But due to in-house politics, customer needs, and a turnover of web-developer staff, we never get around to truly making that work.
So here we are at 2020, ready to deploy the new web version of the client software, and the whole 443 mess rears its ugly head again.
Essentially I am looking to allow (for the time being) the Java app to continue using 443, but now need to let the web app use HTTPS too. Back in 2017/2018 we Googled ways to let them cohabitate through NGINX, but we never really got them to work properly, or the examples and tutorials were incomplete or confusing. It seemed like we needed to either use streaming along the lines of https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ , or look at the incoming HTTPS header and do an 'if (https) { route to nodeJS server } else { assume it must be the java app and route to port 8068 }' sort of arrangement inside the NGINX config file.
Past Googled links appear to no longer exist, so if anyone knows of an NGINX configuration that allows an HTTPS website to hand off to a non-SSL application that still needs to use 443, I would greatly appreciate it. Any docs and/or tutorials that point us in the right direction would be helpful too. Thanks in advance!
You can do this using the ssl_preread option. Basically, this option gives access to the variable $ssl_preread_protocol, which contains the highest TLS protocol version offered by the client on the SSL port. If no valid protocol was detected, the variable is empty.
Using this variable, you could apply the following configuration to your environment:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server __your_node_js_server_ip__:443;
    }

    map $ssl_preread_protocol $upstream {
        default    java;
        "TLSv1.2"  nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
In your case, this configuration will pass the connection directly to your Node.js and Java backend servers, so Node.js will need to negotiate the SSL itself. You can hand that work to NGINX using another server context, like:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default    java;
        "TLSv1.2"  nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 444 ssl;

        __your_ssl_cert_configurations_here__

        location / {
            proxy_pass http://__your_nodejs_server_ip__:80;
        }
    }
}
You'll need at least NGINX version 1.15.2 for this configuration to work, compiled with the ngx_stream_ssl_preread_module module (built with the --with-stream_ssl_preread_module configure parameter, because this module is not built by default; nginx -V lists the configure arguments your binary was built with).
Source: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/

Can I run multiple loopback.io apps on same port?

Referring to the following question:
Running multiple Node (Express) apps on same port
Can I run multiple apps (backend, REST API) on the same port, if I am using StrongLoop LoopBack to generate my Node app?
Generally what you will be doing is running multiple instances of your app on different ports and having some sort of load balancer in front that switches among the instances, thus exposing them as one port.
Assuming you've started 3 instances on ports 3001, 3002, and 3003, you can do it in nginx like this:
http {
    upstream myloopbackapp {
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myloopbackapp;
        }
    }
}
Further reading: http://nginx.org/en/docs/http/load_balancing.html
There are equally easy ways to do this in Apache and IIS as well.
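As a side note, the load-balancing guide linked above also covers methods other than the default round-robin; a small sketch, assuming the same three instances, that sends each request to the instance with the fewest active connections:

upstream myloopbackapp {
    least_conn;  # pick the instance with the fewest active connections
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

ip_hash is the usual alternative when you need sticky sessions; only one balancing method can be set per upstream block.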

Limit users on nginx using wildcard

I'm starting work with Nginx to reverse-proxy my app for internet access outside my customer's network.
I managed to make it work, limiting the URLs that need to be exposed etc., but one thing remains to finish my work.
I want to limit user access based on the username. But instead of creating an if for every user I want to block, I would like to use a wildcard, because all the users I want to block end with a specific string: #saspw
Sample of my /etc/nginx/conf.d/reverseproxy.conf
server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($remote_user = '*#saspw'){
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
In the $remote_user if, I would like all users whose username ends in #saspw to get a 401 error (which I will change to a 404 later).
It only works if I put the whole username, like joe#saspw. Using a wildcard (*, ?) does not work.
Is there a way to make $remote_user match wildcards like that?
Thank you,
Joe
Use the nginx map module:
map $remote_user $is_denied {
    default      0;
    "~.*#saspw"  1;
}

server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($is_denied){
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
It lets you use regexes. Note that map must be placed outside the server directive.
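One detail worth noting: the pattern "~.*#saspw" matches #saspw anywhere in the username. If the goal is strictly usernames that end with that string, a safer sketch anchors the regex with $:

map $remote_user $is_denied {
    default     0;
    "~#saspw$"  1;  # matches only usernames ending in #saspw
}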

Dividing express routes among Node JS Clusters

I have a large set of routes in a Node JS application I'm trying to scale to multiple CPU cores (via NodeJS clusters).
The plan I had in mind was to have different workers handling a different set of express.js routes. For example:
/api/ requests handled by WorkerA
/admin/ handled by WorkerB
/blog/ handled by WorkerC
etc
Simply using a conditional with the worker ID is not sufficient, since requests can still land at the wrong worker. Also, the processes all run on the same port, so I can't just match & proxy_pass on the URL from inside nginx.
At this point, I'm thinking about swapping out the cluster routing (from master to worker) to match on the URL and route to the correct worker, instead of just using the built-in round-robin approach. But this seems a bit hacky, and I'm wondering if anyone else has solved this or has any other ideas.
My solution was to run multiple Express apps listening on different ports, and set an Nginx server in front to proxy requests.
Say you have three Express apps, each one handling a specific set of routes and listening on a separate port (8081, 8082, 8083); and of course, they should run in cluster mode:
var express = require('express');

// API app used to handle /api routing
var apiApp = express();
apiApp.listen(8081);
// Admin app used to handle /admin routing
var adminApp = express();
adminApp.listen(8082);
// Blog app used to handle /blog routing
var blogApp = express();
blogApp.listen(8083);
And configure the Nginx server to proxy the requests:
server {
    # let nginx listen on a public port
    listen 80;

    location /api {
        proxy_pass http://127.0.0.1:8081;
    }

    location /admin {
        proxy_pass http://127.0.0.1:8082;
    }

    location /blog {
        proxy_pass http://127.0.0.1:8083;
    }
}
proxy_pass simply tells nginx to forward requests for /api to the server listening on 8081. You can check the full documentation here.
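One caveat with the sketch above: proxy_pass without a URI part forwards the request path unchanged, so the app on 8081 receives /api/... and must mount its routes under that prefix. If the app instead expects requests at its root, a trailing slash makes nginx strip the matched prefix (same assumed port as above):

location /api/ {
    # the URI part ("/") replaces the matched "/api/" prefix before forwarding
    proxy_pass http://127.0.0.1:8081/;
}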

How to run multiple StrongLoop LoopBack apps on the same server?

I'm currently running two StrongLoop LoopBack apps (Node.js apps) on a single server on different ports. Both apps were created using slc lb project and slc lb model from the command line.
Is it possible to run these apps on a single port with different paths and/or subdomains? If it is, how do I do that on a Linux machine?
Example:
http://api.server.com:3000/app1/ for first app.
http://api.server.com:3000/app2/ for second app.
thanks.
Since LoopBack applications are regular Express applications, you can mount them on a path of the master app.
var loopback = require('loopback');

var app1 = require('path/to/app1');
var app2 = require('path/to/app2');

var root = loopback(); // or express();
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
The obvious drawback is high runtime coupling between app1 and app2: whenever you upgrade either of them, you have to restart the whole server (i.e. both of them). Also, a fatal failure in one app brings down the whole server.
The solution presented by @fiskeben is more robust, since each app is isolated.
On the other hand, my solution is probably easier to manage (you have only one Node process instead of nginx + per-app Node processes) and also allows you to configure middleware shared by both apps.
var root = loopback();
root.use(express.logger());
// etc.
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
You would need some sort of proxy in front of your server, for example nginx. nginx will listen on a port (say, 80) and route incoming requests to other servers on the machine based on rules you define (hostname, path, headers, etc.).
I'm no expert on nginx, but I would configure it something like this:
server {
    listen 80;
    server_name api.server.com;

    location /app1 {
        proxy_pass http://localhost:3000;
    }

    location /app2 {
        proxy_pass http://localhost:3001;
    }
}
nginx also supports passing query strings, paths and everything else, but I'll leave it up to you to put the pieces together :)
Look at the proxy server documentation for nginx.
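Since the question also asks about subdomains: nginx can route by hostname with server_name instead of by path. A minimal sketch, assuming hypothetical DNS names app1.server.com and app2.server.com pointing at this machine, and the same two ports as above:

server {
    listen 80;
    server_name app1.server.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}

server {
    listen 80;
    server_name app2.server.com;

    location / {
        proxy_pass http://localhost:3001;
    }
}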
