Using Nginx as proxy for my SSL protected API server - node.js

I'm struggling to find a solution to this, or even get information if it is even possible.
I have a NodeJS backend which currently accesses APIs on an SSL-protected server (let's call it https://whatever.com).
I would like to proxy those HTTP calls through Nginx so I could collect richer log data, which is very easy to get from Nginx. (Not so easy from Node, which would require a change in the backend, plugins and whatnot.)
It seems, from the "solutions" I've found, that this is not possible unless I have a domain name to use for this Nginx, so I could get a proper (and not self-signed) certificate. Is this accurate? It is only an internal webserver, meant to proxy outgoing connections; it would not be publicly available.
If I do something simple and straightforward (which I would if it were pure 'http') such as
upstream apigw {
    server whatever.com:443;
}

server {
    listen 8088;
    server_name nodeapiproxy;

    location / {
        proxy_pass https://apigw;
    }
}
I get a weird 'Unknown domain' error which tells me very little.
Ideally, I would like to make port-80 calls to this Nginx, and it would forward them to whatever.com. But port-443-https would be fine too, if only I could get it working with a self-signed certificate or, even better, with no certificate at all.
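For what it's worth, a likely culprit with the config above: when proxy_pass points at an upstream block, nginx by default sends the upstream's name as the Host header (here Host: apigw) and sends no SNI at all during the TLS handshake, so whatever.com cannot match the request to a known virtual host. A sketch of the standard proxy_ssl_* directives one might try (untested against this exact setup):

server {
    listen 8088;
    server_name nodeapiproxy;

    location / {
        proxy_pass https://apigw;
        proxy_set_header Host whatever.com;   # Host header the upstream expects
        proxy_ssl_server_name on;             # send SNI during the TLS handshake
        proxy_ssl_name whatever.com;          # name to present in the SNI extension
    }
}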

Related

How might one set up a reverse proxy that cannot decrypt traffic?

I'd like to have a reverse HTTPS proxy that CANNOT decrypt proxied traffic (i.e. an HTTPS passthrough/tunnel). The idea is to run this proxy on a VPS and point a domain to it, allowing the IP address of the origin server to remain unexposed while maintaining end-to-end encryption.
Is this possible? I could probably proxy requests without difficulty since the destination address in that direction is fixed, but proxying responses seems problematic given that the proxy would be unable to read the client IP within an encrypted response.
A potential solution is to have the origin server package the encrypted response and destination address in a request made to the proxy, but I am unsure as to how I might generate the encrypted request without sending it (using node.js, which is the application running on the origin server).
From your question, I gather that you want to listen for requests on your VPS and pass them on to your other server, which has to remain unexposed.
This can be configured in whatever web server you are using as the proxy (assuming AWS allows port forwarding from the VPN server to the non-VPN server).
I prefer doing this with Nginx, as it is easy, open source, and gives a lot of functionality for very little configuration.
Nginx's load-balancing (upstream) mechanism does essentially what you describe.
Steps:
1. Install Nginx and make sure it is active.
2. Create a new configuration file in /etc/nginx/sites-enabled.
3. Write the code below, with modifications:
# Files in /etc/nginx/sites-enabled are included inside the main
# http {} block of nginx.conf, so the http {} wrapper is omitted here.
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
    }
}
In place of srv1.example.com, srv2.example.com and srv3.example.com, put the domain(s) you want to forward requests to.
Save the file and restart Nginx.
Boom!! It should now proxy all incoming requests to your application.
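Note that an http-level proxy_pass like the one above terminates TLS at the proxy, so it does not meet the "cannot decrypt" requirement from the question. For a true TLS passthrough, nginx's stream module with ssl_preread is one option. A minimal sketch, assuming a hypothetical hidden origin at 10.0.0.5:443 (ssl_preread only parses the SNI name out of the ClientHello; the proxy never decrypts anything):

# Top-level context, alongside (not inside) the http {} block.
stream {
    map $ssl_preread_server_name $origin {
        www.example.com 10.0.0.5:443;  # hypothetical hidden origin server
        default         10.0.0.5:443;
    }

    server {
        listen 443;
        ssl_preread on;       # read SNI without terminating TLS
        proxy_pass $origin;   # forward the still-encrypted bytes
    }
}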

Nginx: 504 Gateway Timeout

I am using Nginx as my https server to serve my http content from my node server.
I am also hosting the server on Google Cloud.
I kept getting a 504 Gateway Timeout error, so I wondered whether it was because I hadn't opened port 8080, the one my upstream (node) server listens on. After opening it, it works, but I am not sure that is the correct way to do it.
I then kept looking at other docs and tutorials online, and I never see people configure the connection to a node server this way; they mainly leave only port 80 open. So I wondered whether my server-block config was causing the 504 gateway problem.
---------- second update
This is my setting; the default_server flag was there by default.
But I always see docs include a server_name variable, which I don't quite understand. Should I set it for later use, even though everything works now?
Aside from that, I got a Server Error from my app:
FetchError: request to https://34.96.213.54:443/search/guest2 failed, reason: self-signed certificate
Why does this fail, although hitting that API directly works in Chrome and in Postman?
---------- third update
About the self-signed certificate: you need to buy one, or use a free service like https://letsencrypt.org. Besides that, your questions are quite basic, so you should spend more time with the nginx docs (http://nginx.org/en/docs/http/server_names.html).
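Regarding the server_name question from the second update: it tells nginx which Host names a given server block answers for, so nginx can pick the right block when several listen on the same port. A minimal sketch (the domain, certificate paths and node port are assumptions for illustration):

server {
    listen 443 ssl;
    server_name example.com www.example.com;  # Host names this block serves

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the upstream node server
    }
}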

How do I offload from https to http on NGINX?

This question has been asked a while ago, but I am not sure the answers fit my needs, so I want to explain my usage.
Fair warning: I am a noob.
We have an nginx reverse proxy with a cert. It directs to another nginx app server without a cert (internal communications don't need to be over https). Basically we want to offload from https to http internally.
How do we configure it so we hit the app server on port 80? It still appears to be hitting the app server on 443. Getting an ERR_CERT_COMMON_NAME_INVALID error. I assume that it is being thrown by the app server.
In proxy.conf we have set:
proxy_pass http://<app server ip address>;
You don't want to redirect; you want to proxy.
It sounds like the certificate on the nginx proxy server is not correct. Specifically, the certificate and the domain don't match.
location /some/path/ {
    proxy_pass http://www.example.com/link/;
}
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
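Putting it together, the https-to-http offload usually amounts to a server block like this on the proxy (a sketch; the hostname, certificate paths and internal address are assumptions):

server {
    listen 443 ssl;
    server_name app.example.com;  # must match the certificate's CN/SAN

    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        proxy_pass http://10.0.0.5:80;   # plain HTTP to the internal app server
        proxy_set_header Host $host;     # preserve the original Host header
    }
}

With this, TLS terminates at the proxy and the app server is only ever reached over port-80 HTTP, so it needs no certificate of its own.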

Create a Reverse Proxy in NodeJS that can handle multiple secure domains

I'm trying to create a reverse proxy in NodeJS, but I keep running into the issue that I can only serve one cert/key pair on the same port (443), even though I want to serve multiple domains. I have done the research and keep running into the same roadblock. What I want is:
A node script that can serve multiple secure domains from a non-secure local source (accessed over http locally, served over https publicly)
Something that lets me dynamically serve SSL certificates via the domain header
Example:
https://www.someplace.com:443 will pull from http://thisipaddress:8000 and use the cert and key files for www.someplace.com
https://www.anotherplace.com:443 will pull from http://thisipaddress:8080 and use the cert and key files for www.anotherplace.com
etc.
I have looked at using NodeJS's https.createServer(options, [requestListener])
But this method supports just one cert/key pair per port
I can't find a way to dynamically switch certs based on the domain header
I can't ask my people to use custom https ports
And I'll run into browser SSL certificate errors if I serve the same SSL certificate for multiple domain names, even if it is secure
I looked at node-http-proxy but as far as I can see it has the same limitations
I looked into Apache mod_proxy and nginx, but I would rather have something I have more direct control of
If anyone can show me an example of serving multiple secure domains each with their own certificate from the same port number (443) using NodeJS and either https.createServer or node-http-proxy I would be indebted to you.
"Let me dynamically serve SSL certificates via the domain header"
There is no "domain" header, so I guess you mean the Host header in the HTTP request.
But this will not work, because:
HTTPS is HTTP encapsulated inside SSL;
therefore you first have to do the SSL layer (e.g. the SSL handshake, which requires the certificates), and only then comes the HTTP layer;
but the Host header is inside the HTTP layer :(
In former times you would need a separate IP address for each SSL certificate. Current browsers support SNI (Server Name Indication), which sends the expected target host already inside the SSL layer. Node.js does support this; look for SNICallback.
But beware that there are still enough libraries out there which either don't support SNI on the client side at all, or where one needs to enable it explicitly. As long as you only want to support browsers, this should be fine.
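A minimal sketch of SNICallback in practice, reusing the domains from the question (the cert/key paths are assumptions, and the actual proxying to the local http ports is left out for brevity):

const https = require('https');
const tls = require('tls');
const fs = require('fs');

// One SecureContext per domain; paths are hypothetical.
const contexts = {
  'www.someplace.com': tls.createSecureContext({
    key: fs.readFileSync('certs/someplace.key'),
    cert: fs.readFileSync('certs/someplace.crt'),
  }),
  'www.anotherplace.com': tls.createSecureContext({
    key: fs.readFileSync('certs/anotherplace.key'),
    cert: fs.readFileSync('certs/anotherplace.crt'),
  }),
};

const server = https.createServer({
  // Called once per TLS handshake with the name the client sent via SNI.
  SNICallback: (servername, cb) => {
    const ctx = contexts[servername];
    cb(ctx ? null : new Error('unknown domain: ' + servername), ctx);
  },
}, (req, res) => {
  // A real setup would proxy to the matching local http port here.
  res.end('hello from ' + req.headers.host);
});

server.listen(443);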
Redbird actually does this very gracefully, and it is not too hard to configure either.
https://github.com/OptimalBits/redbird
Here is the solution you might be looking for; I found it very useful for my implementation, though you will need to do a lot of customization to handle multiple domains.
node-http-proxy:
https://github.com/nodejitsu/node-http-proxy
Bouncy is a good library for this and has an example of what you are needing.
As Steffen Ullrich says, it will depend on the clients' support for SNI.
How about creating the SSL servers on different ports and using node-http-proxy as a server on 443 to relay the request based on the domain?
You stated you don't want to use nginx for this, and I don't understand why. You can just set up multiple server blocks in nginx, each with its own hostname and certificate, all listening on port 443.
Give each of them a proxy_pass to the corresponding nodejs server. To my understanding, that serves all of your requirements and is state of the art.
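A sketch of that nginx setup, reusing the hostnames and local ports from the question (the certificate paths are assumptions):

server {
    listen 443 ssl;
    server_name www.someplace.com;
    ssl_certificate     /etc/nginx/certs/someplace.crt;
    ssl_certificate_key /etc/nginx/certs/someplace.key;

    location / {
        proxy_pass http://127.0.0.1:8000;  # local source for this domain
    }
}

server {
    listen 443 ssl;
    server_name www.anotherplace.com;
    ssl_certificate     /etc/nginx/certs/anotherplace.crt;
    ssl_certificate_key /etc/nginx/certs/anotherplace.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # local source for this domain
    }
}

nginx picks the server block (and thus the certificate) via SNI, so the same caveat about old non-SNI clients applies here too.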

How to make Node.js Multi-tenant for websites on port 80?

My end goal is to make node.js more cost effective per server instance.
I'm not running a game or chat room, but rather simple websites for customers. I would like to house multiple clients on a single server, yet have multiple websites running off of port 80 using host header mapping. I would like to keep using express as I do now, but have something route traffic from port 80 to the other node apps, if that is even possible. Node can be cheaper done this way; currently it's more expensive for my purposes, since each customer would need their own box if running on port 80. Also, my motivation is to focus on node development, but there has to be a cost case for doing so.
I do this quite a lot for ASP.NET in Windows as IIS supports this out of the box and I know this is something normal for Apache as well.
Feel free to move this to another Stack Exchange forum if this is not the right place for the question, or give constructive criticism rather than a random downvote. Thanks.
Update
The approach I took was to use static hosting (via Gatsby and S3) plus an API that registers domains through post message from the client and API keys from the server, and generates the static sites periodically as they change. But thanks for all the suggestions!
In theory, you could build a pure-Node web server that emulated the functionality of Apache/Lighttpd/Nginx, but I wouldn't recommend it. In fact, for serious production services, I'd recommend ALWAYS fronting your service with Nginx or an equivalent (see this and this).
Here's how a simple Nginx config would work for two subservices exposed on port 80.
worker_processes 4;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type text/html;

    # Each server block routes one hostname to its own node process.
    server {
        listen 80;
        server_name service1.mydomain.com;

        location / {
            proxy_pass http://127.0.0.1:3000/;
        }
    }

    server {
        listen 80;
        server_name service2.mydomain.com;

        location / {
            proxy_pass http://127.0.0.1:3001/;
        }
    }
}
I've seen production boxes kernel panic because Node doesn't throttle load by default and was prioritizing accepting new connections over handling existing requests. Granted, it "shouldn't" have crashed the kernel, but it did. Also, by running on port 3000, you can run your Node service as non-root with very few permissions (and still proxy it so that it appears to be on port 80). You can also spread load between multiple workers, serve statics, log requests, rewrite urls, etc. Nginx is very fast (much lighter than Apache). The overhead of same-box proxy forwarding is minimal and buys you so much functionality and robustness that it's a slam dunk in my book. Even minor stuff matters, like: when I crash or overload my node service, do users get a black hole, or a "pardon our dust, our servers are being maintained" splash page?
What about using a proper reverse proxy, like HAProxy? Have the proxy listen on port 80 and delegate to multiple node instances on non-public ports (e.g. 10000, 10001, etc.), based on the Host header.
