I have configured the load balancer to route requests to two EC2 instances running a Node.js server. I need to direct requests coming in on both HTTP (port 80) and HTTPS (port 443) to HTTP (port 80) on the EC2 instances. I have uploaded the SSL certificate to AWS and configured the load balancer to use it. The problem is that requests arriving over HTTP are not automatically redirected to HTTPS. It seems I need a server-side script or snippet in server.js that redirects HTTP to HTTPS; I tried writing one and ran into an endless redirect loop. So, questions:
Is there any guide from AWS on how to do this?
If not, how can one achieve this? Any pointers or suggestions would be greatly appreciated.
On the server side you can check the X-Forwarded-Proto header (the original request protocol), and if its value is http you can send a redirect (HTTP 302) to the same URL with the https protocol.
Though with an ALB (Application Load Balancer) you can specify a set of listener rules, so it may be possible to do the redirect there as well.
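A minimal sketch of that header check as Express middleware (assuming an Express app in server.js; checking X-Forwarded-Proto directly is what avoids the endless redirect loop, because the load balancer always talks plain HTTP to the instance):

    const express = require('express');
    const app = express();

    // Trust the load balancer so req.ip and req.protocol reflect the client, not the ELB
    app.set('trust proxy', true);

    // Redirect any request that originally reached the load balancer over plain HTTP
    app.use((req, res, next) => {
      if (req.headers['x-forwarded-proto'] === 'http') {
        return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
      }
      next();
    });

    app.get('/', (req, res) => res.send('Hello over HTTPS'));

    // The instances themselves keep listening on plain HTTP (port 80)
    app.listen(80);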
I couldn't find a guide from AWS, but I will keep searching and will update the answer in case I find one.
Usually, when you write applications in Node.js, you specify which port your app should run on. That means you will need two different servers listening: when your app receives a request on port 80 (HTTP), it should redirect to your HTTPS server, like in this answer.
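A rough sketch of that two-listener setup for the case where the Node process terminates TLS itself (Express and the certificate paths are assumptions):

    const http = require('http');
    const https = require('https');
    const fs = require('fs');
    const express = require('express');

    const app = express();
    app.get('/', (req, res) => res.send('Hello over HTTPS'));

    // HTTPS server doing the real work (key/cert paths are placeholders)
    https.createServer({
      key: fs.readFileSync('/etc/ssl/private/server.key'),
      cert: fs.readFileSync('/etc/ssl/certs/server.crt'),
    }, app).listen(443);

    // Plain HTTP server whose only job is to redirect to the HTTPS listener
    http.createServer((req, res) => {
      const host = (req.headers.host || '').replace(/:\d+$/, '');
      res.writeHead(301, { Location: `https://${host}${req.url}` });
      res.end();
    }).listen(80);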
Another point that may be relevant to your question: in production environments you don't usually expose your Node.js server directly on a public port, since on its own it isn't considered production-ready. You probably want to put a reverse proxy and load balancer such as Nginx or HAProxy in front of it.
If you are using an AWS ALB (Application Load Balancer), AWS announced built-in HTTP-to-HTTPS redirects today. Take a look: https://exampleloadbalancer.com/redirect_demo.html
Put your ELB behind CloudFront, and in your distribution's settings set the viewer protocol policy to redirect HTTP to HTTPS.
The following doc will be helpful
https://docs.aws.amazon.com/waf/latest/developerguide/tutorials-ddos-cross-service-ELB.html
This method has two benefits:
1. Your problem will be solved.
2. You get the benefits of a powerful CDN. For more information about CloudFront, read https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Update:
You can also redirect traffic from HTTP to HTTPS by editing the listener settings on your ELB.
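For reference, the same redirect can be set up programmatically on an Application Load Balancer; a sketch using the AWS SDK for JavaScript v3 (the region and load balancer ARN below are placeholders):

    const {
      ElasticLoadBalancingV2Client,
      CreateListenerCommand,
    } = require('@aws-sdk/client-elastic-load-balancing-v2');

    const client = new ElasticLoadBalancingV2Client({ region: 'us-east-1' }); // placeholder region

    // Create a port 80 listener whose only action is to redirect to HTTPS
    async function addHttpRedirect(loadBalancerArn) {
      await client.send(new CreateListenerCommand({
        LoadBalancerArn: loadBalancerArn,
        Protocol: 'HTTP',
        Port: 80,
        DefaultActions: [{
          Type: 'redirect',
          RedirectConfig: { Protocol: 'HTTPS', Port: '443', StatusCode: 'HTTP_301' },
        }],
      }));
    }

    addHttpRedirect('arn:aws:elasticloadbalancing:...').catch(console.error); // placeholder ARN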
A typical request gets processed like this:
request -> Nginx reverse proxy -> AWS EC2 -> Express API/FastAPI -> response
but my biggest confusion is why a FastAPI app absolutely NEEDS Nginx to work, while an Express API doesn't (despite Node.js and Python both having HTTP modules and hence being able to run web servers). Why do I need Nginx at all for FastAPI? Can't an AWS EC2 instance act as a web server like Nginx?
This post says it is so we can hide the port number in the url, but that being the only reason sounds unreasonable to me.
Why is a web server (i.e. Nginx) REQUIRED for FastAPI?
Nginx is not required for FastAPI. You can listen on external ports and handle requests without a reverse proxy. Even the FastAPI documentation only covers setting up Nginx under the Advanced section (Ref).
This post says it's so we can hide the port number in the url, but this being the only reason seems silly to me.
You can still hide the port number in the url if you run the Express app with sudo and listen on port 80 or 443.
Can't an AWS EC2 instance act as a web server like Nginx?
Yes it can.
There are benefits to using a proxy server like Nginx. Proxy servers can handle load balancing, caching, and SSL termination, and they can handle a large number of concurrent connections efficiently.
Using a proxy server is a best practice. However, it is not a requirement for FastAPI or Express.
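To illustrate the Express side of that point, a minimal app bound directly to port 80 with no reverse proxy in front (binding a port below 1024 needs elevated privileges on Linux, which is why the answer above mentions sudo):

    const express = require('express');
    const app = express();

    app.get('/', (req, res) => res.send('Served without a reverse proxy'));

    // Port 80 is privileged on Linux, so this is typically run with sudo
    // or with the CAP_NET_BIND_SERVICE capability
    app.listen(80, () => console.log('No port number needed in the URL'));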
I have an HTTPS website (using a LAMP stack) and I want to send an HTTP request to port 3000 of a separate Node.js server when a button is clicked (using an AJAX call and JSONP). It worked when my website was not secured (HTTP), but after I switched to using a load balancer to make it secure (I'm using Amazon Lightsail), the HTTP request no longer works. Is this because an HTTPS website does not allow HTTP requests, since all information on the website is supposed to be secure? And if so, should I send an HTTPS request instead? That would require me to make the Node.js server HTTPS-secured by adding it to the load balancer. However, would this prevent me from sending requests to port 3000, since load balancers only accept requests on ports 80 (HTTP) and 443 (HTTPS)? I've looked into listeners, but it seems like Amazon Lightsail does not support configuring listeners on its load balancers.
Put that Node server behind the same load balancer, using it as a reverse proxy with another route or DNS name, and it will probably work for you.
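For illustration, once the Node server is reachable through the same HTTPS origin (the host and the /nodeapp path below are assumptions), the browser call changes from the old plain-HTTP, explicit-port form to something like:

    // Before: an HTTPS page calling a plain HTTP URL is blocked as mixed content
    // fetch('http://example.com:3000/data')

    // After: the Node server is routed through the same HTTPS load balancer
    // under an assumed /nodeapp path, so no separate port or protocol is needed
    fetch('https://example.com/nodeapp/data')
      .then((res) => res.json())
      .then((data) => console.log(data))
      .catch((err) => console.error(err));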
So I've already done my research and figured out that socket.io only works with Cloudflare if you use specific ports (found that here).
Through that research I also found that HTTP and HTTPS can't use the same port. So I'm coming here to ask you guys: how do you get a socket.io server to listen on two ports, so it can support HTTP and HTTPS with Cloudflare?
The common method is referred to as an SSL Termination Proxy (also called SSL off-loading). The proxy accepts incoming messages over HTTPS and passes the decrypted requests to another resource (another server, web service/API, etc.). This would allow your Node.js application utilizing socketio to handle all requests, no matter if the client made an HTTP or HTTPS request. Software like NGINX, Apache, and even Microsoft IIS are capable of providing this functionality.
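On the Node side, nothing special is needed once TLS is terminated upstream; a minimal socket.io server (v4-style API; port 3000 is an assumption) listening on a single plain HTTP port could look like this:

    const http = require('http');
    const { Server } = require('socket.io');

    // Plain HTTP server; the proxy (NGINX, Apache, IIS, ...) terminates TLS in front of it
    const httpServer = http.createServer();
    const io = new Server(httpServer);

    io.on('connection', (socket) => {
      socket.emit('hello', 'connected through the TLS-terminating proxy');
    });

    // Only the proxy needs to care about HTTP vs. HTTPS; Node listens on one port
    httpServer.listen(3000);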
Here are some links regarding this topic:
General Info: https://en.wikipedia.org/wiki/TLS_termination_proxy
NGINX: https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
NGINX: https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
HAProxy: https://www.digitalocean.com/community/tutorials/how-to-implement-ssl-termination-with-haproxy-on-ubuntu-14-04
IIS: https://blogs.iis.net/wonyoo/ssl-off-loading-in-application-request-routing
I have a website behind Cloudflare. I need to enable websockets over SSL without turning off Cloudflare support. I have a PRO plan and hence won't get the new websocket support. I am using Nginx to proxy an SSL connection to a websocket running on a Node server. Now, I read somewhere that Cloudflare, on approved ports, would support websockets. Hence, I'm using 8443 for the Nginx port and another port for the Node server. Using wscat, it returns a 200 error.
$ wscat -c wss://xyz.com:8443
error: Error: unexpected server response (200)
I know that the websocket handshake expects a 101 code. However, if I visit https://xyz.com:8443, I can see the page served by the Node server telling me the proxy is working. Also, once I turn off Cloudflare support, the websocket starts working. Any clues to get this working? I know I can create a subdomain, but I'd prefer running the websocket behind Cloudflare.
If you're trying to access this through CloudFlare's network you'd need to explicitly have web sockets enabled on your domain before they will work -- regardless of the port. As in, even if the port can pass through our network, that won't automatically mean that web sockets will be enabled or accessible on your domain.
You can try contacting our support team to request an exception to see if they can enable it for your domain, but typically this is still only available at the business and enterprise levels.
Disclaimer: I work at CloudFlare.
I have a Node.js app, and I've seen a lot of posts here on SO saying it needs to be behind Nginx as a load balancer. Since I'm already accustomed to Amazon's services, hence my question: can I use an ELB instead?
Yes, but there are a few gotchas to keep in mind:
If you have a single server, ensure you don't return anything except 200 for the path that ELB uses for its health check. We had a 301 from our non-www to our www site, and that alone made ELB stop sending traffic to our server (see the sketch after this list).
You'll get the ELB's IP instead of the client's in your logs. Nginx has the ngx_http_realip_module for this, but it takes some config hacking to get it to work.
ELB works great in front of a basic Node.js application. If you want WebSockets, you need to configure it for TCP balancing. TCP balancing doesn't support sticky sessions though, so you get one or the other.
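To illustrate the first two gotchas from the Node side, a minimal Express sketch (the /health path is an assumption; point the ELB health check at whatever path you choose):

    const express = require('express');
    const app = express();

    // Let Express read X-Forwarded-For set by the ELB,
    // so req.ip is the real client address rather than the ELB's
    app.set('trust proxy', true);

    // Dedicated health-check route that always returns a plain 200,
    // unaffected by any non-www -> www redirect rules
    app.get('/health', (req, res) => res.sendStatus(200));

    app.get('/', (req, res) => res.send(`Hello ${req.ip}`));

    app.listen(80);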