So I've already done my research and figured out that Socket.IO only works with Cloudflare if you use specific ports; I found that here.
Through that research I also found that HTTP and HTTPS can't use the same port. So I'm coming here to ask you guys: how do you get a Socket.IO server to listen on two ports, so it can support both HTTP and HTTPS with Cloudflare?
The common method is referred to as an SSL Termination Proxy (also called SSL off-loading). The proxy accepts incoming connections over HTTPS and passes the decrypted requests to another resource (another server, a web service/API, etc.). This lets your Node.js application using Socket.IO handle all requests, regardless of whether the client connected over HTTP or HTTPS. Software like NGINX, Apache, and even Microsoft IIS can provide this functionality; a minimal NGINX sketch follows the links below.
Here are some links regarding this topic:
General Info: https://en.wikipedia.org/wiki/TLS_termination_proxy
NGINX: https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
NGINX: https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
HAProxy: https://www.digitalocean.com/community/tutorials/how-to-implement-ssl-termination-with-haproxy-on-ubuntu-14-04
IIS: https://blogs.iis.net/wonyoo/ssl-off-loading-in-application-request-routing
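For example, a minimal NGINX server block for this kind of termination might look like the following. This is only a sketch: the certificate paths and the backend port 3000 are assumptions, not details from the question.

# Hypothetical nginx config: terminate TLS on 443 and proxy everything,
# including Socket.IO's WebSocket upgrade, to a plain-HTTP Node.js backend.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:3000;          # Node.js/Socket.IO listens here over plain HTTP
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # required for WebSocket upgrades
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}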
Related
I am trying to set up a Messenger webhook endpoint. The webhook is written in Node.js.
The endpoint is configured to listen on the port provided by the environment, or on port 1337 if none is provided:
app.listen(process.env.PORT || 1337)
The issue is that the callback URL https://example.com/webhook cannot be validated when the verification request comes from Messenger.
How do I redirect the requests coming over HTTPS to the webhook endpoint?
(I am using Apache2 as my HTTP server running on Ubuntu 22.04 LTS)
I am following this documentation for my setup: https://developers.facebook.com/docs/messenger-platform/getting-started/webhook-setup/
This problem can be resolved using SSL Termination with Apache. You basically have it act as a reverse proxy that forwards anything on https://example.com/webhook to http://localhost:1337/webhook.
So there are two parts to this: the Node.js app listening on localhost:1337, and Apache terminating the public HTTPS connection and proxying it to Node.js. A rough VirtualHost sketch is shown below.
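This is only a sketch of the Apache side: the certificate paths are placeholders, and mod_ssl, mod_proxy, and mod_proxy_http must be enabled.

# Hypothetical VirtualHost: terminate TLS and forward /webhook to the Node.js app on port 1337
<VirtualHost *:443>
    ServerName example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key

    ProxyPreserveHost On
    ProxyPass        /webhook http://localhost:1337/webhook
    ProxyPassReverse /webhook http://localhost:1337/webhook
</VirtualHost>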
This tutorial by DigitalOcean is very helpful for setting this up with an Apache server:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04
Also, this Server Fault question helped with configuring the VirtualHost file.
I have a Chrome extension which uses an external socket.io server to connect clients together.
During development I was able to connect to the server via http://localhost:2087 just fine, but now I need socket.io to work over HTTPS so I can access it from a browser tab that is served over HTTPS.
I don't want to deal with certificates, and want to keep the code on the socket.io server mostly the same, so I want to proxy the IP for the server via Cloudflare and establish SSL like that.
The socket.io server doesn't use any other web server, but I can change it to use the native Node.js http or https libraries.
However, I haven't been able to access the socket.io server via the Cloudflare proxy: Cloudflare returns 522 errors, which means a connection timeout.
Apparently, Flexible SSL only works with ports 443 -> 80; other ports are not supported...
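Given that limitation, one workaround is to let Cloudflare terminate TLS at the edge and have the origin Socket.IO server listen on plain HTTP port 80. A minimal sketch, assuming the Socket.IO v3/v4 API (the version is not stated in the question):

// Plain-HTTP Socket.IO origin behind Cloudflare Flexible SSL,
// which proxies edge port 443 to origin port 80.
const http = require('http');
const { Server } = require('socket.io');   // v3/v4 style import (assumption)

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  console.log('client connected:', socket.id);
});

httpServer.listen(80);   // Cloudflare terminates TLS; the origin stays plain HTTP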
I have configured the load balancer to route requests to two EC2 instances running a Node.js server. I need to direct requests coming in over both HTTP (port 80) and HTTPS (port 443) to HTTP (port 80) on the EC2 instances. I have uploaded the SSL certificate to AWS and configured the load balancer to use it. The problem is that requests coming in over HTTP don't automatically route to HTTPS. It would have to be a server-side script or snippet in server.js that redirects HTTP to HTTPS; I tried to write one and ran into an endless redirect loop. So, questions:
Is there any guide from AWS for doing this?
If not, how can one achieve this? Any pointers or suggestions would be greatly appreciated.
On the server side you can check the X-Forwarded-Proto header (the original request protocol), and if its value is http you can send a redirect (HTTP 302) to the same URL with the https protocol.
Though with an ALB (Application Load Balancer) you may specify a set of rules, so maybe it's possible to do the redirect there.
I couldn't find a guide from AWS, but I will keep searching and update the answer if I find one.
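A minimal Express sketch of that header check (this assumes an Express app and that the load balancer forwards to plain HTTP on the instances, as described in the question):

// Redirect to HTTPS when the load balancer reports the original request was HTTP.
app.use((req, res, next) => {
  if (req.headers['x-forwarded-proto'] === 'http') {
    return res.redirect(302, 'https://' + req.headers.host + req.originalUrl);
  }
  next();
});

Because the check looks at the header set by the load balancer rather than at the local port, it avoids the endless redirect loop described in the question.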
Usually, when you write applications in Node.js, you specify which port your app should run on. That means you will need two servers listening, and when your app receives a request on port 80 (HTTP), it should redirect to your HTTPS server, like in this answer. A sketch of this is below.
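For illustration, a sketch of that two-listener setup with Express; the certificate file names are placeholders, and this assumes the Node.js process itself terminates TLS rather than a proxy in front of it:

const fs = require('fs');
const http = require('http');
const https = require('https');
const express = require('express');

const app = express();
app.get('/', (req, res) => res.send('hello over HTTPS'));

// The HTTPS server does the real work on port 443.
https.createServer(
  {
    key: fs.readFileSync('key.pem'),     // placeholder certificate files
    cert: fs.readFileSync('cert.pem'),
  },
  app
).listen(443);

// The HTTP server on port 80 only redirects to the HTTPS listener.
http.createServer((req, res) => {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
  res.end();
}).listen(80);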
Another point that may be relevant to your question: in production environments you don't usually expose your Node.js server directly, since that isn't considered production ready. You probably want a reverse proxy and load balancer like Nginx or HAProxy in front of it.
If you are using the AWS ALB (Application Load Balancer), they announced the HTTP-to-HTTPS redirect feature today. Take a look: https://exampleloadbalancer.com/redirect_demo.html
Put your ELB behind CloudFront, and in the settings of your distribution set the viewer protocol policy to redirect HTTP to HTTPS.
The following doc will be helpful
https://docs.aws.amazon.com/waf/latest/developerguide/tutorials-ddos-cross-service-ELB.html
This method has two benefits:
1. Your problem will be solved.
2. You get the benefits of a powerful CDN. For more information about CloudFront, read https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Update:
You can also redirect traffic from HTTP to HTTPS by editing the listener settings on your ELB.
I am using express with node and nginx as a reverse proxy. I'd like to know how to take advantage of http/2 with nginx to serve static content, with all other requests being forwarded to the express API.
At the moment, my express server is being served via HTTP/1.1, and nginx is accepting HTTP/2 connections and forwarding them to express. How do I set up nginx so that it uses HTTP/2 to serve everything in my statics folder, but forwards all requests to the API over HTTP/1.1?
I will break your question into two parts:
How to take advantage of HTTP/2 to serve static files from nginx?
How to set up nginx to send HTTP/1.1 requests to the backend server when nginx acts as a reverse proxy?
Answer 1:
For serving static files, the major performance benefit comes from the multiplexing feature of the HTTP/2 protocol.
Multiplexing improves on the pipelining introduced in HTTP/1.1 and overcomes the problem of head-of-line (HOL) blocking. With multiplexing you can use the same underlying TCP connection to load multiple resources in parallel over one HTTP connection. You should also consider stream prioritisation to assign higher priority to the resources you want loaded first on the page; otherwise loading of some critical resources can be delayed, since all resources contend for the same multiplexed connection.
Answer 2:
Sending HTTP/1.1 requests to the backend server is the default behaviour, so if you have already configured nginx to use HTTP/2 you do not have to do anything special to proxy HTTP/1.1 requests to your backend. This is because nginx's proxy module does not support HTTP/2 to upstreams as of now; refer to this ticket. Also, please check this DigitalOcean tutorial, which will guide you through setting up nginx with HTTP/2 on Ubuntu 16.04. A configuration sketch is shown below.
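A hedged sketch of such a server block; the certificate paths, the backend port 3000, and the /static/ location are assumptions, not details from the question:

# nginx speaks HTTP/2 to browsers, serves static files itself,
# and proxies everything else to Express over HTTP/1.1.
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Static assets served directly by nginx over HTTP/2
    location /static/ {
        root /var/www/app;                  # serves files from /var/www/app/static/...
    }

    # Everything else proxied to the Express API over HTTP/1.1
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;             # the proxy module talks HTTP/1.x upstream
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}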
A web application I developed is sitting on a server that serves it over HTTPS. Some of my JS code needs to open a socket to another server (Node.js) which is currently not set up for HTTPS, and so the browser won't allow it to run.
All I want is a simple way, without getting involved with certificates, to initiate an HTTPS socket connection. I don't mind the lack of security;
I just need the app to run.
The certificates are not your problem. Your problem is CORS. You need to configure your server to answer with a header that allows the foreign origin:
res.header('Access-Control-Allow-Origin', 'https://example.com'); // must be the full origin, including the scheme
because in your case the technical difference between HTTP (port 80) and HTTPS (port 443) is the port.
EDIT: ... I mean from the browser's point of view.
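For example, as simple Express middleware; the origin value is a placeholder and must include the scheme:

// Allow the HTTPS page's origin on every response.
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', 'https://example.com');   // placeholder origin
  next();
});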