How is my https:443 server serving http:80 also? - node.js

I have a server EC2 instance running in AWS, behind a load balancer that currently doesn't do much since I only have one instance (eventually I plan to use it to scale and distribute traffic among multiple instances). I'm using Route 53 to point my domain name at the load balancer.
The web server on the instance uses Node.js and Express to serve the site over port 443 (HTTPS), with the proper certificates (generated by certbot using Let's Encrypt) loaded in for encryption/identity/etc.
The load balancer is configured like so:
[screenshot: load balancer general config]
[screenshot: load balancer target config]
So for both ports the load balancer points to the same server, using HTTPS:443, which I figured would force all connections to be encrypted. However, when I type in my URL as http://mydomain.tld it takes me to the webserver with no indication that it’s an https connection.
How is this happening? My nodejs server’s not set up to do anything over port 80, and I thought the load balancer should route all connections to port 443.

80 is the default port for the World Wide Web. If you type in google.com:80, it will send you to Google normally, while if you try google.com:81, you will not connect.

If you disable port 80 and somebody types http://abc, they will just get an error. The best approach is to redirect port 80 requests to 443:
app.use(function(request, response, next){
    if(!request.secure){
        // Plain HTTP: send the client to the same URL over HTTPS
        return response.redirect("https://" + request.headers.host + request.url);
    }
    next(); // already HTTPS, continue to the normal handlers
});
Generally, most web servers bind to both 80 and 443; that way, if the certificate expires, the site can still be reached over port 80.
There are several methods of enabling an Apache redirect from HTTP to HTTPS:
Enable the redirect in the Virtual Host file for the necessary domain.
Enable it in the .htaccess file (previously created in the web root folder).
Use the mod_rewrite rule in the Virtual Host file.
Use it in the .htaccess file to force HTTPS (see the sketch below).
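For the .htaccess variant, a commonly used mod_rewrite sketch looks like this (a generic example, not taken from the tutorial linked below):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]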
https://developer.ibm.com/technologies/node-js/tutorials/make-https-the-defacto-standard/

Since traffic from both listeners is forwarded to the same target group, the load balancer (ALB) uses the same server port for all forwarded traffic. Requests hit a listener and are then translated to the port mapping defined on the target group.
Therefore, there are two possible practical scenarios that result from this configuration:
client--[HTTP:80]-->ALB--[HTTPS:443]-->EC2
client--[HTTPS:443]-->ALB--[HTTPS:443]-->EC2

Related

Unsure how to change Root Directory to point away from my HTTP site's content

I am currently SSH'd into my AWS VM IP address on Ubuntu.
I've installed the Apache SSL module, copied my server certificate and private key to /etc/pki/tls/certs and /etc/pki/tls/private, and changed the configuration in /etc/httpd/conf.d/ssl.conf so that it listens on port 4443.
From here, I need to change the document root to something different than my nginx HTTP site or else both HTTPS and HTTP will point to the same content.
I was told to use independent directory trees but I'm unsure how to set that up.
I tried editing /etc/httpd/conf/httpd.conf and changing the document root to a directory I set up under /etc/ to keep them separate, but I still get the same message when trying to access the website, as shown in the screenshot.
[screenshots: URL of the test page over HTTPS, and the test page itself]
Does your site show up if you add the port? For example, https://yoursite.com:4443. Port 4443 isn't the default https port (that's 443), so you'll need to reference it explicitly.
You might want to, instead, consider using an ALB in front of the EC2 instance and terminate SSL there, leaving the httpd/nginx server on the EC2 instance only running on port 80 (default). This offloads the SSL handling to the load balancer and also enables you to do things like rolling upgrades to a new EC2 instance instead of keeping a "pet" web server.
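For the separate-document-root part of the question, a minimal sketch of what the HTTPS virtual host could look like in ssl.conf (the DocumentRoot path and certificate file names are assumptions; the port and key/cert directories come from the question):

Listen 4443 https
<VirtualHost _default_:4443>
    DocumentRoot "/var/www/https-site"
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>

The plain-HTTP site keeps its own DocumentRoot in its own virtual host, so the two trees stay independent.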

Caddy multi-domain reverse proxy

I'm new to Caddy server but their website looked promising. I want to use it as a reverse proxy for websites that are hosted on other servers. I have two websites, a wiki and a photo gallery, that need to be reachable from outside my local network.
Caddyfile
My Caddyfile is pretty straightforward:
coppery.<my domain name> {
    proxy / http://192.168.1.66:80 {
        transparent
    }
}
wiki.<my domain name> {
    proxy / http://192.168.1.88:8080 {
        transparent
    }
}
When I first started Caddy I saw some HTTPS setup with Let's Encrypt, but that was successful, so now when I start it I get this output:
root@caddy:~# caddy
Activating privacy features... done.
Serving HTTPS on port 443
https://coppery.<my domain name>
https://wiki.<my domain name>
Serving HTTP on port 80
http://coppery.<my domain name>
http://wiki.<my domain name>
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with `ulimit -n 8192`.
I think for now I can dismiss the warning, I might solve that in the future but this is not a production environment anyway.
Port forwarding and DNS
I configured the domain names to resolve to my IP address (this already worked), and when I ping the domain names they resolve to the correct IP address.
When I access the IP addresses directly from my local network it works; I get the websites I expect. So I added some configuration on my router and forwarded ports 80 and 443 to the local IP address of the machine hosting the Caddy server.
Now when I try to access coppery.<my domain name> over either HTTP or HTTPS, it's not showing anything.
So my only guess is that there is something wrong with the Caddyfile configuration, but it's a really simple case and all I've done is use the examples I found online, which don't seem to work.
So the question is: What am I missing to make this work as intended?
The problem was the DNS. Once I configured the domain names in my local hosts file it worked, so the configuration in my question is all correct.
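For reference, the local workaround amounts to a hosts-file entry like this on the client machine (the 192.168.1.50 address is only an assumption standing in for the Caddy host's LAN IP):

192.168.1.50    coppery.<my domain name>    wiki.<my domain name>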

Make a subdomain point to a specific port

I already checked some topics about this but didn't find any solution (if it's even possible). I have a domain that points to my server on port 80, but I have another important web service running on port 8080.
I want to know if it's possible to create a subdomain like admin.example.com that points to port 8080.
Thanks
The simple answer is no. The server name is resolved by a DNS query to an IP address only; which port the connection is made on is between the client application and the server. For HTTP the conventional default port is 80 and for HTTPS it is 443; if you need to use another port, you have to include it in the URL.
SRV entries in a DNS record can be used to resolve a hostname to a specific port, but this works reliably only for the handful of protocols that mandate their use.
Currently the preferred way is to set up your server with a reverse proxy that directs traffic for a specific server name (your subdomain, carried in the request headers) to your admin service. This is quite easily done using e.g. nginx.
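A minimal nginx sketch of that idea (the admin.example.com name comes from the question; the upstream address 127.0.0.1:8080 is an assumption for a service running on the same machine):

server {
    listen 80;
    server_name admin.example.com;

    location / {
        # Pass requests for the subdomain through to the service on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}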

AWS Loadbalancer Proxy for Nodejs

I have configured the load balancer to route requests to two EC2 instances running a Node.js server. I need to direct requests coming in over both HTTP (port 80) and HTTPS (port 443) to HTTP (port 80) on the EC2 instances. I have uploaded the SSL certificate to AWS and configured the load balancer to use it. The problem is that requests coming in over HTTP don't automatically get routed to HTTPS. There has to be some server-side script or snippet I need to write in server.js that routes HTTP to HTTPS; I tried to do it and ran into an endless redirection loop. So, questions -
Is there any guide to do this from AWS ?
If not then how one can achieve this, any pointers or suggestions would be greatly appreciated.
On the server side you can check the X-Forwarded-Proto header (the original request protocol), and if it has the value http you can send a redirect (HTTP 302) to the same URL with the https protocol.
With an ALB (Application Load Balancer) you can also specify a set of listener rules, so it may be possible to do the redirect there as well.
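A minimal Express sketch of that check (only sensible behind the load balancer, since the header is set by the ELB/ALB; app is your existing Express app):

// Redirect when the load balancer reports the original request was plain HTTP
app.use(function(req, res, next){
    if(req.headers["x-forwarded-proto"] === "http"){
        return res.redirect(302, "https://" + req.headers.host + req.url);
    }
    next();
});

Because the check uses the forwarded protocol rather than the local connection, it avoids the endless-redirect loop you get when the app itself only ever sees plain HTTP from the load balancer.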
I couldn't find a guide from AWS, but I will keep searching and update the answer in case I find one.
Usually, when you write applications in Node.js, you specify which port your app should run on, which means you will need two different servers listening: when your app receives a request on port 80 (HTTP), it should redirect to your HTTPS server, like in this answer.
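A rough sketch of that two-listener setup (the certificate paths follow the usual certbot layout for the domain in the original question and are assumptions here):

const fs = require("fs");
const http = require("http");
const https = require("https");
const express = require("express");

const app = express(); // your existing Express app and routes

// HTTPS server that actually serves the application
https.createServer({
    key: fs.readFileSync("/etc/letsencrypt/live/mydomain.tld/privkey.pem"),
    cert: fs.readFileSync("/etc/letsencrypt/live/mydomain.tld/fullchain.pem")
}, app).listen(443);

// Plain HTTP server whose only job is to redirect everything to HTTPS
http.createServer(function(req, res){
    res.writeHead(301, { Location: "https://" + req.headers.host + req.url });
    res.end();
}).listen(80);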
Another point that may be relevant to your question is that, in production environments, you don't usually expose your Node.js server directly, since that isn't considered production ready. You probably want a reverse proxy and load balancer like Nginx or HAProxy in front of it.
If you are using the AWS ALB (Application Load Balancer) they announced the http->https redirect today. Take a look: https://exampleloadbalancer.com/redirect_demo.html
Put your ELB behind CloudFront and, in your distribution's settings, select the option to redirect HTTP to HTTPS.
The following doc will be helpful
https://docs.aws.amazon.com/waf/latest/developerguide/tutorials-ddos-cross-service-ELB.html
This method has two benefits:
1. Your problem will be solved.
2. You get the benefit of a powerful CDN; for more information about CloudFront, read https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Update:
You can redirect traffic from HTTP to HTTPS by editing the listener settings on your ELB.
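With an Application Load Balancer, that listener-level redirect can also be set from the CLI; a hedged sketch (the listener ARN is a placeholder for your HTTP listener):

aws elbv2 modify-listener \
    --listener-arn <your-http-listener-arn> \
    --default-actions '[{"Type":"redirect","RedirectConfig":{"Protocol":"HTTPS","Port":"443","StatusCode":"HTTP_301"}}]'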

Azure load balancer identify server

We have two VMs (classic) in an availability set, with a load balancer defined on port 80 via endpoints.
Each VM has 5 websites running, and the load balancer distributes calls between server 1 and server 2.
When I call one of the websites, e.g. www.mysite.com, is it possible to somehow identify which server served the request?
Is it possible to force the load balancer to hit a specific server? This would be super useful when we deploy a new version of the website on, e.g., Server 1 and want to test whether it works on Server 1.
thanks
I would include a "Server:" header in the response (rfc2616-sec14). Then you are able to check it via the Chrome Developer Tools Network tab or any other similar tool.
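If any of those sites happened to be a Node/Express app, a minimal sketch of that idea would be the following (the X-Served-By header name and the use of os.hostname() are assumptions, not part of the original answer; for IIS sites the custom response header linked below achieves the same thing):

const os = require("os");

// Stamp every response with the machine name so you can see which VM answered
app.use(function(req, res, next){
    res.set("X-Served-By", os.hostname());
    next();
});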
In the end, I figured out how this can be done. Hopefully my answer will be useful to someone with a similar problem.
Identifying servers, as ArneRie has suggested, can be done by adding a custom HTTP response header - https://technet.microsoft.com/en-us/library/cc753133(v=ws.10).aspx
Calling a specific server:
1. In the Azure portal create a new endpoint, e.g. 802 -> 8181.
2. RDC to the server and open port 8181.
3. Install a port forwarding utility: https://sourceforge.net/projects/pjs-passport/
4. Set a new rule: all traffic coming in on 8181 should be redirected to 80 (use the internal server IP as the source).
5. Call www.mysite.com:802 and it will be served from that specific server.
Repeat steps 1-4 with the other server, just using a different port, e.g. 801, 803...
