I have a GCP VM instance running a NodeJS server, and it has an Nginx reverse proxy configured that allows me to connect to the NodeJS server over HTTP. The server is also accessible through a domain name (the domain was purchased from Google Domains, and I did not explicitly buy an SSL certificate).
I want to configure HTTPS on this VM instance.
I tried to use Certbot, following the instructions at https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx, but I still cannot connect to my NodeJS server over HTTPS.
Please note: HTTP traffic works fine when connecting through both the IP address and the domain name.
Fixed this.
It turns out that the firewall was blocking connections to port 443.
For readers:
On a GCP VM, make sure the firewall is configured correctly in three places (example commands follow the list).
The GCP VPC firewall should be configured to allow the HTTP/HTTPS/SSH/etc. traffic you need.
Your VM should have the proper GCP firewall network tags so that your firewall rules are actually applied to the VM.
Your OS firewall should be configured to allow the traffic you want.
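For example, on an Ubuntu VM with ufw, the three checks might look roughly like this (the rule name, tag, VM name, and zone below are placeholders, not taken from the question):

# 1. VPC firewall rule that allows inbound HTTPS (rule/tag names are examples)
gcloud compute firewall-rules create allow-https \
  --direction INGRESS --allow tcp:443 \
  --source-ranges 0.0.0.0/0 --target-tags https-server

# 2. Tag the VM so the rule above actually applies to it
gcloud compute instances add-tags my-node-vm --tags https-server --zone us-central1-a

# 3. OS-level firewall on the VM itself (skip if ufw is inactive)
sudo ufw allow 443/tcp
sudo ufw status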
I have a server running on Ubuntu installed on an Azure VM. I am able to access the server via HTTP; however, when I try to connect via HTTPS, I get the error ERR_CONNECTION_REFUSED.
I have added an inbound rule to allow port 443, but I am still facing the issue.
To secure the websites on an Azure VM, you need to get the SSL certificate onto the VM and configure your web server with a TLS binding. Read this tutorial: Secure a web server on a Linux virtual machine in Azure with TLS/SSL certificates stored in Key Vault. You also need to ensure that port 443 is allowed in the inbound firewall rules of the Azure VM.
When you have done that, you can check whether anything is listening on port 443 on the Azure Linux VM via netstat -tulpn | grep LISTEN; see How to check if a port is in use on Linux or Unix.
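As a rough sketch, assuming the Azure CLI and placeholder resource names (myResourceGroup, myLinuxVM, mydomain.example.com), the port and TLS binding can be checked like this:

# Allow 443 through the VM's network security group (creates/updates an NSG rule)
az vm open-port --resource-group myResourceGroup --name myLinuxVM --port 443

# On the VM: confirm the web server is actually listening on 443
sudo netstat -tulpn | grep LISTEN

# From outside: confirm a TLS handshake completes against the binding
openssl s_client -connect mydomain.example.com:443 -servername mydomain.example.com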
Let me know if you have any questions.
Say I have two App Services (with HTTPS Only enabled):
https://myapp1.azurewebsites.net
https://myapp2.azurewebsites.net
I can call both app service endpoints using HTTPS successfully.
Then I created a Traffic Manager profile and added the above two endpoints to it, say:
myapps.trafficmanager.net
After the Traffic Manager profile is created and the endpoints are added, the Traffic Manager host name myapps.trafficmanager.net is also automatically added to the custom domains of both App Services, but without an SSL binding for that host name.
Then if I call the Traffic Manager endpoint using HTTPS, https://myapps.trafficmanager.net, I get an untrusted SSL certificate error/warning. That is expected.
Since Traffic Manager works only at the DNS level, the real request is actually sent to the App Service endpoint, which has the correct SSL certificate binding. My question is:
From a security point of view, is it safe to call the Traffic Manager endpoint (which has no certificate bound) over HTTPS in my code (say, using .NET HttpClient) and just ignore the certificate error?
I recently set one of these up as well and fought with it for a bit. The short answer is that it is probably safe, but it sounds like you may be using Traffic Manager incorrectly. You shouldn't be using the Traffic Manager URL as your endpoint if you want to use SSL. Instead, configure your vanity domain name, mycoolsite.com, to point to myapps.trafficmanager.net using a DNS CNAME record.
If you want to use SSL and a single URL, you should configure the custom URL and install an SSL certificate at the service level. It should be the same custom URL on both App Services. This must be configured in the App Service, not in Traffic Manager.
I had to read this a few times to understand how it works under the hood, but it was helpful.
So in summary, to set it up properly, the steps would be (a quick verification sketch follows the list):
Configure the custom/vanity domain on both App Services
Install the SSL certificate on both App Services
Set up and configure Traffic Manager
Point the custom/vanity URL at the Traffic Manager name using a DNS CNAME record
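As a sanity check, assuming mycoolsite.com stands in for your vanity domain, the DNS chain and certificate binding can be verified like this:

# The vanity host should CNAME to the Traffic Manager name, which resolves to an App Service
dig +short www.mycoolsite.com CNAME
dig +short www.mycoolsite.com

# The handshake against the vanity host should present the cert bound on the App Services
openssl s_client -connect www.mycoolsite.com:443 -servername www.mycoolsite.com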
There is no need to bind a certificate to Traffic Manager, since the server certificate is not validated when Traffic Manager health probes use HTTPS. Moreover, Traffic Manager works at the DNS level: clients connect directly to the selected endpoint, not through Traffic Manager.
In this case, you can use HTTPS for the endpoints and run the health probe over HTTPS. Even though you cannot bind a certificate to Traffic Manager, you should make sure that the monitoring port is configured correctly in Traffic Manager (e.g. 443 instead of 80) and that your monitoring path points to a valid page for your service.
Another SO answer explains this in more detail. If you still want to make the warning disappear, you can get a free SSL certificate from letsencrypt.org and bind it to the *.trafficmanager.net custom domain.
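To see the DNS-level behaviour described above, using the example host names from the question:

# Traffic Manager only answers DNS: the profile name resolves to one of the endpoints
nslookup myapps.trafficmanager.net

# Hitting the Traffic Manager host name directly trips the certificate name mismatch...
curl -v https://myapps.trafficmanager.net/

# ...while the App Service endpoint it resolves to presents a valid *.azurewebsites.net cert
curl -v https://myapp1.azurewebsites.net/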
I am trying for the simplest deploy to get an https web server up and running in Fargate.
I have used Amazon Certificate Manager to create a public certificate.
I have an Application Load Balancer that is talking to the Fargate container on two ports:
80 for http and
443 for https
This is the problem: when I run my web server on port 80 (HTTP) and connect via the ALB, it works fine (not secure, but it serves up the HTML).
When I run my web server on port 443 with TLS enabled, it does not connect via the ALB.
Another point is that when running my web server with TLS enabled on port 443, I do not have the certificate or the certificate key, so I am confused about how to get them from Amazon.
Another question I have is: does it make sense for me to say that the ELB will communicate with the client over HTTPS but that the ELB can communicate with the container via HTTP? Is this secure?
My networking knowledge is very rusty.
does it make sense for me to say that the ELB will communicate with the client over HTTPS but that the ELB can communicate with the container via HTTP?
Yes. You should make sure your web server is accepting traffic from the ALB on port 80. This is done at the application level (on the web server) and with your target group, which is what the ALB uses to determine how it routes traffic to your web server. This is the way it typically works:
client --(443)--> ALB --(80)--> web server
Some things to check (a CLI sketch follows this list):
Target group is configured to send traffic to your FG web server on port 80
Target group health check is configured to check port 80
FG task security group has ingress from ALB on port 80
Web server is configured to listen on port 80
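A rough CLI sketch of that layout (the ARNs, VPC ID, and names below are placeholders): the HTTPS listener terminates TLS with the ACM certificate, and the target group forwards plain HTTP to the Fargate task on port 80.

# Target group the ALB forwards to: plain HTTP, port 80, IP targets for Fargate tasks
aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip --health-check-path /

# HTTPS listener on the ALB: terminates TLS with the ACM certificate, then forwards
# the decrypted traffic to the target group above
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>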
Side note: you can configure your target group to send traffic to the target (the web server in Fargate) on 443, but as you said, without the proper certificate set up in the container, you won't be able to properly terminate TLS and it just wouldn't work. You would need to upload your own cert to ACM for this to work, which sends you down a security rabbit hole, namely how to avoid baking your private key into your Docker image.
I have a problem with my Windows Azure virtual machine.
I need to open the Port 443 (HTTPS) on the VM.
In the endpoint configuration of the virtual machine, I opened it and configured the ACL with the following parameters:
Permit
0.0.0.0/0
It's a Windows Server 2012 VM, and I created the firewall rule for the public network profile.
A port check from ping.eu shows that port 443 is closed.
The Location of the virtual machine is Western Europe.
I hope you can help me.
Kind Regards
Sebastian
I also had this issue and it was very annoying! At first I thought I was not setting up the SSL bindings correctly or that it was a certificate issue, then I moved on to firewall issues. In the end it was the Azure endpoint at fault.
I had added the 443 endpoint and disabled the local firewall, and got nothing. I got suspicious when I added a new endpoint on 8080, bound it to HTTPS, and it worked fine.
I deleted the 443 endpoint and shut the Azure VM down from the web interface after shutting down the client. Then I created a new 443 endpoint and restarted the VM (I had already tried restarting my Win2012R2 VM). It worked.
It must be a glitch in the networking stack of Azure endpoints. You are not going mad!
Hope that helps!
P
Did you also configure the endpoint through the web management portal to forward connections from the external port to the internal port?
Anything you change on the Win2k12 virtual machine will only affect the VM itself, i.e. opening 443 in the firewall, configuring routes, etc.
But you also need to allow a connection to be forwarded from the cloudapp.net public IP address to the internal IP of the box.
Another gotcha: in addition to setting up the endpoint configuration, you need to enable IP forwarding, which is disabled by default.
IP forwarding can be found in the IP configuration settings of the network interface.
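Once the endpoint is in place, a quick external check (the cloudapp.net host name below is a placeholder) is easier to interpret than the ping.eu result:

# Is the public endpoint accepting TCP connections on 443 at all?
nc -vz myvm.cloudapp.net 443

# Does a TLS handshake complete against the binding on the VM?
openssl s_client -connect myvm.cloudapp.net:443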
I have set up an HTTP endpoint (port 80) for my Azure VM. I have verified that the firewall allows port 80 both in and out. (My VM operating system is Windows Server 2012.)
Yet I am still unable to hit IIS on port 80 from a remote machine. (Locally, I can hit localhost just fine.)
So I'm wondering if what I'm missing is a network ACL. However, the Azure documentation (as of 12/2/2013) seems contradictory:
When a virtual machine is created, a default ACL is put in place to block all incoming traffic. However, if an endpoint is created for (port 3389), then the default ACL is modified to allow all inbound traffic for that endpoint.
Yet below it says:
It’s important to note that by default, when an endpoint is created, all traffic is denied to the endpoint.
Which is correct? Do I need to create an allow-all ACL? Am I missing something else about how Azure DNS and network traffic work?
That same page goes on to say:
No ACL – By default, when an endpoint is created, we permit all for the endpoint.
I believe that the comment suggesting all traffic is denied by default is wrong.
To confirm, I have just deployed a brand new Windows Server 2012 Datacenter VM, installed IIS, opened the Windows Firewall, and configured an endpoint for TCP port 80, and it all worked just fine, although it's worth pointing out that it took a few minutes between configuring the endpoint and being able to browse to the server.
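For what it's worth, a quick external check of such an endpoint (the host name below is a placeholder) looks like this; it should return the IIS default page headers once the endpoint has finished provisioning:

# From outside Azure: request the site through the cloudapp.net endpoint
curl -I http://myvm.cloudapp.net/

# If that times out, check whether TCP 80 is reachable at all
nc -vz myvm.cloudapp.net 80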