My Ubuntu server works correctly on port 80 using nginx, which now proxies to a Node.js app running on port 3000. Everything is fine when I point the browser at the DNS name directly, but when I connect through Cloudflare I get a 502 Bad Gateway error when accessing the domain name. I'm kind of new to CDN hosting, please tell me what to do! Many thanks
My Cloudflare Setup
Assuming your web service is running on port 80 and is publicly available:
What you could do is to disable the encryption between Cloudflare and your origin (not recommended):
Select your Domain, go to SSL/TLS -> Overview. Select "Off (not secure)"
But you really shouldn't do this for a production environment.
Your nginx should instead serve encrypted traffic over HTTPS. To get a certificate, either:
Issue a self-signed certificate (not recommended), have a look at certbot (Let's Encrypt), or, better:
Issue a Cloudflare Origin Certificate (SSL/TLS -> Origin Server)
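If you would rather have the Node.js app from the question (the one on port 3000) terminate HTTPS itself instead of nginx, a minimal sketch could look like the following; the certificate paths are assumptions, pointing at wherever you saved the Origin Certificate and its key:

// Minimal sketch: a Node.js app terminating TLS with a Cloudflare Origin Certificate.
// The paths below are placeholders; use the files created under SSL/TLS -> Origin Server.
const https = require('https');
const fs = require('fs');

const options = {
  cert: fs.readFileSync('/etc/ssl/cloudflare/origin-cert.pem'),
  key: fs.readFileSync('/etc/ssl/cloudflare/origin-key.pem'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello over HTTPS\n');
}).listen(443); // Cloudflare connects to the origin on 443 in Full mode (binding to 443 may need elevated privileges)

With the Origin Certificate installed (whether on nginx or directly in Node), you can set the SSL/TLS mode to "Full (strict)", since Cloudflare trusts its own origin certificates.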
I have read many similar questions and found numerous articles elsewhere but I'm still unsure how to solve this.
What I'm trying to achieve:
Set my node app on AWS EC2 up to be able to communicate on HTTPS for free or at the lowest cost possible, while still being production ready.
What I have done:
Added inbound rules on my EC2 instance to accept all traffic on HTTP and HTTPS, and additionally added a rule for HTTPS on port 443 specifically.
Set my node app to listen on port 443.
Most articles I have read recommend setting up a reverse proxy server using NGINX and a custom domain with an SSL certificate.
This leads me to the following questions:
Do I need a custom domain for my backend, for it to communicate on HTTPS?
If yes, can I use my Firebase free domain or a subdomain of it? E.g. https://myapp.firebaseapp.com/ or https://api.myapp.firebaseapp.com/
If I do need a domain but can't use the Firebase one, and I buy a custom domain, can I use mydomain.com for my frontend and api.mydomain.com for my backend? Can this be done using the same SSL certificate?
Do I need a reverse proxy server?
I have a GCP VM instance running a NodeJS server, and it has an Nginx reverse proxy configured that allows me to connect to the NodeJS server over HTTP. The server is also accessible through a domain name (the domain was purchased from Google Domains, and I did not explicitly buy an SSL certificate).
I want to configure HTTPS on this VM instance.
I tried to use certbot and followed the instructions at https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx
but I still cannot connect to my NodeJS server over HTTPS.
Please note: HTTP traffic works fine when connecting through IP and domain name.
Fixed this.
Turns out that the firewall was blocking connections to port 443.
For readers:
On a GCP VM, make sure firewalls are configured correctly in three places.
The GCP networking firewall should be configured to allow HTTP/HTTPS/SSH/etc.
Your VM should be set with proper GCP Firewall tags so that your GCP Firewall configuration is applied to your VM.
Your OS Firewall should be configured to allow the traffic you want.
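Once the firewall rules are in place, a quick way to check from another machine that port 443 is actually reachable is a small Node.js script like this one (the hostname is a placeholder for your own domain or external IP):

// Quick reachability check for port 443 from outside the VM.
// 'example.com' is a placeholder; replace it with your domain or external IP.
const tls = require('tls');

const socket = tls.connect(443, 'example.com', { servername: 'example.com' }, () => {
  console.log('TLS handshake OK, certificate subject:', socket.getPeerCertificate().subject);
  socket.end();
});

socket.on('error', (err) => {
  console.error('Cannot reach port 443 or handshake failed:', err.message);
});

If this fails with a timeout or "connection refused", one of the three firewall layers above is still blocking the port.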
I'm struggling to enable HTTPS on AWS systematically. I requested a certificate through Certificate Manager, and then set the ELB and Security Group to listen on HTTPS and port 443.
But I also need my server on the AWS instance to listen for HTTPS requests on port 443, right? My server is running with NodeJS and Express. From what I understood, I'd need a certificate (.crt) file and key to do it correctly, but I couldn't find out how to download them from AWS Certificate Manager.
Has anyone faced this problem before? Thanks all!
I also need my server on the AWS instance to listen for HTTPS requests on port 443, right?
Nope, you enable the certificate on the ELB. SSL termination happens on the ELB, and communication between the ELB and your NodeJS server occurs over HTTP inside your VPC. The ELB will send a special HTTP header X-Forwarded-Proto to your NodeJS server, which you can check if you need to know if the connection between the ELB and the client is over HTTP or HTTPS.
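For example, with Express behind the ELB, you can trust the proxy headers and check the original protocol per request. This is only a sketch; the ELB terminates TLS, so the app itself listens on a plain HTTP port:

// Sketch: Express app running behind an ELB that terminates TLS.
const express = require('express');
const app = express();

// Trust the X-Forwarded-* headers set by the load balancer.
app.set('trust proxy', true);

app.get('/', (req, res) => {
  // req.secure is true when X-Forwarded-Proto is "https".
  if (!req.secure) {
    // Optionally force HTTPS by redirecting the client to the https URL.
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  res.send('Served over HTTPS via the load balancer');
});

// The instance listens on plain HTTP (for example port 3000); the ELB forwards traffic to it.
app.listen(3000);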
You can't download certificates generated by Amazon's ACM service. You can only use them via Load Balancers or CloudFront distributions.
No, you cannot download the certificate. Instead, you can configure your Apache as a reverse proxy. To configure HTTPS, open /etc/apache2/sites-available/default-ssl.conf
and add these lines to that file (mod_proxy and mod_proxy_http need to be enabled, e.g. with a2enmod proxy proxy_http):
<Location /subDomain>
    # Forward requests for /subDomain to the local application port
    ProxyPass http://localhost:port
    ProxyPassReverse http://localhost:port
</Location>
After adding them, restart your Apache.
Then open the browser and check https://yourdns.com/subDomain
My application's SSL certificate (the application runs in AWS ECS) expired 2 days ago. Because that certificate is managed by ACM, it cannot be downloaded and installed manually, so I did the following to renew it:
In ACM, submit the request to renew the certificate (you need to verify via email or through your DNS provider that you own the domain). I used the email method. After validation, the renewal request's status changed from 'pending' to 'issued'.
There is no place to download the certificate; as the answer above says, you need to use an ELB or another service to install it. In the AWS console: EC2 => Load balancers => View/edit certificates => add the certificate created for the domain => done.
I have a website mydomain.com with the DNS configured through Cloudflare. I am in the process of setting up an API accessible through api.mydomain.com
The servers I use are hosted on Digital Ocean, but I would like to use some of the features of the Amazon API Gateway Interface (I will later be migrating all servers over to Amazon). The API server is the same as the website (again this will later be separated, but for now the effective A record is the same Digital Ocean node). The API Gateway Interface is configured and I can access it just fine through the provided endpoint someamazonendpointurl.com/stage
On Amazon I have created a CloudFront distribution with origin api.mydomain.com. It has some basic HTTP-to-HTTPS behaviours along with query string parameters. I then set a CNAME record on Cloudflare to point to the endpoint URL. When I try to access api.mydomain.com, though, I get the Chrome error:
ERR_TOO_MANY_REDIRECTS
Does anyone have any idea what I might have misconfigured? I realise this is a bit of an odd setup, but it is a stop-gap while we migrate our servers over to Amazon.
UPDATE
I noticed I had a CNAME set on the CloudFront distribution for api.mydomain.com. I've now removed this but get:
ERROR
The request could not be satisfied.
Bad request.
Generated by cloudfront (CloudFront)
Request ID: <id>
Most likely you have your SSL mode on Cloudflare set to "Flexible", which doesn't use https to connect to the origin server. API gateway tries to redirect non-secure requests, so you have a redirect loop.
Set your SSL mode to "Full" and you should be good to go! You can do this on the "Crypto" tab of the Cloudflare dashboard.
I'm building an HTTPS proxy in Node. Basically, I'm allowing people to set a DNS CNAME alias to my proxy machine (which has a wildcard DNS entry set up), and to import their SSL certificate into my application (like AWS Elastic Load Balancer does) so that their CNAME hostname is properly protected and recognized by the client on every request.
Now I'm working on the proxy side, and I'm trying to find a way to load the right certificate dynamically before the SSL handshake with the client. The workflow is:
A new request is received by the server
Get the hostname requested by the client (that is the DNS CNAME alias set by the user)
Load the right certificate belonging to that hostname
Use the loaded certificate in the current request (need help here)
Handshake (with the loaded certificate - which varies from request to request)
Is there a way to do that?
Here we go: using SNI in Node should make it work.
The problem is that not all clients (browsers or libraries) support it yet.
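A minimal sketch of the SNI approach in Node looks like the following. loadCertificateFor() is a hypothetical helper standing in for however you look up the imported key/cert for a given hostname (database, disk, cache, ...):

// Sketch: choosing a certificate per hostname via SNI.
// loadCertificateFor() is a hypothetical lookup that returns { key, cert } for a hostname.
const https = require('https');
const tls = require('tls');

const server = https.createServer({
  // SNICallback runs during the TLS handshake, before any HTTP request is parsed,
  // so the right certificate is presented for the hostname the client asked for.
  SNICallback: (servername, cb) => {
    const { key, cert } = loadCertificateFor(servername);
    cb(null, tls.createSecureContext({ key, cert }));
  },
}, (req, res) => {
  res.end('Proxied for ' + req.headers.host + '\n');
});

server.listen(443);

For clients that do not send SNI, you can also pass a default key/cert in the same options object so the handshake still succeeds with a fallback certificate.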