I am using the following server script to run both HTTP and HTTPS servers and redirect all HTTP requests to HTTPS.
When I access the server locally or remotely by IP address, the requests redirect to HTTPS and the API works, albeit with an insecure-certificate warning.
But when I access the same routes via the domain, I get a "Site can't be reached" error.
Although http://example.com/test-route does redirect to https://example.com/test-route, I still get the "Site can't be reached" error.
import http from 'http';
import https from 'https';
import redirectHttps from 'redirect-https';
import greenlock from 'greenlock';
import app from '../app';
var le = greenlock.create({
  server: 'staging', // using https://acme-v01.api.letsencrypt.org/directory in prod
  configDir: 'certs',
  approveDomains: (opts, certs, cb) => {
    if (certs) {
      opts.domains = ['example.com'];
    } else {
      opts.email = 'me@mymail.com';
      opts.agreeTos = true;
    }
    cb(null, {
      options: opts,
      certs: certs
    });
  },
});

http.createServer(le.middleware(redirectHttps())).listen(80, function() {
  console.log("Server Running On http @ port " + 80);
});

https.createServer(le.httpsOptions, le.middleware(app)).listen(443, function() {
  console.log("Server Running On https @ port " + 443);
});
There are a number of reasons this could be happening, and a lot has been updated in the library since you posted this question.
I've spent a lot of time recently updating the documentation and examples:
https://git.coolaj86.com/coolaj86/greenlock-express.js
I'd suggest taking a look at the video tutorial:
https://youtu.be/e8vaR4CEZ5s
And check each of the items in the troubleshooting section. For reference:
What if the example didn't work?
Double check the following:
Public Facing IP for http-01 challenges
Are you running this as a public-facing webserver (good)? or localhost (bad)?
Does ifconfig show a public address (good)? or a private one - 10.x, 192.168.x, etc (bad)?
If you're on a non-public server, are you using the dns-01 challenge?
correct ACME version
Let's Encrypt v2 (ACME v2) must use version: 'draft-11'
Let's Encrypt v1 must use version: 'v01'
valid email
You MUST set email to a valid address
MX records must validate (dig MX example.com for 'john@example.com')
valid DNS records
You MUST set approveDomains to real domains
Must have public DNS records (test with dig +trace A example.com; dig +trace www.example.com for [ 'example.com', 'www.example.com' ])
write access
You MUST set configDir to a writeable location (test with touch ~/acme/etc/tmp.tmp)
port binding privileges
You MUST be able to bind to ports 80 and 443
You can do this via sudo or setcap
API limits
You MUST NOT exceed the API usage limits per domain, certificate, IP address, etc
Red Lock, Untrusted
You MUST change the server value in production
Shorten the 'acme-staging-v02' part of the server URL to 'acme-v02'
Please post an issue at the repository if you're still having trouble and I'll do my best to help you sort things out. Make sure to upgrade to the latest version because it has better debug logging.
I've noticed that if I use resolver = new dns.Resolver(), by default it will set 127.0.0.1 as the default server list.
Now if I give it an empty list and attempt resolve.resolve4('cloudflare.com'), it fails with ESERVFAIL.
But if I give it a list with a useless IP, 87.78.87.78, and attempt to resolve, it still works and gives me the same IP addresses.
How is this possible if 87.78.87.78 does not run a DNS server?
Either there's a hidden fallback to 127.0.0.1, or there's DNS caching going on. But there's no mention of DNS caching in the Node.js docs, especially for custom resolvers, which I believe use the c-ares library.
After some testing with Wireshark, I can see that there's a "cache" fallback: Wireshark shows 87.78.87.78 responding, even though there's no DNS server at that random address.
The cache only works if I have also previously visited that domain, say in a web browser.
Example script:
import dns from 'dns/promises';

async function main() {
  // The default is `127.0.0.1`,
  // with the default timeout and the default tries
  const resolve = new dns.Resolver({
    timeout: 100,
    tries: 4
  });
  resolve.setServers([
    '87.78.87.78',
    // '127.0.0.1'
  ]);
  const addresses = await resolve.resolve4('dogs.com');
  console.log(addresses);
  // oh, the localhost resolver
  console.log(resolve.getServers());
}

main();
I have developed a server application in Node.js. Right now I access the application using 128.1.1.5:3000, but I want to use a domain name like abc.net to access the application instead. How can I do this? Please suggest.
To point a DNS name at your local app, you need the following configuration.
Use the DNS name (e.g. abc.net) as the host instead of localhost when setting up your Node server, i.e. wherever you specify the host and port details, such as in app.js.
Example
const http = require('http');

const hostname = 'abc.net';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
Now open a command prompt and type
ipconfig /all
It will list all your IPs. Pick your machine's preferred IP; you can usually spot it as the one followed by the "(Preferred)" keyword in the output.
Now copy this IP address and add an entry for it in the system hosts file. Make sure you have admin rights to change this file.
Path of the hosts file on Windows:
C:\Windows\System32\drivers\etc\hosts
Edit this file, scroll to the end, and on a new line add the IP address followed by the DNS name you configured in the Node.js application, as shown below:
<IP address fetched above> abc.net
i.e. the IP address, then a space, then the DNS name.
Save the file.
Start your Node application.
Now try hitting your API at the URL abc.net:port/api
You will need a domain whose DNS settings you can edit, and add an A record configured with your server's external IP address.
Then you can access your domain with the port attached.
Example: mydomain.com:5000
You should refer to your domain record provider's documentation on how to do this.
Beyond that, you may encounter firewall settings, port settings, and possibly HTTPS certificate issues, but these are each separate topics.
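For illustration only (the domain and IP here are placeholders), the A record in zone-file notation would look something like:

```
; hypothetical zone-file entry pointing the domain at the server's external IP
mydomain.com.    3600    IN    A    203.0.113.10
```

Most providers expose this through a web form (name, type A, value = IP, TTL) rather than a raw zone file.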
I'm trying to understand how to use redbird as a reverse proxy (seems less work than nginx for a legacy node server but maybe I'm wrong) and I'm failing to understand their example in the readme file, can't find the answer elsewhere: https://github.com/OptimalBits/redbird
My setup: I have a node server running under "example.com" and I need to create a sub-domain (api.example.com). Right now the legacy app installed on the server redirects all traffic from port 80 to 443 and has an SSL cert installed (not Let's Encrypt; as I said, this is legacy and probably predates free certificates). This certificate only covers "example.com" and "www.example.com".
After a few hours (ok, days) of trying to find the best way of adding a sub-domain (to be served through HTTPS too), this is how I thought it would work:
Add the A-record for my sub-domain (api.example.com);
Get a certificate for api.example.com from Let's Encrypt;
Change the legacy app so it doesn't listen to port 80 (for the redirect);
Change the legacy app so it listens to 7000 instead of 443;
Put my new app online and make it listen to 7001;
Set up Redbird to listen to 80 and redirect to 443;
Set up Redbird to register "example.com" and redirect requests to 7000 (with the existing cert);
Set up Redbird to register "api.example.com" and redirect requests to 7001 (with my new cert).
Am I on the right path?
Note: I know redbird has a feature for getting ssl certificates automatically but since I have to use the legacy certificate for the main domain (example.com) I figured I couldn't use the automagical way.
I'm able to get through the list down to #6, but then the Redbird documentation for HTTPS confuses me. Here's their example:
var redbird = new require('redbird')({
  port: 8080, // ??? => so this is the entry point? Why not 443, since we want only HTTPS?
  // Specify filenames to default SSL certificates (in case SNI is not supported by the
  // user's browser)
  ssl: {
    port: 8443, // ??? => what is this port for? Is it our default HTTPS port (in my case 443)?
    key: "certs/dev-key.pem",
    cert: "certs/dev-cert.pem",
  }
});

// Since we will only have one https host, we don't need to specify additional certificates.
redbird.register('localhost', 'http://localhost:8082', {ssl: true}); // ??? => this is the port my request will be forwarded to... right?
What I gather from this is: traffic coming to localhost through port 8080 is redirected to port 8082, right? But then what is 8443 for?
Barely understanding what was going on, I tried the below:
var redbird = new require('redbird')({
  port: 80,
  secure: false,
  // Specify filenames to default SSL certificates (in case SNI is not supported by the
  // user's browser)
  ssl: {
    port: 443,
    key: "/etc/cert/example.key",
    cert: "/etc/cert/example.crt",
  }
});

// Since we will only have one https host, we don't need to specify additional certificates.
redbird.register('example.com', 'http://localhost:7000', {ssl: true});
redbird.register('api.example.com', 'http://localhost:7001', {
  ssl: {
    key: "/etc/letsencrypt/api.example.key",
    cert: "/etc/letsencrypt/api.example.crt"
  }
});
Why don't we have HTTPS instead of HTTP in the second argument of redbird.register() ?
Needless to say, the above does not work; when I open example.com or api.example.com in my browser, it responds with "ECONNRESET".
UPDATE: I was serving both node apps with HTTPS (on 7000 and 7001), and tried serving them to the proxy as HTTP. I got the proxy correctly forwarding the requests to the corresponding ports, BUT only the main (legacy) app (at "example.com") has the right SSL certificate. When I open "api.example.com" I get the warning saying the site is not secure...
Makes me wonder: is it ok to have a main domain with a certificate from say GoDaddy and a subdomain from LetsEncrypt? Is that the reason it's not working?
When I click Chrome's "Not secure" warning (when looking at api.example.com), it says "Certificate (invalid)" and shows the certificate information for the main domain (example.com) instead of the certificate I configured for api.example.com...
UPDATE2:
So I tried getting a new certificate for my domain and all its subdomains (using the wildcard *.example.com; also noteworthy: *.example.com does not include example.com, so it needs to be added manually).
With this all-encompassing certificate, the code below works, but it is a lot slower than without Redbird as a reverse proxy (I was expecting a difference, but not that much; here we're talking about over 3 seconds of difference, and since the site is legacy and not optimized, those 3 seconds are on top of an excruciating 8+ seconds with cache disabled).
var redbird = new require('redbird')({
  port: 80,
  secure: true,
  ssl: {
    port: 443,
    key: "/etc/letsencrypt/live/example.com/privkey.pem",
    cert: "/etc/letsencrypt/live/example.com/cert.pem",
  }
});

redbird.register('example.com', 'http://localhost:7000', {ssl: true});
redbird.register('www.example.com', 'http://localhost:7000', {ssl: true});
redbird.register('api.example.com', 'http://localhost:7001', {ssl: true});
Here are a few things I can deduce from this experience that might help others:
the port specified where 80 is above is the entry port for HTTP. If you also specify the ssl object, Redbird will redirect all HTTP traffic to HTTPS (I couldn't find that in the documentation).
the port specified where 443 is above is the entry port for HTTPS. No redirection has happened yet at this point; you're just telling Redbird what's what. I think.
the cert in the ssl object should be the file cert.pem, not fullchain.pem (this isn't necessarily a Redbird issue, just one of those things that can trip people up).
the ports where 7000 and 7001 are above are the ports you want to redirect your traffic to (kind of obvious, but documentation should include the obvious stuff).
Finally, the object {ssl: true} tells Redbird to use the default SSL config above; an alternative is to specify another config per registered domain, but I was not able to make that work (it might be because I was only dealing with sub-domains and this feature might only be for non-sub-domains... if that's the case, it would have been good to have in the docs).
So this is disappointing because (by order of most important to least):
There seems to be a huge hit on perf: am I missing a "prod" parameter that could improve this?
It seems the only way this works is by having all my node apps/servers speak HTTP (not HTTPS), pointing to their ports (7000 and 7001), and having only the proxy server speak HTTPS. It's not too troubling, since no one seems to be able to reach ports 7000 and 7001 directly, but I'm thinking 7000 and 7001 could have been HTTPS too. Can they be? Or does it not make sense to have these apps use HTTPS if the proxy handles that?
I was not able to keep the old (and expensive) SSL certificate that is not yet expired. Or is there a way that I didn't find?
I bet this has gotten so long no one will ever read it...
I hope this helps:
use redbird's built-in SSL option, which obtains certificates automatically
example:
const proxy = require('redbird')({
  // The http port is needed for the LetsEncrypt challenge during request/renewal.
  // It also enables automatic http -> https redirection for registered https routes.
  port: 80,
  xfwd: true,
  letsencrypt: {
    path: __dirname + '/certs',
    port: 9999 // redbird gets your certificates through this port
  },
  ssl: {
    http2: true,
    port: 443, // SSL port used to serve registered https routes with the LetsEncrypt certificate.
  }
});

let connectionInfo = {
  ssl: {
    letsencrypt: {
      email: 'YOUR EMAIL',
      production: true,
    }
  }
};

proxy.register("subdomain.yourdomain.com", "http://your.ip.goes.here:84", connectionInfo);
proxy.register("yourdomain.com", "http://your.ip.goes.here:83", connectionInfo);
Run your subdomain server on port 84 and your main server on port 83.
Here's what I'm working with:
NodeJS/Express app
OpenShift environment (Enterprise account)
Works over HTTP
Certificate trust error over HTTPS
Using default wildcard certificate provided by OpenShift
Begins working if I manually accept the exception the browser raises
Latest Express
Server.js looks something like:
var express = require("express"),
    app = express(),
    IP = process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1",
    PORT = process.env.OPENSHIFT_NODEJS_PORT || 8888; // it's 8080 on OpenShift; I use 8888 only on my local environment for irrelevant reasons

// we sometimes need special endpoints that aren't files
app.get("/something-special", function(req, res) {
  res.send("something special");
});

// but typically it's static files
app.use(express.static(__dirname + "/public"));

// go!
app.listen(PORT, IP);
When I go to https://myserver/file.js (which lives in /public/file.js), I get an error saying the certificate is not trusted.
I don't much understand certificates, and I barely know Node. This is a learning project, so I'm trying to work through every issue I come across without changing course.
I've tried everything I can think of, including:
app.enable('trust store'), as recommended in a different SO answer
simplifying my Node app and using req.secure to force HTTPS
You might try visiting your app using the https://appname-yourdomainname.rhcloud.com/ version of the URL. The underlying digital certificate is *.rhcloud.com and was issued by "Geotrust SSL CA" for what it's worth. If you do it this way you don't get certificate-related errors because they applied a wildcard-based cert to the servers.
I'm not sure that the free version of the hosting allows private SSL certificates to be provided/bound... Yeah, you need Bronze or better to use a private SSL certificate for your application. Bummer.
More than likely what is happening is that you are trying to use the *.rhcloud.com wildcard SSL certificate with your custom domain, and that won't work. OpenShift supplies you with an SSL certificate that matches your app-domain.rhcloud.com address. If you want to use SSL correctly with your custom domain, you need to acquire (or purchase) a custom SSL certificate for your domain name. You can get one from lots of companies online, or you can get a free one here: https://www.startssl.com
Also, the SSL is terminated on the proxy, before it gets to your gear. Check out this developer center article for more information about how it all works: https://developers.openshift.com/en/managing-port-binding-routing.html
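Because SSL terminates at the proxy, req.secure is false on the gear even for HTTPS requests, so a redirect-to-HTTPS check has to look at the x-forwarded-proto header the proxy sets. A minimal sketch (the middleware and its name are my own, not part of Express or OpenShift):

```javascript
// Hypothetical Express-style middleware: trust the proxy's x-forwarded-proto
// header to decide whether the original request was HTTPS.
function requireHttps(req, res, next) {
  const proto = req.headers['x-forwarded-proto'] || (req.secure ? 'https' : 'http');
  if (proto === 'https') {
    return next(); // already secure as seen by the proxy; continue
  }
  // otherwise bounce the client to the HTTPS version of the same URL
  res.redirect(301, 'https://' + req.headers.host + req.url);
}

// app.use(requireHttps); // register before routes/static handlers
```

Alternatively, Express's app.enable('trust proxy') makes req.secure honor x-forwarded-proto, which may be what the 'trust store' suggestion above was aiming at.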
I'm running a node app on an Amazon EC2. The app includes a simple web server intended to serve the index page, but it doesn't work.
Here's the server code:
var http = require('http'),
    fs = require('fs'),
    io = require('socket.io'),
    index;

fs.readFile('client.html', function(err, data) {
  if (err) {
    throw err;
  }
  index = data;
});

var server = http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(index);
  response.end();
}).listen(1223);
The EC2 is assigned the public IP address 54.187.31.42. I run the app, open my browser, connect to 54.187.31.42:1223 expecting to be served a web page and get nothing.
What is wrong with my server code?
I used the snippet found in this answer to check the IP of the EC2 running the app, and oddly get 172.31.3.67 - why is the returned address different from the one assigned to the machine by Amazon?
Consequently, trying to connect to 172.31.3.67:1223 also fails.
Straight from the Amazon dev controls, if that helps confirm it isn't an issue of the server IP being wrong or something.
The code looks fine; try connecting with the public IP/public DNS that you see in the AWS console.
Try the following and your application should work:
Open the port (in your case 1223) in the security groups of your instance.
Stop the firewall on your machine (i.e. iptables) and access your server using the public IP or public DNS.
If you can now access your machine, that means something in the iptables rules is filtering your traffic, and you can modify the rules accordingly.
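As a hedged illustration of the iptables step (exact rules vary by distro; 1223 is the port from the question):

```shell
# allow inbound TCP on port 1223 (insert ahead of any REJECT/DROP rules)
sudo iptables -I INPUT -p tcp --dport 1223 -j ACCEPT

# or, while testing only, temporarily accept everything (not for production):
sudo iptables -P INPUT ACCEPT
sudo iptables -F
```

Remember that EC2 security groups filter traffic before it ever reaches the instance, so both layers have to allow the port.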
In the security group, add a rule with the type "Custom TCP Rule" on the port in use (e.g. port 3000, or 1223 in this case). It worked for me.