SRS: how to enable HTTPS for an HLS stream? - http-live-streaming

I use an Ubuntu 16 server with Apache and Let's Encrypt certificates. I compiled SRS today directly from Git, so I have the latest version. I'm trying to enable HTTPS for the whole site; I have a player which loads an HLS stream fed over RTMP. How can I enable SSL? Right now I get a connection-closed error. I've tried moving the HLS stream path into a folder covered by the certificate, with no result.
This is the link for SRS: https://github.com/ossrs/srs
If someone needs more detail I can reply.

If you use NGINX or CaddyServer, you can set up an HTTPS proxy in front of SRS (please read #2881). It works like this:
OBS --> SRS --HTTP--> NGINX --HTTPS--> Viewers
Note: this is an HTTPS reverse proxy; if you need an HLS cluster, please read this.
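For reference, here is a minimal NGINX sketch of that chain; the domain and certificate paths are placeholders, and SRS's HTTP server is assumed to be on its default port 8080:
server {
    listen 443 ssl;
    server_name your.domain.com;

    # Example Let's Encrypt paths; point these at your own cert and key.
    ssl_certificate     /etc/letsencrypt/live/your.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your.domain.com/privkey.pem;

    location / {
        # Forward HLS requests to SRS over plain HTTP.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}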
However, ossrs/srs does support HTTPS; it is just not enabled by default. Do the following to enable it.
I installed ossrs/srs using Docker. The default config uses port 8088 for HTTPS, so remember to expose that port:
docker run -d -p 1935:1935 -p 1985:1985 -p 8080:8080 -p 8088:8088 \
ossrs/srs:v4 ./objs/srs -c conf/srs.conf
Change the http_server section in the config file /usr/local/srs/conf/srs.conf.
Change it from:
http_server {
    enabled on;
    listen 8080;
    dir ./objs/nginx/html;
}
To
http_server {
    enabled on;
    listen 8080;
    dir ./objs/nginx/html;
    https {
        # Whether enable HTTPS Streaming.
        # default: off
        enabled on;
        # The listen endpoint for HTTPS Streaming.
        # default: 8088
        listen 8088;
        # The SSL private key file, generated by:
        # openssl genrsa -out server.key 2048
        # default: ./conf/server.key
        key ./conf/server.key;
        # The SSL public cert file, generated by:
        # openssl req -new -x509 -key server.key -out server.crt -days 3650 -subj "/C=CN/ST=Beijing/L=Beijing/O=Me/OU=Me/CN=ossrs.net"
        # default: ./conf/server.crt
        cert ./conf/server.crt;
    }
}
Remember to upload your server.key and server.crt to the conf folder (you can generate a self-signed certificate using the openssl commands in the comments above).
Restart the Docker container to complete the setup.
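To verify, restart the container and request the HLS playlist over HTTPS. This is a sketch: the container name and the stream path are examples, and -k skips certificate verification for a self-signed cert:
docker restart <container-id>
curl -k https://your.server:8088/live/livestream.m3u8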

Related

Nginx: curl: (60) SSL certificate problem: unable to get local issuer certificate

I am running nginx in Docker over SSL. When I try to access it using the URL, I get the error below:
root@54a843786818:/# curl --location --request POST 'https://10.1.1.100/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "username": "testuser",
> "password": "testpassword"
> }'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
With the no-check-certificate option (-k) it works:
curl -k --location --request POST 'https://10.1.1.100/login' --header 'Content-Type: application/json' --data-raw '{
"username": "testuser",
"password": "testpassword"
}'
{"access_token": "xxxxxxxxxxxxxxxxxxxxxxxkkkkkkkkkkkkkkkkkkkk", "refresh_token": "qqqqqqqqqoooooooooxxxx"}
My config file:
root@54a843786818:/# cat /etc/nginx/sites-enabled/api.conf
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /root/certs/my_hostname.my.domain.name.com.pem;
    ssl_certificate_key /root/certs/my_hostname.my.domain.name.com.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_pass http://10.1.1.100:5000;
        proxy_redirect off;
    }
}
I suspect something is wrong with my certificate setup.
Below are the exact steps I followed.
1) Took the private key and removed its password using the command below:
# openssl rsa -in my_hostname.my.domain.name.com_password_ask.key -out my_hostname.my.domain.name.com.key
2) Converted the .crt file to .pem:
# openssl x509 -in my_hostname.my.domain.name.com.crt -out my_hostname.my.domain.name.com.pem -outform PEM
3) Copied the .pem and .key files to /root/certs in the nginx Docker container using cat and vim.
4) Verified that the private and public keys match; these are the commands used:
root@54a843786818:~/certs# openssl rsa -noout -modulus -in my_hostname.my.domain.name.com.key | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
root@54a843786818:~/certs# openssl x509 -noout -modulus -in my_hostname.my.domain.name.com.pem | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
I received the certs below separately; I'm not sure whether I need to bundle them, and if so, what the command is:
1) Certificate.pem
2) private_key
3) ca_intermediate_certificate.pem
4) ca_trusted_root
Can someone help me fix the issue? I am not sure what I am doing wrong. Is there a way I can validate my certificates and check that they can serve HTTPS? Or, aside from the certificates, is there any issue with the config or setup?
An SSL/TLS server, including HTTPS, needs to send the certificate chain, optionally excluding the root cert. Assuming your filenames are not actively perverse, you have a chain of 3 certs (server, intermediate, and root) and the server must send at least the entity cert and the 'ca_intermediate' cert; it may or may not include the 'trusted_root'.
In nginx this is done by concatenating the certs into one file; see the overview documentation which links to the specific directive ssl_certificate.
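A sketch of that concatenation, using the filenames from the question (order matters: the server certificate first, then the intermediate):
# Build the chain file that nginx will send to clients:
cat Certificate.pem ca_intermediate_certificate.pem > fullchain.pem
# Then reference it in the nginx config:
#   ssl_certificate     /root/certs/fullchain.pem;
#   ssl_certificate_key /root/certs/private_key;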
Also, the root cert for your server's cert must be in the truststore of the client (every client if more than one). If that root cert is one of the established public CAs (like Digicert, GoDaddy, LetsEncrypt/ISRG) it will normally be already in the default truststores of most clients, usually including curl -- although curl's default can vary depending on how it was built; run curl -V (upper-vee) to see which SSL/TLS implementation it uses. If the root cert is a CA run by your company, the company's sysadmins will usually add it to the truststores on company systems, but if you are using a system that they don't know about or wasn't properly acquired and managed it may not have been set up correctly. If you need curl to accept a root cert that isn't in its default truststore, see the --cacert option on the man page, either on your system if Unixy or on the web. Other clients are different, but you didn't mention any.
Finally, as discussed in comments, the hostname you use in the URL must match the identity(ies) specified in the cert, and certificates are normally issued using only the domain name(s) of the server(s), not the IP address(es). (It is technically possible to have a cert for an IP address, or several, but by cabforum policy public CAs, if they issue certs for addresses at all, must not do so for private addresses such as yours -- 10.0.0.0/8 is one of the private ranges in RFC 1918. A company-run CA might certify such private addresses or it might not.) If your cert specifies domain name(s), you must use that name or one of those names as the host part of your URL; if you don't have your DNS or hosts file set up to resolve that domain name correctly to the host address, you can use curl option --resolve (also on the man page) to override.
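Putting those pieces together, a sketch of a client-side test; the hostname is taken from the question's filenames, and ca_trusted_root is assumed to be the PEM root cert listed above:
curl --cacert ca_trusted_root \
    --resolve my_hostname.my.domain.name.com:443:10.1.1.100 \
    --location --request POST 'https://my_hostname.my.domain.name.com/login' \
    --header 'Content-Type: application/json' \
    --data-raw '{"username": "testuser", "password": "testpassword"}'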

Node.js HTTPS configuration error - no common encryption algorithm(s)

I have seen other similar questions, but none addresses my problem. I have generated my self-signed TLS (OpenSSL) certificate, but it does not seem to work on my Node.js server.
Instructions used to generate the certificate:
openssl req -newkey rsa:2048 -keyout key.pem -x509 -days 365 -out certificate.pem
openssl x509 -text -noout -in certificate.pem
openssl pkcs12 -inkey key.pem -in certificate.pem -export -out certificate.p12
openssl pkcs12 -in certificate.p12 -noout -info   # verify certificate
So at the end I have a .p12, also known as a PFX-type certificate. Below is my Node.js code:
// ------- Start HTTPS configuration ----------------
const options = {
pfs: fs.readFileSync('./server/security-certificate/certificate.p12'),
passphrase: 'secrete2'
};
https.createServer(options, app).listen(8443);
// -------- End HTTPS configuration -----------------
// Also listen for HTTP
var port = 8000;
app.listen(port, function(){
console.log('running at localhost: '+port);
});
When I run the curl command, the HTTP request is served correctly; only HTTPS has the problem.
Moreover, if I do this:
export CURL_CA_BUNDLE=/var/www/html/node_app/server/security-certificate/cert.p12
Then I get following error:
curl: (77) Problem with the SSL CA cert (path? access rights?)
If I try to access it in a browser over HTTPS with that port, the browser says it could not load the page.
Reference links I followed:
Node.js HTTPS:
https://nodejs.org/dist/latest-v8.x/docs/api/https.html#https_https_createserver_options_requestlistener
I'm using Red Hat Linux on AWS.
So far I don't know the solution to the problem posted above with my .p12 bundle certificate (used in my Node.js configuration).
However, I noticed that when I changed the code to use the .pem certificate and key, it worked correctly with the curl -k <MY-URL> command:
const options = {
    cert: fs.readFileSync('./server/security-certificate/cert.pem'),
    key: fs.readFileSync('./server/security-certificate/key.pem'),
    //pfs: fs.readFileSync('./server/security-certificate/cert.p12'), // didn't work
    passphrase: 'secrete'
};
https.createServer(options, app).listen(8443);
If anyone knows a better solution/answer, please post it. So far, I'm not sure why the .p12 certificate does not work. Should I rename it to .pfx (what is the difference, and what effect would it have)?
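One likely culprit, for what it's worth: Node's https/tls options expect a PKCS#12 bundle under the key pfx, not pfs, and unknown option keys are silently ignored. The snippet above therefore loads no certificate at all, which would produce exactly a "no common encryption algorithm(s)" handshake failure. A minimal sketch with the same paths as above:
var https = require('https');
var fs = require('fs');

var options = {
    // Node reads PKCS#12/PFX bundles via the `pfx` option (note the spelling).
    pfx: fs.readFileSync('./server/security-certificate/certificate.p12'),
    passphrase: 'secrete2'
};

// Trivial handler standing in for the Express app from the question.
var app = function (req, res) {
    res.writeHead(200);
    res.end('ok\n');
};

https.createServer(options, app).listen(8443);
This also answers the renaming question: .p12 and .pfx are the same PKCS#12 format, so renaming the file changes nothing.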

AWS Load Balancer, enable listener for HTTPS and route to 80

I have an AWS load balancer in front of two servers running SailsJS with PM2. The LB works very well and routes incoming HTTP requests to the servers, which is perfect.
Now, I need to add support for HTTPS, so I followed this guide:
AWS Create a Classic Load Balancer with an HTTPS Listener, using a self-generated SSL certificate, and used this configuration for the ports
LB Port 80 - Instance 80
LB Port 443 - Instance 80
And the security group has these ports opened:
22,
80,
443
So, if I understood correctly, the LB will receive the HTTPS request on port 443 and will forward it to port 80 of the instance. My instance, of course, is listening on port 80.
The problem is, this doesn't work! I can make HTTP requests to the LB and everything is routed perfectly to the Sails instances, and the response is perfect. But if I use exactly the same URL with HTTPS, it doesn't work and I get an "ERR_SSL_PROTOCOL_ERROR".
What am I doing wrong, what am I missing?
Thank you!
EDIT 1
This is what I get if I try curl -v https://example.com
* Trying xx.xx.xx.xx...
* Connected to example.com (xx.xx.xx.xx) port 443 (#0)
* Unknown SSL protocol error in connection to example.com:-9838
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to mydomain.com:-9838
EDIT 2
Found another thread which suggested a different way of creating the certificate. So I tried it, but now AWS won't even accept the private key and the certificate:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
EDIT 3
OK, so I found more info on why I couldn't upload the certificates to AWS, and after a few tries I managed to upload one and use it.
After this, it appears to be working (with warnings that it is not a valid cert and so on, which is expected):
* Trying XXX.XXX.XXX.XXX...
* Connected to example.com (XXX.XXX.XXX.XXX) port 443 (#0)
* SSL certificate problem: Invalid certificate chain
* Closing connection 0
curl: (60) SSL certificate problem: Invalid certificate chain
So it appears, as @MarkB suggested, that the certificate was wrong. Using the info found in EDIT 2, I created a new one and uploaded it (per EDIT 3), and it appears to be working.
I'll perform more tests to make 100% sure that this works and will report back soon.
OK, so the problem was a wrongly generated certificate. I used the first method I found and it wasn't working, so use these commands instead:
openssl genrsa -out client-key.pem 2048
openssl req -new -key client-key.pem -out client.csr
openssl x509 -req -in client.csr -signkey client-key.pem -out client-cert.pem
Even after that, AWS told me:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
But the last error is misleading: even with the error, the certificate WAS added to ACM, and I used it in my HTTPS 443 listener, tested the services again, and everything worked. So just create the certificate with the instructions above and import it into ACM; if it gives you an error like the one above, ignore it, because your cert will be in place and ready to use.
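If you prefer to script the import rather than use the console, here is an AWS CLI sketch using the filenames from the openssl commands above (this assumes the CLI is configured with the right permissions):
aws acm import-certificate \
    --certificate fileb://client-cert.pem \
    --private-key fileb://client-key.pem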
Hope this helps others!

Use Docker registry with SSL certificate without IP SANs

I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and ran into some problems:
I created SSL certificates with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used it in nginx like this:
upstream docker-registry {
    server localhost:5000;
}

server {
    listen 443;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    ssl on;
    ssl_certificate /var/certs/certificate.crt;
    ssl_certificate_key /var/certs/private.key;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
        proxy_pass http://XX.XX.XX.XX;
    }
}
Nginx and the registry both start and run. I can visit my server in my browser, which presents a warning about my SSL certificate (so nginx runs and finds the SSL certificate), and when I enter my credentials I can see a ping message from the Docker registry (so the registry is also running).
But when I try to log in via Docker, I get the following error:
vagrant@ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that I have no IP address of the server in my certificate, but is it possible to use the Docker client and ignore the missing IP?
EDIT:
If I use a certificate with the IP of the server, it works. But is there any chance to use an SSL certificate without the IP?
It's a Go issue. Actually it's a spec issue: Go refuses to follow the industry hack of matching an IP address against the certificate's Common Name, and that's why it's not working. See https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
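If you control the certificate, the cleaner fix is to put the server's IP in the subject alternative names so Go can validate it. A sketch using OpenSSL 1.1.1+ (substitute your real address for XX.XX.XX.XX):
openssl req -x509 -batch -nodes -newkey rsa:2048 \
    -keyout private.key -out certificate.crt \
    -subj "/CN=XX.XX.XX.XX" \
    -addext "subjectAltName = IP:XX.XX.XX.XX"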

How to use HTTPS with Node.js

I have little experience with HTTPS, SSL, etc.
I want to know how to use Node.js with HTTPS. I know how to use Node.js fine, but when using HTTPS it gives errors.
I think I need to install something (OpenSSL?). I would like to be told ALL the things I have to install on a Windows 8.1 computer (no, I do not want to get any form of Linux, and no Cygwin either) in order to run a Node.js HTTPS server.
I do not need to have a paid certificate, I just need to have it work. It's not receiving requests from a browser, so I don't care about a paid certificate.
Once you have node.js installed on your system, just follow the procedure below to get a basic web server running with support for both HTTP and HTTPS!
Step 1: Build a Certificate Authority
create the folder where you want to store your key & certificate:
mkdir conf
go to that directory:
cd conf
grab this ca.cnf file to use as a configuration shortcut:
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/ca.cnf
create a new certificate authority using this configuration:
openssl req -new -x509 -days 9999 -config ca.cnf -keyout ca-key.pem -out ca-cert.pem
now that we have our certificate authority in ca-key.pem and ca-cert.pem, let's generate a private key for the server:
openssl genrsa -out key.pem 4096
grab this server.cnf file to use as a configuration shortcut:
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/server.cnf
generate the certificate signing request using this configuration:
openssl req -new -config server.cnf -key key.pem -out csr.pem
sign the request:
openssl x509 -req -extfile server.cnf -days 999 -passin "pass:password" -in csr.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem
Step 2: Install your certificate as a root certificate
copy your certificate to your root certificates' folder:
sudo cp ca-cert.pem /usr/local/share/ca-certificates/ca-cert.pem
update the CA store:
sudo update-ca-certificates
Step 3: Starting your node server
First, make sure your server.js looks something like this:
var http = require('http');
var https = require('https');
var fs = require('fs');

// key.pem and cert.pem are the server key and certificate from step 1.
var httpsOptions = {
    key: fs.readFileSync('./conf/key.pem'),
    cert: fs.readFileSync('./conf/cert.pem')
};

var app = function (req, res) {
    res.writeHead(200);
    res.end("hello world\n");
};

http.createServer(app).listen(8888);
https.createServer(httpsOptions, app).listen(4433);
go to the directory where your server.js is located:
cd /path/to
run server.js:
node server.js
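To check the HTTPS side, you can hand curl the CA certificate from step 1 instead of disabling verification; this assumes the names in server.cnf match the host you connect to, and the path to ca-cert.pem depends on where you created the conf folder:
curl --cacert conf/ca-cert.pem https://localhost:4433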
2022 Answer
Get your node.js server working with HTTP on port 80
Use DNS to map <YourWebsite.com> to your server
Use https://certbot.eff.org to upgrade your server to HTTPS
In step 3 you download and run the Certbot app on your server. Certbot asks for "YourWebsite.com". Then it issues you with a new HTTPS certificate and patches your server config files to use the HTTPS certificate.
For example, my node server was running in AWS EC2 listening on port 3000. I found Ubuntu was easier to configure than Amazon's own Linux. I used AWS Route53 to map a domain name to my EC2 instance with a static Elastic IP address. I had installed Nginx in EC2 to map clients' port 80 requests to my server on port 3000. The Certbot automatically patched the Nginx config files to use the new HTTPS certificates.
Certbot is easy. This is because Certbot runs on your server, so the HTTPS certification authority (LetsEncrypt) can verify that you control the domain name by talking over the internet to Certbot.
