I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and ran into some problems:
I created SSL certificates with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used them in nginx like this:
upstream docker-registry {
    server localhost:5000;
}

server {
    listen 443;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    ssl on;
    ssl_certificate /var/certs/certificate.crt;
    ssl_certificate_key /var/certs/private.key;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
        proxy_pass http://XX.XX.XX.XX;
    }
}
Both nginx and the registry start and run. I can visit my server in the browser, which presents a warning about my SSL certificate (so nginx is running and finds the certificate), and when I enter my credentials I see a ping message from the Docker registry (so the registry is also running).
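For reference, the same ping check can be reproduced from the command line; a sketch (-k skips certificate verification, and the credentials are placeholders):

curl -k -u XXX:mypassword https://XX.XX.XX.XX/v1/_ping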
But when I try to login via Docker I get the following error:
vagrant@ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that the certificate contains no IP address for the server, but is it possible to make the Docker client ignore the missing IP?
EDIT:
If I use a certificate with the IP of the server, it works. But is there any way to use an SSL certificate without the IP?
It's a Go issue. Strictly speaking it's a spec-compliance issue: Go refuses to follow the industry hack of validating an IP address against the certificate's Common Name, which is why this isn't working. See this thread: https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
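If you do end up regenerating the certificate with the IP baked in as a subject alternative name, a minimal sketch looks like this (XX.XX.XX.XX stands in for your server's address as above; the -addext flag requires OpenSSL 1.1.1 or later, while older versions need an extensions config file instead):

openssl req -x509 -batch -nodes -newkey rsa:2048 \
  -keyout private.key -out certificate.crt \
  -addext "subjectAltName = IP:XX.XX.XX.XX"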
Related
I want to deploy my SvelteKit app on an HTTPS server. I also have the key files. This is my svelte.config.js file:
import preprocess from 'svelte-preprocess';
import node from '@sveltejs/adapter-node';
import fs from 'fs';
import https from 'https';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    // Consult https://github.com/sveltejs/svelte-preprocess
    // for more information about preprocessors
    preprocess: preprocess(),

    kit: {
        // hydrate the <div id="svelte"> element in src/app.html
        target: '#svelte',
        adapter: node(),
        files: { assets: "static" },
        vite: {
            server: {
                https: {
                    key: fs.readFileSync("path\\privkey.pem"),
                    cert: fs.readFileSync("path\\cert.pem"),
                },
            },
        }
    }
};

export default config;
Where should I keep my key files so they can be read from the config file? I tried a few locations and got some errors; a screenshot is attached.
Can someone please guide me? Thanks in advance.
I solved it from @sveltejs/adapter-node/README.md#Custom server:
The adapter creates two files in your build directory — index.js and handler.js. Running index.js — e.g. node build, if you use the default build directory — will start a server on the configured port.
Alternatively, you can import the handler.js file, which exports a handler suitable for use with Express, Connect or Polka (or even just the built-in http.createServer) and set up your own server.
Leave svelte.config.js at its defaults. Run npm run build to generate the build folder, then create a server.js in the project root like the one below; run node server.js and it's up:
import { handler } from './build/handler.js';
import express from 'express';
import fs from 'fs';
import http from 'http';
import https from 'https';

const privateKey = fs.readFileSync('./config/ssl/xx.site.key', 'utf8');
const certificate = fs.readFileSync('./config/ssl/xx.site.crt', 'utf8');
const credentials = { key: privateKey, cert: certificate };

const app = express();
const httpServer = http.createServer(app);
const httpsServer = https.createServer(credentials, app);

const PORT = 80;
const SSLPORT = 443;

httpServer.listen(PORT, function () {
    console.log('HTTP Server is running on: http://localhost:%s', PORT);
});
httpsServer.listen(SSLPORT, function () {
    console.log('HTTPS Server is running on: https://localhost:%s', SSLPORT);
});

// add a route that lives separately from the SvelteKit app
app.get('/healthcheck', (req, res) => {
    res.end('ok');
});

// let SvelteKit handle everything else, including serving prerendered pages and static assets
app.use(handler);
I'm going to explain a solution to my somewhat related problem here, all according to my best understanding, which is limited.
My solution for setting up a trusted SvelteKit dev server (running in a private subnet without DNS) was to configure an Nginx reverse proxy that acts as a trusted HTTPS middleman between the Vite server (running in plain HTTP mode) that is bundled with SvelteKit, and the clients (like my Android phone).
I found the most useful guidance from the following resources:
How to use Nginx as a Reverse proxy for HTTPS and WSS - Self Signed Certificates and Trusted Certificates
HMR clientPort option ignored in normal mode
A: Getting Chrome to accept self-signed localhost certificate
How to generate a self-signed SSL certificate for an IP address
The main steps to the solution were:
Become a local certificate authority and register the authority in your clients (like in the Chrome browser on the desktop, or in the Credential storage of an Android phone).
Being a certificate authority, sign an x509 certificate for the IP address (subjectAltName) of the dev server in the local network.
Set up an Nginx HTTPS reverse proxy (proxy_pass etc.) to forward traffic to the Vite server (typically running on port 3000). Assign the created certificate and key for its use. Also add WebSocket support as explained in the setup guide linked above.
Declare kit.vite.server.hmr.port = <port of the Nginx proxy> in svelte.config.js. This is important so that the SvelteKit middleware (?) does not try to bypass the proxy.
Relevant snippets from my configuration:
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 10000 -out myCA.pem
openssl genrsa -out 192.168.22.88.key 2048
openssl req -new -key 192.168.22.88.key -out 192.168.22.88.csr
>192.168.22.88.ext cat <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.22.88
IP.2 = 127.0.0.1
DNS.1 = localhost
DNS.2 = localhost.local
EOF
openssl x509 -req -in 192.168.22.88.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out 192.168.22.88.crt -days 10000 -sha256 -extfile 192.168.22.88.ext
openssl dhparam -out dhparam.pem 2048
server {
    listen 2200 http2 ssl default_server;
    listen [::]:2200 http2 ssl default_server;

    ssl_certificate /etc/nginx/ssl/192.168.22.88.crt;
    ssl_certificate_key /etc/nginx/ssl/192.168.22.88.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    index index.html;
    server_name 192.168.22.88;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:3000;
        proxy_read_timeout 90;
        proxy_redirect http://127.0.0.1:3000 https://192.168.22.88:2200;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
kit: {
    // hydrate the <div id="svelte"> element in src/app.html
    target: '#svelte',
    adapter: adapter(),
    vite: {
        server: {
            hmr: {
                port: 2200,
            }
        }
    }
}
pnpm run dev
Build the product and then serve it using nginx or lighttpd. Don't use HMR; it really slows down web performance because the module keeps checking for file changes. Then apply HTTPS to lighttpd or your nginx.
If that's too much, use npm run preview.
I am running nginx in Docker over SSL. When I try to access it by URL, I get the error below:
root@54a843786818:/# curl --location --request POST 'https://10.1.1.100/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "username": "testuser",
> "password": "testpassword"
> }'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
It works with certificate checking disabled (-k):
curl -k --location --request POST 'https://10.1.1.100/login' --header 'Content-Type: application/json' --data-raw '{
"username": "testuser",
"password": "testpassword"
}'
{"access_token": "xxxxxxxxxxxxxxxxxxxxxxxkkkkkkkkkkkkkkkkkkkk", "refresh_token": "qqqqqqqqqoooooooooxxxx"}
My config file:
root@54a843786818:/# cat /etc/nginx/sites-enabled/api.conf
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /root/certs/my_hostname.my.domain.name.com.pem;
    ssl_certificate_key /root/certs/my_hostname.my.domain.name.com.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_pass http://10.1.1.100:5000;
        proxy_redirect off;
    }
}
I suspect something is wrong with my certificate setup. Below are the exact steps I followed:
1) Took the private key and removed its password using the command below:
# openssl rsa -in my_hostname.my.domain.name.com_password_ask.key -out my_hostname.my.domain.name.com.key
2) Converted the .crt file to .pem:
# openssl x509 -in my_hostname.my.domain.name.com.crt -out my_hostname.my.domain.name.com.pem -outform PEM
3) Copied the .pem and .key files into /root/certs on the nginx Docker container (using cat and the vim editor).
4) Verified that the private and public keys match; the commands used are below:
root@54a843786818:~/certs# openssl rsa -noout -modulus -in my_hostname.my.domain.name.com.key | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
root@54a843786818:~/certs# openssl x509 -noout -modulus -in my_hostname.my.domain.name.com.pem | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
I got the certs below separately. I am not sure whether I need to bundle them; if yes, what is the command?
1) Certificate.pem
2) private_key
3) ca_intermediate_certificate.pem
4) ca_trusted_root
Can someone help me fix this issue? I am not sure what I am doing wrong. Is there a way I can validate my certificates and check that they can serve HTTPS? Or, beyond the certificates, is there any issue with the config or setup?
An SSL/TLS server, including HTTPS, needs to send the certificate chain, optionally excluding the root cert. Assuming your filenames are not actively perverse, you have a chain of 3 certs (server, intermediate, and root) and the server must send at least the entity cert and the 'ca_intermediate' cert; it may or may not include the 'trusted_root'.
In nginx this is done by concatenating the certs into one file; see the overview documentation which links to the specific directive ssl_certificate.
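With the files listed in the question, that concatenation could look something like this (a sketch; the bundle filename is arbitrary, and the server cert must come first, followed by the intermediate):

cat Certificate.pem ca_intermediate_certificate.pem > bundle.pem
# then in nginx:
#   ssl_certificate     /root/certs/bundle.pem;
#   ssl_certificate_key /root/certs/my_hostname.my.domain.name.com.key;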
Also, the root cert for your server's cert must be in the truststore of the client (every client if more than one). If that root cert is one of the established public CAs (like Digicert, GoDaddy, LetsEncrypt/ISRG) it will normally be already in the default truststores of most clients, usually including curl -- although curl's default can vary depending on how it was built; run curl -V (upper-vee) to see which SSL/TLS implementation it uses. If the root cert is a CA run by your company, the company's sysadmins will usually add it to the truststores on company systems, but if you are using a system that they don't know about or wasn't properly acquired and managed it may not have been set up correctly. If you need curl to accept a root cert that isn't in its default truststore, see the --cacert option on the man page, either on your system if Unixy or on the web. Other clients are different, but you didn't mention any.
Finally, as discussed in comments, the hostname you use in the URL must match the identity(ies) specified in the cert, and certificates are normally issued using only the domain name(s) of the server(s), not the IP address(es). (It is technically possible to have a cert for an IP address, or several, but by cabforum policy public CAs, if they issue certs for addresses at all, must not do so for private addresses such as yours -- 10.0.0.0/8 is one of the private ranges in RFC 1918. A company-run CA might certify such private addresses or it might not.) If your cert specifies domain name(s), you must use that name or one of those names as the host part of your URL; if you don't have your DNS or hosts file set up to resolve that domain name correctly to the host address, you can use curl option --resolve (also on the man page) to override.
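Putting --cacert and --resolve together, a verified request against this setup might look like the following sketch (the hostname is the placeholder from the nginx config; ca_trusted_root is the root cert file listed in the question):

curl --cacert ca_trusted_root \
     --resolve my_hostname.my.domain.name.com:443:10.1.1.100 \
     --location --request POST 'https://my_hostname.my.domain.name.com/login' \
     --header 'Content-Type: application/json' \
     --data-raw '{"username": "testuser", "password": "testpassword"}'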
I'm currently configuring two Raspberry Pis on my home network. One serves data from sensors on a Node server to the second Pi (a webserver, also running on Node). Both of them are behind an nginx proxy. After a lot of configuring and searching I found a working solution. The webserver uses Dataplicity to make it accessible from the web. I don't use Dataplicity on the second Pi (the server of sensor data):
server {
    listen 80;
    server_name *ip-address*;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
    }
}

server {
    listen 443 ssl;
    server_name *ip-address*;

    ssl on;
    ssl_certificate /var/www/cert.pem;
    ssl_certificate_key /var/www/key.pem;

    location / {
        add_header "Access-Control-Allow-Origin" *;
        proxy_pass http://127.0.0.1:3000;
    }
}
This config works, however ONLY on my computer. From other computers I get ERR_INSECURE_RESPONSE when trying to access the API with an ajax request. The certificates are self-signed. Help is much appreciated.
EDIT:
Still no fix for this problem. I signed up for Dataplicity for my second device as well. This fixed my problem, but it now runs through a third party. I will look into this in the future. So if anyone has an answer to this, please do tell.
It seems that your certificates aren't correct; is the root certificate missing? (It can work on your computer if you have already accepted the insecure certificate in your browser.)
Check that your certificates are good; the following commands must all give the same result:
openssl x509 -noout -modulus -in mycert.crt | openssl md5
openssl rsa -noout -modulus -in mycert.key | openssl md5
openssl x509 -noout -modulus -in mycert.pem | openssl md5
If one output differs from the others, the certificate was generated incorrectly.
You can also check it directly on your computer with curl:
curl -v -i https://yourwebsite
If the top of the output shows an insecure warning, the certificate was generated incorrectly.
The post above looks about right.
The certificates and/or SSL handshake are being rejected by your client.
This could be a few things, assuming the certificates themselves are publicly signed (they probably are not).
A date and time mismatch is possible (certificate validation is sensitive to the system clock).
If your certs are self-signed, you'll need to make sure your remote device is configured to accept your private root certificate.
Lastly, you might need to configure your server to use only modern encryption methods. Your client may be rejecting some older methods if it has been updated since the POODLE attacks.
This post should let you create a certificate: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04, though I think you've already made it this far.
This post https://unix.stackexchange.com/questions/90450/adding-a-self-signed-certificate-to-the-trusted-list will let you add your new private root cert to the trusted list on your client.
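On a Debian/Ubuntu client, for example, that boils down to something like this sketch (the source path matches the cert from your nginx config; the target filename is arbitrary but must end in .crt, and other OSes use different mechanisms):

# copy the self-signed cert into the system trust store and rebuild it
sudo cp /var/www/cert.pem /usr/local/share/ca-certificates/my-pi.crt
sudo update-ca-certificates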
And finally, here is the recommended SSL config on Ubuntu (sourced from https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-on-ubuntu-14-04):
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
Or if you get really stuck, just PM me your account details and I'll put a second free device on your Dataplicity account :)
Cool project, keen to help out.
Dataplicity Wormhole redirects a service listening on port 80 on the device to a public URL of the form https://*.dataplicity.io and puts a Dataplicity certificate in front. Due to the way HTTPS works, the port being redirected via Dataplicity cannot itself use HTTPS, as that would mean we were unable to forward the traffic via the dataplicity.io domain. The tunnel from your device to Dataplicity is encrypted anyway.
Is there a reason you prefer not to run Dataplicity on the second Pi? While you can run a webserver locally of course, this would be a lot easier and more portable across networks if you just installed a second instance of Dataplicity on your second device...
I'm trying to password protect the Spark web UI of my Spark cluster. I've looked at the security doc. Usually the Spark docs have many examples of how to do things, but for some reason none is provided in this case. I don't feel comfortable enough creating my own javax servlet filter, nor properly connecting it to whatever it is supposed to be connected to.
So I've tried protecting it with an nginx htaccess setup; this would be quite enough for my purpose. Unfortunately, when I run the cluster it avoids port 8080 and switches to 8081, saying that 8080 is not accessible.
Has anyone tried to password protect a spark web ui?
Disclaimer: This is an extremely naive approach and you shouldn't depend on it in a production environment. Moreover, I assume you don't otherwise use this instance of Nginx and that you have access to the standard ports (80/443).
Configure Spark to use a port of your choice. You can use the SPARK_MASTER_WEBUI_PORT variable. Below I assume it is 8080.
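For example (a sketch; conf/spark-env.sh is the standard place to set this variable):

# conf/spark-env.sh
export SPARK_MASTER_WEBUI_PORT=8080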
Generate self-signed certificates for your server. You can find multiple good resources on how to do this, so just to make this answer complete, let's use the example from the Linode guide:
openssl req -new -x509 -sha256 -days 365 -nodes -out /path/to/nginx.pem -keyout /path/to/nginx.key
Make sure that the key has limited access rights:
chmod 400 /path/to/nginx.key
Generate an htpasswd file:
htpasswd -b -c /path/to/passwdfile username password
Remove the default configuration from nginx/sites-enabled.
Create a simple reverse proxy configuration and add it to nginx/sites-enabled:
server {
    # Adjust the port number if you cannot use ports below 1024
    listen 443 ssl;

    ssl_certificate /path/to/nginx.pem;
    ssl_certificate_key /path/to/nginx.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;
        auth_basic "closed site";
        auth_basic_user_file /path/to/passwdfile;
    }
}
Configure your system to reject remote connections to the web UI port, as sketched below.
For this setup to work, the web UI has to remain accessible from localhost, so without a firewall rule everyone who has access to your master could reach the web UI directly.
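With iptables, that could look like this sketch (assuming the web UI port from step 1; rule order matters, the ACCEPT must come before the DROP):

# allow loopback traffic to the web UI (used by the Nginx proxy)
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8080 -j ACCEPT
# drop everything else aimed at that port
iptables -A INPUT -p tcp --dport 8080 -j DROP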
How do you create a self-signed SSL certificate to use on a local server on Mac 10.9?
I require my localhost to be served as https://localhost.
I am using the LinkedIn API. The feature which requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn will send the client a bearer token after the client authorises my app to access their data. The built-in JavaScript library by LinkedIn will automatically send this cookie to my server/backend, where the JSON info is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not HTTPS.
A quick and easy solution that works in dev/prod mode, using an http-proxy on top of your app:
1) Add in the tarang:ssl package:
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app's /private folder, e.g. /private/key.pem and /private/cert.pem.
Then in your /server code:
Meteor.startup(function() {
    SSLProxy({
        port: 6000, // or 443 (normal port, requires sudo)
        ssl: {
            key: Assets.getText("key.pem"),
            cert: Assets.getText("cert.pem"),
            // Optional CA:
            // ca: Assets.getText("ca.pem")
        }
    });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your ports between https and http, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too. This is because it's tedious to have to switch certs between dev and prod.
Or you could just use ngrok to port forward :)
1) Start your server (i.e. at localhost:3000)
2) Start ngrok from the command line: ./ngrok http 3000
That should give you http and https URLs to access from any device.
Another solution is to use NGINX. The following steps were tested on Mac El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine:
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your cache: dscacheutil -flushcache
Now you should be able to reach your local website at http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/
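In case that link goes away, the gist is something like this sketch (the -addext flag needs OpenSSL 1.1.1+, which is newer than the stock macOS openssl; the output paths should match the ssl_certificate lines in step 3):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nginx.key -out nginx.pem \
  -subj "/CN=dev.yourdomain.com" \
  -addext "subjectAltName=DNS:dev.yourdomain.com"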
3. Install nginx and configure it to map HTTPS traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and get the default nginx message.
This is the default conf, so now you have to set up the HTTPS conf:
Edit your conf file:
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point it to the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Change the location section to this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
Restart nginx :
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com