I'm trying to password-protect the Spark web UI of my Spark cluster. I've looked at the security doc. The Spark docs usually have plenty of examples of how to do things, but for some reason none is provided in this case. I don't feel comfortable enough writing my own javax servlet filter, nor wiring it up properly to whatever it is supposed to be connected to.
So I've tried protecting it with an nginx basic-auth (htpasswd) setup - this would be plenty for my purpose. Unfortunately, when I run the cluster, Spark avoids port 8080 and switches to 8081, saying that 8080 is not available (presumably because nginx is already bound to it).
Has anyone tried to password-protect a Spark web UI?
Disclaimer: This is an extremely naive approach and you shouldn't depend on it in a production environment. Moreover, I assume you don't otherwise use this Nginx instance and that you have access to the standard ports (80/443).
Configure Spark to use a port of your choice. You can use the SPARK_MASTER_WEBUI_PORT variable. Below I assume it is 8080.
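For example, you can set it in conf/spark-env.sh before starting the master (8080 is also the default; shown here for completeness):

export SPARK_MASTER_WEBUI_PORT=8080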
Generate self-signed certificates for your server. You can find multiple good resources on how to do it, so just to make this answer complete, let's use the example from the Linode guide:
openssl req -new -x509 -sha256 -days 365 -nodes -out /path/to/nginx.pem -keyout /path/to/nginx.key
Make sure the key has limited access rights:
chmod 400 /path/to/nginx.key
Generate an htpasswd file:
htpasswd -b -c /path/to/passwdfile username password
Remove the default configuration from nginx/sites-enabled.
Create a simple reverse proxy configuration and add it to nginx/sites-enabled:
server {
    # Adjust port number if you cannot use ports below 1024
    listen 443 ssl;

    ssl_certificate /path/to/nginx.pem;
    ssl_certificate_key /path/to/nginx.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;
        auth_basic "closed site";
        auth_basic_user_file /path/to/passwdfile;
    }
}
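After adding the file, validate the configuration and reload Nginx (standard nginx commands):

sudo nginx -t
sudo nginx -s reload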
Configure your system to reject remote connections to the web UI port.
For this setup to work, the web UI still has to be accessible from localhost, so anyone with shell access to your master can reach the web UI directly.
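How you do that depends on your firewall; as a sketch with iptables (assuming the web UI port from step 1):

iptables -A INPUT -p tcp --dport 8080 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP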
Related
When I want to access Node-RED via browser I type 192.168.0.24:1880/ui/.
Now I just want to reach the Node-RED site via a local domain, something like website.test.
I have already changed the port from 1880 to 80.
I also edited the /etc/hosts file to add -> 192.168.0.24 website.test
But when I test it, I can't access the Node-RED website with this domain.
Does anyone know how to accomplish this?
First, binding Node-RED to port 80 will require you to run it as root on the Pi. This is a VERY BAD idea unless you 100% understand the consequences, as it opens up a LOT of potential security issues.
A better solution is to bind Node-RED to 127.0.0.1 so it only listens on localhost, then use something like nginx to proxy it on port 80.
You can change the bind address by uncommenting the uiHost line at the top of the settings.js file in the userDir (which is normally /home/pi/.node-red).
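A minimal sketch of that edit (the line already exists, commented out, in the default settings.js):

// /home/pi/.node-red/settings.js
uiHost: "127.0.0.1",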
A basic nginx config would look like this:
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:1880;

        # Defines the HTTP protocol version for proxying;
        # by default it is set to 1.0.
        # For WebSockets and keepalive connections you need to use version 1.1
        proxy_http_version 1.1;

        # Sets conditions under which the response will not be taken from a cache
        proxy_cache_bypass $http_upgrade;

        # These header fields are required if your application is using WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The $host variable contains, in order of precedence: the hostname from
        # the request line, the hostname from the Host request header field,
        # or the server name matching the request
        proxy_set_header Host $host;
    }
}
Second, editing the /etc/hosts file on the Pi will only map a hostname for processes running on the Pi, not anywhere else (e.g. on your PC).
If your other machine is another Linux machine or a Mac then you can probably use mDNS to access the Pi, which by default will be found at raspberrypi.local. Unfortunately Windows doesn't support mDNS, so that won't work (you may be able to add support by installing some printer packages from Apple).
You can edit the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts) but this isn't a great solution as you will need to do it to every machine that wants to access the Node-RED instance.
Other options include adding an entry on your local router or running your own DNS server, but both of those options are far too complicated to get into here.
I'm currently configuring two Raspberry Pis on my home network. One serves data from sensors on a Node server to the second Pi (a webserver, also running on Node). Both of them are behind an nginx proxy. After a lot of configuring and searching I found a working solution. The webserver is using Dataplicity to make it accessible from the web. I don't use Dataplicity on the second Pi (the server of sensor data):
server {
    listen 80;
    server_name *ip-address*;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
    }
}
server {
    listen 443 ssl;
    server_name *ip-address*;

    ssl on;
    ssl_certificate /var/www/cert.pem;
    ssl_certificate_key /var/www/key.pem;

    location / {
        add_header Access-Control-Allow-Origin *;
        proxy_pass http://127.0.0.1:3000;
    }
}
This config works, however ONLY on my computer. From other computers I get ERR_INSECURE_RESPONSE when trying to access the API with an AJAX request. The certificate is self-signed. Help is much appreciated.
EDIT:
Still no fix for this problem. I signed up for Dataplicity for my second device as well. This fixed my problem, but it now runs through a third party. I will look into this in the future. So if anyone has an answer to this, please do tell.
It seems that your certificates aren't correct; is the root certificate missing? (It can work on your computer if you have already accepted the insecure certificate in your browser.)
Check that your certificates are good; the following commands must all give the same result:
openssl x509 -noout -modulus -in mycert.crt | openssl md5
openssl rsa -noout -modulus -in mycert.key | openssl md5
openssl x509 -noout -modulus -in mycert.pem | openssl md5
If one output differs from the others, the certificate was generated incorrectly.
You can also check it directly on your computer with curl:
curl -v -i https://yourwebsite
If the top of the output shows an insecure warning, the certificate was generated incorrectly.
The post above looks about right.
The certificate and/or SSL handshake is being rejected by your client.
This could be a few things, assuming the certificates themselves are publicly signed (they probably are not).
Date and time mismatch is possible (certificates are sensitive to the system clock).
If your certs are self-signed, you'll need to make sure your remote device is configured to accept your private root certificate.
Lastly, you might need to configure your server to use only modern encryption methods. Your client may be rejecting some older methods if it has been updated since the POODLE attacks.
This post should let you create a certificate https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04, though I think you've already made it this far.
This post https://unix.stackexchange.com/questions/90450/adding-a-self-signed-certificate-to-the-trusted-list will let you add your new private root cert to the trusted list on your client.
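On a Debian/Ubuntu client that might look like the following sketch (path and file names are assumptions; cert.pem is the certificate from the question's config):

sudo cp /var/www/cert.pem /usr/local/share/ca-certificates/my-root-cert.crt
sudo update-ca-certificates

Browsers that maintain their own trust store (e.g. Firefox) need the certificate imported separately.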
And finally, this is the recommended SSL config in Ubuntu (sourced from here: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-on-ubuntu-14-04).
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
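The ssl_dhparam file referenced above is not created automatically; you can generate it yourself (a standard openssl command; make sure the output path matches the directive):

sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048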
Or if you get really stuck, just PM me your account details and I'll put a second free device on your Dataplicity account :)
Cool project, keen to help out.
Dataplicity Wormhole redirects a service listening on port 80 on the device to a public URL in the form https://*.dataplicity.io, and puts a dataplicity certificate in front. Due to the way HTTPS works, the port being redirected via dataplicity cannot use HTTPS, as it would mean we are unable to forward the traffic via the dataplicity.io domain. The tunnel from your device to Dataplicity is encrypted anyway.
Is there a reason you prefer not to run Dataplicity on the second Pi? While you can run a webserver locally of course, this would be a lot easier and more portable across networks if you just installed a second instance of Dataplicity on your second device...
How do you create a self-signed SSL certificate to use on a local server on Mac 10.9?
I need my localhost served as https://localhost
I am using the LinkedIn API. The feature which requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn will send the client a bearer token after the client authorises my app to access their data. The built-in JavaScript library by LinkedIn will automatically send this cookie to my server / backend, where its JSON payload is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not HTTPS.
A quick and easy solution that works in dev/prod mode, using http-proxy on top of your app.
1) Add in the tarang:ssl package
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app under /private, e.g. /private/key.pem and /private/cert.pem.
Then in your /server code:
Meteor.startup(function() {
  SSLProxy({
    port: 6000, // or 443 (normal port / requires sudo)
    ssl: {
      key: Assets.getText("key.pem"),
      cert: Assets.getText("cert.pem"),
      // Optional CA
      // ca: Assets.getText("ca.pem")
    }
  });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your ports with https and http, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
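Or, as a one-line sketch matching the file names used in the example above:

openssl req -new -x509 -sha256 -days 365 -nodes -keyout key.pem -out cert.pem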
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too. This is because it's tedious to have to switch certs between dev and prod.
Or you could just use ngrok to port-forward :)
1) start your server (e.g. at localhost:3000)
2) start ngrok from the command line: ./ngrok http 3000
That should give you http and https URLs to access from any device.
Another solution is to use NGINX. The following steps were tested on Mac El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine :
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your DNS cache: dscacheutil -flushcache
Now you should be able to reach your local website with http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/
3. Install nginx and configure it to map https traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and get an Nginx message.
That is the default conf, so now you have to set up the https conf.
Edit your conf file:
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point to the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Replace the location section with this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
Restart nginx:
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com
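To double-check from the terminal, curl can hit the new host (-k skips verification, since the certificate is self-signed):

curl -vk https://dev.yourdomain.com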
I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and ran into some problems:
I created SSL certificates with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used them in nginx like this:
upstream docker-registry {
    server localhost:5000;
}

server {
    listen 443;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    ssl on;
    ssl_certificate /var/certs/certificate.crt;
    ssl_certificate_key /var/certs/private.key;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
        proxy_pass http://XX.XX.XX.XX;
    }
}
Both nginx and the registry start and run fine. I can visit my server in my browser, which presents a warning about my SSL certificate (so nginx is running and finds the certificate), and when I enter my credentials I see a ping message from the Docker registry (so the registry is also running).
But when I try to log in via Docker I get the following error:
vagrant#ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that I have no IP address of the server in my certificate, but is it possible to use the Docker client and ignore the missing IP?
EDIT:
If I use a certificate with the IP of the server, it works. But is there any chance to use an SSL certificate without the IP?
It's a Go issue. Go refuses to follow the industry hack of validating an IP address against the certificate's Common Name, which is why it's not working. See this thread: https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
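If you control the certificate, the usual fix is to bake the server's IP into it as a subjectAltName. A sketch with a modern OpenSSL (1.1.1+ for -addext; replace XX.XX.XX.XX with your server address):

openssl req -x509 -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt -subj "/CN=XX.XX.XX.XX" -addext "subjectAltName=IP:XX.XX.XX.XX"

This matches the EDIT above: a certificate that contains the server's IP is accepted.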
I have Varnish load balancing three front-end Rails servers, with Nginx acting as a reverse proxy for FastCGI workers. Yesterday our certificate expired, so I got a new certificate from GoDaddy and installed it. When accessing static resources directly I see the updated certificate, but when accessing them from a "virtual subdomain" I'm seeing the old certificate. My nginx config only cites my new chained certificate, so I'm wondering where the old certificate is coming from. I've even removed it from the directory.
example:
https://www212.doostang.com/javascripts/base_packaged.js?1331831461 (no certificate problem with SSL)
https://asset5.doostang.com/javascripts/base_packaged.js?1331831461 (the old certificate is being used!) (maps to www212.doostang.com)
I've reloaded and even stopped and restarted nginx, tested it to make sure it's reading the right config, and restarted Varnish with a new cache file.
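For reference, one quick way to see exactly which certificate a given hostname is serving (a generic openssl check, not from the original post):

echo | openssl s_client -connect asset5.doostang.com:443 2>/dev/null | openssl x509 -noout -subject -enddate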
When I curl the file at asset5.doostang.com I get a certificate error:
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
When I add the -k option, I get the file requested, and I can see it in my nginx access log. I don't get an nginx error when I don't provide the -k; nginx is silent about the certificate error.
10.99.110.27 - - [20/Apr/2012:18:02:52 -0700] "GET /javascripts/base_packaged.js?1331831461 HTTP/1.0" 200 5740 "-"
"curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o
zlib/1.2.3.4 libidn/1.18"
I've put what I think is the relevant part of the nginx config below:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;

    server_name www.doostang.com *.doostang.com;

    passenger_enabled on;
    rails_env production;

    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    ssl_protocols SSLv3;

    # doc root
    root /.../public/files;

    if ($host = 'doostang.com') {
        rewrite ^/(.*)$ https://www.doostang.com/$1 permanent;
    }
}
# Catchall redirect
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;

    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;

    rewrite ^(.*)$ https://www.doostang.com$1;
}
Ba dum ching. My non-standardized load balancer actually had nginx running for SSL termination. I failed to notice this, but I think I did everything else correctly. Point being, when you take over operations upon acquisition, standardize and document! There are some really odd engineers out there :)