I want to deploy my SvelteKit app on an HTTPS server. I also have the key files. This is my svelte.config.js file:
import preprocess from 'svelte-preprocess';
import node from '@sveltejs/adapter-node';
import fs from 'fs';
import https from 'https';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    // Consult https://github.com/sveltejs/svelte-preprocess
    // for more information about preprocessors
    preprocess: preprocess(),
    kit: {
        // hydrate the <div id="svelte"> element in src/app.html
        target: '#svelte',
        adapter: node(),
        files: { assets: "static" },
        vite: {
            server: {
                https: {
                    key: fs.readFileSync("path\\privkey.pem"),
                    cert: fs.readFileSync("path\\cert.pem"),
                },
            },
        }
    }
};

export default config;
Where should I keep my key files so they can be read from the config file? I tried a few locations and got some errors (screenshot attached).
Could someone please guide me? Thanks in advance.
I solved it using @sveltejs/adapter-node/README.md#Custom server:
The adapter creates two files in your build directory — index.js and handler.js. Running index.js — e.g. node build, if you use the default build directory — will start a server on the configured port.
Alternatively, you can import the handler.js file, which exports a handler suitable for use with Express, Connect or Polka (or even just the built-in http.createServer) and set up your own server.
Leave svelte.config.js at its defaults (no changes), run npm run build to generate the build folder, then create a server.js in the project root like the one below and start it with node server.js:
import { handler } from './build/handler.js';
import express from 'express';
import fs from 'fs';
import http from 'http';
import https from 'https';

const privateKey = fs.readFileSync('./config/ssl/xx.site.key', 'utf8');
const certificate = fs.readFileSync('./config/ssl/xx.site.crt', 'utf8');
const credentials = { key: privateKey, cert: certificate };

const app = express();
const httpServer = http.createServer(app);
const httpsServer = https.createServer(credentials, app);
const PORT = 80;
const SSLPORT = 443;

httpServer.listen(PORT, function () {
    console.log('HTTP Server is running on: http://localhost:%s', PORT);
});
httpsServer.listen(SSLPORT, function () {
    console.log('HTTPS Server is running on: https://localhost:%s', SSLPORT);
});

// add a route that lives separately from the SvelteKit app
app.get('/healthcheck', (req, res) => {
    res.end('ok');
});

// let SvelteKit handle everything else, including serving prerendered pages and static assets
app.use(handler);
I'm going to explain a solution to my somewhat related problem here, all according to my best understanding, which is limited.
My solution for setting up a trusted SvelteKit dev server (running in a private subnet without DNS) was to configure an Nginx reverse proxy that acts as a trusted HTTPS middleman between the Vite server bundled with SvelteKit (running in plain HTTP mode) and the clients (like my Android phone).
I found the most useful guidance from the following resources:
How to use Nginx as a Reverse proxy for HTTPS and WSS - Self Signed Certificates and Trusted Certificates
HMR clientPort option ignored in normal mode
A: Getting Chrome to accept self-signed localhost certificate
How to generate a self-signed SSL certificate for an IP address
The main steps to the solution were:
Become a local certificate authority and register the authority in your clients (like in the Chrome browser on the desktop, or in the Credential storage of an Android phone).
Being a certificate authority, sign an x509 certificate for the IP address (subjectAltName) of the dev server in the local network.
Set up an Nginx HTTPS reverse proxy (proxy_pass etc.) to forward traffic to the Vite server (typically running on port 3000). Assign the created certificate and its key for its use. Also add WebSocket support as explained in the setup guide linked above.
Declare kit.vite.server.hmr.port = <port of the Nginx proxy> in svelte.config.js. This is important so that the SvelteKit middleware (?) does not try to bypass the proxy.
Relevant snippets from my configuration:
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 10000 -out myCA.pem
openssl genrsa -out 192.168.22.88.key 2048
openssl req -new -key 192.168.22.88.key -out 192.168.22.88.csr
cat > 192.168.22.88.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.22.88
IP.2 = 127.0.0.1
DNS.1 = localhost
DNS.2 = localhost.local
EOF
openssl x509 -req -in 192.168.22.88.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out 192.168.22.88.crt -days 10000 -sha256 -extfile 192.168.22.88.ext
openssl dhparam -out dhparam.pem 2048
server {
    listen 2200 http2 ssl default_server;
    listen [::]:2200 http2 ssl default_server;

    ssl_certificate /etc/nginx/ssl/192.168.22.88.crt;
    ssl_certificate_key /etc/nginx/ssl/192.168.22.88.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    index index.html;
    server_name 192.168.22.88;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:3000;
        proxy_read_timeout 90;
        proxy_redirect http://127.0.0.1:3000 https://192.168.22.88:2200;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
kit: {
    // hydrate the <div id="svelte"> element in src/app.html
    target: '#svelte',
    adapter: adapter(),
    vite: {
        server: {
            hmr: {
                port: 2200,
            }
        }
    }
}
pnpm run dev
Build the product and then serve it using nginx or lighttpd. Don't use HMR; it really slows down web performance because the module keeps checking for file changes. Then apply HTTPS in lighttpd or your nginx (see the sketch below).
If that is too much, use npm run preview.
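A minimal sketch of the build side, assuming adapter-node and its default build directory (ports and paths depend on your configuration; the HTTPS part lives in nginx/lighttpd as shown elsewhere in this thread):

# Production build instead of the HMR dev server
npm run build
node build          # runs the generated server, e.g. on port 3000

# Or preview the production build without a separate web server
npm run preview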
Related
I have an application written in Node.js that works over plain ws. I am trying to make it work with wss, but it is just not happening. I need wss because Chrome will not allow me to use the camera/microphone over an insecure connection.
I got a certificate, and HTTPS is working.
I also included this in my Node.js server app:
// Minimal secure WebSocket server
var fs = require('fs');
// read ssl certificate
var privateKey = fs.readFileSync('ssl-cert/privkey.pem', 'utf8');
var certificate = fs.readFileSync('ssl-cert/fullchain.pem', 'utf8');
var credentials = { key: privateKey, cert: certificate };
var https = require('https');
//pass in your credentials to create an https server
var httpsServer = https.createServer(credentials);
httpsServer.listen(9090);
var WebSocketServer = require('ws').Server;
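// (Presumably the missing piece, shown here as a sketch: attach the
// WebSocket server to the HTTPS server so wss:// connections reach it.
// The echo handler is illustrative only.)
var wss = new WebSocketServer({ server: httpsServer });
wss.on('connection', function (ws) {
    ws.on('message', function (message) {
        ws.send(message); // simple echo for testing
    });
});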
I also edited nginx.conf:
tcp {
    upstream websockets {
        ## webbit websocket server in background
        server 89.221.222.68:9090;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        server_name _;
        listen 9090;

        ssl on;
        proxy_ssl_certificate /ssl-cert/fullchain.pem;
        proxy_ssl_certificate_key /ssl-cert/privkey.pem;

        timeout 43200000;
        websocket_connect_timeout 43200000;
        proxy_connect_timeout 43200000;

        so_keepalive on;
        tcp_nodelay on;

        websocket_pass websockets;
        websocket_buffer 1k;
    }
}
I also tried setting ProxyPass in the apache2 config, but nothing seems to work.
I do not know where the problem is, and my limited Linux experience is not helping me here.
So my question:
How do you set up wss from scratch? I have a Node.js app running at server.com:9090 which I need to communicate with from the client side.
The Forever plugin wasn't working and the server was not running. It's working now.
I'm currently configuring two Raspberry Pis on my home network: one serves data from sensors on a Node server to the second Pi (a webserver, also running on Node). Both of them are behind an nginx proxy. After a lot of configuring and searching I found a working solution. The webserver uses Dataplicity to make it accessible from the web. I don't use Dataplicity on the second Pi (the server of sensor data):
server {
    listen 80;
    server_name *ip-address*;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
    }
}

server {
    listen 443 ssl;
    server_name *ip-address*;

    ssl on;
    ssl_certificate /var/www/cert.pem;
    ssl_certificate_key /var/www/key.pem;

    location / {
        add_header "Access-Control-Allow-Origin" *;
        proxy_pass http://127.0.0.1:3000;
    }
}
This config works, however ONLY on my computer. From other computers I get ERR_INSECURE_RESPONSE when trying to access the API with an ajax request. The certificates are self-signed. Help is much appreciated.
EDIT:
Still no fix for this problem. I signed up for Dataplicity for my second device as well. This fixed my problem, but it now runs through a third party. I will look into this in the future. So if anyone has an answer to this, please do tell.
It seems that your certificates aren't correct; is the root certificate missing? (It can work on your computer if you have already accepted the insecure certificate in your browser.)
Check whether your certificates are good; the following commands must all give the same result:
openssl x509 -noout -modulus -in mycert.crt | openssl md5
openssl rsa -noout -modulus -in mycert.key | openssl md5
openssl x509 -noout -modulus -in mycert.pem | openssl md5
If one output differs from the others, the certificate was generated incorrectly.
You can also check it directly from your computer with curl:
curl -v -i https://yourwebsite
If the top of the output shows an insecure warning, the certificate was generated incorrectly.
The post above looks about right.
The certificates and/or SSL is being rejected by your client.
This could be a few things, assuming the certificates themselves are publicly signed (they probably are not).
Date and time mismatch is possible (certificates are sensitive to the system clock).
If your certs are self-signed, you'll need to make sure your remote device is configured to accept your private root certificate.
Lastly, you might need to configure your server to use only modern encryption methods. Your client may be rejecting some older methods if it has been updated since the POODLE attacks.
This post should let you create a certificate: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04, though I think you've already made it this far.
This post will let you add your new private root cert to the trusted list on your client: https://unix.stackexchange.com/questions/90450/adding-a-self-signed-certificate-to-the-trusted-list
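On a Debian/Ubuntu client, the gist of that post is roughly this (a sketch, assuming your private root certificate is in myCA.pem; update-ca-certificates only picks up .crt files):

sudo cp myCA.pem /usr/local/share/ca-certificates/myCA.crt
sudo update-ca-certificates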
And finally, this is the recommended SSL config on Ubuntu (sourced from https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-on-ubuntu-14-04):
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
Or if you get really stuck, just PM me your account details and I'll put a second free device on your Dataplicity account :)
Cool project, keen to help out.
Dataplicity Wormhole redirects a service listening on port 80 on the device to a public URL of the form https://*.dataplicity.io, and puts a Dataplicity certificate in front. Due to the way HTTPS works, the port being redirected via Dataplicity cannot itself use HTTPS, as that would mean we are unable to forward the traffic via the dataplicity.io domain. The tunnel from your device to Dataplicity is encrypted anyway.
Is there a reason you prefer not to run Dataplicity on the second Pi? While you can of course run a webserver locally, this would be a lot easier and more portable across networks if you just installed a second instance of Dataplicity on your second device...
On my Apache server, I used this to allow Node.js to use SSL:
var ssl = {
    key: fs.readFileSync('/etc/letsencrypt/live/mysite.com/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/mysite.com/cert.pem')
};
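(Presumably that ssl object is then handed to Node's https module; a sketch, where app stands for your Express app or request handler:)

var https = require('https');
var server = https.createServer(ssl, app); // ssl: the key/cert object above
server.listen(443);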
My client switched his HTTP server to Nginx managed by Plesk.
I tried with:
var ssl = {
    key: fs.readFileSync('/usr/local/psa/var/certificates/cert-lcQuQ3'),
    cert: fs.readFileSync('/usr/local/psa/var/certificates/cert-RVySSD')
};
and it was no good: in fact, I have no idea where the equivalents of privkey.pem and cert.pem are with nginx.
Any idea?
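One way to check what those Plesk files actually contain (a sketch; the paths are the ones tried above, and Plesk may store key and certificate concatenated in one file):

# Count certificate and private key blocks in the file
grep -c "BEGIN CERTIFICATE" /usr/local/psa/var/certificates/cert-lcQuQ3
grep -c "PRIVATE KEY" /usr/local/psa/var/certificates/cert-lcQuQ3
# Show the certificate's subject and validity (may fail if the key comes first in the file)
openssl x509 -in /usr/local/psa/var/certificates/cert-lcQuQ3 -noout -subject -dates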
How do you create a self-signed SSL certificate for use on a local server on Mac 10.9?
I need my localhost served as https://localhost
I am using the LinkedIn API. The feature which requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn will send the client a bearer token after the client authorizes my app to access their data. The built-in JavaScript library by LinkedIn will automatically send this cookie to my server / backend, where the token info is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not HTTPS.
Quick and easy solution that works in dev/prod mode, using http-proxy on top of your app.
1) Add in the tarang:ssl package
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app's /private folder, e.g. /private/key.pem and /private/cert.pem.
Then in your /server code:
Meteor.startup(function() {
    SSLProxy({
        port: 6000, // or 443 (normal port/requires sudo)
        ssl: {
            key: Assets.getText("key.pem"),
            cert: Assets.getText("cert.pem"),
            // Optional CA
            // ca: Assets.getText("ca.pem")
        }
    });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your ports between https and http, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too (see the hosts line sketched below). This is because it's tedious to have to switch certs between dev and prod.
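For example, a single line like this in /etc/hosts (domain illustrative) makes the real domain resolve to your own machine:

127.0.0.1   app.yourdomain.com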
Or you could just use ngrok to port-forward :)
1) Start your server (e.g. at localhost:3000)
2) Start ngrok from the command line: ./ngrok http 3000
That should give you http and https URLs accessible from any device.
Another solution is to use NGINX. The following steps were tested on Mac El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine:
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your cache: dscacheutil -flushcache
Now you should be able to reach your local website at http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/ (or see the sketch below).
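One possible recipe (a sketch, assuming OpenSSL 1.1.1+ for -addext; the file names match the nginx config below):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout nginx.key -out nginx.pem \
  -subj "/CN=dev.yourdomain.com" \
  -addext "subjectAltName=DNS:dev.yourdomain.com"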
3. Install nginx and configure it to map https traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and see an Nginx message.
This is the default conf, so now you have to set up the https conf:
Edit your conf file :
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point to the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Replace the location section with this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
Restart nginx:
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com
I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and ran into some problems:
I created SSL certificates with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used them in nginx like this:
upstream docker-registry {
    server localhost:5000;
}

server {
    listen 443;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;

    ssl on;
    ssl_certificate /var/certs/certificate.crt;
    ssl_certificate_key /var/certs/private.key;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
        proxy_pass http://docker-registry;
    }
}
Nginx and the registry are both starting and running. I can go to my server in my browser, which presents me a warning about my SSL certificate (so nginx runs and finds the SSL certificate), and when I enter my credentials I can see a ping message from the Docker registry (so the registry is also running).
But when I try to log in via Docker I get the following error:
vagrant#ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that the server's IP address is not in my certificate, but is it possible to make the Docker client ignore the missing IP?
EDIT:
If I use a certificate with the IP of the server in it, it works. But is there any way to use an SSL certificate without the IP?
It's a Go issue. Actually, it's a spec issue: Go refuses to follow the industry hack of matching an IP address against the certificate's Common Name when there are no SANs, which is why it's not working. See https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
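As the EDIT above found, the practical workaround is to put the server IP in the certificate's subjectAltName. A sketch, assuming OpenSSL 1.1.1+ for -addext (XX.XX.XX.XX stands for the server IP, as in the question):

openssl req -x509 -batch -nodes -newkey rsa:2048 \
  -keyout private.key -out certificate.crt \
  -addext "subjectAltName=IP:XX.XX.XX.XX"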