I want to figure out a safe way to run logstash-forwarder (or logstash with the lumberjack input) in an untrusted network environment.
As far as I understand, the SSL certificate ensures an encrypted connection between client and server and authenticates the server to the client (as in "ok, I know this server is the real logging server"). How can I authenticate the client to the server (as in "ok, I know this client trying to send me events is one of my machines, not someone else")?
SSL certificates can work in both directions. They can be used to authenticate the server ("ok, this server is the real logging server") and also the other way around ("ok, I know this client is one of my machines"). For the second case you need client certificates.
Although Logstash Forwarder lets you configure a client certificate, logstash's lumberjack input does not support client certs. There is an open GitHub issue regarding this feature.
To work around this, you can use an alternative log client together with logstash's TCP input, which does support client certs. The input will look like this:
input {
  tcp {
    port       => 9999
    ssl_cert   => "/path/to/server.crt"
    ssl_key    => "/path/to/server.key"
    ssl_cacert => "/path/to/ca.crt"
    ssl_enable => true
    ssl_verify => true
  }
}
On the client side you can use several tools. I personally do this with NXLog. A proper NXLog output config would look like this:
<Output logstash>
    Module      om_ssl
    Host        yourhost
    Port        9999
    CAFile      %CERTDIR%/ca.crt
    CertFile    %CERTDIR%/client.crt
    CertKeyFile %CERTDIR%/client.key
</Output>
Unfortunately this is just a workaround involving additional software, but I'm afraid there is no native lumberjack solution.
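For reference, any TLS client that can present a client certificate should work against the TCP input above, not only NXLog. A rough sketch in Node.js (not part of the original setup; the certificate paths are assumptions, and since the input above doesn't set a codec, one JSON document per line is just one reasonable format):
const tls = require('tls');
const fs = require('fs');

const socket = tls.connect({
  host: 'yourhost',
  port: 9999,
  // Client certificate and key signed by the CA that logstash trusts
  key: fs.readFileSync('/path/to/client.key'),
  cert: fs.readFileSync('/path/to/client.crt'),
  // Also verify the logstash server against the same CA
  ca: [fs.readFileSync('/path/to/ca.crt')]
}, () => {
  socket.write(JSON.stringify({ message: 'hello from a mutual-TLS client' }) + '\n');
  socket.end();
});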
I am trying to implement SSL in my Node.js project. Currently, my servers are split between a client-side server running on localhost port 443 and a backend server running on localhost port 5000. I have already added a self-signed SSL certificate created with OpenSSL to my client-side server, as shown below.
Now here's my issue. When I send a POST request to log in, from what I understand, a handshake is supposed to happen between the server and the client to establish a secure connection. However, that's not the case. When I used Wireshark to intercept the packets, there was no handshake happening in the process.
I am currently not sure how to proceed because I have limited knowledge of this kind of security topic. Do I need to sign a new key and cert and add them to my backend server? Or am I doing everything wrong? If so, can I get a source or guide on how to properly create one for a Node.js server?
You have many options here for securing your backend server:
First, you can use an Nginx reverse proxy in front of it and add the SSL/TLS handling there; Nginx will take care of this for you.
Second, you can use the [https][1] module directly and pass your SSL certificate and key to it:
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);
Remember that the domain name you are trying to access must resolve to your host's IP.
[1]: https://nodejs.org/api/https.html
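Whichever option you choose, once the backend itself speaks HTTPS with a self-signed certificate, a Node.js client calling it has to trust that certificate explicitly or the handshake will fail. A rough sketch (the file name backend-cert.pem and the /login path are assumptions, not taken from the question):
const https = require('https');
const fs = require('fs');

const req = https.request({
  hostname: 'localhost',
  port: 5000,
  path: '/login',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  // Trust the backend's self-signed certificate instead of disabling verification;
  // the certificate also has to be issued for 'localhost' to pass the hostname check
  ca: fs.readFileSync('backend-cert.pem')
}, (res) => {
  res.on('data', (chunk) => process.stdout.write(chunk));
});

req.end(JSON.stringify({ user: 'test' }));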
Short background: If we go back in time to about 2006-ish: we (i.e. my company) used a Java client app embedded in the browser that connected via port 443 to a C program backend running on port 8068 on an in-house server. At the time the Java app was first developed, port 443 was the only port we knew would not be blocked by the customers that used the software (ease of installation, and possibly the customers' in-house staff didn't have the power or knowledge to control their internal firewall).
Fast-forward to 2016, and I'm hired to help develop a NodeJS/Javascript version of that Java app. The Java app continues to be used during development of its replacement, but whoops - we learn that browsers will drop support for embedded Java in the near future. So we switch to Java Web Start, so that the customers can continue to download the app and it still connects to the in-house server with its port 443->8068 routing.
2017 rolls around and don't you know, we can't use the upcoming JS web app with HTTPS/SSL and the Java app at the same time, 'cause they use the same port. "Ok, let's use NGINX to solve the problem." But due to in-house politics, customer needs, and a turnover of web developer staff, we never get around to truly making that work.
So here we are in 2020, ready to deploy the new web version of the client software, and the whole 443 mess rears its ugly head again.
Essentially I am looking to allow (for the time being) the Java app to continue using 443, but I now need to let the web app use HTTPS too. Back in 2017/2018 we Googled ways to let them cohabitate through NGINX, but we never really got them to work properly, or the examples and tutorials were incomplete or confusing. It seemed like we needed to either use streaming along the lines of https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ , or look at the incoming HTTPS header and do an 'if (https) { route to nodeJS server } else { assume it must be the java app and route to port 8068 }' sort of arrangement inside the NGINX config file.
Past Googled links appear to no longer exist, so if anyone knows of an NGINX configuration that allows an HTTPS website to hand off to a non-SSL application that still needs to use 443, I would greatly appreciate it. Any docs and/or tutorials that point us in the right direction would be helpful too. Thanks in advance!
You can do this using the ssl_preread option. Basically, this option gives you access to the variable $ssl_preread_protocol, which contains the protocol negotiated on the SSL port. If no valid protocol was detected, the variable will be empty.
Using this, you could apply the following configuration to your environment:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server __your_node_js_server_ip__:443;
    }

    map $ssl_preread_protocol $upstream {
        default   java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
In your case, this configuration will pass the connection directly to your Node.js and Java backend servers, so Node.js will need to negotiate the SSL itself. You can hand that work to NGINX instead, using another server context, like:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default   java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 444 ssl;

        __your_ssl_cert_configurations_here__

        location / {
            proxy_pass http://__your_nodejs_server_ip__:80;
        }
    }
}
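For completeness, in this second setup NGINX terminates TLS on port 444 and forwards plain HTTP to the Node.js application, so the Node.js side only needs an ordinary http server. A minimal sketch (port 80 simply mirrors the proxy_pass above; adjust it to whatever your app actually listens on):
const http = require('http');

// Plain HTTP is fine here because NGINX already terminated TLS on port 444
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from the Node.js app behind NGINX\n');
}).listen(80);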
You'll need NGINX version 1.15.2 or later for this configuration to work, built with the ngx_stream_ssl_preread_module module (configure with the --with-stream_ssl_preread_module parameter, because this module is not built by default).
Source: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
I'm trying to write a simple Node.js server application that accepts client requests and lets me change which TLS/SSL protocol is used. It works fine with a browser (Firefox).
However, when I call the Node.js server from WebSphere Liberty Profile, no matter which TLS/SSL protocol I try to use, I am getting the very confusing error message:
[ERROR ] IOException invoking https://dlwester:32080/W3CookieServiceEmulator/workplace/services/w3cookie/callback/auth_data: HTTPS hostname wrong: should be <dlwester>
As you can see, it's telling me I'm using the wrong hostname, but the hostname it's telling me I should be using is what I'm already using. I've even tried using port 443, so that I don't need to specify a port, but it still gives me the same error message.
I'm not sure if the error is with Node.js or my WLP code (using the JAX-RS client). I've not found a way in Node.js to bypass verifying the hostname.
var options = {
  key: 'my.key',
  cert: 'my.cert',
  ciphers: 'TLSv1.2,TLSv1.1,TLSv1.0,SSLv3',
  honorCipherOrder: true,
  rejectUnauthorized: false
};

server = https.createServer(options, requestListener);
So I guess that's my first question - can I bypass hostname verification?
Has anyone else run into this error and found a way to get around it?
This is the client verifying the hostname, not the server. You never mentioned the hostname used in your certificate; if it doesn't match the hostname you use to address the server from the client, fix the certificate.
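If you want to see which names the certificate actually contains before fixing it, you can dump them from Node itself. A small sketch (not part of the original answer), using the host and port from the error message; verification is disabled here only because the point is to inspect the certificate:
const tls = require('tls');

const socket = tls.connect({
  host: 'dlwester',
  port: 32080,
  // Only for inspecting the certificate; never disable verification in production
  rejectUnauthorized: false
}, () => {
  const cert = socket.getPeerCertificate();
  console.log('subject CN:', cert.subject && cert.subject.CN);
  console.log('subjectAltName:', cert.subjectaltname);
  socket.end();
});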
I'm working on developing a solution that uses MQTT to send data to and receive data from embedded systems. For the broker I'm using Mosquitto; for the client, the Node.js MQTT package.
I need to encrypt the data, and I'd like to use the pre-shared-key option in Mosquitto to accomplish this. However, I can't seem to find anything built into the Node.js MQTT package to do this. Is this possible?
From the Mosquitto configuration docs:
When using pre-shared-key based encryption through the psk_hint and psk_file options, the client must provide a valid identity and key in order to connect to the broker before any MQTT communication takes place. If use_identity_as_username is true, the PSK identity is used instead of the MQTT username for access control purposes. If use_identity_as_username is false, the client may still authenticate using the MQTT username/password if using the password_file option.
Node does support TLS-PSK now, but PSK ciphers are disabled by default.
I was finally able to connect with the following options:
const crypto = require('crypto');
const mqtt = require('mqtt');

const client = mqtt.connect('mqtts://localhost:8883', {
  pskCallback: (hint) => {
    console.log('psk_hint configured in mosquitto.conf:', hint);
    return {
      psk: Buffer.from('1234', 'hex'),
      identity: 'DeviceId',
    };
  },
  ciphers: crypto.constants.defaultCipherList.replace(':!PSK', ''),
});
psk_file must include the line DeviceId:1234 for this example.
My main problem was that the custom cipher list must include HIGH, for whatever reason. It even works with ciphers: 'HIGH'.
It appears the MQTT package hands off to Node's TLS capabilities and Node doesn't support TLS PSK.
Preshared keys (TLS-PSK-WITH-AES-256-CBC-SHA) with node.js server
I have created a TLS server and an appropriate TLS client in Node.js. Obviously they both work with each other, but I would like to verify it.
Basically, I think of something such as inspecting the connection, or manually connecting to the server and inspecting what it sends, or something like that ...
The relevant code of the server is:
var tls = require('tls');
var fs = require('fs');
var dnode = require('dnode');

var tlsOptions = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('server.pem')
};

tls.createServer(tlsOptions, function (tlsConnection) {
  var d = dnode({
    // [...]
  });
  // Pipe the encrypted connection through dnode and back
  tlsConnection.pipe(d).pipe(tlsConnection);
}).listen(3000);
The corresponding client code is:
var tls = require('tls');
var dnode = require('dnode');

var d = dnode();
d.on('remote', function (remote) {
  // [...]
});

var tlsConnection = tls.connect({
  host: '192.168.178.31',
  port: 3000
});

tlsConnection.pipe(d).pipe(tlsConnection);
How could I do that?
Wireshark will tell you if the data is TLS encrypted, but it will not tell you if the connection is actually secure against man-in-the-middle attacks. For that, you need to test whether your client refuses to connect to a server that presents a certificate not signed by a trusted CA, a certificate only valid for a different host name, a certificate that is no longer valid, a revoked certificate, and so on.
If your server.pem is not a certificate from a real/trusted CA, and your client doesn't refuse to connect to the server (and you didn't explicitly provide server.pem to the client), then your client is very probably insecure. Given that you are connecting to an IP, not a host name, no trusted CA should have issued a certificate for it, so I assume you are using a self-signed one and are vulnerable. You probably need to specify rejectUnauthorized when calling connect(). (Rant: as this is a pretty common mistake, I think it is extremely irresponsible to make no verification the default.)
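As a sketch of what a verifying connect() could look like under those assumptions (server.pem pinned as the only trusted CA; 'myserver' stands in for whatever name the certificate was actually issued for, since you connect by IP):
const tls = require('tls');
const fs = require('fs');

const tlsConnection = tls.connect({
  host: '192.168.178.31',
  port: 3000,
  // Pin the self-signed server certificate as the only trusted CA
  ca: [fs.readFileSync('server.pem')],
  // The hostname check uses servername when connecting by IP; it must match
  // the certificate's CN/SAN ('myserver' is an assumed name)
  servername: 'myserver',
  rejectUnauthorized: true
}, () => {
  // With rejectUnauthorized: true this callback only runs for a verified connection
  console.log('verified connection, authorized =', tlsConnection.authorized);
});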
Basically, I think of something such as inspecting the connection, or manually connecting to the server and inspecting what it sends, or something like that ...
You can use tools such as Wireshark to see the data they are transmitting.
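If you would rather check this from inside Node instead of on the wire, the TLS socket can also report what was negotiated. A small standalone sketch along the lines of the server from the question (same key.pem/server.pem files):
const tls = require('tls');
const fs = require('fs');

const tlsOptions = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('server.pem')
};

tls.createServer(tlsOptions, (tlsConnection) => {
  // Logs the cipher suite and protocol version actually chosen for this connection
  console.log('cipher:', tlsConnection.getCipher());
  console.log('protocol:', tlsConnection.getProtocol());
  tlsConnection.end();
}).listen(3000);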