How to configure logstash and filebeat SSL communication - logstash

The question:
Can someone help me figure out why I can't get Filebeat to talk to Logstash over TLS/SSL?
The Error:
I can get Filebeat and Logstash to talk to each other with TLS/SSL disabled, but when I enable it and use the settings/config below, I get the following error (observed in logstash.log):
{:timestamp=>"2016-10-28T17:21:44.445000+0100", :message=>"Pipeline aborted due to error",
:exception=>java.lang.NullPointerException, :backtrace=>["org.logstash.netty.PrivateKeyCo
nverter.generatePkcs8(org/logstash/netty/PrivateKeyConverter.java:43)", "org.logstash.nett
y.PrivateKeyConverter.convert(org/logstash/netty/PrivateKeyConverter.java:39)", "java.lang
.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "RUBY.create_server(/usr/share
/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/logstash/
inputs/beats.rb:139)", "RUBY.register(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/log
stash-input-beats-3.1.0.beta4-java/lib/logstash/inputs/beats.rb:132)", "RUBY.start_inputs(
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:311)", "org.jruby.RubyArray.eac
h(org/jruby/RubyArray.java:1613)", "RUBY.start_inputs(/usr/share/logstash/logstash-core/li
b/logstash/pipeline.rb:310)", "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/lo
gstash/pipeline.rb:187)", "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/pipelin
e.rb:145)", "RUBY.start_pipeline(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:2
40)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:error}
{:timestamp=>"2016-10-28T17:21:47.452000+0100", :message=>"stopping pipeline", :id=>"main"
, :level=>:warn}
{:timestamp=>"2016-10-28T17:21:47.456000+0100", :message=>"An unexpected error occurred!",
:error=>#<NoMethodError: undefined method `stop' for nil:NilClass>, :backtrace=>["/us
r/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/lo
gstash/inputs/beats.rb:173:in `stop'", "/usr/share/logstash/logstash-core/lib/logstash/inp
uts/base.rb:88:in `do_stop'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logst
ash/logstash-core/lib/logstash/pipeline.rb:366:in `shutdown'", "/usr/share/logstash/logsta
sh-core/lib/logstash/agent.rb:252:in `stop_pipeline'", "/usr/share/logstash/logstash-core/
lib/logstash/agent.rb:261:in `shutdown_pipelines'", "org/jruby/RubyHash.java:1342:in `each
'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:261:in `shutdown_pipelines'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:123:in `shutdown'", "/usr/share/
logstash/logstash-core/lib/logstash/runner.rb:237:in `execute'", "/usr/share/logstash/vend
or/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logsta
sh/logstash-core/lib/logstash/runner.rb:157:in `run'", "/usr/share/logstash/vendor/bundle/
jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bo
otstrap/environment.rb:66:in `(root)'"], :level=>:fatal}
The Setup:
Servers
2 servers.
$> uname -a
Linux elkserver 3.10.0-327.36.2.el7.x86_64 #1 SMP Mon Oct 10 23:08:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$> cat /etc/*-release
CentOS Linux release 7.2.1511 (Core)
SELinux is Permissive (soz).
Firewalls are off (mazza soz).
One server runs elasticsearch and logstash; one runs filebeat.
Elasticsearch
$> /usr/share/elasticsearch/bin/elasticsearch -version
Version: 2.4.1, Build: c67dc32/2016-09-27T18:57:55Z, JVM: 1.8.0_111
Logstash
$> /usr/share/logstash/bin/logstash -V
logstash 5.0.0-alpha5
Filebeat
$> /usr/share/filebeat/bin/filebeat -version
filebeat version 5.0.0 (amd64), libbeat 5.0.0
Config:
Logstash
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/filebeat-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/filebeat-forwarder.key"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filebeat.yml
output:
  logstash:
    enabled: true
    hosts:
      - "<my ip address>:5044"
    timeout: 15
    tls:
      certificate_authorities:
        - /etc/pki/tls/certs/filebeat-forwarder.crt
filebeat:
  prospectors:
    -
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
    -
      paths:
        - /var/log/nginx/access.log
      document_type: nginx-access
File: openssl_extras.cnf:
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = TG
ST = Togo
L = Lome
O = Private company
CN = *
[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names
[alt_names]
DNS.1 = *
DNS.2 = *.*
DNS.3 = *.*.*
DNS.4 = *.*.*.*
DNS.5 = *.*.*.*.*
DNS.6 = *.*.*.*.*.*
DNS.7 = *.*.*.*.*.*.*
IP.1 = <my ip address>
The command used to create the cert:
$> openssl req -subj '/CN=elkserver.system.local/' -config /etc/pki/tls/openssl_extras.cnf \
-x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/filebeat-forwarder.key \
-out /etc/pki/tls/certs/filebeat-forwarder.crt

In Filebeat 5.0 the tls configuration setting was changed to ssl to be consistent with the configuration setting used in Logstash and Elasticsearch. Try updating your Filebeat configuration.
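For example, the output section above becomes the following under the 5.0 naming; this is a minimal sketch with the same CA path, where only the tls key changes:
output:
  logstash:
    enabled: true
    hosts:
      - "<my ip address>:5044"
    timeout: 15
    ssl:
      certificate_authorities:
        - /etc/pki/tls/certs/filebeat-forwarder.crt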
References:
Securing Communication With Logstash by Using SSL
Breaking Changes in 5.0

Related

IXWebSocket wss c++ client cannot connect to Node.js wss server using an ip address

I have an IXWebSocket C++ wss client connecting to a Node.js wss server (websocket npm package). Everything is fine as long as the client connects to "wss://localhost:8080". As soon as I use the IP address of the Node.js wss server, I get the error "OpenSSL failed - error:0A000086:SSL routines::certificate verify failed".
Certificate chain creation
I created my own private root CA. I used these commands to generate the root CA key/certificate and the server key/certificate:
$ openssl genpkey -aes256 -out root-ca/private/ca.private.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
$ openssl req -config root-ca/root-ca.conf -key root-ca\private\ca.private.key -x509 -days 7500 -sha256 -extensions v3_ca -out root-ca\certs\ca.crt
$ openssl genpkey -out server/private/server.private.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
$ openssl req -key server\private\server.private.key -new -sha256 -out server\csr\server.csr
$ openssl ca -config root-ca\root-ca.conf -extensions server_cert -days 365 -notext -in server\csr\server.csr -out server\certs\server.crt
The configuration has a subjectAltName for both root and server and it looks like this :
[ca]
#\\root\\ca\\root-ca\\root-ca.conf
#see man ca
default_ca = CA_default
[CA_default]
dir = C:\\ca\\root-ca
certs = $dir\\certs
crl_dir = $dir\\crl
new_certs_dir = $dir\\newcerts
database = $dir\\index
serial = $dir\\serial
RANDFILE = $dir\\private\\.rand
private_key = $dir\\private\\ca.private.key
certificate = $dir\\certs\\ca.crt
crlnumber = $dir\\crlnumber
crl = $dir\\crl\\ca.crl
crl_extensions = crl_ext
default_crl_days = 30
default_md = sha256
name_opt = ca_default
cert_opt = ca_default
default_days = 365
preserve = no
policy = policy_loose
[ policy_strict ]
countryName = supplied
stateOrProvinceName = supplied
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ policy_loose ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ req ]
# Options for the req tool, man req.
default_bits = 2048
distinguished_name = req_distinguished_name
string_mask = utf8only
default_md = sha256
# Extension to add when the -x509 option is used.
x509_extensions = v3_ca
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
stateOrProvinceName = State or Province Name
localityName = Locality Name
0.organizationName = Organization Name
organizationalUnitName = Organizational Unit Name
commonName = Common Name
emailAddress = Email Address
countryName_default = CA
stateOrProvinceName_default = Qc
0.organizationName_default = Adacel
[ v3_ca ]
# Extensions to apply when creating root ca
# Extensions for a typical CA, man x509v3_config
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
subjectAltName = @alt_names
[ v3_intermediate_ca ]
# Extensions to apply when creating intermediate or sub-ca
# Extensions for a typical intermediate CA, same man as above
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
#pathlen:0 ensures no more sub-ca can be created below an intermediate
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
[ server_cert ]
# Extensions for server certificates
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
IP.1 = 192.168.230.138
IP.2 = 127.0.0.1
The certificate chain looks valid between my root ca and my server:
$ openssl verify -CAfile root-ca\certs\ca.crt server\certs\server.crt
server\certs\server.crt: OK
Both ca.crt and server.crt reference the wss server IP address; I used the subjectAltName parameter to define it. I thought my root CA would need it as well (I am not even sure it makes sense to have a domain on the root CA), but it doesn't make any difference.
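One way to confirm that the IP actually ended up in the issued server certificate is to dump it and look at the X509v3 Subject Alternative Name entry (path as used in the commands above):
openssl x509 -in server/certs/server.crt -noout -text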
What is not working
My IXWebSocket C++ client:
// assumes the usual includes, e.g. <ixwebsocket/IXNetSystem.h>, <ixwebsocket/IXWebSocket.h>, <ixwebsocket/IXSocketTLSOptions.h>
ix::initNetSystem();
ix::WebSocket webSocket;
ix::SocketTLSOptions tlsOptions;
tlsOptions.caFile = "ca.crt";
webSocket.setTLSOptions(tlsOptions);
std::string url("wss://localhost:8080");
//std::string url("wss://192.168.230.138:8080"); //Cannot connect to the ip
webSocket.setUrl(url);
webSocket.start(); // start() is what actually opens the connection (in a background thread)
What is working
wss javascript client:
I also coded a JavaScript client (using the same npm package as my server), and this little client can connect using the IP address!
// assumed imports for this snippet: fs and the "websocket" npm package's client class
const fs = require("fs");
const WebSocketClient = require("websocket").client;

const options = {
  ca: fs.readFileSync("./ca.crt"),
};
var client = new WebSocketClient();
client.on("connectFailed", function (error) {
  console.log("Connect Error: " + error.toString());
});
client.on("connect", function (connection) {
  console.log("WebSocket Client Connected");
  connection.on("error", function (error) {
    console.log("Connection Error: " + error.toString());
  });
  connection.on("close", function () {
    console.log("echo-protocol Connection Closed");
  });
  connection.on("message", function (message) {
    if (message.type === "utf8") {
      console.log("Received: '" + message.utf8Data + "'");
    }
  });
});
client.connect("wss://192.168.230.138:8080/", null, null, null, options);
My Node.js server:
// assumed imports for this snippet: https, fs, and the "websocket" npm package
const https = require("https");
const fs = require("fs");
const WebSockerServer = require("websocket"); // identifier name as used below

const httpsSignalServer = https.createServer(
  {
    key: fs.readFileSync("./server.private.key"),
    cert: fs.readFileSync("./server.crt"),
  },
  (req, res) => {
    console.log(`signal server : we have received the request ${req}`);
  }
);
const signalWebsocket = new WebSockerServer.server({
  httpServer: httpsSignalServer,
});
signalWebsocket.on("request", (request) => onRequest(request));
httpsSignalServer.listen(8080, () =>
  console.log("---> My signal server is listening <---")
);
Questions :
Any idea why my C++ client cannot connect to the server using an IP address, while the JavaScript client can (with the same certificate chain)?
If not, any idea how I could debug this?
Could the problem be some higher-level SSL behaviour where you actually need a real hostname and can't use an IP?
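As for debugging at a lower level (independently of IXWebSocket), one option is to point OpenSSL's test client at the same endpoint and check whether chain verification succeeds against the CA file; this is only a sketch, using the IP and CA path from the question:
openssl s_client -connect 192.168.230.138:8080 -CAfile ca.crt
# look for "Verify return code: 0 (ok)" near the end of the output;
# newer OpenSSL versions also accept -verify_ip 192.168.230.138 to test the IP/SAN match explicitly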

Getting MongooseServerSelectionError: Hostname/IP does not match certificate's altnames: IP: xxx.xx.xx.xx is not in the cert's list:

I have created a self-signed certificate on my Linux machine, where I set the certificate CN to the IP of that machine.
I added it to mongodb.conf and restarted the server.
I am able to connect via the command
mongo --ssl --sslPEMKeyFile /etc/ssl/mongodbcerts/mongodb.pem --sslCAFile /etc/ssl/mongodbcerts/ca.pem
But when I try to connect from Node.js with Mongoose, I get an error like
MongooseServerSelectionError: Hostname/IP does not match certificate's altnames: IP: XXX.xx.x.xx is not in the cert's list:
My Node.js code for connecting to MongoDB is as follows:
const connectionOptions = {
  useCreateIndex: true,
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  server: {
    ssl: true,
    sslValidate: true,
    sslCA: require('fs').readFileSync("/etc/ssl/mongodbcerts/ca.pem"),
    sslKey: require('fs').readFileSync("/etc/ssl/mongodbcerts/mongodb.key"),
    sslCert: require('fs').readFileSync("/etc/ssl/mongodbcerts/mongodb.crt")
  }
};
let mongo_url = "mongodb://username:password@IPaddress/DB"
console.log(mongo_url)
mongoose.connect(mongo_url, connectionOptions)
  .then(() => console.log('Database Connected'))
  .catch(err => console.log(err));
Please let me know the error
I faced this issue recently. Starting in MongoDB 4.2, when comparing SANs, MongoDB supports comparison of IP addresses as well. You can use an IP address in the CN field, but make sure your OpenSSL configuration file contains your server's IP address in the alt_names section.
Here's a sample cnf file provided in the official MongoDB docs -
# NOT FOR PRODUCTION USE. OpenSSL configuration file for testing.
[ req ]
default_bits = 4096
default_keyfile = myTestServerCertificateKey.pem ## The default private key file name.
default_md = sha256
distinguished_name = req_dn
req_extensions = v3_req
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
nsComment = "OpenSSL Generated Certificate for TESTING only. NOT FOR PRODUCTION USE."
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = ##TODO: Enter the DNS names if using hostname, otherwise remove this line
IP.1 = ##TODO: Enter the IP address if using IP address
[ req_dn ]
countryName = Country Name (2 letter code)
countryName_default = TestServerCertificateCountry
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = TestServerCertificateState
stateOrProvinceName_max = 64
localityName = Locality Name (eg, city)
localityName_default = TestServerCertificateLocality
localityName_max = 64
organizationName = Organization Name (eg, company)
organizationName_default = TestServerCertificateOrg
organizationName_max = 64
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = TestServerCertificateOrgUnit
organizationalUnitName_max = 64
commonName = Common Name (eg, YOUR name)
commonName_max = 64
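Assuming this config is saved as, say, openssl-test-server.cnf with the ##TODO values filled in, a self-signed test certificate that actually carries the alt_names entries could be generated roughly like this (file names are illustrative):
openssl req -x509 -new -nodes -newkey rsa:4096 -days 365 \
  -config openssl-test-server.cnf -extensions v3_req \
  -keyout myTestServerCertificateKey.pem -out mongodb-test-server.crt
# -extensions v3_req makes the subjectAltName from [ v3_req ] end up in the certificate itself
cat mongodb-test-server.crt myTestServerCertificateKey.pem > mongodb.pem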
The mistake I made was creating the self-signed certificate with the Common Name (CN) set to the IP address (XXX.xx.x.xx), but the self-signed certificates need to be created with the CN set to the hostname.
To get the hostname, open the mongo shell and execute the command below:
>getHostName()
That gives you the hostname of that VM; create self-signed certificates with the same hostname and try connecting with Mongoose from Node.js. It will work.
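For example, once the certificate is re-issued for that hostname, the Mongoose connection would simply use the hostname instead of the IP (placeholder values, same connectionOptions as above):
let mongo_url = "mongodb://username:password@<hostname-from-getHostName>/DB"
mongoose.connect(mongo_url, connectionOptions)
  .then(() => console.log('Database Connected'))
  .catch(err => console.log(err));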
Support Doc: https://mongoosejs.com/docs/tutorials/ssl.html

Wolkenkit fails to start with "Error: Failed to get lowest processed position."

I am currently looking into Wolkenkit by following the tutorial to create a chat application.
After finishing writing the code, I ran sudo yarn wolkenkit start. This gave me the following error message:
Waiting for https://localhost:3000/ to reply...
(node:11226) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
Error: Failed to get lowest processed position.
at EventSequencer.getLowestProcessedPosition (/wolkenkit/eventSequencer/EventSequencer.js:71:13)
at /wolkenkit/app.js:63:41
at process._tickCallback (internal/process/next_tick.js:68:7)
Application code caused runtime error.
✗ Failed to start the application.
A bit above the error, the command prints this warning:
▻ Application certificate is self-signed.
I would appreciate any help on how to solve this and get the demo application to run on my local machine.
My development machine is running Debian GNU/Linux 10 with
Node 13.8.0
Yarn 1.21.1
Docker 18.09.1
Wolkenkit 3.1.2
Because of the warnings, I suspect this could be related to the X.509 certificate used for TLS. I created it using openssl as follows:
$ openssl req -new -sha256 -nodes -out localhost.csr -newkey rsa:2048 -keyout localhost.key -config <(
cat <<-EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C=US
ST=New York
L=Rochester
O=Somthing
OU=Something Else
emailAddress=test@example.com
CN = localhost
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
EOF
)
$ openssl x509 -req -days 365 -in localhost.csr -signkey localhost.key -sha256 -out localhost.crt
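(To double-check what actually ended up in the generated certificate, for example whether the subjectAltName was carried over into localhost.crt, the parsed contents can be printed with openssl:)
$ openssl x509 -in localhost.crt -noout -text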
Then I moved the localhost.crt and localhost.key into the following structure:
server/keys/localhost
├── certificate.pem
└── privateKey.pem
And set up a package.json like this:
{
  "name": "chat",
  "version": "0.0.0",
  "wolkenkit": {
    "application": "chat",
    "runtime": {
      "version": "3.1.0"
    },
    "environments": {
      "default": {
        "api": {
          "address": {
            "host": "localhost",
            "port": 3000
          },
          "certificate": "/server/keys/localhost",
          "allowAccessFrom": "*"
        },
        "fileStorage": {
          "allowAccessFrom": "*"
        },
        "node": {
          "environment": "development"
        }
      }
    }
  },
  "dependencies": {
    "wolkenkit": "^3.1.2"
  }
}
Seems like this could be the same problem described here in this Github issue.
The problem is that due to a change in the start command, we now
assume that there must be a read model (which has not yet been
defined, if you follow the guide).
If you simply ignore this error, and follow on, the next thing is to
define the read model. Once you have done that, you can successfully
run wolkenkit start.

How to connect from node js to mongodb replica set using SSL

I am trying to connect to a mongodb replica set which is set up to authenticate clients using SSL. I can connect using the mongo shell, but for some reason cannot connect from node.js with the same keys.
I am using mongodb version 3.2.6 and node.js driver version 2.1.18, running on mac.
I followed this article, and was able to setup a cluster on my local machine by running the attached script:
# Prerequisites:
# a. Make sure you have MongoDB Enterprise installed.
# b. Make sure mongod/mongo are in the executable path
# c. Make sure no mongod running on 27017 port, or change the port below
# d. Run this script in a clean directory
##### Feel free to change following section values ####
# Changing this to include: country, province, city, company
dn_prefix="/C=CN/ST=GD/L=Shenzhen/O=MongoDB China"
ou_member="MyServers"
ou_client="MyClients"
mongodb_server_hosts=( "server1" "server2" "server3" )
mongodb_client_hosts=( "client1" "client2" )
mongodb_port=27017
# make a subdirectory for mongodb cluster
kill $(ps -ef | grep mongod | grep set509 | awk '{print $2}')
#rm -Rf db/*
mkdir -p db
echo "##### STEP 1: Generate root CA "
openssl genrsa -out root-ca.key 2048
# !!! In production you will want to use -aes256 to password protect the keys
# openssl genrsa -aes256 -out root-ca.key 2048
openssl req -new -x509 -days 3650 -key root-ca.key -out root-ca.crt -subj "$dn_prefix/CN=ROOTCA"
mkdir -p RootCA/ca.db.certs
echo "01" >> RootCA/ca.db.serial
touch RootCA/ca.db.index
echo $RANDOM >> RootCA/ca.db.rand
mv root-ca* RootCA/
echo "##### STEP 2: Create CA config"
# Generate CA config
cat >> root-ca.cfg <<EOF
[ RootCA ]
dir = ./RootCA
certs = \$dir/ca.db.certs
database = \$dir/ca.db.index
new_certs_dir = \$dir/ca.db.certs
certificate = \$dir/root-ca.crt
serial = \$dir/ca.db.serial
private_key = \$dir/root-ca.key
RANDFILE = \$dir/ca.db.rand
default_md = sha256
default_days = 365
default_crl_days= 30
email_in_dn = no
unique_subject = no
policy = policy_match
[ SigningCA ]
dir = ./SigningCA
certs = \$dir/ca.db.certs
database = \$dir/ca.db.index
new_certs_dir = \$dir/ca.db.certs
certificate = \$dir/signing-ca.crt
serial = \$dir/ca.db.serial
private_key = \$dir/signing-ca.key
RANDFILE = \$dir/ca.db.rand
default_md = sha256
default_days = 365
default_crl_days= 30
email_in_dn = no
unique_subject = no
policy = policy_match
[ policy_match ]
countryName = match
stateOrProvinceName = match
localityName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:true
EOF
echo "##### STEP 3: Generate signing key"
# We do not use root key to sign certificate, instead we generate a signing key
openssl genrsa -out signing-ca.key 2048
# !!! In production you will want to use -aes256 to password protect the keys
# openssl genrsa -aes256 -out signing-ca.key 2048
openssl req -new -days 1460 -key signing-ca.key -out signing-ca.csr -subj "$dn_prefix/CN=CA-SIGNER"
openssl ca -batch -name RootCA -config root-ca.cfg -extensions v3_ca -out signing-ca.crt -infiles signing-ca.csr
mkdir -p SigningCA/ca.db.certs
echo "01" >> SigningCA/ca.db.serial
touch SigningCA/ca.db.index
# Should use a better source of random here..
echo $RANDOM >> SigningCA/ca.db.rand
mv signing-ca* SigningCA/
# Create root-ca.pem
cat RootCA/root-ca.crt SigningCA/signing-ca.crt > root-ca.pem
echo "##### STEP 4: Create server certificates"
# Now create & sign keys for each mongod server
# Pay attention to the OU part of the subject in "openssl req" command
# You may want to use FQDNs instead of short hostname
for host in "${mongodb_server_hosts[#]}"; do
echo "Generating key for $host"
openssl genrsa -out ${host}.key 2048
openssl req -new -days 365 -key ${host}.key -out ${host}.csr -subj "$dn_prefix/OU=$ou_member/CN=${host}"
openssl ca -batch -name SigningCA -config root-ca.cfg -out ${host}.crt -infiles ${host}.csr
cat ${host}.crt ${host}.key > ${host}.pem
done
echo "##### STEP 5: Create client certificates"
# Now create & sign keys for each client
# Pay attention to the OU part of the subject in "openssl req" command
for host in "${mongodb_client_hosts[#]}"; do
echo "Generating key for $host"
openssl genrsa -out ${host}.key 2048
openssl req -new -days 365 -key ${host}.key -out ${host}.csr -subj "$dn_prefix/OU=$ou_client/CN=${host}"
openssl ca -batch -name SigningCA -config root-ca.cfg -out ${host}.crt -infiles ${host}.csr
cat ${host}.crt ${host}.key > ${host}.pem
done
echo ""
echo "##### STEP 6: Start up replicaset in non-auth mode"
mport=$mongodb_port
for host in "${mongodb_server_hosts[#]}"; do
echo "Starting server $host in non-auth mode"
mkdir -p ./db/${host}
mongod --replSet set509 --port $mport --dbpath ./db/$host \
--fork --logpath ./db/${host}.log
let "mport++"
done
sleep 3
# obtain the subject from the client key:
client_subject=`openssl x509 -in ${mongodb_client_hosts[0]}.pem -inform PEM -subject -nameopt RFC2253 | grep subject | awk '{sub("subject= ",""); print}'`
echo "##### STEP 7: setup replicaset & initial user role\n"
myhostname=`hostname`
cat > setup_auth.js <<EOF
rs.initiate();
mport=$mongodb_port;
mport++;
rs.add("$myhostname:" + mport);
mport++;
rs.add("$myhostname:" + mport);
sleep(5000);
db.getSiblingDB("\$external").runCommand(
{
createUser: "$client_subject",
roles: [
{ role: "readWrite", db: 'test' },
{ role: "userAdminAnyDatabase", db: "admin" },
{ role: "clusterAdmin", db:"admin"}
],
writeConcern: { w: "majority" , wtimeout: 5000 }
}
);
EOF
cat setup_auth.js
mongo localhost:$mongodb_port setup_auth.js
kill $(ps -ef | grep mongod | grep set509 | awk '{print $2}')
sleep 3
echo "##### STEP 8: Restart replicaset in x.509 mode\n"
mport=$mongodb_port
for host in "${mongodb_server_hosts[#]}"; do
echo "Starting server $host"
mongod --replSet set509 --port $mport --dbpath ./db/$host \
--sslMode requireSSL --clusterAuthMode x509 --sslCAFile root-ca.pem \
--sslAllowInvalidHostnames --fork --logpath ./db/${host}.log \
--sslPEMKeyFile ${host}.pem --sslClusterFile ${host}.pem
let "mport++"
done
# echo "##### STEP 9: Connecting to replicaset using certificate\n"
cat > do_login.js <<EOF
db.getSiblingDB("\$external").auth(
{
mechanism: "MONGODB-X509",
user: "$client_subject"
}
)
EOF
# mongo --ssl --sslPEMKeyFile client1.pem --sslCAFile root-ca.pem --sslAllowInvalidHostnames --shell do_login.js
After running the cluster, I am able to connect to it using the mongo shell with this command (all keys\certs were generated in ./ssl dir):
mongo --ssl --sslPEMKeyFile ssl/client1.pem --sslCAFile ssl/root-ca.pem --sslAllowInvalidHostnames
and authenticate as follows:
db.getSiblingDB("$external").auth(
{
mechanism: "MONGODB-X509",
user: "CN=client1,OU=MyClients,O=MongoDB China,L=Shenzhen,ST=GD,C=CN"
}
)
When I try to connect from node.js I keep failing. I am running the following code to connect to mongo using the native mongo driver:
'use strict';
const mongodb = require('mongodb');
const P = require('bluebird');
const fs = require('fs');
function connect_mongodb() {
  let user = 'CN=client1,OU=MyClients,O=MongoDB China,L=Shenzhen,ST=GD,C=CN';
  let uri = `mongodb://${encodeURIComponent(user)}@localhost:27017,localhost:27018,localhost:27019/test?replicaSet=set509&authMechanism=MONGODB-X509&ssl=true`;
  var ca = [fs.readFileSync("./ssl/root-ca.pem")];
  var cert = fs.readFileSync("./ssl/client1.pem");
  var key = fs.readFileSync("./ssl/client1.pem");
  let options = {
    promiseLibrary: P,
    server: {
      ssl: true,
      sslValidate: false,
      checkServerIdentity: false,
      sslCA: ca,
      sslKey: key,
      sslCert: cert,
    },
    replset: {
      sslValidate: false,
      checkServerIdentity: false,
      ssl: true,
      sslCA: ca,
      sslKey: key,
      sslCert: cert,
    }
  };
  return mongodb.MongoClient.connect(uri, options);
}
connect_mongodb();
When running the script I get the following error:
Unhandled rejection MongoError: no valid seed servers in list
When checking the mongodb log I see these errors:
2017-01-17T22:48:54.191+0200 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63881 #99 (5 connections now open)
2017-01-17T22:48:54.207+0200 E NETWORK [conn99] no SSL certificate provided by peer; connection rejected
2017-01-17T22:48:54.207+0200 I NETWORK [conn99] end connection 127.0.0.1:63881 (4 connections now open)
I was trying different options described here, but with no success.
Thanks for the help
Upgrading to the MongoDB Node.js driver 2.2.22 solved the problem.
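For example, assuming npm manages the project's dependencies, that is something like:
npm install mongodb@2.2.22 --save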

Logstash failed to send join request to elasticsearch master

I have a problem with my Logstash, which can't send logs to Elasticsearch.
With the following details:
Logstash version : 1.5.1
Elasticsearch version : 1.6.0
jvm on both servers version : 1.8.0
Linux 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Azure Openlogic 7.1
Here is my logstash.err file
INFO: [ls1] failed to send join request to master [[es1][e8A0li5pRfeMklozmDXgkQ][elastic][inet[/x.x.x.x:9300]]], reason [RemoteTransportException[[es1][inet[/x.x.x.x:9300]][internal:discovery/zen/join]]; nested: ConnectTransportException[[ls1][inet[/x.x.x.x:9300]] connect_timeout[30s]]; nested: ConnectTimeoutException[connection timed out: /x.x.x.x:9300]; ]
My logstash configuration output
output {
  elasticsearch {
    host => "x.x.x.x"
    bind_port => 9300
    index => "syslog"
    cluster => "test-cluster"
    node_name => 'ls1'
  }
  stdout {
    codec => rubydebug
  }
}
Here is my elasticsearch.yml configuration file in elasticsearch server
cluster.name: test-cluster
node.name: "es1"
network.bind_host: 0.0.0.0
network.publish_host: <my_elasticsearch_public_ip>
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["my_logstash_public_ip:9300"]
Here is my elasticsearch.yml file in logstash server (/var/lib/logstash)
network.publish_host: my_logstash_public_ip
discovery.zen.ping.multicast.enabled: false
I've allowed port 9300 on both servers.
You need to include the protocol attribute in your Logstash configuration. Find the updated code below.
output {
  elasticsearch {
    host => "x.x.x.x"
    protocol => "http"
    bind_port => 9300
    index => "syslog"
    cluster => "test-cluster"
    node_name => 'ls1'
  }
}
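With protocol => "http" the output talks to Elasticsearch over its HTTP API (port 9200 by default) instead of the 9300 transport port, so a minimal sketch that sidesteps the cluster-join problem entirely could look like this (assuming port 9200 is reachable):
output {
  elasticsearch {
    host => "x.x.x.x"
    port => 9200
    protocol => "http"
    index => "syslog"
  }
  stdout {
    codec => rubydebug
  }
}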
I use Microsoft Azure VMs. I was able to solve this problem by creating a VPN connection between the Azure virtual machines.
