I have the following mosquitto.conf, but when the internet goes out, messages are not buffered and sent on to AWS IoT when the connection comes back.
Questions:
What have I done wrong in the Mosquitto config so that offline buffering is not working as expected?
I am thinking of writing my own bridge in Node.js. Any recommendations for a Node.js MQTT library that supports offline buffering?
Thank you!
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
# =================================================================
# Bridges to AWS IOT
# =================================================================
# AWS IoT endpoint, use AWS CLI 'aws iot describe-endpoint'
connection awsiot
address aws.iot.us-west-2.amazonaws.com:8883
# Specifying which topics are bridged
topic outTopic out 1
# Setting protocol version explicitly
bridge_protocol_version mqttv311
bridge_insecure false
# Bridge connection name and MQTT client Id,
# enabling the connection automatically when the broker starts.
cleansession true
clientid bridgeawsiot
start_type automatic
notifications false
log_type all
cafile /home/pi/ca.crt
keyfile /home/pi/server.key
certfile /home/pi/server.crt
tls_version tlsv1
# =================================================================
# Certificate based SSL/TLS support
# -----------------------------------------------------------------
#Path to the rootCA
bridge_cafile /home/pi/rootCA.cer
# Path to the PEM encoded client certificate
bridge_certfile /home/pi/bridge.cert.pem
# Path to the PEM encoded client private key
bridge_keyfile /home/pi/bridge.private.key
The cleansession true in your bridge config means that no messages are queued while the bridge connection is down.
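If the goal is to have the local broker queue QoS 1 messages during the outage and deliver them once the bridge reconnects, a minimal sketch of the change would be the following (untested against AWS IoT; max_queued_messages is optional and its default depends on your Mosquitto version):
cleansession false
# Optional: raise the per-client limit for queued QoS 1/2 messages
max_queued_messages 1000
With persistence true already set, queued messages should also survive a broker restart.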
Hello fellow developers! I'm stuck on a corner case and I'm running out of hair to pull... Here is the setup:
         load-balancer.example.com:443 (TCP passthrough)
                      /      \
                     /        \
                    /          \
                   /            \
    s1.example.com:443      s2.example.com:443
        (SSL/SNI)               (SSL/SNI)
The goal is to stress-test the upstreams s1 and s2 directly using aiohttp with certificate validation enabled. Since the load-balancer does not belong to me, I don't want to run the stress test through it.
the code does not need to run on any platform other than GNU/Linux, with at least Python 3.7 (but I can use any recent version if needed)
all servers serve a valid certificate for load-balancer.example.com
openssl validates the certificate from the upstreams when using openssl s_client -connect s1.example.com:443 -servername load-balancer.example.com
cURL also validates successfully, using curl 'https://load-balancer.example.com/' --resolve load-balancer.example.com:443:<IP-of-s1>
I am able to launch a huge batch of async ClientSession.get requests on both upstreams in parallel, but for each request I need to somehow tell asyncio or aiohttp to use load-balancer.example.com as the server_hostname, otherwise the SSL handshake fails.
Is there an easy way to set up the ClientSession to use a specific server_hostname when setting up the SSL socket?
Has someone already done something like that?
EDIT: here is the simplest snippet, with just a single request:
import aiohttp
import asyncio
async def main_async(host, port, uri, params=[], headers={}, sni_hostname=None):
    if sni_hostname is not None:
        print('Setting SNI server_name field ')
        #
        # THIS IS WHERE I DON'T KNOW HOW TO TELL aiohttp
        # TO SET THE server_name FIELD TO sni_hostname
        # IN THE SSL SOCKET BEFORE PERFORMING THE SSL HANDSHAKE
        #
    try:
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(f'https://{host}:{port}/{uri}',
                                   params=params, headers=headers) as r:
                body = await r.read()
                print(body)
    except Exception as e:
        print(f'Exception while requesting ({e}) ')

if __name__ == "__main__":
    asyncio.run(main_async(host='s1.example.com', port=443,
                           uri='/api/some/endpoint',
                           params={'apikey': '0123456789'},
                           headers={'Host': 'load-balancer.example.com'},
                           sni_hostname='load-balancer.example.com'))
When running it with real hosts, it throws
Cannot connect to host s1.example.com:443 ssl:True
[SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] '
certificate verify failed: certificate has expired (_ssl.c:1131)')])
Note that the error certificate has expired indicates that the certificate presented to the client is the default certificate, since the SNI hostname is s1.example.com, which is unknown to the webserver running there.
When running it against the load-balancer, it works just fine: the SSL handshake happens with the upstreams, which serve the certificate, and everything validates.
Also note that
sni_callback does not help, since it is called after the handshake has started and the certificate has been received (and at this point server_hostname is a read-only property anyway)
it does not seem to be possible to set server_hostname when creating an SSLContext; SSLContext.wrap_socket does support server_hostname, but I was not able to make that work
I hope someone knows how to fill the comment block in that snippet ;-]
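Not an authoritative answer, but one possible sketch that mirrors the curl --resolve trick: keep load-balancer.example.com in the URL, so SNI and certificate validation use that name, and plug a custom resolver into the TCPConnector so the TCP connection actually targets the upstream's IP. The class name StaticResolver and the address 203.0.113.10 are placeholders.
import asyncio
import socket
import aiohttp
from aiohttp.abc import AbstractResolver

class StaticResolver(AbstractResolver):
    """Resolve a fixed set of hostnames to fixed IPs, like curl's --resolve."""
    def __init__(self, mapping):
        self._mapping = mapping  # e.g. {'load-balancer.example.com': '203.0.113.10'}

    async def resolve(self, host, port=0, family=socket.AF_INET):
        return [{
            'hostname': host,             # SNI and certificate checks keep this name
            'host': self._mapping[host],  # ...while the socket connects to the upstream
            'port': port,
            'family': family,
            'proto': socket.IPPROTO_TCP,
            'flags': socket.AI_NUMERICHOST,
        }]

    async def close(self):
        pass

async def main():
    # Map the load-balancer name to s1's IP (placeholder address).
    resolver = StaticResolver({'load-balancer.example.com': '203.0.113.10'})
    connector = aiohttp.TCPConnector(resolver=resolver)
    async with aiohttp.ClientSession(connector=connector, raise_for_status=True) as session:
        async with session.get('https://load-balancer.example.com/api/some/endpoint',
                               params={'apikey': '0123456789'}) as r:
            print(await r.read())

if __name__ == '__main__':
    asyncio.run(main())
The same connector can then be shared by the batch of parallel requests against each upstream.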
I need to connect from Node.js to IBM MQ on Cloud, which is SSL enabled with mutual authentication. Because of some restrictions on the MQ side, I am unable to connect to this IBM MQ using the native keydb approach; my client confirmed that I won't be able to connect using a keydb. When I try to connect, I get this error:
MQ call failed in CONNX: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]
I saw the official documentation of the ibmmq Node library, which mentions that MQI-based clients like Node, Python etc. need to use a keydb. I am able to connect to this IBM MQ from Java (using a keystore).
I would like to know if there is a way to connect to IBM MQ from Node.js using the keystore that I used to connect from Java.
You can't directly use the jks file with the ibmmq node library.
You can convert the jks to a kdb using these commands:
runmqckm -keydb -convert -db key.jks -new_format kdb
runmqckm -keydb -stashpw -db key.kdb
The first command will create two files:
key.kdb
key.rdb
The second command will create the stash file:
key.sth
Both commands will prompt you for the jks password.
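Once the kdb and stash file exist, a rough sketch of pointing the ibmmq Node library at them looks like the following (the queue manager, channel, host and cipher spec are placeholders for your MQ on Cloud values, and your setup may additionally require MQCSP credentials):
const mq = require('ibmmq');
const MQC = mq.MQC;

const qMgr = 'QM1';                                   // placeholder queue manager name
const cd = new mq.MQCD();
cd.ConnectionName = 'mq.example.com(443)';            // placeholder host(port)
cd.ChannelName = 'CLOUD.APP.SVRCONN';                 // placeholder channel
cd.SSLCipherSpec = 'TLS_RSA_WITH_AES_128_CBC_SHA256'; // must match the channel definition

const sco = new mq.MQSCO();
// Path to the converted key database, without the .kdb extension;
// key.sth must sit next to it so the stashed password is picked up.
sco.KeyRepository = '/home/app/ssl/key';

const cno = new mq.MQCNO();
cno.Options = MQC.MQCNO_CLIENT_BINDING;
cno.ClientConn = cd;
cno.SSLConfig = sco;

mq.Connx(qMgr, cno, (err, hConn) => {
  if (err) {
    console.error('CONNX failed:', err.message);
  } else {
    console.log('Connected over TLS');
    mq.Disc(hConn, () => {});
  }
});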
From a Node.js application I am trying to connect to an Apache Kafka broker using the node-rdkafka client. Since the Kafka brokers are SSL enabled, I am configuring the node-rdkafka producer with the SSL options shown below.
I have already tried different valid certificates and keys, and also tried adding the CA using the option ssl.ca.certificate:<CA-location>, but still no luck.
I searched the librdkafka GitHub page and found a similar issue where the proposed solution was to use api.version.request: false; I tried this as well, but no luck. I am still getting the same error:
Error: broker transport failure
I tried another Kafka client named no-kafka with the same SSL certificate and keys to connect to the same broker list, and it was able to establish the connection.
We have to use node-rdkafka only.
The producer configuration using node-rdkafka:
var producer = new Kafka.Producer({
'debug':'All',
'metadata.broker.list': 'comma separated list of ssl enabled broker hosts and port',
'dr_cb': true,
'security.protocol': 'ssl',
'ssl.certificate.location': path.join(__dirname, 'server.crt'),
'ssl.key.location': path.join(__dirname, 'server.key'),
'ssl.ca.location' : path.join(__dirname,'DigiCertSHA2SecureServerCA-int.cer'),
});
I expect a success message saying the connection is set up, but the actual result is Error: broker transport failure.
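As a debugging aid rather than a fix, node-rdkafka forwards librdkafka's internal logs and errors through events, which usually pinpoints whether the TLS handshake or the metadata request is failing. A sketch, with the broker and file names as placeholders:
const Kafka = require('node-rdkafka');
const path = require('path');

const producer = new Kafka.Producer({
  'debug': 'broker,security',   // limit debug output to connection/TLS details
  'metadata.broker.list': 'broker1.example.com:9093',
  'security.protocol': 'ssl',
  'ssl.certificate.location': path.join(__dirname, 'server.crt'),
  'ssl.key.location': path.join(__dirname, 'server.key'),
  'ssl.ca.location': path.join(__dirname, 'DigiCertSHA2SecureServerCA-int.cer'),
});

producer.on('event.log', (log) => console.log(`${log.fac}: ${log.message}`));
producer.on('event.error', (err) => console.error('librdkafka error:', err.message));
producer.on('ready', () => console.log('Connected: producer ready'));

producer.connect();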
Only users who have an authorized TLS certificate should be able to connect to the OpenSIPS server from the application (Android and iOS).
What do we need to change in the config file so that only TLS connections to the OpenSIPS server are allowed?
You can configure the TLS certificate information in the opensips.cfg file:
tls_certificate="/usr/local/etc/opensips/tls/glob/glob-cert.pem"
tls_private_key="/usr/local/etc/opensips/tls/glob/glob-privkey.pem"
tls_ca_list="/usr/local/etc/opensips/tls/glob/glob-calist.pem"
## turn on the strictest and strongest authentication possible
tls_verify_client = 1
tls_require_client_certificate = 1
tls_method = TLSv1
tls_verify_client = 1 ensures that a client's certificate is verified against the CAs configured in the tls_ca_list file (and tls_require_client_certificate = 1 makes presenting a certificate mandatory).
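If the server must accept TLS connections only, the listening sockets matter as well; a sketch (the IP and port are placeholders, and the exact directive names depend on your OpenSIPS version) is to expose only a tls: listener and remove any udp:/tcp: ones:
listen = tls:203.0.113.5:5061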
Can you try uncommenting the startTLS line in the config file and setting its value to true?
It should work!
Also make sure that your Android and iOS clients are configured to accept TLS connections (though most of the time that is the default behaviour).
I am trying to access a remote ArangoDB install (on a Windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the Arango server dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html)
It has been a long time since I came back to this. Thanks to the previous comments, I was able to sort this out.
The file to edit is arangod.conf. On a windows machine located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to be the IP address of my server (bottom line)
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
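For reference, the change can be verified from another machine by pointing arangosh at the new endpoint, for example (a sketch; the user and database are the defaults, and the password is prompted for if authentication is enabled):
arangosh --server.endpoint tcp://192.168.0.14:8529 --server.username root --server.database _system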
Please have a look at the managing endpoints documentation. It explains how to bind the endpoint and how to check whether it worked.