Connecting Mosquitto to the new Azure MQTT backend

Microsoft Azure recently added an MQTT backend to its services.
This service uses TLS to encrypt its traffic.
I can't get Mosquitto to connect to the Microsoft Azure cloud.
I downloaded the server certificate with
echo -n | openssl s_client -connect mytarget.azure-devices.net:8883 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/test.pem
And then tried to connect with mosquitto_sub
mosquitto_sub -h mytarget.azure-devices.net -p 8883 -d -t devices/Device1/messages/events -i Device1 -u "mytarget.azure-devices.net/Device1" -P "SharedAccessSignature sr=snip&sig=snip&skn=snip" --cafile /tmp/test.pem --insecure
However, the connection is never established.
Mosquitto outputs:
Client Device1 sending CONNECT
Error: A TLS error occurred.
I have previously connected Mosquitto to the Amazon cloud over TLS (although I was given a certificate and private key for that).
So I tried adding the client certificate and key I got from AWS, hoping the error was that Mosquitto needs those files too.
mosquitto_sub -h mytarget.azure-devices.net -p 8883 -d -t devices/Device1/messages/events -i Device1 -u "mytarget.azure-devices.net/Device1" -P "SharedAccessSignature sr=snip&sig=snip&skn=snip" --cafile /tmp/test.pem --cert certificate.pem.crt --key private.pem.key --insecure
However, this didn't help and didn't change the error message.
I then looked into the Mosquitto code on GitHub and found that the error is probably raised on this line by SSL_connect, which seems to be an OpenSSL function.
Has anybody made mosquitto connect to the Microsoft Azure cloud or has any pointers where to look next?
edit:
I seem to be able to publish by tunneling the TLS connection through socat:
socat openssl-connect:mytarget.azure-devices.net:8883,verify=0 tcp-l:8884,reuseaddr,fork
Pointing Mosquitto at -h localhost instead of Azure then gets me:
Client Device1 sending CONNECT
Client Device1 received CONNACK
Client Device1 sending PUBLISH (d0, q0, r0, m1, 'devices/Device1/messages/events', ... (4 bytes))
Client Device1 sending DISCONNECT
It might be that something from the Azure host is throwing off Mosquitto.
Subscribing like this with mosquitto also works.
The problem with this approach is that the SSL connection seems to be torn down after the first few packets, and socat subsequently complains with
E SSL_write(): Broken pipe

For anyone else searching for this:
we finally managed to get it working with mosquitto_sub/pub:
mosquitto_sub -h mytarget.azure-devices.net -p 8883 -t "devices/Device1/messages/devicebound/#" -i Device1 -u "mytarget.azure-devices.net/Device1" -P "SharedAccessSignature sr=mytarget.azure-devices.net&sig=snip&skn=snip" --capath /etc/ssl/certs/ --tls-version tlsv1 -d -V mqttv311 -q 1
and for publishing:
mosquitto_pub -h mytarget.azure-devices.net -p 8883 -t "devices/Device2/messages/events/" -i Device2 -u "mytarget.azure-devices.net/Device2" -P "SharedAccessSignature sr=mytarget.azure-devices.net&sig=snip&se=snip&skn=snip" --capath /etc/ssl/certs/ --tls-version tlsv1 -d -V mqttv311 -q 1 -m "{\"key\": \"value\"}"
Important: you have to send JSON data; everything else gets rejected (at least on our setup)!
Note: be advised that you (seemingly) can't send directly from one device to another, as that runs counter to the cloud model.
You'll have to configure a connection in the cloud.
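Since the payload has to be valid JSON, it can help to build and shell-quote it programmatically rather than hand-escaping quotes in the -m argument. A small sketch (the key/value pair is just the example payload from above):

```python
import json
import shlex

# Build the JSON payload programmatically so quoting is always correct.
payload = json.dumps({"key": "value"})

# shlex.quote makes the string safe to paste into a shell command line,
# e.g. as the value of mosquitto_pub's -m argument.
print("-m " + shlex.quote(payload))
```

This avoids the backslash-escaped quotes seen in the command above and guarantees the broker receives well-formed JSON.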

Related

Azure IoT hub and receiving anything

I'm trying to use Azure IoT Hub for publishing and subscribing to messages. At the moment I'm trying to publish a simple message with the following command:
mosquitto_pub \
-h xxxdev.azure-devices.net \
-u "xxxdev.azure-devices.net/xxxdev/?api-version=2018-06-30" \
-P "SharedAccessSignature sr=xxx.azure-
devices.net%2Fdevices%2Fxxxdev&sig=YYYYY&se=1570866689&skn=ZZZZZZZ" \
-t "devices/ublox1/messages/events/" \
--cafile ca.pem \
-p 8883 \
-i xxxdev \
-V mqttv311 \
-d \
-m 'message'
and subscribe with this one:
mosquitto_sub \
-h xxxdev.azure-devices.net \
-u "xxxdev.azure-devices.net/ublox1" \
-P "SharedAccessSignature sr=xxxdev.azure-
devices.net%2Fdevices%2Fublox1&sig=YYYYY&se=1607025033"
-t "devices/ublox1/messages/events/" \
-i xxxdev \
-V mqttv311 \
-p 8883 \
--cafile ca.pem \
-v -d
but I cannot receive any of the published messages.
Here is the output on the subscribe side:
Client xxxdev sending CONNECT
Client xxxdev received CONNACK (0)
Client xxxdev sending SUBSCRIBE (Mid: 1, Topic: topic/, QoS: 0, Options: 0x00)
Client xxxdev received SUBACK
Subscribed (mid: 1): 0
and that is all; no PUBLISH messages arrive on the subscribe side.
My question is: what could be the reason that I cannot receive anything on the subscriber side?
For testing purposes I ran "Monitoring built-in event endpoint" in Visual Studio Code, and it correctly shows my published messages. So what is going on? Why is Visual Studio Code able to show my messages when mosquitto cannot?
Azure IoT Hub is not a full-blown MQTT server/broker. In order to subscribe to telemetry messages coming from the devices, you need to use the built-in Event Hub-compatible endpoint. You can, however, use MQTT to subscribe to "cloud-to-device" messages, calls to direct methods, or device twin updates.

Azure IoT hub and sending messages with mosquitto_pub

I'm trying to send a simple message with mosquitto_pub to Azure IoT Hub but have run into problems with authorization. I'm using the following script:
mosquitto_pub \
-h xxxdev.azure-devices.net \
-u "xxxdev.azure-devices.net/xxxdev/?api-version=2018-06-30" \
-P "SharedAccessSignature sr=xxx.azure-
devices.net%2Fdevices%2Fxxxdev&sig=YYYYY&se=1570866689&skn=ZZZZZZZ" \
-t "devices/xxxdev/messages/events/" \
--cafile ca.pem \
-p 8883 \
-i xxxdev \
-V mqttv311 \
-d \
-m 'message'
and after running this script I get the following messages:
Client xxxdev sending CONNECT
Client xxxdev received CONNACK (5)
Connection error: Connection Refused: not authorised.
Client xxxdev sending DISCONNECT
My questions are: What exactly do those messages mean? Is it because some parameter, like the password (given with the -P param), is wrong?
I've generated the SAS token with this bash script: https://learn.microsoft.com/en-us/rest/api/eventhub/generate-sas-token
Assuming that the bash script generates the password properly, what else could be the problem here? How do I fix it?
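The linked script targets Event Hubs, but the token shape is the same for IoT Hub, so it's worth sanity-checking what the password should look like. A minimal Python sketch of the standard SAS construction (HMAC-SHA256 over the URL-encoded resource URI plus a newline plus the expiry, signed with the base64-decoded key); the hub name, device id, and key below are made-up placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, b64_key, policy_name=None, expiry=None):
    """Sign the URL-encoded URI plus a newline plus the expiry timestamp
    with the base64-decoded key, then assemble the token string."""
    if expiry is None:
        expiry = int(time.time()) + 3600  # valid for one hour
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    sig = base64.b64encode(
        hmac.new(base64.b64decode(b64_key), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    token = (f"SharedAccessSignature sr={encoded_uri}"
             f"&sig={urllib.parse.quote_plus(sig)}&se={expiry}")
    if policy_name:  # omit skn= when using a device-scoped key
        token += f"&skn={policy_name}"
    return token

# Placeholder hub, device, and key -- substitute your own values.
print(generate_sas_token("myhub.azure-devices.net/devices/Device1",
                         "dGVzdGtleQ==", expiry=1700000000))
```

Two things this makes easy to check: the sr= value must be URL-encoded (the %2F escapes in the commands above), and se= must be a future Unix timestamp; an expired se= is a common cause of CONNACK code 5.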

OpenLDAP Local configuration for Application Authentication

I have installed OpenLDAP on a CentOS 7 server that is already running FreeIPA for user authentication. http://www.tecmint.com/setup-ldap-server-and-configure-client-authentication
The purpose of OpenLDAP is for a Node.js application to manage users for the app; the application will run on a separate server.
I can see that slapd is running (ps -ef | grep slapd):
ldap 1287 1 0 06:40 ? 00:00:00 /usr/sbin/slapd -u ldap -h ldapi:/// ldap:///
So I was trying to change the defaults using the ldapadd command, and I suspect I was connecting to the FreeIPA LDAP that is configured on the box (some commands using -x -h ask for a password, which hasn't been set yet):
sudo ldapadd -H ldapi:/// -f ldaprootpasswd.ldif
SASL/GSS-SPNEGO authentication started
ldap_sasl_interactive_bind_s: Local error (-2)
additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (SPNEGO cannot find mechanisms to negotiate)
If I run an ldapsearch, I seem to be able to connect to OpenLDAP:
sudo ldapsearch -H ldapi:/// -Y EXTERNAL -b "cn=config" "(olcRootDN=*)" olcSuffix olcRootDN olcRootPW -LLL -Q
dn: olcDatabase={2}hdb,cn=config
olcSuffix: dc=my-domain,dc=com
olcRootDN: cn=Manager,dc=my-domain,dc=co
I thought maybe that I could connect externally using a Windows LDAP tool but I get a connection error. I did confirm that the port is open and visible externally.
nmap -p 389 10.18.16.243
Starting Nmap 7.12 ( https://nmap.org ) at 2016-09-28 11:25 GMT Daylight Time
Nmap scan report for 10.18.16.243
Host is up (0.00s latency).
PORT STATE SERVICE
389/tcp filtered ldap
MAC Address: BB:BB:BB:BB:BB:00 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 19.92 seconds
I tried using -h instead of -H:
sudo ldapadd -a -x -h localhost -p 389 -D cn=Manager,dc=my-domain,dc=com -W -f ldaprootpasswd.ldif
This prompts me for a password but I have only just installed openLDAP and not set a password yet (olcRootPW is in the ldif file I am trying to apply).
Does anyone have experience with OpenLDAP for user authentication, or any ideas what config needs changing to get this up and running?
The secret incantation was:
sudo ldapmodify -a -Q -Y EXTERNAL -H ldapi:/// -f ldaprootpasswd.ldif
Since "-a" forces add new entries when using ldapmodify this would be the same as above:
sudo ldapadd -Q -Y EXTERNAL -H ldapi:/// -f ldaprootpasswd.ldif
"-Q" -- Enable SASL Quiet mode. Never prompt.
"-Y" -- Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows.

Kubernetes: VPN server and DNS issues

I spun up a docker-openvpn container in my (local) Kubernetes cluster to access my Services securely and debug dependent services locally.
I can connect to the cluster via the OpenVPN server. However, I can't resolve my Services via DNS.
I managed to get to the point where after setting routes on the VPN server:
I can ping a Pod by IP (subnet 10.2.0.0/16)
I can ping a Service by IP (subnet 10.3.0.0/16 like the DNS which is at 10.3.0.10)
I can curl a Service by IP and get the data I need.
but when I nslookup kubernetes or any Service, I get:
nslookup kubernetes
;; Got recursion not available from 10.3.0.10, trying next server
;; Got SERVFAIL reply from 10.3.0.10, trying next server
I am still missing something for the data to come back from the DNS server, but I can't figure out what I need to do.
How do I debug this SERVFAIL issue in Kubernetes DNS?
EDIT:
Things I have noticed and am looking to understand:
nslookup resolves Service names in any Pod except the openvpn Pod
while nslookup works in those other Pods, ping does not.
similarly, traceroute in those other Pods reaches the flannel layer at 10.0.2.2 and then stops.
from this I guess ICMP is blocked at the flannel layer, which doesn't help me figure out where DNS is blocked.
EDIT2:
I finally figured out how to get nslookup to work: I had to push the DNS search domains to the client with
push "dhcp-option DOMAIN-SEARCH cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
added via the -p option of the docker-openvpn image,
so I end up with:
docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \
-u udp://192.168.10.152:1194 \
-n 10.3.0.10 \
-n 192.168.10.1 \
-n 8.8.8.8 \
-n 75.75.75.75 \
-n 75.75.75.76 \
-s 10.8.0.0/24 \
-d \
-p "route 10.2.0.0 255.255.0.0" \
-p "route 10.3.0.0 255.255.0.0" \
-p "dhcp-option DOMAIN cluster.local" \
-p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \
-p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
Now, nslookup works but curl still does not
Finally, my config looks like this:
docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \
-u udp://192.168.10.152:1194 \
-n 10.3.0.10 \
-n 192.168.10.1 \
-n 8.8.8.8 \
-n 75.75.75.75 \
-n 75.75.75.76 \
-s 10.8.0.0/24 \
-N \
-p "route 10.2.0.0 255.255.0.0" \
-p "route 10.3.0.0 255.255.0.0" \
-p "dhcp-option DOMAIN-SEARCH cluster.local" \
-p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \
-p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
-u for the VPN server address and port
-n for all the DNS servers to use
-s to define the VPN subnet (as it defaults to 10.2.0.0 which is used by Kubernetes already)
-d to disable NAT
-p to push options to the client
-N to enable NAT: it seems critical for this setup on Kubernetes
The last part, pushing the search domains to the client, was the key to getting nslookup etc. to work.
Note that curl didn't work at first, but it seems to start working after a few seconds. So it does work; it just takes a moment before curl can resolve.
Try curl -4. Maybe it's resolving to the AAAA record even when an A record is present.

Trouble Creating SSL Certificate

I'm trying to create a self-signed certificate for a test web server running Sun Webserver 6.1 using certutil. I am open to using keytool or openssl if someone has better instructions which work with Sun Webserver.
Here are the commands that I use:
certutil -S -P "https-myWebapp-" -d . -n myCA -s "CN=myWebserver.com CA,OU=myCompany,C=US" -x -t "CT,CT,CT" -m 102 -v 301 -5
and I select option 5 - SSL CA and "yes" to the critical extension question. The CA is created successfully. Now that I have created the certificate authority, I try to sign the actual cert with the following command:
certutil -S -P "https-myWebapp-" -d . -n myServer -s "CN=myWebserver.com,C=US" -c myCA -t "u,u,u" -m 102 -v 300 -5
At the certutil prompt, I select option 1 to create an SSL server with critical extensions enabled. This produces the following error:
certutil: could not obtain certificate from file: You are attempting to import a cert with the same issuer/serial as an existing cert, but that is not the same cert.
What did I do wrong? I think that I may have a failed SSL certificate, but I get the following when running certutil -L -d . -P "https-myWebapp-"
Certificate Nickname                 Trust Attributes
                                     SSL,S/MIME,JAR/XPI

myCA                                 CTu,Cu,Cu
In the second command, I needed to change the -m option to a new serial number (both commands used -m 102, so the server certificate ended up with the same issuer/serial as the CA certificate).
That fixed the error message and created the certificate.
