Bluetooth HID service connects and disconnects immediately, saying "Proxy object disconnected"

E/BluetoothHIDService: ------------------------- Connected to 229601
D/BluetoothHidDevice: Proxy object disconnected
D/BluetoothHidDevice: Unbinding service...
D/BluetoothAdapter: onBluetoothServiceDown
E/BluetoothHIDService: ------------------------- HID onServiceDisconnected
D/BluetoothAdapter: onBluetoothServiceDown
D/BluetoothAdapter: onBluetoothServiceUp: android.bluetooth.IBluetooth$Stub$Proxy#c28ebad
D/BluetoothHidDevice: Binding service...
I/BluetoothAdapter: onBluetoothStateChange: up=true
D/BluetoothHidDevice: Proxy object connected
E/BluetoothHIDService: ------------------------- onServiceConnected profile == BluetoothProfile.HID_DEVICE
D/BluetoothHidDevCallback: onAppStatusChanged: pluggedDevice=F8:3B:1D:FF:92:17 registered=true
E/BluetoothHIDService: ------------------------- onAppStatusChanged registered=true
I/BluetoothAdapter: STATE_ON
E/BluetoothHIDService: ------------------------- Connected to null
I/ViewRootImpl#c4c1b24[Client_Activity]: MSG_WINDOW_FOCUS_CHANGED 0 1
D/BluetoothHidDevice: Proxy object disconnected
D/BluetoothHidDevice: Unbinding service...
E/BluetoothHIDService: ------------------------- HID onServiceDisconnected

I fixed this problem by calling the service creation from a Handler.

Related

Node TLS client connection automatically sends a 'close notify' message (but only in Docker)

I am coding a Node application which requires me to create a TLS client to connect to a remote host. I'm running into an issue where my TLS connection works properly when the application is run outside of Docker; however, when run inside Docker it sends a 'close notify' message immediately before the first message from the TLS server is received.
My TypeScript for the TLS client is as follows:
const socket = tls.connect(
  {
    host: host,
    port: port,
    cert: certificate,
    key: key
  },
  () => {
    logger.info('client connected', socket.authorized ? 'authorized' : 'unauthorized');
    // Business logic to notify the user we're connected
    process.stdin.pipe(socket);
    process.stdin.resume();
  }
);
socket.on('data', (data: Buffer) => {
  // Processing the data received from the server here
});
socket.on('error', () => {
  // Business logic to notify the user the connection had an error. This is not called when the
  // connection is closed
});
socket.on('close', hadError => {
  // Business logic to notify the user the connection has been closed. hadError is false
  // when this callback is called.
});
I added extra logging using the socket.enableTrace() function built into the socket class, and other than values like gmt_unix_time, random_bytes, and session_id, everything is identical between the bare-metal and Docker runs. Here are the logs I saw:
TLS 1: client _init handle? true
**I'm only using self-signed certs in the dev environment**
(node:1) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
(Use `node --trace-warnings ...` to show where the warning was created)
TLS 1: client initRead handle? true buffered? false
TLS 1: client _start handle? true connecting? false requestOCSP? false
Client sends handshake
Client receives handshake
Client receives ChangeCipherSpec
Client receives ApplicationData (continuing handshake)
Sent Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 281
Inner Content Type = Handshake (22)
CertificateVerify, Length=260
Signature Algorithm: rsa_pss_rsae_sha256 (0x0804)
Signature (len=256): <snip>
Sent Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 69
Inner Content Type = Handshake (22)
Finished, Length=48
verify_data (len=48): <snip>
TLS 1: client onhandshakedone
TLS 1: client _finishInit handle? true alpn false servername false
**Sees the self-signed cert, but I'm ignoring this in my dev environment**
TLS 1: client emit secureConnect. rejectUnauthorized: false, authorizationError: SELF_SIGNED_CERT_IN_CHAIN
Sent Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 521
Inner Content Type = ApplicationData (23)
** Here is where the client is sending the close notify**
Sent Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 19
Inner Content Type = Alert (21)
Level=warning(1), description=close notify(0)
Received Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 453
Inner Content Type = ApplicationData (23)
** Here is where the server is acking the close notify**
Received Record
Header:
Version = TLS 1.2 (0x303)
Content Type = ApplicationData (23)
Length = 19
Inner Content Type = Alert (21)
Level=warning(1), description=close notify(0)
My docker compose file exposes and publishes the TLS port used for this connection.
My development server runs Ubuntu 22.04 whereas the official Node Docker images run a Debian environment, so I built my own Dockerfile from scratch; however, that did not change the behavior. Another thing I noticed was that the openssl version differed between bare metal and Docker (v3 vs v1), but a custom Dockerfile using openssl v3 did not change anything either.
I can establish a generic TCP connection using a similar pattern in my code in both the bare-metal and Docker environments.
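One thing worth checking in the connect callback above: process.stdin.pipe(socket) calls socket.end() when stdin emits 'end', and ending a TLS socket is exactly what sends a close notify alert. A container started without an attached stdin (no docker run -i, no stdin_open: true in compose) typically sees stdin end almost immediately. A minimal sketch to rule that out, assuming the pipe is the trigger, keeps the socket open when stdin ends:
// Hypothetical variation of the connect callback above: pipe stdin
// without letting its end-of-stream close the TLS socket.
() => {
  logger.info('client connected', socket.authorized ? 'authorized' : 'unauthorized');
  // end: false stops pipe() from calling socket.end() when stdin closes;
  // otherwise the socket is ended and a close notify is sent to the server.
  process.stdin.pipe(socket, { end: false });
  process.stdin.resume();
}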

Connect real device to Azure IoT Central using MQTT

I'm fiddling around with Azure IoT Central and I configured a device. Now I want to send data using MQTT from a real device (no code).
I can't seem to find information on whether this is possible for IoT Central.
For IoT Hub I found: Azure Iot Hub MQTT
I want to use IoT Central because of the built-in dashboards. Those do not seem to exist for IoT Hub.
If I can't send data directly to IoT Central, is there a way to use the IoT Hub devices in IoT Central?
Azure IoT Central uses an IoT Hub in the background, so you can still connect to the public device endpoints using the MQTT protocol on port 8883.
To get the address of that hub, you can run the script below on any machine, using the device information from the Azure IoT Central app (see the docs):
// npm install azure-iot-device azure-iot-device-mqtt azure-iot-provisioning-device-mqtt azure-iot-security-symmetric-key --save
"use strict";
// Use the Azure IoT device SDK for devices that connect to Azure IoT Central.
var iotHubTransport = require('azure-iot-device-mqtt').Mqtt;
var Client = require('azure-iot-device').Client;
var Message = require('azure-iot-device').Message;
var ProvisioningTransport = require('azure-iot-provisioning-device-mqtt').Mqtt;
var SymmetricKeySecurityClient = require('azure-iot-security-symmetric-key').SymmetricKeySecurityClient;
var ProvisioningDeviceClient = require('azure-iot-provisioning-device').ProvisioningDeviceClient;
var provisioningHost = 'global.azure-devices-provisioning.net';
var idScope = '{your Scope ID}';
var registrationId = '{your Device ID}';
var symmetricKey = '{your Primary Key}';
var provisioningSecurityClient = new SymmetricKeySecurityClient(registrationId, symmetricKey);
var provisioningClient = ProvisioningDeviceClient.create(provisioningHost, idScope, new ProvisioningTransport(), provisioningSecurityClient);
provisioningClient.register((err, result) => {
  if (err) {
    console.log('Error registering device: ' + err);
  } else {
    console.log('Registration succeeded');
    console.log('Assigned hub=' + result.assignedHub);
    console.log('DeviceId=' + result.deviceId);
    var connectionString = 'HostName=' + result.assignedHub + ';DeviceId=' + result.deviceId + ';SharedAccessKey=' + symmetricKey;
    console.log(connectionString);
  }
});
Output:
Registration succeeded
Assigned hub=iotc-xxx.azure-devices.net
DeviceId=xxx
HostName=xxx.azure-devices.net;DeviceId=xxx;SharedAccessKey=xxx=
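The Client, Message, and iotHubTransport requires at the top of the script are not used by the registration step itself; a minimal sketch of what could follow (sending one telemetry message over MQTT, with a made-up temperature payload) looks like this:
// Inside the register callback, where connectionString is in scope.
var client = Client.fromConnectionString(connectionString, iotHubTransport);
client.open(function (err) {
  if (err) {
    console.log('Could not connect: ' + err);
    return;
  }
  // Send a single telemetry message; the payload shape is only an example.
  var message = new Message(JSON.stringify({ temperature: 21.5 }));
  client.sendEvent(message, function (sendErr) {
    if (sendErr) console.log('Send error: ' + sendErr);
    else console.log('Message sent');
    client.close();
  });
});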
In addition, as stated by Matthijs van der Veer, do note that IoT Central uses the Device Provisioning Service to enable your device to connect to an IoT hub. It assigns an IoT hub to the device at registration time, but if the device gets reassigned to a different hub, the device will lose its connection.

Node.js MongoDB returns EHOSTUNREACH when requested from within a Docker container hosted on Kubernetes

I've been struggling with this issue for a while now.
This is probably my second or third question on Stack Overflow, so if I forgot to mention something important, please let me know.
First things first, here is some information about the setup:
All applications are hosted inside a Kubernetes cluster
Nginx acts as a load balancer (not inside the Kubernetes cluster)
The services communicate with each other over DNS (e.g. <service-name>.default.svc.cluster.local)
All services are reachable via DNS name and port (there is a Kubernetes Service resource for each service)
The problem goes like this:
Whenever my web application tries to access the service from the internet, the service responds with an HTTP 500 internal server error.
When I look into the logs, the error says this:
{ MongoNetworkError: failed to connect to server [172.16.62.2:8635] on first connect [MongoNetworkError: connect EHOSTUNREACH 172.16.62.2:8635]
at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:433:11)
at Pool.emit (events.js:189:13)
at createConnection (/node_modules/mongodb/lib/core/connection/pool.js:571:14)
at connect (/node_modules/mongodb/lib/core/connection/pool.js:1008:9)
at makeConnection (/node_modules/mongodb/lib/core/connection/connect.js:40:11)
at callback (/node_modules/mongodb/lib/core/connection/connect.js:262:5)
at Socket.err (/node_modules/mongodb/lib/core/connection/connect.js:287:7)
at Object.onceWrapper (events.js:277:13)
at Socket.emit (events.js:189:13)
at emitErrorNT (internal/streams/destroy.js:82:8)
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {} }
My configuration file looks like this:
const MongoClient = require('mongodb').MongoClient;
// Connection URL
const url = 'mongodb://mongo-service.default.svc.cluster.local:27017';
let mongoClient;
/**
 * creates mongo client
 * @param {string} dbName name of the database you want to connect to
 * @return {Promise<unknown>} mongo client
 */
const _getMongoDB = (dbName = 'oli') => {
  return new Promise((resolve, reject) => {
    if (mongoClient) {
      resolve(mongoClient);
    } else {
      MongoClient.connect(url, {useNewUrlParser: true}).then(client => {
        console.log('Connected successfully to server');
        mongoClient = client.db(dbName);
        resolve(mongoClient);
      }).catch(err => {
        console.log(err);
        reject(err);
      });
    }
  });
};
const mongoDB = {
  getClient: _getMongoDB,
};
module.exports = mongoDB;
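As a side note, the promise resolves client.db(dbName), so despite the name getClient the caller actually receives a Db handle, not a MongoClient. A minimal sketch of a hypothetical caller (the file path and collection name are placeholders):
const mongoDB = require('./mongo-config');

mongoDB.getClient('oli')
  .then(db => db.collection('items').find({}).toArray())
  .then(items => console.log('found ' + items.length + ' items'))
  .catch(err => console.error('mongo error', err));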
The weird part is that when I use the mongo shell from inside the container, everything works fine; I just need to remove the preceding "mongodb://". I've already tried that in the service, without success.
I'd really appreciate any help or a hint in the right direction. Maybe someone has encountered a similar problem.
EDIT:
I've discovered that the logs of the mongo-service are full of errors, except when I connect to it from the shell:
2019-12-03T08:27:24.177+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:24.205+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:24.600+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:25.587+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:26.036+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:26.441+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:27:26.962+0000 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connected
2019-12-03T08:31:40.862+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54830 #512206 (1 connection now open)
2019-12-03T08:31:40.863+0000 I NETWORK [conn512206] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 172.16.5.1:54830 (connection id: 512206)
2019-12-03T08:31:40.863+0000 I NETWORK [conn512206] end connection 172.16.5.1:54830 (0 connections now open)
2019-12-03T08:31:51.356+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54832 #512207 (1 connection now open)
2019-12-03T08:32:25.055+0000 I NETWORK [conn512207] end connection 172.16.5.1:54832 (0 connections now open)
2019-12-03T08:32:31.034+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54834 #512208 (1 connection now open)
2019-12-03T08:33:33.251+0000 I NETWORK [conn512208] Error receiving request from client: SSLHandshakeFailed: SSL handshake received but server is started without SSL support. Ending connection from 172.16.5.1:54834 (connection id: 512208)
2019-12-03T08:33:33.251+0000 I NETWORK [conn512208] end connection 172.16.5.1:54834 (0 connections now open)
2019-12-03T08:35:18.704+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54844 #512209 (1 connection now open)
2019-12-03T08:35:18.826+0000 E - [conn512209] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120
2019-12-03T08:35:18.859+0000 I NETWORK [conn512209] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010
2019-12-03T08:35:18.859+0000 I NETWORK [conn512209] end connection 172.16.5.1:54844 (0 connections now open)
2019-12-03T08:35:33.191+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54846 #512210 (1 connection now open)
2019-12-03T08:35:33.192+0000 E - [conn512210] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120
2019-12-03T08:35:33.192+0000 I NETWORK [conn512210] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010
2019-12-03T08:35:33.192+0000 I NETWORK [conn512210] end connection 172.16.5.1:54846 (0 connections now open)
2019-12-03T08:35:39.370+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54848 #512211 (1 connection now open)
2019-12-03T08:35:39.371+0000 E - [conn512211] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120
2019-12-03T08:35:39.371+0000 I NETWORK [conn512211] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010
2019-12-03T08:35:39.371+0000 I NETWORK [conn512211] end connection 172.16.5.1:54848 (0 connections now open)
2019-12-03T08:38:01.610+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54850 #512212 (1 connection now open)
2019-12-03T08:38:01.611+0000 E - [conn512212] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120
2019-12-03T08:38:01.612+0000 I NETWORK [conn512212] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010
2019-12-03T08:38:01.612+0000 I NETWORK [conn512212] end connection 172.16.5.1:54850 (0 connections now open)
2019-12-03T08:38:15.269+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54852 #512213 (1 connection now open)
2019-12-03T08:38:15.270+0000 E - [conn512213] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120
2019-12-03T08:38:15.270+0000 I NETWORK [conn512213] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010
2019-12-03T08:38:15.270+0000 I NETWORK [conn512213] end connection 172.16.5.1:54852 (0 connections now open)
2019-12-03T08:41:17.804+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54856 #512214 (1 connection now open)
2019-12-03T08:41:17.804+0000 I NETWORK [conn512214] received client metadata from 172.16.5.1:54856 conn512214: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.1" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 9 (stretch)"", architecture: "x86_64", version: "Kernel 4.9.0-8-amd64" } }
2019-12-03T08:41:21.328+0000 I NETWORK [conn512214] end connection 172.16.5.1:54856 (0 connections now open)
2019-12-03T08:42:02.199+0000 I NETWORK [listener] connection accepted from 172.16.5.1:54858 #512215 (1 connection now open)
2019-12-03T08:42:02.199+0000 I NETWORK [conn512215] received client metadata from 172.16.5.1:54858 conn512215: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.1" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 9 (stretch)"", architecture: "x86_64", version: "Kernel 4.9.0-8-amd64" } }
2019-12-03T08:42:15.324+0000 I NETWORK [conn512215] end connection 172.16.5.1:54858 (0 connections now open)
Apparently this can happen when the pod is behind a load balancer. After changing that, the error log stays clean. However, this doesn't solve the original problem.
It turns out that the problem wasn't with connectivity at all.
While debugging, I created a connectivity test script and executed it via node -e, and the connection was successful. So it had to be something different...
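The test script itself is not shown in the post; a reconstruction of such a node -e check (hypothetical, reusing the URL from the config file above) might look like:
node -e "
const MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://mongo-service.default.svc.cluster.local:27017';
MongoClient.connect(url, { useNewUrlParser: true })
  .then(() => { console.log('connected'); process.exit(0); })
  .catch(err => { console.error(err); process.exit(1); });
"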
Since I'm not developing the microservice (I'm just responsible for the infrastructure part), I didn't know that there was more than one connection string in different files. (My first search through the files didn't surface that.)
The older connection string was pointing to a long-gone MongoDB. Everything works fine now.
Thanks @Oles Rid for helping me narrow down the problem.

TCP/IP communication from an Azure Function?

I have an Azure queue trigger function that has this code:
using (var client = new TcpClient(AddressFamily.InterNetworkV6))
{
    client.Client.DualMode = true;
    client.Connect(endpoint);
    var data = Encoding.ASCII.GetBytes("test");
    using (var outStream = client.GetStream())
    {
        outStream.Write(data, 0, data.Length);
    }
}
The error I am getting back:
A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection
failed because connected host has failed to respond
The endpoint address looks correct and this code works when I debug locally, so I suspect that the Azure server might not be allowing the outbound connection.
Any ideas why this connection is not working?
Update: This is still not working, and I have tried generating the client in the following ways:
// DualMode IPV6
var client = new TcpClient(AddressFamily.InterNetworkV6);
client.Client.DualMode = true;
client.Connect(endpoint);
// SingleMode Internetwork
var client = new TcpClient(AddressFamily.InterNetwork);
client.Connect(endpoint);
// Just Endpoint
var client = new TcpClient(endpoint);
client.Connect(endpoint);
// Normal
var client = new TcpClient(hostAddress, port);
// Forced IPV6
var client = new TcpClient("::ffff:" + hostAddress, port);
Debugging locally, all of these methods except for "forced IPV6" work just fine. On the server, I get these errors:
== DualMode IPV6
Failed PingBack: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because connected host
has failed to respond [::ffff:204.16.184.62]:3164
== SingleMode Internetwork
Failed PingBack: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because connected host
has failed to respond 204.16.184.62:3164
== Just Endpoint
Failed PingBack: The requested address is not valid in its context
== Normal
Failed PingBack: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because connected host
has failed to respond 204.16.184.62:3164
== Forced IPV6
Failed PingBack: The requested address is not valid in its context [::ffff:204.16.184.62]:3164
Looking at your TcpClient instance,
var client = new TcpClient(AddressFamily.InterNetworkV6)
there's no IPv6 in Azure Functions yet. Switch your AddressFamily to v4:
var client = new TcpClient(AddressFamily.InterNetwork)
There are no restrictions on outbound destinations in App Service/Functions.

Bluelist app throws "Enroll failed to create remote cloudant database for Optional("test1")" with a timeout

I have a Bluelist app based on this Bluemix sample app. I added some more types of data to the DB and also changed the Node.js app so that there is only one "todosdb" created for different users.
In the last few days it has thrown the following error a few times. When I deleted the remote DB, it ran again, but then it threw the same error once more; deleting the DB did not fix it this time, and I'm still getting the same error. The unmodified sample app also throws the same error. Can someone tell me how to debug it?
2015-11-13 20:06:33.303 bluelist-swift[57121:1075334] [DEBUG] [IMF] -[IMFAuthorizationRequest requestFinished:] in IMFAuthorizationRequest.m:341 :: Response Header: {
Connection = "Keep-Alive";
"Content-Type" = "application/json;charset=UTF-8";
Date = "Fri, 13 Nov 2015 20:06:33 GMT";
"Transfer-Encoding" = Identity;
"X-Backside-Transport" = "OK OK";
"X-Cf-Requestid" = "b5bf38ed-a48b-4e69-4852-bb1f4c81011a";
"X-Client-IP" = "80.111.218.187";
"X-Global-Transaction-ID" = 1902424573;
"X-Powered-By" = "Servlet/3.0";
}
Response Data: {"token_type":"bearer","expires_in":3600,"id_token":"eyJhbGciOiJSUzI1NiIsImpwayI6eyJhbGciOiJSU0EiLCJleHAiOiJBUUFCIiwibW9kIjoiQxxxxxxxxxxxxx}
Status code=200
2015-11-13 20:06:33.320 bluelist-swift[57121:1075334] [INFO] [BlueList] Authenticated user with id test1
2015-11-13 20:06:33.325 bluelist-swift[57121:1075334] [DEBUG] [IMF_OAUTH] -[IMFAuthorizationManager releaseCompletionHandlerQueue:error:] in IMFAuthorizationManager.m:428 :: Completion handlers queue released.
2015-11-13 20:06:33.326 bluelist-swift[57121:1075334] [DEBUG] [IMF_OAUTH] -[IMFAuthorizationManager clearCompletionHandlerQueue] in IMFAuthorizationManager.m:437 :: Completion handler queue cleared
2015-11-13 20:07:33.734 bluelist-swift[57121:1075334] [ERROR] [BlueList] Enroll failed to create remote cloudant database for Optional("test1"). Error: Optional(Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x7fdef2e18dc0 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=https://chatbms.mybluemix.net/bluelist/enroll, NSErrorFailingURLKey=https://chatbms.mybluemix.net/bluelist/enroll, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.})
Bluemix was doing some maintenance, I think. It worked later.
