Is it possible to write a Chrome extension (or Android app) that creates multiple Senders, each connecting to a different Receiver?
In other words, I need to build an interface from which an operator can control the streams on multiple different Chromecasts in the vicinity - each will be playing a different video stream.
I understand from other posts that the chrome.cast API does not allow for this - that a Chrome extension may act as a single Sender only? This restriction seems arbitrary - I read somewhere that someone was able to control two devices by running two different versions of Chrome, so if this restriction exists in the Chrome API, it's not due to any limitation of the underlying protocol, correct? (What then, politics?)
Is there a lower-level API (perhaps on Android?) that would permit you to create multiple Senders and connect them to different Receivers?
I've seen some apps (such as Videostream) which appear to continue to run on the Receiver after you've closed the Sender. Might it be possible to, for example, launch a Receiver app on multiple devices, one at a time, have them identify themselves and connect to a local webserver, e.g. via WebSockets, and then have my webserver send messages to those Receiver apps to ask them to change videostreams?
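For example (just to illustrate the idea; the address, id and message format here are made up), the Receiver page could do something like:

// Hypothetical receiver-side sketch: register with a local WebSocket server
// and switch streams when told to.
const ws = new WebSocket('ws://192.168.1.10:8080');
ws.onopen = () => ws.send(JSON.stringify({ type: 'hello', id: 'chromecast-livingroom' }));
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'play') {
    document.querySelector('video').src = msg.url; // point the <video> at the new stream
  }
};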
As a last resort, is there an open specification of the underlying protocol?
There is nothing to stop you from writing a sender app that connects to a Chromecast, launches an app and then disconnects from that device while letting the Chromecast continue running the app; you would need to make sure that you do not stop the receiver when it detects that there are no connected devices. Then, on the sender side, you can repeat the same process, but this time connect to a second device, and so on. The important thing to keep in mind is that your sender device cannot hold multiple concurrent connections to multiple devices (MediaRouter is a global instance); this means you cannot receive messages (status updates, etc.) from different Cast devices except the one you are directly connected to at that time. Also, there is nothing to stop a different user from connecting to one of these devices and launching a different app.
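As a rough, untested sketch of that flow using the Chrome sender (chrome.cast) API, where the app ID and the callbacks are placeholders:

const APP_ID = 'YOUR_RECEIVER_APP_ID'; // placeholder for your registered receiver app

const sessionRequest = new chrome.cast.SessionRequest(APP_ID);
const apiConfig = new chrome.cast.ApiConfig(
    sessionRequest,
    session => console.log('joined existing session', session.sessionId),
    availability => console.log('receiver availability:', availability));

chrome.cast.initialize(apiConfig, () => {
  // Opens the device picker; the operator chooses the first Chromecast.
  chrome.cast.requestSession(session => {
    // ...load media / exchange messages with the receiver here...

    // leave() disconnects this sender but keeps the receiver app running;
    // session.stop() would tear the receiver app down instead.
    session.leave(
        () => console.log('left session, receiver keeps running'),
        err => console.error('leave failed', err));

    // Call chrome.cast.requestSession(...) again to pick the next device
    // and repeat the same connect/launch/leave cycle.
  }, err => console.error('requestSession failed', err));
}, err => console.error('initialize failed', err));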
To answer your other question, the underlying protocol is not open.
First of all: what I am trying to do is only for private interest.
I'd like to connect an AT-09/HM-10 BLE module with firmware 6.01 to another device which also provides a BLE module, one that is not based on the CC254x chip.
I am able to communicate with this device using my laptop with integrated Bluetooth, Linux and the bluepy-helper. I am also able to make a connection using the HM-10 through a USB-RS232 module and "Hterm", but after that I am quite stuck in my progress.
By "reverse-engineering" the Android application for controlling this particular device I found a set of commands, stored as strings in hex format. The Java application itself sends out the particular command combined with a CRC16-Modbus value, together with a request (whatever that is), to a particular service and characteristic UUID.
I also have a Wireshark capture pulled from my Android phone while the application was connected to the particular device, but I am unable to find the commands extracted from the .apk in this capture.
This is where I get stuck. After making a connection and sending out the command + CRC16 value I get no response at all, so I am thinking that my approach is wrong. I am also not quite sure how the HM-10 firmware handles / maps the service and characteristic UUIDs of the destination device.
Are there perhaps any special AT commands which would fit my need?
I am absolutely not into the technical depths of Bluetooth and its communication layers at all. The only thing I know is that the HM-10 connects to a selected BLE device and after that it provides serial I/O, and data flows between the endpoints.
I have no clue how, and whether, it can handle data flow to certain service/characteristic UUIDs on the destination endpoint, although it seems to have GATT, the L2CAP services and so on built in. Surely it handles all the necessary communication by itself, but I don't know where I get access to the "front end" at all.
Best regards!
I am using react-native-webrtc to handle the WebRTC portion of this.
I am using Websockets to signal and using ICE trickling to keep track of the ICE candidates.
I queue my ICE candidates until setLocalDescription has been called on the callee side. Then I addIceCandidate for each candidate in the queue.
On the caller side I am doing the same thing and not processing my ICE candidates until setRemoteDescription has been called.
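Roughly, my callee-side flow looks like this (simplified sketch; the helper names are just illustrative):

import { RTCPeerConnection, RTCSessionDescription, RTCIceCandidate } from 'react-native-webrtc';

const pc = new RTCPeerConnection({ iceServers: [] });
const pendingCandidates = [];
let remoteReady = false;

// Called for every "candidate" message arriving over the signaling WebSocket.
async function onRemoteCandidate(candidate) {
  if (remoteReady) {
    await pc.addIceCandidate(new RTCIceCandidate(candidate));
  } else {
    pendingCandidates.push(candidate); // queue until the descriptions are set
  }
}

// Called when the offer arrives over the signaling WebSocket.
async function onOffer(offer) {
  await pc.setRemoteDescription(new RTCSessionDescription(offer));
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  // ...send `answer` back over the signaling channel here...

  remoteReady = true; // now flush the queued candidates
  for (const c of pendingCandidates) {
    await pc.addIceCandidate(new RTCIceCandidate(c));
  }
  pendingCandidates.length = 0;
}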
I am only doing audio, so no video is being used.
When I test this with two mobile devices on the same network I have no issues.
But if I disconnect one device from the WiFi the calls still connect just fine except the audio cannot be heard on either device.
The onConnectionStateChange handler will still return "connected" and the onIceGatheringStateChanged will still return "complete".
I thought maybe I needed to use a TURN server to get this working so I started using Twilio's paid TURN/STUN server but the issue is still persisting.
Any ideas what to look into?
BACKGROUND
OK, so you need some background on P2P connections on RTC platforms. And so, it begins (in a very short version):
In order to establish a connection you have to establish a direct route between the two clients (obvious, I know). In order to find this route you need help from network servers.
That's why you set up the local SDP with settings describing which servers can be reached: ICE, TURN, STUN (you can find plenty of information, for example this one). Plain host ICE candidates are the most obvious ones, because those endpoints are within your local network, and that's why your version is not working across different networks.
Right, you have to use TURN/STUN to traverse NAT and find correct routes between peers. Most TURN servers are private and paid, but for a lightly loaded application public STUN servers may be more than enough.
You can find many of them available out there. One example is below; see the configuration sketch after the list.
stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302
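For example, passing them into the peer connection configuration (the TURN entry below is just a placeholder shape; use the URL and credentials your provider gives you):

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: ['stun:stun.l.google.com:19302', 'stun:stun1.l.google.com:19302'] },
    // A TURN server entry (e.g. the one you get from Twilio) has the same shape,
    // plus credentials:
    // { urls: 'turn:your.turn.server:3478', username: '...', credential: '...' },
  ],
});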
SOLUTION
Now coming to your problem. The fact that your signaling connected the devices does not mean the devices themselves are connected. (Just to clarify: if no media flows between your devices, the RTC connection failed to establish, and it's not just an audio problem.)
The problem is in how the TURN/STUN servers are used on your devices: you have to trace the SDP that is established during setRemoteDescription and check that the servers were included. Furthermore, there is always the Google demo, which works perfectly.
UPDATE
In order to trace how the remote SDP is set and how the connection is established, you have to print the candidates that will be used for the setup. To do that, print the information about which candidates are gathered around setLocalDescription and setRemoteDescription.
In the place where you are gathering candidates, add logging to print the information. You should see that STUN and TURN candidates are there. Below is an example in JavaScript. The word ICE shouldn't bother you; it just means these are the candidates found after ICE traversal.
// Listen for local ICE candidates on the local RTCPeerConnection
peerConnection.addEventListener('icecandidate', event => {
  if (event.candidate) {
    // Here is where you send this candidate to your signaling channel.
    // Log the entire candidate; you should see "typ srflx" (STUN) or
    // "typ relay" (TURN) entries in addition to "typ host".
    console.log('local candidate:', event.candidate.candidate);
  }
});
I'm coding a project in which the cloud needs to control device operation, and I want to keep the information in sync.
The cloud needs to know the state of the device, such as when the network is interrupted and when the network is restored.
When the network is restored, the information modified on the cloud is synchronized to the device.
Does anyone have an idea of what my approach should look like? Any tips?
I intend to add resident programs running in the background at both ends to detect this, but in this project the cloud does not connect to just one device, and multiple apps may run on one device, so this becomes very tedious to do. Is there any simple component that implements this function?
I want control information and data to stay synchronized between the cloud and the device.
Based on your tag, I'm assuming that you are using MQTT as a messaging protocol for your system. If so, to address your need for tracking the device-cloud connection state, MQTT specifies a feature called "Last Will and Testament".
From the MQTT 3.1.1 Standard Section 3.1.2.5:
If the Will Flag is set to 1 this indicates that, if the Connect request is accepted, a Will Message MUST be stored on the Server and associated with the Network Connection. The Will Message MUST be published when the Network Connection is subsequently closed unless the Will Message has been deleted by the Server on receipt of a DISCONNECT Packet [MQTT-3.1.2-8].
This can be leveraged to let the remote MQTT client on the cloud side know when the device is connected and when it disconnects: publish an "online" payload to a topic (for example device/conn_status) after a successful connection, and register a Last Will "offline" message on the same topic. Now, whenever the device client goes offline, the broker will publish the "offline" payload on its behalf to the cloud client, which can then act accordingly.
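For example, on the device side with the MQTT.js client it could look like this (the broker URL, client id and topic are only examples):

const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.example.com', {
  clientId: 'device-123',
  // Last Will registered with the broker at CONNECT time; the broker publishes
  // it on the device's behalf if the connection drops without a clean DISCONNECT.
  will: { topic: 'device/conn_status', payload: 'offline', qos: 1, retain: true },
});

client.on('connect', () => {
  // Announce that the device is online; retained so late subscribers see it too.
  client.publish('device/conn_status', 'online', { qos: 1, retain: true });
});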
Looking at various GATT-based profiles, it seems that services are always exposed by the GATT server rather than the GATT client. For instance, the Time Profile (TIP) has the server exposing the Current Time Service (CTS). So, if a phone is to update a heart rate monitor with the current time using TIP, the phone will be the server whereas the monitor will be the client. But, since it is a heart rate monitor, the Heart Rate Profile expects the monitor to be a GATT server.
So, for a monitor that takes the current time from a phone, should it be a GATT client or server? Should it be set as a client whilst time syncing with the phone and set as a server otherwise? Should a custom profile be implemented such that the CTS is exposed in the client instead?
Thanks
Generic Attribute Profile (GATT) defines how server and client communicate with each other using Attribute Protocol for the purpose of transporting data. Client and server roles are determined when a procedure is initiated and released when the procedure is ended. Hence, a device can act in both roles at the same time.
I would suggest you read the Bluetooth Core Specification; Vol 3, Part G, Section 2.2 explains the roles and configurations.
Client - This is the device that initiates commands and requests towards the server and can receive responses, indications and notifications sent by the server.
Server - This is the device that accepts incoming commands and requests from the client and sends responses, indications and notifications to a client.
Back to your question:
The Time Profile enables the device to get the date, time, time zone, and DST information and control the functions related to time.
In your case, the monitor will be the GATT client when it takes the time from a phone. However, it can be a server at the same time for another procedure (operation, request etc.) with the phone.
In short, client and server roles are not fixed to the devices. When your phone exposes the current time, it will be the server. Similarly, when it gets the current time from the monitor, it will be the client. There is no need to customize the profile. If you want your phone to get the current time from one device and expose it to another device, just implement the same profile in both the client and server roles on your phone.
EDIT:
According to the TIP profile spec, to get the current time information, the GATT Read Characteristic Value sub-procedure shall be used with the handle of the Current Time Characteristic. The monitor, as a client, will read the Current Time Characteristic from the GATT table of the server (in this case the phone). As soon as the monitor retrieves the value from the phone, it can update its own Current Time Characteristic value and expose it to its environment in three ways (a sketch of the client-side read follows this list):
1. Notifying it to its subscribed clients (BLE notifications). If you do it this way, you will be customizing the Bluetooth TIP profile, since this procedure is not defined there (I had a quick look at the document and didn't see it).
2. Broadcasting it in the advertisement packet (doesn't require a BLE connection).
3. Another BLE device connects to the monitor and reads the Current Time Characteristic value. This is the recommended way if you want to use the Bluetooth SIG-defined TIP profile as a server.
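Just as an illustration of what the GATT client-side read of the Current Time Characteristic looks like (sketched here with the Web Bluetooth API; an embedded device would use its own BLE stack, but the sequence is the same):

// Connect, discover the Current Time Service (0x1805) and read the
// Current Time Characteristic (0x2A2B).
async function readCurrentTime() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['current_time'] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('current_time');
  const characteristic = await service.getCharacteristic('current_time');
  const value = await characteristic.readValue(); // DataView over the raw bytes

  // Exact Time 256 layout: year (uint16, little endian), month, day,
  // hours, minutes, seconds, day of week, fractions256, adjust reason.
  return new Date(value.getUint16(0, true), value.getUint8(2) - 1, value.getUint8(3),
                  value.getUint8(4), value.getUint8(5), value.getUint8(6));
}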
I am currently attempting to connect to multiple BLE devices using BlueZ 5.0 and Linux. I have one host BLE adapter and I have modified the gatttool to connect and perform this function.
If I run an instance of the modified gatttool, I successfully connect and receive notification data from the BLE device. If I run another instance of the modified gatttool and connect to another BLE device, this application starts receiving notification data from both BLE devices and the initial application no longer receives any data. I believe this is due to the socket setup, where both applications are configuring their sockets to the same address and PSM (the newest instance receives the data whereas the other is starved).
Is there a way to prevent this condition? Ideally, I want one application to connect to multiple devices. I assume that the application can only have one socket for the reason that multiple sockets will have the same issue as the multiple instances above. My BLE device is a TI CC2540 keyfob acting as a heartrate monitor.
I started an answer so I could have more space...
I'm using a combination of Python and C to get my code to work, so my "code" may look funny because it could be from either. Also, I used BlueZ 4, as 5 didn't support the kernel I was using. Let me know if there's an issue and I can clarify.
It seems like there are several ways of doing things, but I ended up opening separate sockets for different tasks. You can open a single socket and then set the socket options to turn filtering off, and you should get all the packets in one place. However, that was my initial way of doing it and I found that my connections would die within seconds.
To scan for connections I opened a socket(AF_BLUETOOTH, SOCK_RAW, BTPROTO_HCI) and then did a bind on device 0 (there's a function called hci_get_route to get an available device number). You can then call hci_le_set_scan_parameters to set options, setsockopt(SOL_HCI, HCI_FILTER, filter) to receive just the LE scan events, and then call hci_le_set_scan_enable to turn on scanning.
Each device connection was made with a socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP), which you then tell to connect to a particular device by calling connect on the socket with a struct sockaddr_l2 that has the particular device address in it. On that socket you should only get packets from that device. (One caveat... I found that my dongle wouldn't allow a connection while active scanning was taking place. I had to temporarily shut scanning off just before connecting and then turn it back on; otherwise I got a BUSY error from errno.)
After saying all that, though... I think the way you're supposed to do everything in BlueZ 5 is to use D-Bus. Unfortunately that wasn't really an option for what I was doing. The functions I mentioned are in the shared lib that apparently isn't installed by default in 5 (you have to explicitly ask for it to be installed with configure). They stopped installing the shared lib by default because they wanted to encourage people to use D-Bus instead.
We have combined the code from hcitool and gatttool. The code works well for 2 devices (scan, hci_le_create_conn and gatt_connect). I believe there is no limitation on the number of devices used.
1. Start cmd_lescan (from hcitool.c)
2. For each device scanned:
   - cmd_lecc (from hcitool.c)
   - gatt_connect (from gatttool.c)
This way one process can manage multiple BLE devices. We do not have to turn OFF the scanning, we just have to ignore non-advertisement events:
/* Keep only LE Advertising Report sub-events (0x02); skip everything else. */
if (meta->subevent != 0x02)
    continue;
Thanks and looking forward to comments.