How to check if a Chromecast session is already in progress - google-cast

The use case is that a user starts playback from their iPhone, let's say, and then picks up their iPad (both running my app) and wants to connect to and control the running video from this other iOS device.
On iOS, I do not see any way to determine whether there is an instance of my receiver app already running on the Google Chromecast device. Once I create my session, it seems the only thing I can do is attach a new protocol message stream, which interrupts whatever might already be playing.
Is this supposed to be handled in the iOS client-side framework, or is there some coding I need to do in the HTML receiver app?
Thanks.

There is a way outside the API to determine if an app is running: do an HTTP GET on the apps URL at the Chromecast's IP address: http://192.168.0.x:8008/apps/
If the HTTP response is 200, nothing is running. If the HTTP response is 204, an app is running and the response redirects to a URL like http://192.168.0.x:8008/apps/GoogleMusic, which tells you which app it is.
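Here's a minimal sketch of that probe in Node.js (18+ for the built-in fetch); the device address is a placeholder you'd substitute with your Chromecast's IP:

```typescript
// Probe the DIAL-style /apps/ endpoint described above.
// The IP is a placeholder -- substitute your device's actual address.
const CHROMECAST_IP = "192.168.0.x"; // hypothetical

async function getRunningApp(ip: string): Promise<string | null> {
  const res = await fetch(`http://${ip}:8008/apps/`, { redirect: "manual" });
  if (res.status === 200) {
    return null; // nothing is running
  }
  if (res.status === 204) {
    // An app is running; the Location header points at e.g. /apps/GoogleMusic
    const location = res.headers.get("location");
    return location ? location.split("/").pop() ?? null : null;
  }
  throw new Error(`Unexpected status ${res.status}`);
}

getRunningApp(CHROMECAST_IP).then((app) =>
  console.log(app ? `Running app: ${app}` : "No app running")
);
```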
Interestingly, Google Play Music cannot be controlled by two devices simultaneously, but YouTube can. I suspect Play Music is using RAMP, which is what the Cast SDK uses for media streams; YouTube could be using a proprietary message stream to control media playback. So you might have to do the same if you want an app on a device to be controllable by multiple sender apps.

One method is to check the play status after you start your session and before you initiate a loadMedia(). If your app is already running, it should return a non-nil result (i.e. IDLE, PLAYING, ...).
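The question is about the iOS SDK (where GCKMediaStatus.playerState plays this role), but the same idea in the Chrome sender API might look roughly like this; types are loosened for brevity:

```typescript
// Rough sketch using the Chrome sender API. Assumes the Cast API has
// been initialized and this listener was passed in the SessionRequest.
declare const chrome: any;

function sessionListener(session: any): void {
  // If our receiver app was already running, the joined session carries
  // its current media items and their playerState (IDLE, PLAYING, ...).
  const media = session.media && session.media[0];
  if (media && media.playerState != null) {
    console.log(`Receiver already has media in state: ${media.playerState}`);
    // Attach to the existing stream instead of calling loadMedia().
  } else {
    console.log("No media loaded yet; safe to call session.loadMedia().");
  }
}
```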

Related

Getting access to Mesibo video and audio stream from outside a browser (i.e on a server)

I would like to process audio and video from a Mesibo conference on the server side and then, if possible, feed the processed stream back in as a new publisher (participant) in a different group (conference).
My current best guess would be something like this...
1. Run the Mesibo JavaScript API in a virtual browser using node browser-run and Xvfb
2. Connect to the conference in the browser, somehow extract the necessary WebRTC connection details, and feed these back to the node process controlling the virtual browser
3. Connect to the conference using node webrtc-client
Having to run a virtual browser every time seems like overkill. Also, I have no idea where I would get the WebRTC connection details (step 2) from in the virtual browser. Does the Mesibo JavaScript API expose these anywhere?
Presumably, if I could get the above working, I could use the same webrtc-client instance to feed the processed stream back into the conference, but if I wanted to feed it into a different conference I'd have to create another virtual browser.
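For step 1, a rough sketch using Puppeteer (headless Chromium) instead of browser-run + Xvfb; the conference URL and fake-media flags are assumptions for illustration:

```typescript
// Drive a headless Chromium that runs the Mesibo JS API in a page.
import puppeteer from "puppeteer";

async function joinConference(): Promise<void> {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      "--use-fake-ui-for-media-devices", // auto-accept mic/camera prompts
      "--use-fake-device-for-media-stream",
    ],
  });
  const page = await browser.newPage();
  // Hypothetical page that initializes Mesibo and joins the group call.
  await page.goto("https://example.com/my-mesibo-conference");

  // Step 2 would go here: page.evaluate(...) to reach into the page and
  // pull out whatever connection state the Mesibo API exposes, if any.
}

joinConference().catch(console.error);
```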
Anybody got any ideas?
The mesibo on-premise conference server exposes an RTP API; possibly that could help. However, the on-premise conference server will only be available publicly in Feb '21, so you will have to wait.
How would you expect step 2 to work? Are you looking to access the underlying PeerConnection?

How would you push a 'message' to a device using Node.js?

Now here's a really weird question that I couldn't find the answer to on the internet. Here's how I'm planning to build a project:
Controller App --> Node.js Server (probably Express) --> Some IoT Device Running Node.js Who Knows Where
So essentially, the Controller App wants to control an IoT device, but it could be anywhere. So, it communicates to a server which sits on a static IP which will keep track of where this IoT device is (could be on any network/IP/port). So the controller app will send a request to the server, and the server will tell this IoT device wherever it is to do something.
The problem is, how will this Node.js Server know where the device is?
Proposed Solution A: One way I thought of was to have a server, and share a secret string between the server and the IoT device. The server will have some 'endpoint(?)' that the IoT device can 'subscribe' to.
Proposed Solution B: The IoT device forms a WebSocket or a Socket.io connection. Whilst this might be a better and easier solution, when you add many devices, will the server consume many more resources when communicating with multiple devices in real time?
So yeah, a really weird question, because here it's really a push notification from Node.js -> Node.js, rather than what every other search result is about: Node.js -> some notification service like iOS, Google, or web service workers.
Thanks!
The "push" options are generally as follows:
1. Client polls an endpoint every once in a while to check if there's something new. Not really push, but very simple to implement. Feasibility depends upon how "real-time" you need the push to be.
2. Client creates and maintains a persistent connection with the server, and the server can then send data over that connection at any time. This is the webSocket or socket.io option or, in some cases, SSE (server-sent events), which is a form of continuous HTTP (see the sketch after this list). The client needs the ability to detect when the connection has dropped and re-establish it as needed. Obviously, the server needs the ability to handle a simultaneous (but mostly idle) connection from every device you're supporting. If the traffic is low, custom server configurations can support hundreds of thousands of connections; typical shared hosting solutions are much more limited in this regard because they don't give you access to the whole server's resources.
3. Server uses some existing "push service" that is built into the client. This would work for an iOS or Android device that has a push service as part of the platform, but it is not available to a custom IoT device.
4. Third-party push services or libraries. Google has Firebase Cloud Messaging, which purports to be usable with IoT devices, but I've mostly found examples of the IoT device initiating an event that then gets pushed to more classic devices (phones, browsers, etc.), not of a node.js server pushing to an IoT device.
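A minimal sketch of option 2 with Socket.io: each IoT device keeps a connection open and registers itself, and the server pushes commands over that connection. The device IDs, the "register"/"command" event names, and the port are illustrative assumptions:

```typescript
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer);
const devices = new Map<string, string>(); // deviceId -> socket.id

io.on("connection", (socket) => {
  // Device announces its identity once the connection is up.
  socket.on("register", (deviceId: string) => {
    devices.set(deviceId, socket.id);
  });
  socket.on("disconnect", () => {
    for (const [id, sid] of devices) {
      if (sid === socket.id) devices.delete(id);
    }
  });
});

// Called by the controller-facing HTTP route: push a command to one device.
function pushToDevice(deviceId: string, command: unknown): boolean {
  const sid = devices.get(deviceId);
  if (!sid) return false; // device offline
  io.to(sid).emit("command", command);
  return true;
}

httpServer.listen(3000);
```

In this pattern, authentication (e.g. the shared secret from Solution A) would be checked during the "register" step before the device is added to the map.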

Ways to broadcast audio from WebAudio API to server-side and then to connected clients

I am developing a collaborative instrument-playing game, where multiple users will play an instrument (a synthesizer or sample, using the WebAudio API). In my first prototype I've set up a keyboard that sends note/volume signals via Socket.io to the server, and when the server gets that signal it sends it back to all connected sockets, which then play the corresponding note.
You might have guessed it right: there's a massive amount of lag and inconsistency as to the order of arrival of notes.
What are some efficient ways that I can send the output of WebAudio to the server, and have it broadcast to all connected users, so I have some sort of consistency?
You could try using a MediaStream by adding a MediaStreamAudioDestinationNode to your audio node graph as a destination, and use that stream with either WebRTC or RecordRTC to send it to your server.
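A minimal sketch of that idea; the oscillator stands in for whatever synth/sample graph the game actually builds:

```typescript
// Capture Web Audio output as a MediaStream and hand it to WebRTC.
const audioCtx = new AudioContext();

// Destination node whose .stream is a live MediaStream of the graph output.
const streamDest = audioCtx.createMediaStreamDestination();

const osc = audioCtx.createOscillator(); // placeholder instrument
osc.connect(streamDest);
osc.connect(audioCtx.destination); // also play locally
osc.start();

// Send the captured audio to the server/peers over WebRTC.
const pc = new RTCPeerConnection();
for (const track of streamDest.stream.getAudioTracks()) {
  pc.addTrack(track, streamDest.stream);
}
// ...then do the usual offer/answer + ICE signaling (e.g. via Socket.io).
```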
Here is some info I found that you could look at. It talks about using the getUserMedia method, but both getUserMedia and MediaStreamAudioDestinationNode produce a MediaStream object. This info
has some ideas on how you could send a MediaStream to your server; however, it says the stream needs to be recorded first rather than sent while it's live and running.
Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia
I hope this helps :)

How to capture media events (play/pause/next etc.) in a Custom Receiver?

Google Chromecast supports external control, such as play, pause, next, previous using both the Google Home app and an Infrared remote (over HDMI CEC).
How can these events be captured in a custom media receiver (using the CAF Receiver API) when the receiver has no media playing?
When no media is playing, the receiver is in the IDLE state: a sender is connected and the receiver app is loaded and running, but there is currently no playback, paused playback, or buffering operation ongoing.
The messages that can now be intercepted/observed by the receiver are basically the same regardless of whether they were issued by a sender app, Google Home/Assistant, or CEC, and you can process them the same way.
If you want to implement different behavior depending on the device that sent the message (or track that, maybe), you can have a look at the customData section: you can set up your sender app to include some data in it, but you have no influence on what Google Home/Assistant or CEC-issued messages look like; customData will be empty there.
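For illustration, a sketch of observing these commands in a CAF custom receiver via message interceptors; the handler bodies are placeholders:

```typescript
// Observe media commands in a CAF custom receiver.
declare const cast: any;

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

// Intercept PLAY regardless of whether it came from a sender app,
// Google Home/Assistant, or HDMI-CEC.
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.PLAY,
  (request: any) => {
    // request.customData is only populated by your own sender app;
    // Assistant/CEC-issued messages leave it empty.
    console.log("PLAY received", request.customData);
    return request; // return the (possibly modified) request to continue
  }
);

playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.PAUSE,
  (request: any) => request
);

context.start();
```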

Controlling multiple Chromecast Receivers from one Sender?

Is it possible to write a Chrome extension (or Android app) that creates multiple Senders, each connecting to a different Receiver?
In other words, I need to build an interface from which an operator can control the streams on multiple different Chromecasts in the vicinity - each will be playing a different video stream.
I understand from other posts that the chrome.cast API does not allow for this - that the Chrome extension may act as a single Sender only? This restriction seems arbitrary - I read somewhere that someone was able to control two devices by running two different versions of Chrome, so if this restriction exists in the Chrome API, it's not due to any limitation of the underlying protocol, correct? (What then, politics?)
Is there a lower-level API (perhaps on Android?) that would permit you to create multiple Senders and connect them to different Receivers?
I've seen some apps (such as Videostream) which appear to continue to run on the Receiver after you've closed the Sender. Might it be possible, for example, to launch a Receiver app on multiple devices, one at a time, have them identify themselves and connect to a local webserver, e.g. via WebSockets, and then have my webserver send messages to those Receiver apps asking them to change video streams?
As a last resort, is there an open specification of the underlying protocol?
There is nothing to stop you from writing a sender app that connects to a Chromecast, launches an app, and then disconnects from that device while letting the Chromecast continue running the app; you would need to make sure the receiver does not stop itself when it detects that there are no connected devices. Then, on the sender side, you can repeat the same process, this time connecting to a second device, and so on. The important thing to keep in mind is that your sender device cannot hold multiple concurrent connections to multiple devices (MediaRouter is a global instance); this means you cannot receive messages (status updates, etc.) from different Cast devices except the one you are directly connected to at that time. Also, there is nothing to stop a different user from connecting to one of these devices and launching a different app.
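Building on that, a rough sketch of the receiver-side approach floated in the question (stay alive with no connected senders, take commands from a local WebSocket server); the controller URL and message format are assumptions:

```typescript
// CAF receiver that keeps running without senders and obeys a local
// WebSocket control channel instead.
declare const cast: any;

const context = cast.framework.CastReceiverContext.getInstance();

// Don't shut down when the last sender disconnects.
context.start({ disableIdleTimeout: true });

const ws = new WebSocket("ws://192.168.0.y:9000"); // hypothetical controller
ws.onopen = () => ws.send(JSON.stringify({ role: "receiver" }));
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "loadStream") {
    const loadRequest = new cast.framework.messages.LoadRequestData();
    loadRequest.media = new cast.framework.messages.MediaInformation();
    loadRequest.media.contentId = msg.url;
    context.getPlayerManager().load(loadRequest);
  }
};
```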
To answer your other question, the underlying protocol is not open.
