Get the lamp status of an ONVIF camera

I have an ONVIF camera and a toggle button to turn the lamp on and off. I use the tt:lamp|on style of commands sent to the ONVIF endpoint. Is there a way to query the camera and retrieve whether or not the lamp is on?

You should subscribe to the camera's events via the ONVIF event service and check whether the camera reports the current lamp status in the event stream. Most devices will give you the full initial state of everything they can report through the event service in response to the first PullMessages request.
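If you want to try this from Node.js, here is a minimal sketch assuming the agsh/onvif npm package (`npm install onvif`); the address, credentials, and the exact topic under which your device reports the lamp state are assumptions, so dump the first batch of messages to see what your camera actually sends:

```
const { Cam } = require('onvif');

// Placeholder address and credentials; replace with your camera's.
new Cam({ hostname: '192.168.0.10', username: 'admin', password: 'secret' }, function (err) {
  if (err) return console.error(err);
  // Attaching an 'event' listener makes the library create a pull-point
  // subscription and poll PullMessages for you. The first batch typically
  // carries the device's full initial state, including any lamp topic.
  this.on('event', (message) => {
    console.log(JSON.stringify(message, null, 2));
  });
});
```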

Related

Getting recorded events from an IP camera that doesn't support ExportRecordedData

I've been researching this for two days, but I couldn't find any solution that is camera-vendor agnostic (I want to implement this using ONVIF only). Say I have an IP camera that detects motion and captures recordings based on the detection. I was able to use the PullMessages method to get notifications for ONVIF events, but the problem is that the camera doesn't support ExportRecordedData to let me export those recordings. Does anyone know of a way to get access to an IP camera's SD card content using ONVIF?
Thanks!
Some cameras can stream recorded video. You could try GetRecordings to list the RecordingTokens, then call GetReplayUri with one of those tokens; the reply contains a link to the recording's stream.
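A rough sketch of that call sequence, assuming the agsh/onvif npm package exposes getRecordings and getReplayUri wrappers (check your library version); the host and credentials are placeholders, and the shape of the replies varies by device, so log them before relying on field names:

```
const { Cam } = require('onvif');

new Cam({ hostname: '192.168.0.10', username: 'admin', password: 'secret' }, function (err) {
  if (err) return console.error(err);
  // 1. List the recordings stored on the device.
  this.getRecordings((err, recordings) => {
    if (err) return console.error(err);
    const token = recordings[0].recordingToken; // field name may differ; inspect `recordings`
    // 2. Ask for a replay URI for that recording.
    this.getReplayUri({ protocol: 'RTSP', recordingToken: token }, (err, reply) => {
      if (err) return console.error(err);
      console.log('Replay stream:', reply); // open with an RTSP client such as VLC
    });
  });
});
```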

How to capture media events (play/pause/next etc.) in a Custom Receiver?

Google Chromecast supports external control, such as play, pause, next, and previous, using both the Google Home app and an infrared remote (over HDMI-CEC).
How can these events be captured in a custom media receiver (using the CAF Receiver API) when the receiver has no media playing?
When no media is playing, the receiver is in IDLE state - that means that a sender is connected and the receiver app is loaded and running, but there is currently no playback, paused playback or buffering operation ongoing.
The messages the receiver can intercept/observe at that point are basically the same regardless of whether they were issued by a sender app, Google Home/Assistant, or CEC, and you can process them all the same way.
If you want to implement different behavior depending on which device sent the message (or track that), have a look at the customData section: you can set up your sender app to include some data there, but you have no influence on how messages issued by Google Home/Assistant or CEC look; customData will be empty for those.
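For instance, a CAF receiver can register a message interceptor and look at customData to tell its own sender apart from Home/Assistant/CEC. A minimal sketch (the PLAY interceptor shown here passes the command through unchanged; the log text is illustrative):

```
// In the custom receiver page (CAF Receiver SDK).
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.PLAY,
  (request) => {
    // customData is only populated when our own sender app set it;
    // commands from Google Home/Assistant or CEC arrive with it empty.
    const source = request.customData ? 'our sender app' : 'Home/Assistant/CEC';
    console.log('PLAY issued by', source);
    return request; // hand the command back to the player untouched
  }
);

context.start();
```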

How come it is always the GATT server that exposes services?

Looking at various GATT-based profiles, it seems that services are always exposed in the GATT server rather than the GATT client. For instance, the Time Profile (TIP) has the server exposing the Current Time Service (CTS). So, if a phone is to update a heart rate monitor with the current time using TIP, the phone will be the server whereas the monitor will be the client. But, being a heart rate monitor, the Heart Rate Profile expects the monitor to be a GATT server.
So, for a monitor that takes the current time from a phone, should it be a GATT client or server? Should it be set as a client whilst time syncing with the phone and set as a server otherwise? Should a custom profile be implemented such that the CTS is exposed in the client instead?
Thanks
The Generic Attribute Profile (GATT) defines how server and client communicate with each other using the Attribute Protocol for the purpose of transporting data. Client and server roles are determined when a procedure is initiated and released when the procedure is ended. Hence, a device can act in both roles at the same time.
I would suggest reading the Bluetooth Core Specification; Vol 3, Part G, Section 2.2 explains the roles and configurations.
Client: the device that initiates commands and requests towards the server and can receive responses, indications, and notifications sent by the server.
Server: the device that accepts incoming commands and requests from the client and sends responses, indications, and notifications to the client.
Back to your question:
The Time Profile enables a device to get the date, time, time zone, and DST information, and to control the functions related to time.
In your case, the monitor will be the GATT client when it takes the time from a phone. However, it can be a server at the same time for another procedure (operation, request etc.) with the phone.
In short, client and server roles are not fixed to the devices. When your phone exposes the current time, it is the server; when it reads the current time from the monitor, it is the client. There is no need to customize the profile. If you want your phone to get the current time from one device and expose it to another, just implement both the client and server roles of the same profile on your phone.
EDIT:
According to the TIP profile spec, to get the current time information, the GATT Read Characteristic Value sub-procedure shall be used with the handle of the Current Time characteristic. The monitor, as a client, will read the Current Time characteristic from the GATT table of the server (in this case, the phone). As soon as the monitor retrieves the value from the phone, it can update its own Current Time characteristic value and expose it to its environment in three ways (a client-side read is sketched after this list):
Notifying it to its subscribed clients (BLE notifications). Doing it this way means customizing the Bluetooth TIP profile, since this procedure is not defined there (I had a quick look at the document and didn't see it).
Broadcasting it in the advertisement packet (doesn't require a BLE connection).
Another BLE device connects to the monitor and reads the Current Time characteristic value. This is the recommended way if you want to use the Bluetooth SIG-defined TIP profile as a server.
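As an illustration of the client side of that read, here is a browser sketch using the Web Bluetooth API; it assumes the peer (the phone, in this discussion) actually exposes CTS and that your browser supports Web Bluetooth:

```
// GATT client reading the Current Time Service (0x1805) / Current Time
// characteristic (0x2A2B) from a peer that exposes them.
async function readCurrentTime() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['current_time'] }], // standard GATT name for CTS
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('current_time');
  const characteristic = await service.getCharacteristic('current_time');
  const value = await characteristic.readValue(); // DataView in the CTS format
  // The Exact Time 256 structure starts with year (uint16, little-endian),
  // then month and day bytes.
  console.log(value.getUint16(0, true), value.getUint8(2), value.getUint8(3));
}
```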

Node.js and socket.io; synchronizing button state

I have found some Node.js code for my Intel Edison that has the basics of what I need. The issue is that it uses socket.io buttons on the HTML front end to control GPIO pins on the board, but it does not read the current state of a GPIO pin when loading the page, and if the page is loaded on another device the two are not synchronized. I was thinking that booleans could be set in the Node.js code to hold the status of a GPIO, and the webpage would constantly check the status and set the button state accordingly. I tried some things myself, but as a beginner I was out of luck. The code in question is on GitHub: https://github.com/drejkim/LediMote
Given the socket.io-based app structure you already have, it looks like you should just read the initial GPIO state and send it to the web page with a socket.io message as soon as you get the connection event for the incoming socket.io connection in /lib/routes/socket.js.
Just create a message name and send that message with the initial state upon connection. Then in the client, you just listen for that message and update the state in the web page with whatever is sent as the current state. You can use that same message to update any connected clients any time the state is changed by some other client.
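A server-side sketch of that idea, assuming the mraa GPIO bindings the LediMote project uses on the Edison; the pin number and the ledState/setLed message names are invented here for illustration:

```
const mraa = require('mraa');
const io = require('socket.io')(3000);

const led = new mraa.Gpio(13); // placeholder pin
led.dir(mraa.DIR_OUT);

io.on('connection', (socket) => {
  // Push the current pin state to the newly connected page right away.
  socket.emit('ledState', led.read());

  socket.on('setLed', (on) => {
    led.write(on ? 1 : 0);
    // Broadcast to every client so all open pages stay in sync.
    io.emit('ledState', led.read());
  });
});
```

On the page, listen for the ledState message and set the button accordingly instead of polling a boolean.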

How to check if a ChromeCast Session is already in progress

The use case is that a user starts playback from their iPhone, let's say, and then picks up their iPad (both running my app) and wants to connect to and control the running video from this other iOS device.
On iOS, I do not see any way to determine if there is an instance of my receiver app already running on the Google Chromecast device. Once I create my session, it seems the only thing I can do is attach a new protocol message stream, which interrupts whatever might already be playing.
Is this supposed to be handled in the iOS client-side framework, or is there some coding I need to do in the HTML receiver app?
Thanks.
There is a way outside the API to determine if an app is running: do an HTTP GET on the apps URL for the Chromecast's IP address: http://192.168.0.x:8008/apps/
If the HTTP response is 200, nothing is running. If the HTTP response is 204, an app is running and the request is redirected to a URL like http://192.168.0.x:8008/apps/GoogleMusic, which tells you which app it is.
Interestingly, Google Play Music cannot be controlled by two devices simultaneously, but YouTube can. I suspect Play Music is using RAMP, which is what the Cast SDK uses for media streams, while YouTube could be using a proprietary message stream to control media playback. So you might have to do the same if you want an app on a device to be controlled by multiple sender apps.
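A sketch of that check from Node.js; note this relies on the old DIAL-style endpoint on port 8008, which newer Chromecast firmware may not expose, and 192.168.0.x is a placeholder for your device's address:

```
const http = require('http');

// Replace 192.168.0.x with the Chromecast's actual IP address.
http.get('http://192.168.0.x:8008/apps/', (res) => {
  if (res.statusCode === 200) {
    console.log('No app running');
  } else {
    // Per the answer above, a non-200 response means an app is running and
    // the redirect names it, e.g. http://192.168.0.x:8008/apps/GoogleMusic
    console.log('Status', res.statusCode, '-> app at', res.headers.location);
  }
});
```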
One method is to check the playStatus after you start your session and before you initiate a loadMedia(). If your app is already running, it should return a non-nil result (i.e. IDLE, PLAYING, ...).
