Get data from Hue in realtime - philips-hue

I'm checking the Hue API, and I'm wondering if I understand it correctly: the motion sensor can switch lights on directly via the "rules", but it is impossible to get notified by the bridge about changes?
My scenario is that I would like to detect whether there is any motion and, if not, turn off my TV via its REST API.
I also read that the sensor data is only updated every 5 minutes. How can I decrease the scan interval?

If you poll the Hue bridge via API, you instantly get the motion result.
It is as simple as that:
GET http://<bridgeip>/api/<userid>/sensors/<sensor-id>
{
    "state": {
        "presence": true,
        "lastupdated": "2018-11-01T13:43:00"
    },
    ...
}
For polling every 1 or 2 minutes this works fine (although my personal way of watching TV would not guarantee that the motion sensor detects my presence, as the chips are gone too fast).
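A minimal polling loop in Python could look like the sketch below; the bridge IP, API key and sensor id are hypothetical placeholders, and the TV call is only stubbed out:

import time
import requests

BRIDGE_IP = "192.168.1.2"   # hypothetical bridge address
API_KEY = "your-api-key"    # the <userid> from the URL above
SENSOR_ID = 7               # id of the motion sensor

def presence():
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/sensors/{SENSOR_ID}"
    return requests.get(url, timeout=5).json()["state"]["presence"]

while True:
    if not presence():
        # e.g. call your TV's REST API here to switch it off
        pass
    time.sleep(60)  # poll every minute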
However, this polling is the only way to retrieve events from Hue. If you need to react to events instantly in external systems, e.g. to the Hue dimmer switches: forget it. There is no syslog, no IFTTT to the outside, no HTTP triggers, nothing you could use apart from polling. Philips answers these questions in the forum along the lines of:
We know the demand, it is on our roadmap, we do not commit to a date
Therefore: Buying Philips Hue sensors and switches is something which binds you to the ecosystem of the Hue Bridge.

You can achieve something like this by using Apple HomeKit automations that send shell commands to your server, even though it is somewhat of a workaround...

Related

How to check in an efficient way if a Philips HUE motion sensor detects motion?

I am coding a little app that checks, in a fairly reactive way, whenever a Philips Hue motion sensor detects presence.
At the moment I am polling its presence status every 2 seconds using the Hue V1 API: get-sensor.
I am wondering if there is a better/more efficient way to do it, since this application is running on a little controller and I would like to avoid stressing it with continuous polling. I am also wondering whether doing that will drain the sensor battery in a few days.

How can I briefly display a graphic on the screen without a dialog in AppleScript?

I have 2 AppleScripts (saved as apps) that make webhook calls in a loop to control the volume of my stereo. Each script displays a dialog that asks for a number of ticks to tick the volume up or down, and it loops to make the webhook call each time.
Background: I wrote a program called pi_bose that runs on my Raspberry Pi to send commands to my Bose Series 12 stereo. It sends codes on the 28 MHz band using a wire as an antenna plugged into one of the GPIO ports. Node-RED receives the webhook calls and runs that script. But there are various things that can make it fail: the antenna can be loose because the Pi has been bumped; Node-RED isn't running; the program has a small memory leak that causes a problem after it has been used for about 6 months; and sometimes there's background interference that makes not every transmission work (I could probably use a longer antenna to address that, I guess). But sometimes whatever is playing on the stereo is just so soft that it's hard to detect the subtle change in volume. And sometimes it seems that the webhook call happens slowly and the volume is changing - it just happens over the course of 20-30 seconds. So...
I know I could do the loop on the Pi itself instead of repeating the webhook call, but I would like to see progress on the Mac itself.
I'd like some sort of cue that gives me feedback to let me know each time the webhook call happens. Like a red dot on the AppleScript app icon, or something in the corner of the screen that appears for a fraction of a second each time the webhook call is made.
Alternatively, I could make the script play some sort of sound, but I would rather not audibly disrupt whatever is playing at the time.
Does anyone know how to do that? Is it even possible to display an icon without a dialog window in AppleScript?

how to detect power cycle on philips hue bridge?

How can I detect a power outage on the bridge? I tried using the CLIP daylight sensor's lastupdated object and checked it against "none", but it does not help. As per the Meethue description of the 'lastupdated' object, it should be "none":
"Last time (based on /config/utc) the sensor sent state data reflected in the state field. No value change is required to update the field. "none" (as of 1.x.0, null) when not initialized / no recent update has been received since the last bridge power cycle."
But it always returns a timestamp. Can somebody suggest a way out, please?
Regards.
You can create a CLIPGenericStatus sensor and set it to a value that is not 0.
When the bridge restarts it will be 0 again.
You don't describe how you want to use this value (read it with an external process or trigger a rule on the bridge), but this is an indicator that you can use.
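As a rough sketch of how that could look with the v1 REST API (the bridge IP, API key and the sensor id 42 below are hypothetical):

import requests

BRIDGE = "http://192.168.1.2/api/your-api-key"   # hypothetical bridge + key

# Create a CLIPGenericStatus sensor; the bridge answers with its new id.
new_sensor = {
    "name": "power-cycle-flag",
    "type": "CLIPGenericStatus",
    "modelid": "PowerCycleFlag",
    "manufacturername": "diy",
    "swversion": "1.0",
    "uniqueid": "powercycleflag-1",
}
print(requests.post(f"{BRIDGE}/sensors", json=new_sensor).json())

# Later, set it to a non-zero value; after a bridge restart it reads 0 again.
requests.put(f"{BRIDGE}/sensors/42/state", json={"status": 1})   # 42 = returned id
print(requests.get(f"{BRIDGE}/sensors/42").json()["state"]["status"])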
A Philips support developer recently came up with a solution on the Meethue forums.
The idea is that schedules start running when the bridge boots, and the status of a ClipGenericStatus sensor is initialized to 0 after a reboot. This might be subject to change.
Create a ClipGenericStatus sensor.
Create a schedule that changes the status of the above ClipGenericStatus sensor to 1 every 10 to 15 seconds.
Create a rule that does something with the lights when the above ClipGenericStatus sensor is equal to 1. The rule can, for example, turn off all lights if the time is between 23:00 and 07:00.
Some downsides are:
It will also trigger when there is a reboot after disconnecting and reconnecting the power cord manually.
It will also trigger when there is a reboot after a bridge firmware update or internal crash.
This isn't a solution for configurable startup behaviour.
Going back to the last state, by saving all light states to a scene at a specific interval, is not recommended, as it will degrade the life expectancy of the lamps involved.
Link to original post: https://developers.meethue.com/comment/2918#comment-2918
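A sketch of what creating that schedule and rule could look like via the v1 REST API, reusing the hypothetical bridge address, key and sensor id 42 from the example above:

import requests

BRIDGE = "http://192.168.1.2/api/your-api-key"   # hypothetical bridge + key

# Recurring timer: push the flag sensor back to 1 every 15 seconds.
# Note that a schedule's command address must include the /api/<key> prefix.
schedule = {
    "name": "power-cycle-heartbeat",
    "command": {
        "address": "/api/your-api-key/sensors/42/state",
        "method": "PUT",
        "body": {"status": 1},
    },
    "localtime": "R/PT00:00:15",
}
print(requests.post(f"{BRIDGE}/schedules", json=schedule).json())

# Rule: fires when the flag transitions back to 1 after the reboot reset it
# to 0; as an example it switches every light off (group 0 = all lights).
rule = {
    "name": "after-power-cycle",
    "conditions": [
        {"address": "/sensors/42/state/status", "operator": "eq", "value": "1"}
    ],
    "actions": [
        {"address": "/groups/0/action", "method": "PUT", "body": {"on": False}}
    ],
}
print(requests.post(f"{BRIDGE}/rules", json=rule).json())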

Detecting when an Apple TV 4th generation has woken from sleep

I'm working on some home automation programs, and one of the things I want to be able to do is detect when my 4th generation Apple TV has woken from sleep. This will generally only ever happen when someone presses a button on its Siri remote to wake it up.
I have a PC (connected to the same TV as the Apple TV) that has a Pulse-Eight USB-CEC adapter, so naturally the first thing I tried was using CEC to determine when the Apple TV is awake. Unfortunately it's not reliable, since monitoring the Apple TV's power status to see when it wakes up produces false positives. (I should note that I do not have "Control TVs and Receivers" enabled on the Apple TV, and can't turn it on for the particular project I'm working on because I need the Apple TV to not change the TV's input.)
I'm trying to think of some other way to do this. I'm open to any possibilities, including things like:
Making use of private APIs on the Apple TV
Running an 'always on' program in the background of the Apple TV that sends a signal when the Apple TV wakes up, if that's even possible. (I suspect that it isn't.)
Monitoring the bluetooth communication between the Siri Remote and the Apple TV, if that's possible
Somehow filtering HDMI-CEC commands so that I can turn on 'Control TVs and Receivers', allow the Apple TV's CEC commands for turning on and off the TV, and exclude commands for changing the TV's input.
Any other method, no matter how hacky or ridiculous, as long as it works!
Does anyone have any suggestions? I'm running out of things to try!
I tried to post the text below on the Apple discussion / support communities but was told I don't have the right to post this content. Maybe someone in this group can succeed in doing it:
Apple TV 4 CEC integration is great when it works, but it doesn't work all the time and not with all the various equipment out there; search across forums and you will see lots of unhappy users. I would like to use a Raspberry Pi to detect when my Apple TV goes to sleep and wakes up, and programmatically turn my TV on or off using its RS232C port or custom CEC commands.
I used a Bonjour services explorer and compared every single result between the sleep and on states, and there are no differences whatsoever. I would have expected Apple to welcome such automation projects and make this information readily available with a variable such as status: sleep or status: on.
Is there a way I could tell the two states apart via the network connection?
If not, could one build a TvOS app which runs on the background and makes this information available to clients somehow?
I finally found a method that seems to work consistently. This method is incredibly hacky and not at all the sort of way I'd prefer to do this, but it's the only one I've found so far that works consistently.
I have taken an old USB webcam and affixed it to the front of my Apple TV so that its lens is directly in front of the Apple TV's front-facing light. Whenever the Apple TV is asleep, I simply check for the light turning on by taking images from the camera and analyzing their average luminosity. Since the lens is right next to the light, when it turns on it creates a huge blown-out white circle in the image that's incredibly easy to detect.
As long as the Apple TV is asleep, the light turning on seems to indicate 100% of the time that it has woken up. I have yet to find a single incident of either a false positive or false negative.
Since pressing buttons on the Siri remote causes this light to blink, this also means that I can detect buttons being pressed by looking for changes in the light while the Apple TV is awake. It's not 100% accurate, since some button presses are faster than the frame rate of my crappy old USB webcam, but it works well enough.
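A rough sketch of that luminosity check, assuming OpenCV (cv2) with a webcam at index 0; the threshold is a made-up value you would have to tune:

import time
import cv2

THRESHOLD = 200  # mean brightness above which we assume the light is on

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if gray.mean() > THRESHOLD:
            print("Front light detected -> Apple TV is probably awake")
        time.sleep(0.2)
finally:
    cap.release()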
I would vastly prefer to find a better method of doing this, like making a request over the LAN to the Apple TV where the response clearly indicates it being awake or asleep, but so far it doesn't look like that's possible.
Here I am, six and a half years later, and I've finally found a better way to get the power state of my Apple TV.
I can simply use pyatv, which has a function named power_state that returns the Apple TV's current power state.
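A minimal sketch of that, based on pyatv's scan/connect interface (device selection is simplified here, and depending on your setup pairing credentials may be required):

import asyncio
import pyatv

async def main():
    loop = asyncio.get_running_loop()
    # Find Apple TVs on the local network (picking the first result here).
    confs = await pyatv.scan(loop, timeout=5)
    if not confs:
        raise SystemExit("no Apple TV found")
    atv = await pyatv.connect(confs[0], loop)
    try:
        print(atv.power.power_state)   # e.g. PowerState.On or PowerState.Off
    finally:
        atv.close()

asyncio.run(main())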

How RealVNC works?

I would like to know how the RealVNC remote viewer works.
Does it frequently send screenshots to the client in real time, or does it use another approach?
As a very high-level overview, there are two types of VNC servers:
Screen-grabbing. These servers will capture the current display into a buffer, compare it to the client state, and send only the rectangles that differ to the client.
Hook-assisted. Hooking into the display update process, these servers will be informed when the screen changes by the display manager or OS. They can then use that information to send only the changed rectangles to the client.
In both cases, it is effectively a stream of screen updates; however, only the changed regions of the screen are transmitted to the client. Depending on the version of the VNC protocol in use, these updates may be compressed as well.
(Note that the client is free to request a complete screen update any time it wants to, but the server will only do this on its own if the entire screen is changed.)
Also, screen updates are not the only things transmitted. There are separate channels that the server can use to send clipboard updates and mouse position updates (since a user physically at the remote machine may be able to move the mouse too).
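To illustrate the screen-grabbing idea (this is a toy sketch, not RealVNC's actual code), here is a diff that splits two frames into tiles and keeps only the ones that changed, assuming the frames are NumPy arrays:

import numpy as np

TILE = 64  # tile size in pixels

def changed_tiles(prev, cur):
    # Yield (x, y, w, h, pixels) rectangles where cur differs from prev.
    h, w = cur.shape[:2]
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            a = prev[y:y + TILE, x:x + TILE]
            b = cur[y:y + TILE, x:x + TILE]
            if not np.array_equal(a, b):
                yield x, y, b.shape[1], b.shape[0], b

# A server loop would grab the current screen, send only these rectangles
# to the client, and then keep `cur` as the new reference frame.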
The display side of the protocol is based around a single graphics primitive: "put a rectangle of pixel data at a given x,y position". At first glance this might seem an inefficient way of drawing many user interface components. However, allowing various different encodings for the pixel data gives us a large degree of flexibility in how to trade off various parameters such as network bandwidth, client drawing speed and server processing speed. A sequence of these rectangles makes a framebuffer update (or simply update). An update represents a change from one valid framebuffer state to another, so in some ways is similar to a frame of video. The rectangles in an update are usually disjoint but this is not necessarily the case.
Read here to find out more about how it works.
Yes. It essentially sends a stream of screenshots (compressed, and reusing unchanged portions of the previous screenshot).
This is, by the way, the VNC protocol; every client works that way (although the actual way images are compressed etc. may vary).
Essentially the server sends Frame Buffer Updates to the client and the client sends keyboard and mouse input and frame buffer update requests to the server.
Frame Buffer Update messages can have different encodings, but in essence they are different ways of representing rectangular screen areas of pixel data. Generally the client asks for Frame Buffer Updates for the entire screen, but it can ask for just an area of the screen (for example, small-screen clients showing a viewport of the server's screen). The server then sends an FBU (frame buffer update) that contains rectangles where the screen has changed since the last FBU was sent to the client.
The best reference for the RFB/VNC protocol is here. The IETF has a recent (2011) standards document, RFC 6143, that covers RFB, although it is not as extensive as the reference guide.
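For a feel of how small these requests are, here is a sketch of packing the client-to-server FramebufferUpdateRequest described in RFC 6143 (section 7.5.3):

import struct

def framebuffer_update_request(x, y, width, height, incremental=True):
    # 10-byte message: type (3), incremental flag, x, y, width, height,
    # all big-endian per the RFB spec.
    return struct.pack("!BBHHHH", 3, int(incremental), x, y, width, height)

# An incremental request covering a full 1920x1080 screen:
msg = framebuffer_update_request(0, 0, 1920, 1080)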
It essentially works by sending screenshots on the fly. ("Real time" is something of a misnomer here in that there is no clear deadline.) It does attempt to optimize by only sending areas of the screen that have changed, and some forks of the VNC code line use a mirror driver to receive notification when areas of the display are written to, while others use window message hooks to detect repaint requests.
