Check runtime on Hue bulbs?

Is there any way to check how many hours a Philips Hue bulb has been used? First use, and number of cycles? I would think this is recorded somehow.
I need to check a set I bought as new, but I suspect it's been used :) It would also be interesting to see some stats from the bulbs.

No, there is no way to get that information from the bulb or the hub. Hue lights haven't been around that long and they last a long time. Don't worry, just enjoy your bulbs.

Recognize specific ringtone

What I want is to get a signal to my Raspberry Pi at home when I'm not at home, so I can e.g. wake up my PC. I always have an old phone lying around that I never really use. So I thought: I can call my phone, a specific mp3 ringtone plays, my Raspberry Pi listens, recognizes the ringtone, and therefore receives the signal. I can pretty much choose whatever ringtone I want (hopefully not too long a one). The problem is that it should be recognizable by the Raspberry Pi and distinguishable from other sounds. Ideally I could play random music at home and nothing would get signalled until the specific ringtone I chose plays.
So I'm at the very beginning of the project and I have a lot of questions. Is this even feasible? How do I listen for the ringtone? Should I use a normal microphone, or could I e.g. trigger some GPIO pin as long as a specific frequency is playing? What kind of ringtone should I use to be as distinguishable as possible? And how do I create the software to recognize the sound?
I know this is a lot and I don't expect a step-by-step solution. But maybe you have some hints to point me in the right direction?
If someone has a similar problem, I found a solution. First I had to choose between a mostly-hardware solution and a mostly-software solution. The hardware solution is to filter for specific frequencies. This turns out to be pretty hard with normal band-pass filters if you want narrow bands. There are also components built for this; I now know of the NE567 tone decoder. But that component only reacts to one frequency and draws quite a lot of power. To recognize a whole ringtone, several of them would be needed, which means even more power consumption. This solution is also pretty inflexible.
So I went for the software solution. Now I have an Arduino Uno that reads an amplified electret microphone signal on an analog input pin. The data is collected and simultaneously analyzed with an FFT algorithm. I then find the dominant frequency, if there is one, and save it in an array. Every time I get a new data point I compare the array against the pattern of my ringtone and calculate a match score. If the score is high enough, the ringtone is "found" and I can trigger my event.
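In case it helps anyone porting this off the Arduino, here is a rough Python/NumPy sketch of the same dominant-frequency matching idea. The frame size, pattern frequencies, thresholds, and the `audio_frames()` source are all placeholders, not the values from my actual build:

```python
import numpy as np

SAMPLE_RATE = 20000        # Hz; ~10 kHz usable bandwidth, as described above
FRAME_SIZE = 256           # samples per FFT frame (placeholder)
PATTERN = [440.0, 880.0, 440.0, 660.0]  # hypothetical ringtone signature (Hz)
TOLERANCE = 50.0           # allowed deviation per step, in Hz
THRESHOLD = 0.75           # fraction of pattern steps that must match

def dominant_frequency(frame):
    """Return the strongest frequency in a frame, or None if it is too quiet."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    peak = np.argmax(spectrum)
    if spectrum[peak] < 10 * spectrum.mean():  # crude silence/noise gate
        return None
    return peak * SAMPLE_RATE / len(frame)

def match_score(history, pattern, tolerance):
    """Fraction of pattern steps matched by the most recent frequencies."""
    recent = history[-len(pattern):]
    if len(recent) < len(pattern):
        return 0.0
    hits = sum(1 for f, p in zip(recent, pattern)
               if f is not None and abs(f - p) <= tolerance)
    return hits / len(pattern)

history = []
for frame in audio_frames():   # hypothetical generator yielding mic frames
    history.append(dominant_frequency(frame))
    if match_score(history, PATTERN, TOLERANCE) >= THRESHOLD:
        print("Ringtone detected - trigger event")
        history.clear()
```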
I'm actually pretty pleased with the solution, because it works quite well even with the phone some feet away from the microphone. I thought I would need to put the microphone almost directly next to the phone to get good results, but I don't. It's still a little sensitive, because the sound volume shouldn't be too high or too low; but with the right volume settings it works over quite a large area when the phone is in the same room. It actually works better with some space between microphone and phone, because the phone's RF emissions during the call disturb the circuit quite a lot. There is also the problem that other noises block the ringtone recognition. I could compensate for that in my algorithm, but I had almost used up all the resources of the Arduino, so I had to keep the algorithm simple. In my case I don't have a noisy environment, so this is not a problem for me. Another plus is that my event was never triggered by another sound, and it seems almost impossible that this could happen by accident.
So it is feasible, and I think it's actually quite an elegant solution. I also thought about vibration detection, or even directly tapping the vibration motor's signal, but I have no control over the vibration function of that old phone. I can, however, choose the ringtone per contact, so I gave the "magic" ringtone only to myself, and thus only I can trigger the event. I have to say that writing the software was kind of hard within the Arduino's limitations. Because I need the data in real time, I have limited time for the calculations. I had to limit the incoming data rate, and therefore I can only listen to frequencies up to 10 kHz. But ringtone recognition is still possible, and I think it was worth the effort. :)

How to use Python to check how much light is outside

I made a program, on GitHub at https://github.com/bigboy32/MacOSmojave, that switches the desktop background through the day. The only issue is that it does this based on the time, not on the actual sunlight. For example, at 7 pm it may still be light outside, but the PC is already showing the 'An Evening' background. How can I base it on the real light level instead?
Thanks for any answers!
You can use any weather service with an open API to retrieve the information you want, like this one.
The API returns UV index (UVI) values; you can interpret these and build your logic on them, since the UVI drops as the sun goes down.
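A minimal sketch of that logic in Python; the endpoint, parameter names, and response field below are placeholders for whichever API you actually pick, so check its docs:

```python
import requests

# Hypothetical endpoint and parameters - substitute your chosen API's.
API_URL = "https://api.example-weather.com/uvi"
params = {"lat": 52.52, "lon": 13.40, "appid": "YOUR_API_KEY"}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()
uvi = response.json()["value"]   # field name depends on the actual API

# Simple day/evening logic: a low UVI means the sun is low or down.
if uvi < 0.5:
    print("Dark outside - use the evening background")
else:
    print("Light outside - use the day background")
```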
Easy! Get a Raspberry Pi Zero W for $12 and attach a light-dependent resistor (LDR) to it. Read the light level once a minute and send the reading to your computer over WiFi.
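Since the Pi has no analog input, the usual trick is RC timing: discharge a small capacitor, then count how long the LDR takes to charge it back past the pin's logic threshold. A rough sketch, where the wiring, pin number, and receiving URL are assumptions:

```python
import time
import requests
import RPi.GPIO as GPIO

LDR_PIN = 18   # BCM pin wired to the LDR/capacitor junction (assumed)

def read_light(pin):
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)           # discharge the capacitor
    time.sleep(0.1)
    GPIO.setup(pin, GPIO.IN)
    count = 0
    while GPIO.input(pin) == GPIO.LOW:   # charge time tracks LDR resistance
        count += 1
    return count                         # higher count = darker room

GPIO.setmode(GPIO.BCM)
try:
    while True:
        reading = read_light(LDR_PIN)
        # Hypothetical endpoint on your desktop machine listening for readings.
        requests.post("http://192.168.1.50:8000/light",
                      json={"raw": reading}, timeout=5)
        time.sleep(60)                   # once a minute, as suggested
finally:
    GPIO.cleanup()
```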

How to use Bluetooth Low Energy badge for localization?

I know this question has been asked a lot of times. Until yesterday I thought the answer was "yes, it is possible, but you cannot obtain an accurate position". My idea is to hold a BLE badge in my hand and, with four other devices positioned on the ceiling, obtain my current position using trilateration. After weeks of research I concluded that this method could not be as accurate as I'd like, so I moved on.
Now, what about this video? YouTube, by Loopd.
They use Bluetooth badges, but how do they obtain these results?
Thanks to everyone
The results of Bluetooth LE indoor location can be quite accurate, but it requires some processing of the raw signals rather than simple triangulation. Essentially you weight different beacons differently in your position calculation based on how far away they are and filter to smooth the result.
There is a working example as open source at http://vor.space/
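For a feel of what that weighting might look like, here is a hedged NumPy sketch: an assumed log-distance path-loss model turns RSSI into a rough distance, a weighted least-squares trilateration favours near beacons, and exponential smoothing damps the jitter. All constants (tx_power, path-loss exponent, alpha) are assumptions you would tune per venue:

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model; tx_power is the RSSI at 1 m (assumed)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def locate(beacons, rssis):
    """Weighted trilateration. beacons: (k, 2) array of known ceiling
    positions; rssis: k raw readings. Close beacons get higher weight."""
    d = rssi_to_distance(np.asarray(rssis, dtype=float))
    w = 1.0 / d[:-1] ** 2                  # trust close beacons far more
    # Linearize by subtracting the last beacon's circle equation.
    A = 2.0 * (beacons[:-1] - beacons[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(beacons[:-1] ** 2, axis=1) - np.sum(beacons[-1] ** 2))
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

def smooth(prev, new, alpha=0.3):
    """Exponential smoothing of successive fixes to damp RSSI noise."""
    return new if prev is None else alpha * new + (1 - alpha) * prev

# Example: four anchors on the ceiling of a 5 m x 5 m room (made-up readings).
beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
print(locate(beacons, [-62, -70, -68, -75]))
```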

How can I synchronize two audio recordings *without* timestamps?

Let's say I have two separate recordings of the same concert (created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamp. However, when these recordings are played together or quickly toggled between, it is revealed that their creation timestamps must be off because there is a perceptible delay.
Since the timestamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but I recognize this may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If yes, a general outline of how this would work, and some keywords, would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle, and some keywords, would be appreciated.
There are two separate issues you need to deal with. Issue 1 is the alignment of the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment; even if they did, they may be located at different distances from the speakers, and it takes time for sound to travel. Aligning the start times by hand is pretty trivial, because the human brain is good at comparing sounds. Programmatically it's a different story. You might try something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
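As a starting point for the cross-correlation route, here is a small sketch with SciPy. It assumes both recordings are already decoded to mono NumPy arrays at the same sample rate:

```python
import numpy as np
from scipy.signal import correlate

def find_offset(a, b, sample_rate):
    """Estimate how many seconds b must be shifted to line up with a.
    Verify the sign convention on a known pair before trusting it."""
    corr = correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # peak position -> sample offset
    return lag / sample_rate

# Correlating a short window keeps this cheap; 30 s is an assumption.
# offset = find_offset(rec_a[: 30 * sr], rec_b[: 30 * sr], sr)
```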
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to run at exactly the same rate. So even if you synchronize the start times, the two recordings will eventually drift apart. The time it takes for the drift to become noticeable is a function of the difference between the two clock frequencies; if they are relatively close, you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio apps that let you time-stretch, but they don't help you figure out by how much. Start by googling "time stretching", or again have a look at dsp.stackexchange.com.
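And a sketch of estimating the drift by comparing the offset near the start with the offset near the end, then resampling. It reuses `find_offset` from the previous sketch, and the window length is an assumption; at the tens-of-ppm rates typical of consumer clocks, plain resampling is a reasonable stand-in for true time stretching, since the pitch change is inaudible:

```python
import numpy as np

def correct_drift(a, b, sample_rate, window_s=30):
    """Estimate b's clock rate relative to a's, then resample b to match."""
    n = window_s * sample_rate
    start = find_offset(a[:n], b[:n], sample_rate)
    end = find_offset(a[-n:], b[-n:], sample_rate)
    duration = (len(a) - n) / sample_rate    # span between the two windows
    rate = 1.0 + (end - start) / duration    # ~1 +/- tens of ppm typically
    # Resample b by the estimated rate using linear interpolation; use a
    # real time-stretcher instead if the rate difference is large.
    new_len = int(round(len(b) / rate))
    new_t = np.linspace(0.0, len(b) - 1.0, new_len)
    return np.interp(new_t, np.arange(len(b)), b)
```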
I realize neither of these are direct answers - rather suggestions.
Take a look at this document; it describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it myself, but I found the document (and this question) when I was faced with a similar problem.

Nexys3 interface to a VmodTFT

I'm trying to interface a Nexys3 board with a VmodTFT via a VHDCI connector. I am pretty new to FPGA design, although I have experience with microcontrollers. I am trying to approach the whole problem as an FSM, but I've been stuck on this for quite some time now. What signals constitute the power-up sequence? When do I start sampling data? I've looked at the relevant datasheets and they don't make things much clearer. Any help would be greatly appreciated. (P.S.: I use Verilog for the design.)
EDIT:
Sorry for the vagueness of my question. Here's specifically what I am looking at.
For starters, I am going to ignore the touch module. I want to look at the whole setup as an FSM. I am assuming the following states:
1. Set up connection or handshake signals
2. Switch on the LCD
3. Receive pixel data
4. Display video
5. Power off the LCD
Would this be a reasonable FSM? My main concerns are with interpreting the signals. Table 5 in the VmodTFT_rm manual shows a list of signals; however, I am having trouble understanding which signal does what (this is my first time with display modules). I am going to assume everything prefixed with TFT_ is for the display and everything with TP_ is for the touch panel (please correct me if I'm wrong). So what signals would I be changing in each state, and what would act as inputs?
Now what changes should I make to accommodate the touch panel too?
I understand I am probably asking for too much, but I would greatly appreciate a push in the right direction, as I have been stuck on this for a long time.
Your question could be filled out a little better; it's not clear exactly what's giving you trouble.
I see two relevant docs online (you may have seen these):
Schematic: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_sch.pdf
User Guide: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_rm.pdf
The user guide explains which signals are part of the power-up sequence:
1. You must wait between 0.5 ms and 100 ms after driving TFT-EN before you can drive DE and the pixel bus.
2. You must wait 0 to 200 ms after setting up valid pixel data before enabling the display (with DISP).
3. You must wait 160 ms after enabling DISP before you start pulsing LED-EN (PWM controls the backlight).
Admittedly the documentation doesn't look great and some of the signal names are not consistent, but I think you can figure it out from there.
After looking at the user guide to understand what the signals do, look at the schematic to find the mapping between the signal names and the VHDCI pinout. Then, when you connect the VHDCI connector to your FPGA board, look at the board's manual to find the mapping between VHDCI pins and FPGA balls, and finally use the FPGA's constraint settings to map each ball to a logical Verilog input of your top module.
Hope that clears things up a bit, but please clarify your question about what you don't understand.
