Nest API humidity % rounded to nearest 5%? - nest-api

Using Nest API v1.1, it seems that humidity percentages are rounded to the nearest 5%, e.g. a level of 49% humidity becomes 50%, 53% becomes 55%, etc.
I wonder why this would be. Has anyone seen the same results?

Humidistats are generally only accurate to 3-7% RH, so it would make sense to round the output to prevent developers from building logic around smaller changes.
Nest doesn't publish the accuracy rating of their humidity sensor though.
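For what it's worth, the observed values are consistent with simple round-to-nearest-5 quantization. A minimal sketch (the helper name is mine, not anything exposed by the Nest API):

def round_to_nearest_5(rh_percent):
    # Quantize a relative-humidity reading to the nearest 5%
    return int(round(rh_percent / 5.0) * 5)

print(round_to_nearest_5(49))  # 50
print(round_to_nearest_5(53))  # 55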

Related

Calculate center frequency for a given signal in frequency domain

I have a signal: points in the frequency domain (wavelength in nanometers, converted to terahertz), along with magnitude levels in mW. My signal looks like the attached pic. I would like to know a way to calculate the center frequency.
One approach suggests finding the -3 dB cutoff frequencies on both ends. However, I could not find how to do that, so please tell me how to calculate the -3 dB cutoff frequencies so that I can apply the following formula: (f1+f2)/2.
Or suggest a better way of finding the center frequency.
You could perform this measurement as an OBW (occupied bandwidth) measurement. -3 dB is the point at which the signal power (in watts) is halved.
The way to do it manually is to get the whole signal spectrum in an Excel table, for example 1000 points, measure the total power Ptot, and start accumulating power from the lowest frequency until you reach 25% of Ptot; the frequency at that point will be Flow. Do the same, but starting from the highest frequency, until you reach 25% of Ptot; that will be Fhigh. The center will be (Flow + Fhigh)/2.
Sorry if it's not very clear, but if you look up OBW measurements you should find better explanations on the net. Most modern spectrum analyzers have this function built in.
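To make the procedure concrete, here is a minimal numpy sketch of it; the array names are mine, the 25% fraction comes from the description above, and it assumes freqs is sorted ascending with power_mw holding the corresponding linear power values:

import numpy as np

def obw_center_frequency(freqs, power_mw, fraction=0.25):
    cum = np.cumsum(power_mw)                 # running power from the low end
    ptot = cum[-1]
    f_low = freqs[np.searchsorted(cum, fraction * ptot)]
    cum_rev = np.cumsum(power_mw[::-1])       # running power from the high end
    f_high = freqs[::-1][np.searchsorted(cum_rev, fraction * ptot)]
    return (f_low + f_high) / 2.0

The occupied bandwidth itself, if you need it, is just f_high - f_low.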

Possible to find velocity of person in video or camera using openpose

My question is: I want to calculate the speed of my arm for slap detection. I am using OpenPose to get the body keypoints (25 points total, with the body_25 model), and using these along with the time I want to deduce the speed of my arm. I googled through OpenPose, Stack Overflow, and GitHub, but could not succeed.
Velocity = Distance / Time = dx/dt
dx = frame3_bodypoints - frame1_bodypoints
dt = ?
I don't know how to find this from OpenPose; is there a way I can find it? Any thoughts would be a great help!
I've never used OpenPose. But Newtonian physics would indicate that a slap corresponds to a sudden change in velocity of the hand.
I think it's a reasonable first approximation to assume that the Δt between frames is constant. Instantaneous variation in frame rate is called jitter. I would expect jitter to be small for modern recording devices. In any case, I don't know how to get instantaneous frame rate with the tools (OpenCV, PIL) that I am familiar with. I couldn't find any references to frame rate or time in the OpenPose docs.
For calculating velocity and delta-velocity, you have choices. Straight-up linear velocity of the hand might be the easiest. For position changes, use the Euclidean distance between positions (Δs = sqrt((x2-x1)^2 + (y2-y1)^2)).
You could also calculate an angular velocity between the hand and the elbow, but that would be a little more involved and prone to noise.
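A rough Python sketch of that first approximation (the video path is hypothetical; index 4 is the right wrist in OpenPose's BODY_25 keypoint layout, and dt is taken as 1/fps from OpenCV, i.e. jitter is ignored):

import cv2
import numpy as np

RWRIST = 4  # right-wrist index in the BODY_25 layout

cap = cv2.VideoCapture("slap.mp4")   # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)      # nominal frame rate
dt = 1.0 / fps                       # assume constant inter-frame time

def wrist_speed(kp_prev, kp_curr):
    # kp_prev, kp_curr: (25, 3) arrays of (x, y, confidence) for one person
    dx, dy = kp_curr[RWRIST, :2] - kp_prev[RWRIST, :2]
    ds = np.sqrt(dx**2 + dy**2)      # Euclidean displacement in pixels
    return ds / dt                   # pixels per second

Note the result is in pixels per second; converting to real-world units needs a calibration from pixels to metres. A slap would then show up as a large frame-to-frame change in this speed.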

How do I calculate confidence interval with only sample size and confidence level

I'm writing a program that lets users run simulations on a subset of data, and as part of this process, the program allows a user to specify what sample size they want based on confidence level and confidence interval. Assuming a p value of .5 to maximize sample size, and given that I know the population size, I can calculate the sample size. For example, if I have:
Population = 54213
Confidence Level = .95
Confidence Interval = 8
I get Sample Size 150. I use the formula outlined here:
https://www.surveysystem.com/sample-size-formula.htm
What I have been asked to do is reverse the process, so that confidence interval is calculated using a given sample size and confidence level (and I know the population). I'm having a horrible time trying to reverse this equation and was wondering if there is a formula. More importantly, does this seem like an intelligent thing to do? Because this seems like a weird request to me.
I should mention (just to be clear) that the CI is estimated for the mean, not the population. In that case, if we assume the population is normally distributed and that we know the population standard deviation SD, then the CI is estimated as
CI = mean ± z * SD / sqrt(n)
where z is the z-value for the chosen confidence level (1.96 for 95%). From this formula you would also get your formula, where you are estimating n.
If the population SD is not known then you need to replace the z-value with a t-value.
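If you stay with the surveysystem.com formula from the question (p = 0.5 and the finite-population correction), the reversal can be done algebraically. A sketch, with the 95% z-value hard-coded as 1.96:

from math import sqrt

def confidence_interval(n, population, z=1.96, p=0.5):
    # Undo the finite-population correction: ss = n*(pop - 1)/(pop - n)
    ss = n * (population - 1) / (population - n)
    # Then solve ss = z^2 * p * (1 - p) / c^2 for the interval c
    c = z * sqrt(p * (1 - p) / ss)
    return c * 100  # in percentage points

print(confidence_interval(150, 54213))  # ~8.0, matching the example above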

Verify transmit power to be within certain limits of its expected value over 95% of test measurements

I have a requirement where I have to verify that the transmit power out of a device, as measured at its connector, is within 2 dB of its expected value over 95% of test measurements.
I am using a signal analyzer to analyze the transmitted power. I only get the average power value, min, max and stdDev of the measurements and not the individual power measurements.
Now, the question is: how would I verify the "95% thing" using the average power, min, max, and stdDev? It seems that I could use a normal distribution to find the 95% level.
I would appreciate it if someone could help me with this.
Thanks in anticipation
The way I'm reading this, it seems you are a statistical beginner, so if I'm wrong there, the rest of this answer will probably be insultingly basic, and I'm sorry.
Anyway, the idea is that if a dataset is normally distributed, and all the observations are independent of one another, then 95% of the data points will fall within 1.96 standard deviations of the mean.
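Under that assumption, a first-pass check using only the summary statistics you have might look like the sketch below (variable names are mine; expected_dBm is whatever the expected transmit power is):

def within_spec(mean_dBm, std_dBm, expected_dBm, tol_dB=2.0):
    # Band that should contain ~95% of individual readings under normality
    lo = mean_dBm - 1.96 * std_dBm
    hi = mean_dBm + 1.96 * std_dBm
    # Pass if that whole band sits inside expected +/- tol_dB
    return (lo >= expected_dBm - tol_dB) and (hi <= expected_dBm + tol_dB)

This is slightly conservative, since it requires the entire 95% band to fit inside the tolerance rather than exactly 95% of readings.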
Do you get identical estimates of average power every time you measure, or are there some slight random differences from reading to reading? My guess is that it's the second. If you were to measure the power a whole bunch of times, and each time you plotted your average power value on a histogram, then that histogram of sample means would have the shape of a bell curve. This bell curve of sample means would have its own mean and standard deviation, and if you have thousands or millions of data points going into the calculation of each average power reading, it's not horrible to assume that it is a normal distribution. The explanation for this phenomenon is known as the 'central limit theorem', and I recommend both the Khan Academy's presentation of it as well as the Wikipedia page on it.
On the other hand, if your average power is the mean of some small number of data points, for instance n = 5 or n = 30, then the assumption of a normal distribution of sample means can be pretty bad. In this case, your 95% confidence interval around the average power goes from qt(0.975,n-1)*SD/sqrt(n) below the average to qt(0.975,n-1)*SD/sqrt(n) above the average, where qt(0.975,n-1) is the 97.5th percentile of the t distribution with n-1 degrees of freedom and SD is your measured standard deviation.
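That small-n interval translates directly to code; a sketch using scipy, where scipy.stats.t.ppf plays the role of qt and the numbers passed in are made up:

from math import sqrt
from scipy.stats import t

def mean_ci(avg, sd, n, level=0.95):
    # Confidence interval for the average power from n readings
    margin = t.ppf(0.5 + level / 2, n - 1) * sd / sqrt(n)
    return avg - margin, avg + margin

print(mean_ci(avg=10.2, sd=0.4, n=5))  # hypothetical numbers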

Real-Time FFT with High Resolution while Keeping Latency Low

I have read all the Wikipedia and Stack Overflow articles on FFT and resolution. However, nothing has helped me learn how to get high frequency resolution without huge latency issues.
If I understand signal processing correctly:
I have a sampling rate of 44,100 Hz, and I take a 256-sample block. Then the frequency resolution would be 44,100/256 ≈ 172 Hz per frequency bin with the FFT.
I constantly see examples like http://www.tunelab-world.com/ and http://www.spectraplus.com/ that are able to determine the frequency down to 0.01 Hz.
If I did that with my above method, I would need 4,410,000 bins to get that kind of resolution. At a 44,100 Hz sampling rate it would take 100 seconds to fill in the data from the input.
I know I am missing something, but I can't figure what.
How can I get a signal, and then draw a graph or display the frequency of a peak with that kind of accuracy without taking a gazillion bins or waiting forever?
Thanks in advance for your help!
If you want a high frequency resolution FFT output, you have to perform the FFT over many samples: there is simply no way round that.
What you are probably seeing in other applications is overlapping: they may do a 4096 pt FFT on the first set of data, then move along 256 samples and do another 4096 pt FFT (on 3840 of the samples they have already used, plus a new 256 samples).
This allows you to show regular (different) updates with a fine frequency resolution. It will be no good for capturing transient signals, but looks good on an active display.
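A small numpy sketch of that overlap scheme, assuming a mono float signal sampled at 44.1 kHz (the 4096-point window and 256-sample hop follow the numbers above):

import numpy as np

fs = 44100
nfft = 4096   # analysis window: 44100/4096 ≈ 10.8 Hz bin spacing
hop = 256     # new spectrum every 256 samples ≈ 5.8 ms

def overlapped_spectra(signal):
    window = np.hanning(nfft)
    for start in range(0, len(signal) - nfft + 1, hop):
        frame = signal[start:start + nfft] * window
        yield np.abs(np.fft.rfft(frame))   # one magnitude spectrum per hop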
The reason you can get better accuracy is that the frequency estimation problem lends itself to being solved with higher accuracy than many other estimation problems.
The Cramer-Rao Lower Bound (CRLB) on the accuracy, for a single tone in white noise, is given by:
var(f_est) >= 12 / ((2*pi)^2 * SNR * N * (N^2 - 1) * Ts^2) ≈ 12 / ((2*pi)^2 * SNR * fs * T^3)
where N is the number of samples, Ts = 1/fs is the sampling interval, and T = N*Ts is the total duration. This means that the variance of the frequency estimate (a measure of the expected error) goes down as the cube of T, the duration of the measurements. "Normal" estimation problems tend to have this measure go down as the square of T.
Using the FFT maximizer (the bin with the largest peak) will only get you the square of T.
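One common way to do better than the raw maximizer (a standard refinement, not one of the algorithms linked below) is parabolic interpolation of the log magnitude around the peak bin. A sketch, assuming the peak is not in the first or last bin:

import numpy as np

def refined_peak_frequency(signal, fs):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    k = int(np.argmax(spectrum))              # coarse FFT maximizer
    a, b, c = np.log(spectrum[k - 1:k + 2])   # log magnitudes around the peak
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # fractional-bin offset
    return (k + delta) * fs / len(signal)     # refined estimate in Hz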
As Adrian Taylor says, the examples you give are probably starting with a higher number of samples and then updating by a shorter duration.
For kicks, there are some frequency estimation algorithms here that might be of interest. They are quicker than the FFT, and more accurate.
SpectraPlus says "High Resolution FFT Analysis up to 1,048,576 pts"; that won't get you to 0.01 Hz resolution at 44.1 kHz.
TuneLab seems to go down to 0.01 cents, but the "spectrum display" appears to have a resolution of around 2.5 Hz at 440 Hz. The "phase display" is nothing special.
What are you trying to do? If you merely want to implement a guitar tuner, you don't need (and probably don't want) an FFT. Not knowing any better, I'd go for a PLL.
