I just need to get a number based on an external source (I can't store any state) that increments by about 1 every 100 milliseconds. Precision isn't super important; this is just to have a display that cycles through the rainbow. Right now I have something like this:
let time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).ok()?.as_millis();
let hue = (time/100) % 36 * 10;
Some(Color::hsva(hue as f32, 1., 0., 1.))
Is this the cheapest way to do it, or are there ways that take less time? This function needs to be called a lot, so I don't want it to be slow.
I've got an ESP32-C3 and the following Rust code. Retaining the frequency but decreasing the resolution, I expected the accuracy of the pulse width to decrease, but unfortunately it also decreases the period (speeds the PWM up). I expected the period to stay the same, since I thought it was fully determined by the frequency and not by the resolution.
let pwm_pin = peripherals.pins.gpio10.into_output().unwrap();
let config = TimerConfig::default().frequency((50.Hz()).into()).resolution(Resolution::Bits13);
let timer = Timer::new(peripherals.ledc.timer0, &config)?;
let mut channel = Channel::new(peripherals.ledc.channel0, &timer, pwm_pin)?;
// set duty cycle to 50%
let percent = 50;
let duty = ((channel.get_max_duty() as f64 / 100.0) * percent as f64) as u32;
channel.set_duty(duty);
13 bits produces a PWM period of 20 ms, a duty cycle of 50%, and a high pulse width of 10 ms.
Decreasing the resolution to Resolution::Bits8 has the unexpected effect of not just reducing the duty-cycle granularity but also shortening the period.
8 bits produces a PWM period of 250 µs, a duty cycle of 50%, and a high pulse width of 124 µs.
How do I calculate the period from any given frequency and resolution? All I found online are references to period = 1/frequency, which does not take the resolution into account.
Someone on GitHub lifted the veil on what is going on here...
A simple reverse calculation:
At 8 bits, 255 counts correspond to 250 µs. For a 20 ms period you would need x counts, where x = 255 × (20 ms / 250 µs) = 255 × 80 = 20400. Because 20400 is more than even 13 bits can hold (2^13 = 8192), our initial assumption that 255 counts = 250 µs wasn't quite true (the real figure was already a bit lower), but it's still roughly a factor of 80 out. The problem with low frequencies compared to high input clocks is that you either need a high-resolution counter (i.e. many bits) or you have to divide your clock source down further; otherwise you fill your register too fast (counting to 255 is faster than counting to 20400 at the same counting speed). But sometimes you are hardware-limited and cannot reach all desired output speeds (you are already dividing your input clock down by the maximum divider). So if the driver hits a hardware limit and can't translate the request, it has to, for example, raise the frequency accordingly.
Link to the discussion https://github.com/esp-rs/esp-idf-sys/issues/154
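To make that concrete: the timer counts up to 2^resolution before it overflows, so the achievable PWM frequency is source_clock / (divider × 2^resolution), and the period is its reciprocal. Below is a rough Python sketch of that arithmetic, not the actual esp-idf driver code; the 80 MHz source clock and the divider cap of 1023 are illustrative assumptions, and the real limits and the driver's exact fallback behaviour depend on the chip and driver configuration.
# Illustrative numbers only: real source clock and divider limits are hardware-specific.
SRC_CLOCK_HZ = 80_000_000
MAX_DIVIDER = 1023

def required_divider(freq_hz, resolution_bits):
    # The counter overflows every 2^resolution ticks, so
    # freq = src / (divider * 2^resolution)  =>  divider = src / (freq * 2^resolution)
    return SRC_CLOCK_HZ / (freq_hz * 2 ** resolution_bits)

for bits in (13, 8):
    div = required_divider(50, bits)
    if div > MAX_DIVIDER:
        # The divider can't go that high, so the driver has to settle for
        # a higher frequency (a shorter period) than requested.
        best = SRC_CLOCK_HZ / (MAX_DIVIDER * 2 ** bits)
        print(f"{bits}-bit: needs divider {div:.0f} (> {MAX_DIVIDER}), "
              f"best achievable ~{best:.0f} Hz")
    else:
        print(f"{bits}-bit: divider {div:.0f} gives 50 Hz as requested")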
I am gathering data on a device, and after every second, I update a count and log it. I am now processing it, and since I'm new to Python, I wanted to ask whether it's possible to convert a numbered array [0,1,2,3,4,...1091,1092,1093,...] into timestamps [00:00:01, 00:00:02, 00:00:03, 00:00:04, ... 00:18:11, 00:18:12, 00:18:13,...], for example.
If you could please lead me in the right direction, that would be very much appreciated!
p.s. In the future, I will be logging the data as a timestamp, but for now, I have 5 hours' worth of data that needs to be processed!
import datetime as dt
timestamp=[0,1,2,3,4,5,1092,1093]
print([dt.timedelta(seconds=ts) for ts in timestamp])
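One note: printing the list shows each timedelta's repr; wrap each element in str() to get the familiar H:MM:SS form:
print([str(dt.timedelta(seconds=ts)) for ts in timestamp])
# ['0:00:00', '0:00:01', '0:00:02', '0:00:03', '0:00:04', '0:00:05', '0:18:12', '0:18:13']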
Happy Coding
If all you have is seconds, then you can just do simple arithmetic to convert them to minutes and hours:
inp = [0, 1, 2, 3, 4, 1091, 1092, 1093]
outp = [f'{secs // 3600:02}:{(secs // 60) % 60:02}:{secs % 60:02}' for secs in inp]
print(outp)
# ['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04', '00:18:11', '00:18:12', '00:18:13']
Here, I use a list comprehension and, for each secs in the input, build a formatted string:
Hours is secs // 3600 (that's integer floor division), because one hour is 3600 seconds.
Minutes is (secs // 60) % 60 (this uses the modulo operator, which gives the remainder of secs // 60 after dividing it by 60 again). One minute is 60 seconds, but more than 60 minutes would be an hour, so we need to make sure to 'roll over' the counter every 60 minutes (which is what the mod is for).
Seconds is, of course, secs % 60, because a minute has 60 seconds and we want the counter to roll over.
The format string starts with f', and anything inside {} is an instruction to evaluate whatever's inside it and insert that into the string. The syntax is {expression:format}, where format is an optional instruction for how to present the data (i.e. not just printing it out). Format specs can get complicated (look up a Python f-string tutorial if you're curious about the specifics), but suffice it to say that in this case we use 02, which means we want the output to be at least two characters wide, padded with zeroes if it's shorter.
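If you prefer, the same roll-over arithmetic can be written with divmod, which returns the quotient and the remainder in one step. A small equivalent sketch:
inp = [0, 1, 2, 3, 4, 1091, 1092, 1093]
outp = []
for secs in inp:
    hours, rem = divmod(secs, 3600)      # 3600 seconds per hour
    minutes, seconds = divmod(rem, 60)   # 60 seconds per minute
    outp.append(f'{hours:02}:{minutes:02}:{seconds:02}')
print(outp)  # same result as above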
I need a little help. If I have 30 random samples with a mean of 52 and a variance of 30, how can I calculate the 95% confidence interval for the mean, both with an estimated and with a true variance of 30?
Here you can combine the powers of numpy and statsmodels to get you started:
To produce normally distributed floats with a mean of 52 and a variance of 30 you can use numpy.random.normal with numbers = np.random.normal(loc=52, scale=np.sqrt(30), size=30) (note that scale is the standard deviation, not the variance, so a variance of 30 means scale=np.sqrt(30)), where the parameters are:
Parameters
----------
loc : float
Mean ("centre") of the distribution.
scale : float
Standard deviation (spread or "width") of the distribution.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn. Default is None, in which case a
single value is returned.
And here's a 95% confidence interval of the mean using DescrStatsW.tconfint_mean:
import statsmodels.stats.api as sms
conf = sms.DescrStatsW(numbers).tconfint_mean()
conf
# output: a (lower, upper) tuple; the exact interval depends on the random draw
EDIT - 1
That's not the whole story, though... Depending on your sample size, you should use the Z-score and not the t-score that's used by sms.DescrStatsW(numbers).tconfint_mean() here. And I have a feeling that it's not coincidental that the rule-of-thumb threshold is 30, and that you have 30 observations in your question. Z vs. t also depends on whether or not you know the population standard deviation or have to rely on an estimate from your sample. And those are calculated differently as well. Take a look here. If this is something you'd like me to explain and demonstrate further, I'll gladly take another look at it over the weekend.
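For what it's worth, here's a minimal sketch of both intervals using scipy.stats, working only from the summary statistics in your question (n = 30, mean 52, variance 30) rather than from raw data; the t-interval treats the variance as estimated from the sample, the z-interval as known:
import numpy as np
from scipy import stats

n, mean, var = 30, 52, 30
se = np.sqrt(var / n)  # standard error of the mean: sqrt(30/30) = 1

# t-interval: the variance of 30 is an estimate from the sample
t_crit = stats.t.ppf(0.975, df=n - 1)  # ~2.045 for 29 degrees of freedom
print(mean - t_crit * se, mean + t_crit * se)  # ~ (49.95, 54.05)

# z-interval: the variance of 30 is the known (true) population variance
z_crit = stats.norm.ppf(0.975)  # ~1.960
print(mean - z_crit * se, mean + z_crit * se)  # ~ (50.04, 53.96)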
Consider the following piece of C++ code:
string s = "a";
for (int i = 0; i < n; i++) {
s = s + s; // Concatenate s with itself.
}
Usually, when analyzing the time complexity of a piece of code, we would determine how much work the inner loop does, then multiply it by the number of times the outer loop runs. However, in this case, the amount of work done by the inner loop varies from iteration to iteration, since the string being built up gets longer and longer.
How would you analyze this code to get the big-O time complexity?
The time complexity of this function is Θ(2^n). To see why this is, let's look at what the function does, then see how to analyze it.
For starters, let's trace through the loop for n = 3. Before iteration 0, the string s is the string "a". Iteration 0 doubles the length of s to make s = "aa". Iteration 1 doubles the length of s to make s = "aaaa". Iteration 2 then doubles the length of s to make s = "aaaaaaaa".
If you'll notice, after k iterations of the loop, the length of the string s is 2^k. This means that each iteration of the loop will take longer and longer to complete, because it will take more and more work to concatenate the string s with itself. Specifically, the kth iteration of the loop will take time Θ(2^k) to complete, because the loop iteration constructs a string of size 2^(k+1).
One way that we could analyze this function would be to multiply the worst-case time complexity of the inner loop by the number of loop iterations. Since each loop iteration takes time O(2^n) to finish and there are n loop iterations, we would get that this code takes time O(n · 2^n) to finish.
However, it turns out that this analysis is not very good, and in fact will overestimate the time complexity of this code. It is indeed true that this code runs in time O(n · 2^n), but remember that big-O notation gives an upper bound on the runtime of a piece of code. This means that the growth rate of this code's runtime is no greater than the growth rate of n · 2^n, but it doesn't mean that this is a precise bound. In fact, if we look at the code more precisely, we can get a better bound.
Let's begin by trying to do some better accounting for the work done. The work in this loop can be split apart into two smaller pieces:
The work done in the header of the loop, which increments i and tests whether the loop is done.
The work done in the body of the loop, which concatenates the string with itself.
Here, when accounting for the work in these two spots, we will account for the total amount of work done across all iterations, not just in one iteration.
Let's look at the first of these - the work done by the loop header. This will run exactly n times. Each time, this part of the code will do only O(1) work incrementing i, testing it against n, and deciding whether to continue with the loop. Therefore, the total work done here is Θ(n).
Now let's look at the loop body. As we saw before, iteration k creates a string of length 2^(k+1), which takes time roughly 2^(k+1). If we sum this up across the n iterations (k = 0, 1, 2, ..., n - 1), we get that the work done is (roughly speaking)
2^1 + 2^2 + 2^3 + ... + 2^n.
So what is this sum? Previously, we got a bound of O(n · 2^n) by noting that
2^1 + 2^2 + 2^3 + ... + 2^n
< 2^n + 2^n + 2^n + ... + 2^n
= n · 2^n = Θ(n · 2^n)
However, this is a very weak upper bound. If we're more observant, we can recognize the original sum as the sum of a geometric series, where a = 2 and r = 2. Given this, the sum of these terms can be worked out to be exactly
2^(n+1) - 2 = 2(2^n) - 2 = Θ(2^n)
In other words, the total work done by the body of the loop, across all iterations, is Θ(2^n).
The total work done by the loop is given by the work done in the loop maintenance plus the work done in the body of the loop. This works out to Θ(2^n) + Θ(n) = Θ(2^n). Therefore, the total work done by the loop is Θ(2^n). This grows very quickly, but nowhere near as rapidly as O(n · 2^n), which is what our original analysis gave us.
In short, when analyzing a loop, you can always get a conservative upper bound by multiplying the number of iterations of the loop by the maximum work done on any one iteration of that loop. However, doing a more precise analysis can often give you a much better bound.
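As a quick sanity check, here's a small Python sketch that counts the characters copied by each concatenation; its total matches the closed form 2^(n+1) - 2 from the geometric series above:
def total_copies(n):
    total, length = 0, 1     # s starts as "a", length 2^0
    for _ in range(n):
        total += 2 * length  # s + s copies both halves into the new string
        length *= 2          # the string doubles each iteration
    return total

for n in range(1, 7):
    print(n, total_copies(n), 2 ** (n + 1) - 2)  # the two counts agree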
Hope this helps!
I want to plot the frequency spectrum of a music file (like they do, for example, in Audacity). Hence I want the frequency in Hertz on the x-axis and the amplitude (or decibels) on the y-axis.
I divide the song (about 20 million samples) into blocks of 4096 samples at a time. Each block results in 2049 (N/2 + 1) complex numbers (sine and cosine -> real and imaginary parts). So now I have these thousands of individual 2049-element arrays; how do I combine them?
Let's say I do the FFT 5000 times, resulting in 5000 2049-element arrays of complex numbers. Do I add up all the values of the 5000 arrays and then take the magnitude of the combined 2049-element array? Do I then scale the x-axis with the song's sample rate / 2 (e.g. 22050 for a 44100 Hz file)?
Any information will be appreciated.
What application are you using for this? I assume you are not doing this by hand, so here is a Matlab example:
>> fbins = fs/N * (0:(N/2 - 1)); % Where N is the number of fft samples
now you can perform
>> plot(fbins, abs(fftOfSignal(1:N/2)))
edit: check this out http://www.codeproject.com/Articles/9388/How-to-implement-the-FFT-algorithm
Wow I've written a load about this just recently.
I even turned it into a blog post available here.
My explanation leans towards spectrograms, but it's just as easy to render a chart like you describe!
I might not be correct on this one, but as far as I'm aware, you have 2 ways to get the spectrum of the whole song.
1) Do a single FFT on the whole song, which will give you an extremely good frequency resolution, but is in practice not efficient, and you don't need this kind of resolution anyway.
2) Divide it into small chunks (like 4096-sample blocks, as you said), get the FFT for each of those and average the spectra. You will compromise on the frequency resolution, but make the calculation more manageable (and also decrease the variance of the spectrum). The link Wilhelmsen posted describes how to compute an FFT in C++, and I think some libraries already exist to do that, like FFTW (but I never managed to compile it, to be fair =) ).
To obtain the magnitude spectrum, average the energy (square of the magnitude) across all your chunks for every single bin. To get the result in dB, just take 10 * log10 of the results. That is of course assuming that you are not interested in the phase spectrum. I think this is known as Bartlett's method.
I would do something like this:
// At this point you have the FFT chunks
float sum[N/2 + 1] = {0}; // Per-bin energy accumulators, zero-initialised
// For each bin
for (int binIndex = 0; binIndex < N/2 + 1; binIndex++)
{
for (int chunkIndex = 0; chunkIndex < chunkNb; chunkIndex++)
{
// Accumulate the energy of the complex bin; the energy is the
// squared magnitude, re^2 + im^2, so no square root is needed
float re = FFTChunk[chunkIndex].bins[binIndex].real;
float im = FFTChunk[chunkIndex].bins[binIndex].im;
sum[binIndex] += re * re + im * im;
}
// Average the energy;
sum[binIndex] /= chunkNb;
}
// Then get the values in decibel
for (int binIndex = 0; binIndex < N/2 + 1; binIndex++)
{
sum[binIndex] = 10 * log10f(sum[binIndex]);
}
Hope this answers your question.
Edit: Goz's post will give you plenty of information on the matter =)
Commonly, you would take just one of the arrays, corresponding to the point in time in the music that you are interested in. Then you would calculate the log of the magnitude of each complex array element. Plot the N/2 results as Y values, and scale the X axis from 0 to Fs/2 (where Fs is the sampling rate).
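A minimal sketch of that idea with numpy/matplotlib, assuming a 44100 Hz sample rate; the synthetic 1 kHz tone is just a stand-in for one 4096-sample block of your audio:
import numpy as np
import matplotlib.pyplot as plt

fs = 44100                              # sample rate (Hz)
N = 4096                                # block size
t = np.arange(N) / fs
samples = np.sin(2 * np.pi * 1000 * t)  # hypothetical 1 kHz test tone

spectrum = np.fft.rfft(samples)         # N/2 + 1 complex bins
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # +eps avoids log(0)
freqs = np.fft.rfftfreq(N, d=1.0 / fs)  # bin centres from 0 to fs/2

plt.plot(freqs, magnitude_db)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.show()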