I am developing a digital delay on a microcontroller and I am stuck with the delay decay. The delay is implemented with a comb filter.
Here it is: http://www.tonmeister.ca/main/textbook/intro_to_sound_recording837x.png
The delay line, "emulating the tape", is implemented as a circula buffer. The effect can be killed and such case does not represents an issue; when turning the effect off though, I have the tail of the delay left in the buffer to process, as if the delay had been frozen and the tail slowly decay (depending on the feedback gain).
My question is: how many times I have to recirculate samples through the buffer?
One way I thought to approach this could be by modelling the physical process: assume that the input sequence has a loudness of 0 dB for its entire duration and that, after going through the delay line, it is attenuated by a factor of 1/10. In terms of loudness this corresponds to a drop of 20 dB (since power = voltage^2) every time the sequence goes through the feedback path. The weakest audible sound has a loudness of −130 dB but, taking the ambient noise into consideration as well, −120 dB will be sufficient as the lowest reference level. Hence, after the echoes have been through the feedback path 6 times (120 dB / 20 dB) they will no longer be audible.
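For what it's worth, a minimal sketch of that count under those assumptions (the function and parameter names are mine, not from any library):

```python
import math

def decay_passes(feedback_gain=0.1, floor_db=-120.0):
    """Number of trips through the feedback path until the tail
    drops below the chosen audibility floor (in dB)."""
    loss_per_pass_db = 20.0 * math.log10(feedback_gain)  # 0.1 -> -20 dB per pass
    return math.ceil(floor_db / loss_per_pass_db)

print(decay_passes(0.1, -120.0))  # 6 passes for a 1/10 feedback gain
```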
Is there a more efficient way?
Thank you!
I have a signal that checks whether data is available in a memory block and then does some computation/logic (which is irrelevant here).
I want a signal called "START_SIG" to go high X nanoseconds before the first rising edge of a clock running at 10 MHz. It only goes high if data is available, and then further computation proceeds as needed.
Now, how can this be done? Also, I cannot use a delay statement since this must be RTL Verilog; it must be synthesizable on an FPGA (Artix-7 series).
Any suggestions?
I suspect an XY problem: if START_SIG is produced by logic in the same clock domain as your processing, then timing will likely be met without any work on your part (10 MHz is dead slow in FPGA terms). But if you really need to do something like this, there are a few ways (though seriously, you are doing it wrong!).
FPGA logic is usually synchronous to one or more clocks; generally, needing vernier control within a clock period is a sign of doing it wrong.
1. Use a {PLL/MMCM/whatever} to generate two clocks, one dead slow at 10 MHz and something much faster, then count the fast one from the previous edge of the 10 MHz clock to get your timing.
2. Use an MMCM/PLL or such (platform dependent) to generate two 10 MHz clocks with a small phase shift, then gate one of them.
3. Use a long line of inverter pairs (the KEEP attribute will be your friend; that's VHDL, but Verilog has something similar), and calibrate against your known clock periodically (it will drift with temperature, day of the week, and sign of the zodiac). This is neat for things like time-to-digital converters, possibly combined with option two for fine trimming. Shades of ring oscillators about this one, but whatever works.
I thought it would be a simple task ...
- platform: linux on laptop
- language: python
- object: generate a tone to be heard on a speaker or headphones. The tone will be modified in real time, many times per second (think of a metal detector)
My initial design was to generate a tone in Python and pipe it to aplay.
Since aplay consumes data at a known rate (the sampling rate), I thought my tone generator would not have to care about timing if the silences (between tones) were generated at the normal sampling rate (zero amplitude).
The first result showed a significant time lag (many seconds). I found that the pipe is fairly large by default (64 KB). That's 8 seconds of samples (at 8 kHz).
I found a way to reduce the pipe size to 4 KB, but it is still too long (0.5 s lag).
Sampling at a very high frequency would reduce the lag but I don't like that solution.
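For concreteness, a rough sketch of that piping approach as described (this only illustrates the setup, not a fix for the latency; the tone and duration values are arbitrary, and the helper name is mine). It assumes aplay is asked for raw unsigned 8-bit mono at 8 kHz, which is consistent with 64 KB of pipe being about 8 seconds of audio:

```python
import math
import subprocess

RATE = 8000  # Hz, matches the 8 kHz figure above

# aplay reads raw unsigned 8-bit mono samples from stdin at the given rate.
player = subprocess.Popen(
    ["aplay", "-f", "U8", "-c", "1", "-r", str(RATE)],
    stdin=subprocess.PIPE,
)

def chunk(freq_hz, duration_s, amplitude=0.3):
    """One block of samples: a tone, or a zero-amplitude 'silence' if freq_hz is 0."""
    n = int(RATE * duration_s)
    return bytes(
        128 + int(127 * amplitude * math.sin(2 * math.pi * freq_hz * i / RATE))
        for i in range(n)
    )

# Feed tones and null-amplitude silences back to back; aplay paces playback,
# but everything sitting in the pipe is latency.
for _ in range(10):
    player.stdin.write(chunk(440, 0.1))
    player.stdin.write(chunk(0, 0.1))
player.stdin.close()
player.wait()
```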
My second approach was to generate a real silence (no samples at all) and have the generator sleep() for its duration.
The result is that aplay complains about underruns and, for some reason, the tones are truncated and mishandled (bad rendering).
So, my question is:
What are the best ways to send a tone to the audio stack without piping?
I want to use snd_pcm_delay() to query the delay until the samples I am about to write to the ALSA buffer become audible. I expect this value to vary between individual calls, yet on two systems it is constant: the function always returns a value equal to the period size on one platform, and equal to the buffer size (two times the period size in my code) on the other.
Is my understanding of snd_pcm_delay() wrong? Is it a driver problem?
The delay corresponds to the number of samples in the buffer (the complement of snd_pcm_avail()), plus the time needed to move samples from the buffer to the speakers. The latter part is driver dependent and might not be implemented.
If the device takes out samples one entire period at a time (some DMA controllers have no better granularity for reporting the current position), then the delay value will appear to stay constant for a time, and then jump by an entire period. And you see that jump only before you have re-filled the buffer.
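A toy illustration of that granularity effect, with made-up numbers (reported_delay is not an ALSA function, just a model of the behaviour described):

```python
PERIOD = 1024                 # frames per period
BUFFER = 2 * PERIOD           # a two-period ring buffer

def reported_delay(frames_written, frames_played):
    """What a delay query would show if the driver can only report the
    hardware position rounded down to the last completed period."""
    reported_played = (frames_played // PERIOD) * PERIOD
    return frames_written - reported_played

frames_written = BUFFER       # buffer filled completely, then not refilled
for frames_played in range(0, BUFFER + 1, 256):
    print(frames_played, reported_delay(frames_written, frames_played))
# The reported delay stays at 2048 until a whole period has been consumed,
# then drops by 1024 in a single step, and so on.
```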
I need to tune PI(D) gains in a system which has quite a large delay. It's a common temperature controller, but the temperature probe is far away from the heater. Some further info:
- the response of the probe is delayed about 10 seconds from any change on the heater
- the temperature is sampled at 1 Hz, with a resolution of 0.01 °C
- the heater is driven by PWM at 1 Hz, with 10-bit resolution
- the goal is to keep the oscillation below ±0.05 °C
Currently I'm using the controller as a PI. I can't avoid oscillations: the higher the gain, the smaller and faster the oscillations, but they are still too large (about ±0.15 °C).
Reducing the P and I gains leads to very long and deep oscillations.
I think this is due to the delay.
The settling time is not a problem, it may take all the time it needs.
I'm puzzling over how to get the system to work. Suppose I use only the I term: when the probe reaches the target value and the I output starts to decrease, the temperature will keep rising for some time. I cannot use the derivative term because the variations are too slow and dError is very close to zero (if I set dGain to a huge value there is too much noise).
Any idea?
Try P-only. How fast are the proportional-only oscillations? If you can't tune Kp small enough to get no oscillations, then your heater is overpowered for your system.
If the dead time of the system is on the order of 10 s, the time constant (T_i) for the integral term should be 3.3 times the dead time, using the Ziegler-Nichols open-loop PI rule ( https://controls.engin.umich.edu/wiki/index.php/PIDTuningClassical#Ziegler-Nichols_Open-Loop_Tuning_Method_or_Process_Reaction_Method: ), and the integral gain should then be Ki = Kp/T_i. So with a dead time of 10 s, Ki should be Kp/33 or smaller.
If you are getting integral-only oscillations, then the integral is winding up and down quicker than the process responds, and it should be even smaller.
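In code form, the rule above with this thread's numbers (the Kp value is only a placeholder):

```python
dead_time = 10.0          # s, observed lag between heater and probe
kp = 50.0                 # placeholder proportional gain, PWM counts per degC

t_i = 3.3 * dead_time     # Ziegler-Nichols open-loop integral time constant
ki = kp / t_i             # i.e. Kp/33 for a 10 s dead time

print(t_i, ki)            # 33.0 s, ~1.5 counts per (degC * s)
```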
Also, think of the units of the different terms. It might not be the delay causing your problems so much as the resolution of the measurement and control systems. If you're driving (for example) a 100 W heater with a 1/1024-resolution PWM, you've got 0.1 W of resolution per PWM count that you are trying to adjust based on 0.01 °C temperature differences. Below Kp = 100 PWM counts/°C (or 10 W/°C) you don't have enough resolution in the PWM to make a change in response to a 0.01 °C error; at Kp = 10 PWM counts/°C you might need a 0.10 °C change before the PWM power actually changes. Can you use a higher-resolution PWM?
Thinking of it the other way, if you want to operate a system over a range of 30 °C at 0.01 °C, I'd think you would want at least a 15-bit PWM to have 10 times the resolution in the controlled system. With only 10 bits of PWM you get only about 1 °C of total range with control at 10x the resolution of the measurements.
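And the resolution arithmetic above, spelled out with the same example numbers (the 100 W heater is hypothetical):

```python
heater_watts = 100.0                        # hypothetical heater power
pwm_counts = 2 ** 10                        # 10-bit PWM
watts_per_count = heater_watts / pwm_counts # ~0.1 W per count

temp_step = 0.01                            # degC, measurement resolution
# Smallest Kp (counts per degC) for which a 0.01 degC error moves the PWM by one count:
min_kp = 1 / temp_step                      # 100 counts/degC, ~10 W/degC here

# 30 degC of range at 10x finer steps than the 0.01 degC measurement:
steps_needed = 30.0 / (temp_step / 10)      # 30000 steps, i.e. about 15 bits
print(watts_per_count, min_kp, steps_needed)
```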
Normally for large delays you have two options: lower the gains of the system or, if you have a model of the plant you are controlling, use a Smith Predictor.
I would start by modelling your system (using open-loop steps in the input) to quantify the delay and the time constant of your plant, then check if the sampling of the temperature and the PWM rate are OK.
Notice that if your PWM frequency is too low in comparison to the plant dynamics, you will have sustained oscillations because of the slow PWM. You can check this by applying just a constant input to your PWM (no controller, open loop).
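For reference, a minimal sketch of the Smith predictor idea for a first-order-plus-dead-time model. All numbers (plant gain, time constant, dead time, PI gains) are placeholders you would replace with values from the open-loop step test, and the function and variable names are mine:

```python
import math
from collections import deque

# Assumed first-order-plus-dead-time model from an open-loop step test (placeholders).
K_PLANT = 2.0       # degC per unit of heater drive at steady state
TAU = 60.0          # s, plant time constant
DEAD_TIME = 10.0    # s, transport delay to the probe
TS = 1.0            # s, sample period (1 Hz)

a = math.exp(-TS / TAU)        # discrete-time pole of the first-order model
b = K_PLANT * (1.0 - a)
N_DELAY = int(round(DEAD_TIME / TS))
delay_line = deque([0.0] * N_DELAY, maxlen=N_DELAY)

KP, KI = 0.5, 0.01             # placeholder PI gains
integral = 0.0
y_model = 0.0                  # undelayed model output

def smith_pi_step(setpoint, y_measured):
    """One control step: the PI acts on the measurement corrected by the
    difference between the undelayed and the delayed model outputs, so the
    dead time is effectively removed from the loop the PI 'sees'."""
    global integral, y_model
    y_model_delayed = delay_line[0]
    y_feedback = y_measured + (y_model - y_model_delayed)
    error = setpoint - y_feedback
    integral += KI * error * TS
    u = KP * error + integral
    u = max(0.0, min(1.0, u))  # clamp to the 0..1 duty range (no anti-windup here)
    # Propagate the internal model and its delay line with the applied drive.
    y_model = a * y_model + b * u
    delay_line.append(y_model)
    return u
```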
EDIT: Didn't see that the problem was already solved, but I'll leave this here for reference.
ADPCM is adaptive, so it has a variable sample rate. But does it have some average rate or something? Does it have frames of fixed time duration?
You misunderstood it here :-). "Adaptive" doesn't mean that the sample rate is adjusted according to the signal it contains.
"Adaptive" means that the limited available delta steps (4 bits = only 16 possibilities to encode a sample) are adapted to the signal by prediction. It tries to predict, from a given sample, what value the next sample may have, and adapts the delta steps accordingly.
If the signal changes little from sample to sample, the steps are chosen closer together than if it changes a lot. It is very unlikely that the signal goes from strongly oscillating to quiet from one sample to the next.
You can see that behavior if you encode a 100 Hz square wave with such an algorithm and re-open it in an audio editor that shows the waveform. When the waveform changes from one polarity to the other, the signal "speeds up" (the steps get farther and farther apart) until it reaches the other end, and then it slows down again (the steps get closer and closer together).
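A heavily simplified sketch of that step adaptation, loosely modeled on IMA ADPCM (the step table is truncated and this is not a complete codec):

```python
# Truncated step-size table (real IMA ADPCM uses 89 entries).
STEP_TABLE = [7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
              34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143]
# How the table index moves for each code: small codes shrink the step,
# large codes (big sample-to-sample change) grow it.
INDEX_ADJUST = [-1, -1, -1, -1, 2, 4, 6, 8]

def encode(samples):
    """Encode 16-bit samples to 4-bit codes, adapting the step size as we go."""
    predicted, index, codes = 0, 0, []
    for s in samples:
        step = STEP_TABLE[index]
        diff = s - predicted
        code = 0
        if diff < 0:
            code, diff = 8, -diff            # sign bit
        if diff >= step:
            code |= 4
            diff -= step
        if diff >= step >> 1:
            code |= 2
            diff -= step >> 1
        if diff >= step >> 2:
            code |= 1
        # Reconstruct the value the decoder would get and use it as the next predictor.
        delta = step >> 3
        if code & 4: delta += step
        if code & 2: delta += step >> 1
        if code & 1: delta += step >> 2
        predicted += -delta if code & 8 else delta
        predicted = max(-32768, min(32767, predicted))
        # Adaptation: the magnitude bits of the code drive the next step size.
        index = max(0, min(len(STEP_TABLE) - 1, index + INDEX_ADJUST[code & 7]))
        codes.append(code)
    return codes
```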
It still has a fixed sample rate: the one you give it. In RIFF WAVE, the sample rate is stored in the header.