I have a class aggregating sound in real time. Aggregation means it calculates sound parameters such as mean amplitude, noise level and so on for time units longer than sound frames. Frames are PCM and last less than a millisecond, while aggregation units are 1/10 of a second and longer.
I wish to draw UML state machine diagram of this class.
It consists of two smaller state diagrams, one tracking frames and the other tracking aggregation units, by cycles. When the first diagram detects that a full frame has been received, it should kick the second diagram, where the frame is processed and the aggregation data for one unit is updated.
I drew the picture below.
My question is: how do I draw that the first machine's transition from the full-frame state to the initial state initiates or induces the transition in the second machine?
I labelled the transition in question on the second diagram as "frame".
What you need is for the first machine to send an event to the second when Full frame received is entered. There are several ways to do it: you could add an effect on the transition leading to Full frame received, or you could define an "entry action" for Full frame received.
Once you send an event upon entering Full frame received, you can define in the second state machine a transition that reacts to it.
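For instance, here is a minimal sketch of that event-based coupling in Python; the class, state and event names are made up for illustration, not part of any UML tool:

from collections import deque

event_queue = deque()  # shared event pool between the two machines

class FrameMachine:
    def on_full_frame_received(self, frame):
        # entry action of the "Full frame received" state:
        # send the "frame" event to the other machine,
        # then transition back to the initial/collecting state
        event_queue.append(("frame", frame))

class AggregationMachine:
    def __init__(self):
        self.unit_stats = []  # running stats for the current aggregation unit

    def dispatch(self):
        while event_queue:
            name, payload = event_queue.popleft()
            if name == "frame":  # the transition labelled "frame"
                self.unit_stats.append(payload)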
In the UML Spec. 2.4.1, "entry action" is described on page 561, effect of transitions on page 581.
Could anybody please help me with designing the state space graph for the Markov Decision Process of the car racing example from Berkeley CS188?
For example, I can do 100 actions, and I want to run value iteration to get the best policy to maximize my rewards.
When I have only 3 states (Cool, Warm and Overheated), I don't know how to add an "End" state and complete the MDP.
I am thinking about having 100 Cool states and 100 Warm states, so that, for example, from Cool1 you can go to Cool2, Warm2 or Overheated, and so on.
In this example, the values of states close to 0 are higher than those of states close to 100.
Am I missing something in MDP?
There should only be 3 possible states. "Cool" and "Warm" are recurrent states, and "Overheated" is absorbing because the probability of leaving that state is 0.
You can have two actions, slow or fast, for both "cool" and "warm" states, as described in the problem statement. The probability transition matrix and step rewards can be easily established from the chart. For example, P(go fast, from cool to warm) = 0.5, and R(go fast, from cool to warm) = 2.
Depending on the objective, you can solve it as a finite horizon or infinite horizon MDP.
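As a concrete sketch in Python: the fast-from-cool numbers match the figures above; the remaining probabilities and rewards are the usual CS188 chart values, so double-check them against the slides.

GAMMA = 0.9

# (state, action) -> list of (probability, next_state, reward)
T = {
    ("cool", "slow"): [(1.0, "cool", 1)],
    ("cool", "fast"): [(0.5, "cool", 2), (0.5, "warm", 2)],
    ("warm", "slow"): [(0.5, "cool", 1), (0.5, "warm", 1)],
    ("warm", "fast"): [(1.0, "overheated", -10)],
}

V = {"cool": 0.0, "warm": 0.0, "overheated": 0.0}  # absorbing state keeps value 0
for _ in range(100):
    V = {
        s: (max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in T[(s, a)])
                for a in ("slow", "fast"))
            if s != "overheated" else 0.0)
        for s in V
    }
print(V)

Note that there is no need for 100 copies of each state: for a 100-step horizon you simply run 100 Bellman backups over the same 3 states (with GAMMA = 1 for the undiscounted finite-horizon case).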
I want to know how this: https://youtu.be/aFXcQvdAe08 was done.
Any ideas how this was created?
In really high level terms, I think it could happen like this:
The entire song is divided up into windows of some fraction of a second.
For each window, a Fourier transform or some equivalent transform converts the audio signal from the time domain to the frequency domain.
The transformed signal is mirrored radially around the origin, where the "fatness" of each circle could correspond to "how tall the respective hump is" in the frequency graph.
So, to make the animation "smooth", one would probably want at least 10 windows a second; a rough sketch of this pipeline follows. Hope that helps.
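Here is a sketch of steps 1-3 above, computing one magnitude spectrum per window, which a renderer could map to the radius of each ring; the function name and parameters are made up for illustration:

import numpy as np

def spectra_for_animation(samples, sample_rate, fps=10):
    window = int(sample_rate / fps)          # e.g. 1/10 s per animation frame
    frames = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window] * np.hanning(window)
        mags = np.abs(np.fft.rfft(chunk))    # frequency-domain magnitudes
        frames.append(mags / max(mags.max(), 1e-12))  # normalize per frame
    return frames                            # one radius profile per frame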
I have been trying to modify the amplitude of specific frequencies. Here is what I have done:
I get the data as a 2048-sample float array with a value range of [-1, 1]. It's raw data.
I use this RealFFT algorithm http://www.lomont.org/Software/Misc/FFT/LomontFFT.html
I divide the raw data into left and right channels (this works great).
I perform RealFFT (forward enabled) on both left and right, and I use this equation to find which index corresponds to the frequency I want: freq/(samplerate/sizeOfBuffer/2.0)
I modify the frequency that I want.
I perform RealFFT (forward disabled) to go back to the time domain.
Now when I play it back, I hear the change that I made to the frequency, but there is a flickering noise (kind of like the flickering when you play an old vinyl record).
Any idea what I might be doing wrong?
It has been a while since I took my signal processing course at university, so I might have forgotten something.
Thanks in advance!
The comments may be confusing. Here are some clarifications.
The imaginary part is not the phase. The real and imaginary parts form a vector: think of a 2-D plot where the real part is on the x axis and the imaginary part on the y axis. The magnitude of a frequency bin is the length of the line from the origin to that point, i.e. the square root of the sum of the squares of the real and imaginary parts; the phase is the arctangent of the imaginary part divided by the real part, atan2(im, re).
So, the first step: to change the magnitude of the vector, you must scale both the real and imaginary parts.
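A minimal sketch of that scaling, assuming the spectrum is held as complex values (with split real/imaginary arrays, as in Lomont's RealFFT layout, you would scale the two arrays element-wise instead):

import numpy as np

def scale_bin(spectrum, k, gain):
    re, im = spectrum[k].real, spectrum[k].imag
    # magnitude = sqrt(re^2 + im^2), phase = atan2(im, re);
    # multiplying both components by `gain` scales only the magnitude
    spectrum[k] = complex(re * gain, im * gain)  # same as spectrum[k] *= gain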
That's easy. The second part is much more complicated. The Fourier transform's "view" of the world is that it is infinitely periodic; that is, it looks as if the signal wraps from the end back to the beginning. Suppose you put a perfect sine tone into your algorithm, and the period of the sine tone is 4096 samples: the first sample into the FFT is +1 and the last sample into the FFT is -1. If you look at the spectrum from the FFT, it will appear as if there are lots of high frequencies, which are the harmonics of transforming a signal that has a jump from -1 to +1. The longer the FFT, the closer the FFT comes to showing the "real" view of the signal.
Techniques have been developed to smooth out the transitions between FFT blocks, by windowing and overlapping them, so that the transitions between blocks are not so "discontinuous". A fairly common technique is to use a Hann window and overlap by a factor of 4. That is, for every 2048 samples, you actually do 4 FFTs, and every FFT overlaps the previous block by 1536. The Hann window gets mathy, but basically it has nice properties so that you can do overlaps like this and everything sums up nicely.
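A sketch of that analysis/resynthesis scheme follows; it is an identity pass-through unless you modify the spectrum, and the frame size and normalization constant are my assumptions, not anyone's production code:

import numpy as np

def process(signal, n=2048):
    hop = n // 4                       # 4x overlap, as described above
    win = np.hanning(n)                # Hann analysis and synthesis window
    out = np.zeros(len(signal) + n)
    for start in range(0, len(signal) - n, hop):
        frame = signal[start:start + n] * win
        spec = np.fft.rfft(frame)
        # ... modify spec here, e.g. scale selected bins ...
        out[start:start + n] += np.fft.irfft(spec, n) * win
    # overlapped win^2 sums to (win**2).sum()/hop, so divide it back out
    return out[:len(signal)] / (win ** 2).sum() * hop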
I found this pretty fun blog showing exactly the same learning pains that you're going through: http://www.katjaas.nl/FFTwindow/FFTwindow&filtering.html
This technique is different from the Overlap-Save approach another commenter mentioned, which is a method developed to use FFTs to do FIR filtering. However, designing the FIR filter itself will typically be done in a mathematical package like Matlab/Octave.
If you use a series of shorter FFTs to modify a longer signal, then one should zero-pad each window so that it uses a longer FFT (longer by the length of the impulse response of the modification's spectrum), and combine the series of longer FFTs by overlap-add or overlap-save. Otherwise, waveform changes that should ripple past the end of each FFT/IFFT modification will, due to circular convolution, ripple around to the beginning of each window, and cause that periodic flickering distortion you hear.
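A sketch of the zero-padded overlap-add convolution just described; the block size is arbitrary, and h stands for the impulse response of the desired modification:

import numpy as np

def fft_filter(x, h, block=2048):
    # FFT long enough to hold block + impulse response without wrap-around
    n = 1 << (block + len(h) - 1).bit_length()
    H = np.fft.rfft(h, n)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        yseg = np.fft.irfft(np.fft.rfft(seg, n) * H, n)
        # overlap-add the linear-convolution tail into the next block
        y[start:start + len(seg) + len(h) - 1] += yseg[:len(seg) + len(h) - 1]
    return y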
I've been hunting all over the web for material about vocoder or autotune, but haven't got any satisfactory answers. Could someone in a simple way please explain how do you autotune a given sound file using a carrier sound file?
(I'm familiar with FFTs, windowing, overlap, etc.; I just don't get what we do once we have the FFTs of the carrier and of the original sound file which has to be modulated.)
EDIT: After looking around a bit more, I finally found exactly what I was looking for: a channel vocoder. The way it works is, it takes two inputs, a voice signal and a musical signal rich in frequency content. The musical signal is modulated by the envelope of the voice signal, and the output sounds like the voice singing in the musical tone.
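For concreteness, a minimal channel-vocoder sketch along those lines; the band count, band edges, filter orders and envelope cutoff are arbitrary design choices, not a canonical implementation:

import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(voice, carrier, fs, n_bands=16):
    edges = np.geomspace(80, fs * 0.45, n_bands + 1)     # log-spaced bands
    out = np.zeros(min(len(voice), len(carrier)))
    env_sos = butter(2, 50, "low", fs=fs, output="sos")  # envelope follower
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], "bandpass", fs=fs, output="sos")
        v = sosfilt(sos, voice[:len(out)])               # voice band
        c = sosfilt(sos, carrier[:len(out)])             # carrier band
        env = sosfilt(env_sos, np.abs(v))                # smoothed band amplitude
        out += c * env                                   # modulate carrier band
    return out / (np.max(np.abs(out)) + 1e-12)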
Thanks for your help!
Using a phase vocoder to adjust pitch is basically pitch estimation plus interpolation in the frequency domain.
A phase vocoder reconstruction method resamples the frequency spectrum at a (potentially) new FFT bin spacing to shift all the frequencies up or down by some ratio. The phase vocoder algorithm additionally uses information shared between adjacent FFT frames to make sure this interpolation can create continuous waveforms across frame boundaries; e.g. it adjusts the phases of the interpolation results so that successive sine-wave reconstructions are continuous rather than having breaks, discontinuities or phase cancellations between frames.
How much to shift the spectrum up or down is determined by pitch estimation, i.e. by calculating the ratio between the estimated pitch of the source and the target pitch. Again, phase vocoders use information about phase differences between FFT frames to better estimate pitch. This is possible because it uses a bit more global information than is available from a single local FFT frame.
Of course, this frequency and phase changing can smear out transient detail and cause various other distortions, so actual phase vocoder products may additionally do all kinds of custom (often proprietary) special case tricks to try and fix some of these problems.
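As a sketch of the inter-frame phase trick mentioned above: the phase advance between two FFT frames, minus the expected advance for each bin, refines that bin's frequency estimate beyond the bin spacing (hop size and frame contents are assumptions here):

import numpy as np

def true_freqs(frame_a, frame_b, fs, hop):
    n = len(frame_a)
    pa = np.angle(np.fft.rfft(frame_a))
    pb = np.angle(np.fft.rfft(frame_b))          # frame taken `hop` samples later
    k = np.arange(n // 2 + 1)
    expected = 2 * np.pi * k * hop / n           # phase advance if exactly on-bin
    dev = np.angle(np.exp(1j * (pb - pa - expected)))  # wrap to [-pi, pi]
    return (k + dev * n / (2 * np.pi * hop)) * fs / n  # refined Hz per bin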
The first step is pitch detection. There are a number of pitch detection algorithms, introduced briefly in wikipedia: http://en.wikipedia.org/wiki/Pitch_detection_algorithm
Pitch detection can be implemented in either the frequency domain or the time domain. Various techniques exist in both domains, with various properties (latency, quality, etc.). In the frequency domain, it is important to realize that a naive approach is very limiting because of the time/frequency trade-off. You can get around this limitation, but it takes work.
Once you've identified the pitch, you compare it with a desired pitch and determine how much you need to actually pitch shift.
The last step is pitch shifting, which, like pitch detection, can be done in the T or F domain. The "phase vocoder" method other folks mentioned is the F-domain method. T-domain methods include (in increasing order of quality) OLA, SOLA and PSOLA, some of which you can read about here: http://www.scribd.com/doc/67053489/60/Synchronous-Overlap-and-Add-SOLA
Basically you do an FFT, then in the frequency domain you move the signal components to the nearest perfect semitone pitch.
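One reading of "move to the nearest perfect semitone" in code, assuming an equal-tempered scale referenced to A4 = 440 Hz:

import math

def semitone_ratio(f_detected, a4=440.0):
    semis = round(12 * math.log2(f_detected / a4))  # nearest semitone from A4
    f_target = a4 * 2 ** (semis / 12)
    return f_target / f_detected  # feed this ratio to the pitch shifter

print(semitone_ratio(450.0))  # ~0.978: pull 450 Hz down to A4 = 440 Hz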
Are there any good (if possible, scientific) resources available (web or books) about overlap processing? I am not that interested in the effects of using overlap processing and windows when analyzing a signal, since the requirements are different. It is more about the following real-time situation (I am currently dealing with audio signals):
Divide the signal into smaller parts.
Create overlapping windows.
FFT the windowed chunks.
Process in the frequency domain.
IFFT the results.
Put the chunks back together into a continuous stream.
I am especially interested in the influence of the window used on the resulting error as well as the effect of the overlap length. However I couldn't find any good resources that deal with the subject in detail. Any suggestions?
Edit:
After some discussions if using a window function is appropriate, I found a decent handout explaining the overlap and add/save method. http://www.ece.tamu.edu/~deepa/ecen448/handouts/08c/10_Overlap_Save_Add_handouts.pdf
However, after doing some tests, I noticed that the windowed version performs more accurately in most cases than the overlap add/save method. Could anybody confirm this?
I don't want to jump to any conclusions regarding computation time though....
Edit2:
Here are some graphs from my tests:
I created a signal which consists of three cosine waves.
I used this filter function in the time domain for filtering. (It's symmetric, as it is applied to the whole output of the FFT, which is also symmetric for real input signals.)
The output of the IFFT looks like this: it can be seen that low frequencies are attenuated more than frequencies in the mid range.
For the overlap add/save and the windowed processing, I divided the input signal into 8 chunks of 256 samples. After reassembling them, they look like this (samples 490-540).
It can be seen that the overlap add/save results differ from the windowed version at the point where the chunks are joined (sample 511). This is the error which leads to different results when comparing the windowed process and overlap add/save; the windowed process is closer to the signal processed in one big chunk.
However, I have no idea why these errors are there, or whether they should be there at all.
This is a fairly well-known area of signal processing, and generally speaking, if you are doing processing along the lines of FFT -> spectral processing -> IFFT, you need to use the "overlap and add" approach. Cross-correlation of two inputs is a classic example, done much more easily in the spectral domain than in the time domain.
Here's a short paper I found right away via Google (I just searched for "fft overlap and add"): http://www.coe.montana.edu/ee/rmaher/ee477/ee477_fftlab_sp07.pdf
I would recommend you invest in a good Signal Processing book, such as the classic Rabiner & Gold "Theory and application of digital signal processing" (Prentice-Hall ISBN 0-13-914101-4). That should cover the concept of overlap-and-add processing.
When using an FFT for overlap-add or overlap-save fast convolution filtering, normally you don't want to use a windowing function. The circular windowing artifacts cancel out when combining successive FFT frames in canonical overlap add/save filtering.
ADDED:
If you do use a non-rectangular window, you might want to make sure that all the overlapped frames of windows sum to DC, otherwise your resulting filtered signal will have amplitude scalloping. Rectangular windows and raised-cosine (von Hann) windows will sum to DC if the overlap amount is an exact submultiple of the window width (except, of course, at the very start and end of the overlap sequence).
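A quick numerical check of that summation condition (frame length and hop are arbitrary here, with the hop an exact submultiple of the window width):

import numpy as np

n, hop = 1024, 256                  # 4x overlap, hop divides n
win = np.hanning(n + 1)[:-1]        # periodic Hann window
total = np.zeros(n * 8)
for start in range(0, len(total) - n, hop):
    total[start:start + n] += win
print(np.ptp(total[n:-n]))          # ~0: flat sum, so no amplitude scalloping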
I have been playing with this, attempting to answer for myself the question of why one would use a window. My only references to a synthesis window are these:
https://ccrma.stanford.edu/~jos/sasp/Inverse_FFT_Synthesis.html
http://recherche.ircam.fr/anasyn/roebel/amt_audiosignale/VL2.pdf
http://www.dspdimension.com/tutorials/
Stephan Bernsee has some good overview information. His smbPitchShift code uses a synthesis window: he applies the raised cosine to the input block, then applies it again to the output block. I believe this is necessary because the pitch shifting algorithm is not a linear filtering operation, so there may well be discontinuous artifacts at the window boundaries; the synthesis window is used to create a smooth transition between frames.
I think the reason there is not much information specifically addressing windowing for frequency-domain real-time convolution is that it doesn't have a practical application unless you also need to do some analysis (i.e., an adaptive filter of some sort); then the topics related to spectral spreading become of interest again.
I have plotted the outputs of a filtered signal using both a raised-cosine window and the overlap-add method, and the end result is an identical IR and identical signals. This is no surprise, since the same operations performed in the time domain yield the same results.
On the other hand, if I implement a broken filter kernel, a smooth windowing function can help mask the artifacts: it in effect windows the broken IR so there is a more cohesive transition between frames. It would still be better to have an IR that is limited to a length of nfft/2 in the time domain. If you need a filter response with an IR longer than nfft/2, then you should consider either using a larger FFT size (if latency is not a problem) or a partitioned convolution scheme:
http://pcfarina.eng.unipr.it/Public/Papers/164-Mohonk2001.PDF
or
http://www.music.miami.edu/programs/mue/Research/jvandekieft/jvchapter2.htm
I hope that is helpful to somebody reading this, even though the links don't directly address windowing as used in real-time frequency-domain filtering.