I am making a sound engine where I can play and stop sound. My issue is that when a user wants to stop a sound, I stop it immediately, i.e. I send 0 as the PCM value. The consequence is a pop/click, because the PCM value drops instantly from, say, 0.7 to 0, which is very annoying to hear.
Here is a discussion about this.
I am looking for an algorithm or a way to deal with these audio clicks/pops. What is the best practice for dealing with audio clicks? Is there a universal way to go about this? I am very new to audio DSP and I could not find a good answer for this.
When you cut off the sound abruptly, you are multiplying it by a step-shaped signal.
When you multiply two signals together, you convolve their frequencies. A step-shape has energy at all frequencies, so the multiplication will spread the energy from the sound over all frequencies, making an audible pop.
Instead, you want to fade the sound out over 30ms or so -- that is still very fast, and will sound like an abrupt stop, but there will be no audible pop.
You should use a curve shaped like 1 − t² to modulate the volume, or something else without significant high-frequency components. That way, when it is convolved with the original sound in the frequency domain, it won't produce any new frequencies.
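For example, a minimal fade-out along those lines, applied to the tail of a mono float buffer, might look like this (a sketch only: the buffer layout, function name, and the 30 ms default are assumptions for illustration):

#include <cstddef>

// Apply a short fade-out to the tail of a mono float buffer so the signal
// reaches zero smoothly instead of jumping there. The gain follows 1 - t^2,
// with t running from 0 at the start of the fade to 1 at the end; after the
// fade the engine can keep sending zeros.
void fadeOutTail(float* samples, std::size_t numSamples, float sampleRate,
                 float fadeSeconds = 0.03f) {
    std::size_t fadeLen = static_cast<std::size_t>(fadeSeconds * sampleRate);
    if (fadeLen > numSamples) fadeLen = numSamples;
    if (fadeLen == 0) return;

    const std::size_t start = numSamples - fadeLen;
    for (std::size_t i = 0; i < fadeLen; ++i) {
        const float t = static_cast<float>(i) / static_cast<float>(fadeLen);
        samples[start + i] *= 1.0f - t * t;   // smooth ramp from 1 down toward 0
    }
}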
I'm trying to create a quick video in After Effects on audible frequencies.
I'm using this audio clip (20 seconds). The clip starts from 20khz and goes down to 20hz.
As you can see (or hear), we can't hear anything in the beginning at 20khz. Most of us start hearing the frequency at 16khz-15khz.
But when I import this audio clip into AE, it sounds completely different. The sound starts playing from the beginning and it's very loud and sounds nothing like the clip I downloaded.
Here's how the audio clip sounds after export: https://www.mboxdrive.com/soundonly.mp3
What's going on here and how do I fix it?
Two things here.
First off, the sound is exhibiting "aliasing" artifacts. This happens when there is pitch content in the sound that is higher in frequency than the Nyquist frequency for the given sample rate. So, either you are using a low sample rate, or the pitch you are generating has harmonics that are causing the aliasing.
Check to make sure you are using a sine wave (if done correctly there should be no additional harmonic content to the tone besides the fundamental pitch), and that your sample rate is above 40 kHz in order to play a 20 kHz tone without aliasing. The most common sampling rate these days is 44100 samples per second, but you may be using 8000, which is also still employed and would not work for your application.
Second point, you probably want to change the rate at which you travel through the range of pitches. It sounds like you are going along linearly, but the ear hears things exponentially. The difference in pitch from 100 to 200, for example, is the same (in terms of our perception) as the difference from 1000 to 2000. So you might want to make your rate of descent reflect this if the goal is to spend equal time at every perceived pitch level.
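As a sketch of both points, here is one way to generate an exponentially descending sine sweep from 20 kHz down to 20 Hz at 44.1 kHz, accumulating phase so the instantaneous frequency is correct; the structure and names are illustrative only, and writing the result to a file is left out:

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double kTwoPi = 6.283185307179586;
    const double fs = 44100.0;                 // well above 2 * 20 kHz, so no aliasing
    const double fStart = 20000.0, fEnd = 20.0, duration = 20.0;
    const std::size_t n = static_cast<std::size_t>(fs * duration);

    std::vector<float> out(n);
    double phase = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double t = i / fs;
        // Exponential interpolation: f(t) = fStart * (fEnd/fStart)^(t/duration),
        // so every octave of the descent takes the same amount of time.
        const double f = fStart * std::pow(fEnd / fStart, t / duration);
        out[i] = static_cast<float>(std::sin(phase));
        phase += kTwoPi * f / fs;              // advance phase by the instantaneous frequency
    }
    // out[] now holds the sweep; write it to a WAV file with your tool of choice.
    std::printf("generated %zu samples\n", n);
    return 0;
}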
I have created a really basic FFT visualizer using a Teensy microcontroller, a display panel, and a pair of headphone jacks. I used kosme's FFT library for Arduino: https://github.com/kosme/arduinoFFT
Analog audio flows into the headphone input and to a junction where the microcontroller samples it. That junction is also connected to an audio out jack so that audio can be passed to some speakers.
This is all fine and good, but currently I'm only sampling the left audio channel. Any time music is stereo separated, the visualization cannot account for any sound on the right channel. I want to rectify this but I'm not sure whether I should start with hardware or software.
Is there a circuit I should build to mix the left and right audio channels? I figure I could do something like so:
But I'm pretty sure that my schematic is misguided. I included bias voltage to try and DC couple the audio signal so that it will properly ride over the diodes. Making sure that the output matches the input is important to me though.
Or maybe should this best be approached in software? Should I instead just be sampling both channels separately and then doing some math to combine them?
Combining the stereo channels on just the branch that feeds the microcontroller, without also affecting the input and the pass-through output, is very difficult in hardware. Working in software is much easier.
If you take two separate sets of samples and analyze each one, you've doubled the amount of math the microcontroller needs to do.
But if you read both pins, divide each reading by two, and add them together, you get one set of samples that represents the 'mono' signal.
Keep in mind that human ears have an uneven response to sound volumes, so a 'medium' volume reading on both pins, summed and halved, will result in a 'lower-medium' value. It's better to divide by 1.5 or 1.75 if you can spare the cycles for more complicated division.
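As a rough sketch of the software route, in Arduino/Teensy style (the pin numbers, buffer size, and divisor are placeholders you would adapt):

// Read both channels and average them into one 'mono' sample per frame
// before handing the buffer to the FFT library.
const int LEFT_PIN  = A2;      // placeholder analog pins
const int RIGHT_PIN = A3;
const int N_SAMPLES = 256;     // placeholder block size for the FFT

double monoSamples[N_SAMPLES];

void collectSamples() {
    for (int i = 0; i < N_SAMPLES; ++i) {
        int left  = analogRead(LEFT_PIN);    // 0..1023 on a 10-bit ADC
        int right = analogRead(RIGHT_PIN);
        // Halve each reading before summing so the result stays inside the
        // original range; use ~1.5-1.75 instead of 2 if you want to compensate
        // for the perceived volume drop mentioned above.
        monoSamples[i] = (left / 2.0) + (right / 2.0);
    }
}

void setup() { /* nothing special needed for analogRead */ }

void loop() {
    collectSamples();
    // ... run the FFT on monoSamples and update the display ...
}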
I am developing an app to detect when elderly people in a daycare center are unable to unlock their rooms using IC cards.
The room doors have an electronic circuit that emits a beep sound to signal that the user has failed to unlock the room. My goal is to detect this beep signal.
I have searched a lot and found some possibilities:
1. Clip the beep sound and use it as a template signal, then compare it with the test signal (the complete human-door interaction audio clip) using convolution, matched filters, DTW, or similar, to measure their similarity. What do you recommend, and how would I implement it?
2. Analyze the FFT of the beep sound to see whether it has a frequency band different from that of the background noise, i.e. whether the beep forms a peak in the spectrum that is absent from the background noise. I do not understand how to do this exactly. I clipped the beep sound and got the spectrogram shown in the figure, but I cannot interpret it; could you give me a detailed explanation of the spectrogram?
3. What is your recommendation? If you have another, more efficient method for beep detection, please explain.
There is no need to calculate the full spectrum. If you know the frequency of the beep, you can just do a single point DFT and continuously check the level at that frequency. If you detect a rising and falling edge within a given interval it must be the beep sound.
You might want to have a look at the Goertzel Algorithm. It is an algorithm for continuous single point DFT calculation.
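A minimal sketch of the Goertzel recurrence for one target frequency might look like this; the block size and the detection threshold are things you would have to tune for your particular beep:

#include <cmath>

// Goertzel filter: measures the energy at one target frequency in a block of
// n samples, without computing a full FFT. Returns the squared magnitude of
// that single DFT bin.
double goertzelPower(const float* samples, int n,
                     double targetFreq, double sampleRate) {
    const double kTwoPi = 6.283185307179586;
    const double coeff = 2.0 * std::cos(kTwoPi * targetFreq / sampleRate);
    double s1 = 0.0, s2 = 0.0;                 // the two delayed filter states
    for (int i = 0; i < n; ++i) {
        const double s0 = samples[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Usage idea: call goertzelPower() on consecutive blocks and flag a beep when
// the power rises above a threshold and falls back within the expected duration.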
I am involved in a side project that has a loop of LEDs around 1.5m in diameter with a rotor on the bottom which spins the loop. A raspberry pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in a project that takes a microphone input and turns it into a column of pixels which is rendered on the loop in real time. The goal of this is to see if we can have it react to music in real-time. So far I've come up with this idea:
Use an FFT to quickly turn the input sound into a spectrum, and map certain pixels to certain colors based on the amplitude at each frequency, so the equator of the globe responds to the strength of the lower-frequency sound, progressing upward toward the poles, which respond to high-frequency sound. A rough sketch of this mapping is below.
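Roughly, the mapping I have in mind looks something like this (the FFT size, number of LED rows, and brightness scaling are placeholders I would still have to tune):

#include <cstddef>

// Map FFT magnitudes to LED rows: low bins drive the rows near the equator,
// high bins drive the rows near the poles.
const std::size_t FFT_BINS = 512;   // magnitudes from a 1024-point FFT
const std::size_t NUM_ROWS = 32;    // LED rows from equator (0) to pole (31)

void binsToRows(const float* magnitude, float* rowBrightness) {
    const std::size_t binsPerRow = FFT_BINS / NUM_ROWS;
    for (std::size_t row = 0; row < NUM_ROWS; ++row) {
        // Average the magnitudes of the bins belonging to this row.
        float sum = 0.0f;
        for (std::size_t b = row * binsPerRow; b < (row + 1) * binsPerRow; ++b)
            sum += magnitude[b];
        rowBrightness[row] = sum / binsPerRow;   // clamp/scale to the LED range later
    }
}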
I can think of a few potential problems, including:
Performance on a raspberry pi. If the response lags too far behind the music it wouldn't seem to the observer to be responding to the specific song he/she is also hearing.
Without detecting the beat or some overall characteristic of the music that people understand, it might be difficult for observers to tell that the output is correlated with the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but also possibly helpful because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the raspberry pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and what-not. Part of my problem is that I know the Raspberry Pi is the newest model, but I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of ~0.25 s or so would be acceptable. I feel like I'm in over my head, so I thought I'd ask you guys for input.
Thanks!
Folks,
I am struggling with a simple concept related to the duration of play of PCM data. I would appreciate your feedback.
The application I am developing plays guitar notes from a music sheet.
I have implemented the Jaffe-Smith algorithm for guitar plucking.
https://ccrma.stanford.edu/~jos/Mohonk05/Extended_Karplus_Strong_EKS_Algorithm.html.
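For reference, the core of the basic Karplus-Strong loop (without the Jaffe-Smith extensions described on that page) looks roughly like this, simplified; the function name and parameters are just for illustration:

#include <cstdlib>
#include <vector>

// Basic Karplus-Strong pluck: a delay line seeded with noise, fed back
// through a two-point average that gradually damps the high frequencies,
// like a decaying plucked string.
std::vector<float> pluck(double freq, double sampleRate, double seconds) {
    const std::size_t delayLen = static_cast<std::size_t>(sampleRate / freq);
    std::vector<float> delay(delayLen);
    for (float& s : delay)                      // seed with white noise
        s = 2.0f * static_cast<float>(std::rand()) / RAND_MAX - 1.0f;

    std::vector<float> out(static_cast<std::size_t>(sampleRate * seconds));
    std::size_t pos = 0;
    for (float& o : out) {
        const std::size_t next = (pos + 1) % delayLen;
        o = delay[pos];
        delay[pos] = 0.5f * (delay[pos] + delay[next]);  // averaging = damping
        pos = next;
    }
    return out;
}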
Let's say I compute samples for note A (440 Hz) for one second.
At a sample rate of 11025, I will be storing 11025 samples that can be sent to the computer speakers as PCM audio.
Computing samples for all the unique notes on the guitar takes quite some time, so I am thinking I will pre-compute them, save them as binary data, and simply load them when the application runs.
So far so good.
Now, let's say I want to play a song (a list of various notes). Let's say the song needs to be played at 100 beats per minute. Let's say I have to play note A for one beat or 0.6 seconds (60/100).
Recalculating samples for 0.6 seconds may take quite some time.
Can I simply play (11025 * 0.6) samples? Will this create any side effect?
Is there a better way to achieve what I am trying to do?
Thank you in advance for your help.
Regards,
Peter
What you're basically trying to do is create a synthesized guitar, yes? I might suggest that you go with the sampler route instead.
By sample, I mean a small clip of audio (not a single sample in the sense of ADC or DAC).
Basically, you can flatten what you need into 4 parts:
Attack
Decay
Sustain
Release
These four parts work in that order, and are generally referred to as an ADSR envelope. The attack of the note is the initial sound. For a guitar, you are going to hear a pluck and the start of a pitch. The decay is going to be the sample of the string as it starts to fade away. The sustain is a sample repeated over and over again until you release the key. The release sample is what is played when you release the key. For a guitar, you might hear a sample of lightly putting fingers back on the string to stop their vibration.
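For example, a sampler voice along those lines could be stitched together roughly like this (the clip contents, and the fixed loop count standing in for "until the key is released", are invented for illustration):

#include <vector>

// Sketch of a sampler voice built from four pre-recorded clips.
struct Clip { std::vector<float> samples; };

static void append(std::vector<float>& out, const Clip& c) {
    out.insert(out.end(), c.samples.begin(), c.samples.end());
}

struct Voice {
    Clip attack, decay, sustain, release;   // sustain should loop seamlessly
};

std::vector<float> renderNote(const Voice& v, int sustainLoops) {
    std::vector<float> out;
    append(out, v.attack);                  // pluck plus the start of the pitch
    append(out, v.decay);                   // string starting to fade
    for (int i = 0; i < sustainLoops; ++i)  // repeated while the note is held
        append(out, v.sustain);
    append(out, v.release);                 // e.g. fingers damping the string
    return out;
}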
Now, you could generate all of these samples in real time, but that will likely be very CPU intensive.
Regarding your question: "Can I simply play (11025 * 0.6) samples?" Yes, at a sample rate of 11025, that will be 0.6 seconds of audio. Also keep in mind though that you should be sending a continuous stream of data to the sound card, filling any empty spots with 0 (for signed PCM).
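Concretely, the arithmetic from the question might look something like this sketch (the names are placeholders; the short end fade is optional, but it avoids the kind of click discussed in the first question above):

#include <cstddef>
#include <vector>

// At 11025 samples/second, one beat at 100 BPM (0.6 s) is 11025 * 0.6 = 6615
// samples. Taking the first 6615 samples of the precomputed 1-second note
// plays 0.6 s of it.
std::vector<float> trimToBeat(const std::vector<float>& noteSamples,
                              double sampleRate, double beatSeconds) {
    std::size_t n = static_cast<std::size_t>(sampleRate * beatSeconds);
    if (n > noteSamples.size()) n = noteSamples.size();

    std::vector<float> beat(noteSamples.begin(), noteSamples.begin() + n);
    // Cutting the note off mid-waveform can click; a few milliseconds of fade
    // at the end smooths the transition to the next note or to silence.
    const std::size_t fadeLen = static_cast<std::size_t>(0.005 * sampleRate);
    for (std::size_t i = 0; i < fadeLen && i < n; ++i)
        beat[n - 1 - i] *= static_cast<float>(i) / fadeLen;
    return beat;
}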