I'm planning a micro-controller project on active noise cancellation.
The idea is:
Speaker_1 generates 100-200 Hz noise (constant frequency).
Microphone records Speaker_1.
Signal is passed into micro-controller for DSP.
Output from the micro-controller is a 180-degree phase shift of the input (see the sketch after this list).
The output signal goes to Speaker_2.
Sound from Speaker_2 cancels the sound from Speaker_1. The room is silent.
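For what it's worth, the inversion step itself is trivial: a 180-degree phase shift of a broadband signal is just a sign flip of every sample. A minimal sketch in C++, where readSample() and writeSample() are hypothetical stand-ins for the ADC input and the DAC/PWM output:

    #include <cstdint>
    #include <limits>

    // readSample()/writeSample() are hypothetical stand-ins for the ADC input
    // (microphone) and DAC/PWM output of whatever micro-controller is used.
    int16_t readSample();
    void writeSample(int16_t s);

    void cancelLoop() {
        for (;;) {
            int16_t in = readSample();
            // negation = 180-degree phase shift; guard the one value with no negative
            int16_t out = (in == std::numeric_limits<int16_t>::min())
                              ? std::numeric_limits<int16_t>::max()
                              : static_cast<int16_t>(-in);
            writeSample(out);
        }
    }

In practice, converter latency and the acoustic path delay mean the inverted signal arrives late, so cancellation only holds near the microphone and at low frequencies, which is what the answer below gets at.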
My questions are:
Is this idea feasible? (I saw demo here: https://www.youtube.com/watch?v=UyN1TACCbHE)
Once the noise cancellation does start to work, wouldn't the microphone then receive no input? And no input signal would mean no noise cancellation?
Before you waste too much of your time, try this: take two speakers. Reverse the speaker wires on one to flip the phase. Now play a mono signal through them. You'll find pretty quickly that the room is not silent. There will be some cancellation at some frequencies, but it will be highly dependent upon your listening position and the speaker locations.
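To put a number on the position dependence: at 200 Hz the wavelength in air is about 343 / 200 ≈ 1.7 m, so a listening position only ~0.86 m (half a wavelength) closer to one speaker than the other hears reinforcement instead of cancellation.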
I'm trying to understand how to implement audio playback from scratch on an ATtiny85. The goal is to play a short sound (a cat's meow, so I want it to remain recognizable) from an array representing the strength of the audio signal sampled at a fixed interval.
As far as I understand, signal strength is linearly mapped to the voltage of the analogue audio signal. As far as I know, sound cards are digital-to-analogue converters, but the ATtiny85 probably doesn't have one.
I'm curious whether I can use PWM to play the sound back. Since PWM changes the average voltage by changing the duty cycle of alternating high and low phases, it would most likely cost some audio quality. WAV sampling rates can range from 1 Hz to 4.3 GHz according to Google. The ATtiny85 has an internal clock with a frequency of up to 8 MHz (which I hope also drives its PWM generator).
Considering the overhead of reconfiguring the timer and PWM settings as well as looping over the array, what is the maximum sampling rate of audio I can reliably play? And should I even try to do it with PWM, or are there better options?
Given a system clock of 8 MHz, you can use PWM to generate mono (single-channel) audio.
Consider a PWM period of 1000 clocks, giving you about 10-bit resolution. The sample rate will then be 8000 Hz, which gives you some kind of lo-fi audio.
If you reduce your signal resolution to 8 bits, you'll get an 8 MHz / 2^8 = 31.25 kHz sample rate. This gets near hi-fi.
Synchronize your sample output with the PWM generator, and use an appropriate analogue filter.
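A minimal sketch of the 8-bit variant, assuming an ATtiny85 at 8 MHz with a hypothetical sample table in flash: Timer0 runs in fast PWM mode on OC0A (PB0), and one sample is loaded per PWM period, i.e. at 31.25 kHz.

    #include <avr/io.h>
    #include <avr/pgmspace.h>

    // hypothetical 8-bit sample data, recorded at 31.25 kHz
    const uint8_t samples[] PROGMEM = { 128, 200, 255, 200, 128, 55, 0, 55 };

    int main(void) {
        DDRB  |= _BV(PB0);                              // OC0A pin as output
        TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00); // fast PWM, non-inverting on OC0A
        TCCR0B = _BV(CS00);                             // no prescaler: 8 MHz / 256 = 31.25 kHz

        uint16_t i = 0;
        for (;;) {
            while (!(TIFR & _BV(TOV0)))                 // wait for the PWM period to end
                ;
            TIFR = _BV(TOV0);                           // clear the flag by writing a 1
            OCR0A = pgm_read_byte(&samples[i]);         // next sample becomes the duty cycle
            if (++i == sizeof(samples)) i = 0;
        }
    }

An RC low-pass between PB0 and the speaker (or amplifier) serves as the analogue filter: it removes the 31.25 kHz carrier and leaves the audio.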
Many years ago I built a digital door bell with a sample rate of 8 kHz and 8-bit samples. It played nice sounds at telephone quality. The microcontroller was an 8051 derivative, and it used an R-2R ladder as the DAC.
A simple sine can be approximated by using a 50% PWM signal and varying the frequency. Given some filtering effect through the speaker, it would mimic a single-tone audio signal.
Making more advanced tones (needed for natural sound) quickly gets more complicated, and the duty cycle of the signal can also be used to trick the human ear into hearing harmonics. Check out the Arduino function tone() for some inspiration; a small sketch follows below.
Be careful when connecting a small speaker to the Arduino; preferably a transistor/buffer/small amplifier should be placed between the Arduino and the speaker.
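For example, a minimal Arduino sketch along those lines (pin number hypothetical), which plays two alternating tones by varying only the frequency of a 50% square wave:

    const int SPEAKER_PIN = 8;         // hypothetical pin; drive the speaker via a transistor

    void setup() {
    }

    void loop() {
        tone(SPEAKER_PIN, 440, 200);   // 440 Hz square wave (50% duty) for 200 ms
        delay(300);
        tone(SPEAKER_PIN, 660, 200);   // change pitch only by changing the frequency
        delay(300);
    }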
I'm making a game in which there is a series of events (happening, say, every 30 frames in a 60 fps setting) that I want to sync with the music (at 120 bpm). Note that nominally the two line up: at 60 fps, 30 frames is 500 ms, which is exactly one beat at 120 bpm; the difficulty is keeping them aligned over time. In usual cases, e.g. rhythm games, syncing the events to the music is easier, because humans seem to perceive much smaller gaps in music than in video. However, in my case, the game heavily depends on frame-based time, and a lot of things will break if I change the schedule of my series of events.
After a lot of experiments, it seems almost impossible to tweak the music without disturbing the human ear: a jump of ~1 ms is noticeable, a ~10 ms discrepancy between video and audio is noticeable, and a 0.5% change in pitch is noticeable. And I don't have handy tools to speed up audio without changing the pitch.
What is the easiest way out in this circumstance? Is there any reference on this subject that I can refer to? Any advice is appreciated!
The method that I successfully use (in Java) is to route the playback signal through a path that allows counting PCM frames (audio frames run at rates like 44100 fps, as opposed to screen updates, which run at rates like 60 fps). I don't know about other languages, but in Java this can be done by outputting via a SourceDataLine. As the audio frame count is incremented, it can be compared to the next pending item in a collection of events that require triggers to other systems or threads. Java has an excellent class for holding that collection: ConcurrentSkipListSet. It is thread-safe, and it automatically sorts elements via a Comparator set to the desired PCM frame count.
Some example code showing the counting of frames can be seen in the tutorial Using Files and Format Converters: search the page for the phrase "Here, do something useful with the audio data". That example counts bytes, not PCM frames, but it gives the basic idea.
Why is counting PCM frames effective? I think this has to do with the fact that this code (in Java) is the closest we get to the point where audio data is fed to the native code controlling the sound system, and that this code employs a blocking queue. Thus, the write operations only happen when the audio system is ready to receive and play back more sound data, and audio systems have to be very accurate in how they maintain their rate of processing. The amount of time variance that occurs here (especially if the thread is given a high priority) is smaller than the time variance incurred by the JVM as it juggles multiple threads and processes.
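The same idea sketched in C++ rather than Java (SourceDataLine and ConcurrentSkipListSet replaced by hypothetical stand-ins, thread safety omitted): keep a running PCM frame count at the point where buffers are handed to the audio device, and fire every event scheduled at or before that count.

    #include <cstdint>
    #include <functional>
    #include <map>

    // events sorted by the PCM frame at which they should fire
    std::multimap<int64_t, std::function<void()>> pendingEvents;
    int64_t framesWritten = 0;

    // call once per buffer, at the point where it is written to the audio device
    void onBufferWritten(int64_t framesInBuffer) {
        framesWritten += framesInBuffer;
        while (!pendingEvents.empty() &&
               pendingEvents.begin()->first <= framesWritten) {
            pendingEvents.begin()->second();   // trigger the other system/thread
            pendingEvents.erase(pendingEvents.begin());
        }
    }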
I am beginning a project using GNU Radio and an inexpensive SDR.
http://www.amazon.com/gp/product/B00SXZDUAQ?psc=1&redirect=true&ref_=oh_aui_search_detailpage
One portion of the project requires me to generate a reference audio tone and compare the phase of that tone to demodulated audio.
To simulate this portion of the system, I have generated a simple GNU Radio flowgraph:
I had some issues with the source and demodulated audio in that they would drift relative to each other. This was visible on the scope sink in the original flowgraph. To aid in troubleshooting, I sent the demodulated audio out through the soundcard's second channel and monitored both audio streams, in addition to the modulated RF, on an external oscilloscope:
Initially all seems well, but the demodulated audio drifts in relation to the original source and the RF:
My question is: am I doing something wrong in the flowgraph or am I expecting too much performance out of an inexpensive SDR?
Thanks in advance for any insights
You cannot expect to see zero phase drift in anything short of a fully digital simulation, or a fully analog circuit with exactly one oscillator, because no two (physical) oscillators have identical frequencies.
In your case, there are two relevant oscillators involved:
The sample clock in the RTL-SDR unit.
The sample clock in your sound card output.
Within a GNU Radio flowgraph, there is no time reference per se; everything depends on the sources and sinks that are connected to hardware.
The relevant source in your flowgraph is the RTL-SDR hardware; insofar as its oscillator is different from its nominal value (28.8 MHz, as it happens), everything it produces will be off-frequency in an absolute sense (both RF carrier frequencies and audio frequencies of demodulated output).
But you don't actually have an absolute frequency reference; you have the tone produced by your sound card. The sound card has its own oscillator, which determines the rate at which samples are converted to analog signals, and therefore the rate at which samples are consumed from the flowgraph.
Therefore, your reference signal will drift relative to your received and demodulated signal, at a rate determined by the difference in frequency error between the two oscillators.
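As a worked example with made-up numbers: if the RTL-SDR's 28.8 MHz reference is 30 ppm high and the sound card clock is 20 ppm low, the relative error is 50 ppm. Two nominally identical 1 kHz tones then slide past each other at 1 kHz x 50e-6 = 0.05 Hz, i.e. a full 360 degrees of relative phase every 20 seconds, which on a scope looks exactly like the slow drift described.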
Additionally, since your sound card will be accepting samples from the flowgraph at a slightly different real-time rate than the RTL-SDR is producing them, you will notice periodic glitches in the audio as the error accumulates and must be dealt with; they will start occurring either immediately (if the source is slower than the sink, requiring the sound card to play silence instead) or after a delay for buffers to hit their maximum size (if the source is faster than the sink, requiring the RTL-SDR to drop some samples).
I have a program written in C++ that uses RtAudio (DirectSound) to capture and play back audio at a 48 kHz sample rate.
The input capture uses a callback option. The callback writes data to a ringbuffer.
The output is a blocking write function in a separate thread that reads from the ringbuffer.
If the input and output devices are the same, the audio loops through perfectly.
Now I want to get audio from device 1 and play it back on device 2 (the capture/ring-buffer structure is sketched below). Each device has its own sample clock set to 48 kHz, but they are not in sync. After a couple of seconds the input and output drift out of sync.
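A sketch of that structure (names, sample format, and buffer sizes hypothetical, error handling omitted):

    #include "RtAudio.h"
    #include <atomic>
    #include <cstdint>

    // single-producer/single-consumer ring buffer shared by the two devices
    struct Ring {
        float data[1 << 16];
        std::atomic<uint64_t> writePos{0}, readPos{0};
    };

    // capture callback on device 1: push incoming mono float samples
    int inputCallback(void* /*outputBuffer*/, void* inputBuffer, unsigned int nFrames,
                      double /*streamTime*/, RtAudioStreamStatus /*status*/, void* userData) {
        Ring* ring = static_cast<Ring*>(userData);
        const float* in = static_cast<const float*>(inputBuffer);
        for (unsigned int i = 0; i < nFrames; ++i)
            ring->data[ring->writePos++ & 0xFFFF] = in[i];
        return 0;
    }
    // the output thread on device 2 blocks on writes and advances readPos;
    // with independent clocks, writePos slowly gains on or falls behind readPos.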
Is it possible to sync two independent audio devices?
There are two challenges you face:
getting the two devices to start at the same time.
getting the two devices to stay in sync.
Both of these tasks are difficult. In the pro audio world, #2 is accomplished with special hardware that syncs the word clocks of multiple devices. It can also be done with a high-quality video signal. I believe it can also be done with FireWire devices, but I'm not sure how that works. In practice, I have used devices with no sync ("wild") and gotten very reasonable sync for up to an hour or two. Depending on what you are trying to do, the sync should not drift more than a few milliseconds over the course of a few minutes. If it does, you can consider your hardware broken (of course, cheap hardware is often broken).
As for #1, I'm not sure this is possible in any reliable sense with DirectSound. To the extent that it's possible with any audio API, it is difficult at best: both cards have streams that require some time to set up, open, and start playing. In general, the solution is to use an API where this time is super low (ASIO, for example). This works reasonably well for applications like video, but I don't know if it really solves the problem in general.
If you really need to solve this problem, you could open both cards, start them playing silence, and use the timing information generated by the cards to establish the delay between putting data into a card and its eventual playback (this will be different for each card, and probably different each time you run), then use that data to calculate when to start actual playback. I don't know if RtAudio supplies the necessary timing information, but PortAudio does. This document may help.
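For reference, a sketch of the timing information PortAudio exposes per callback (this is PortAudio, not RtAudio, and the silence-while-measuring use is the hypothetical scheme described above):

    #include <portaudio.h>
    #include <cstring>

    static double outputLatencySeen = 0.0;

    // output callback: timeInfo says when this buffer will actually reach the DAC
    static int silenceCallback(const void* /*input*/, void* output,
                               unsigned long frameCount,
                               const PaStreamCallbackTimeInfo* timeInfo,
                               PaStreamCallbackFlags /*statusFlags*/, void* /*userData*/) {
        // per-card delay between "now" and audible playback of this buffer
        outputLatencySeen = timeInfo->outputBufferDacTime - timeInfo->currentTime;
        std::memset(output, 0, frameCount * sizeof(float));  // mono paFloat32 stream assumed
        return paContinue;
    }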
I am writing music for an emulated system (Chip16), which can output ADSR-formatted sound to a single channel.
Furthermore, it can only play one sound at any given time, cutting off a playing sound if necessary.
If I wanted a beat or bass playing "behind" the melody, how would I go about doing that?
Are there any tricks to simulate polyphony?
I am aware of how it was done on IBM PC speakers -- but that relied on the physical/mechanical nature of the device, which is not possible here.
For reference, the available sound instructions:
sng 0xAD, 0xVTSR ; load Attack,Decay,Volume,Type,Sustain,Release params
snp rx, D ; play sound, with frequency at [rx], for D milliseconds
snd0 ; stop currently playing sound
Thanks!
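One classic chiptune trick (not specific to Chip16) is time-slicing: alternate the single voice between melody and bass, or between the notes of a chord (arpeggio-style), fast enough that the ear blends them into separate parts. A hypothetical sketch of the scheduling logic in C++, with snp() standing in for the instruction above:

    #include <cstdint>

    void snp(uint16_t freqHz, uint16_t ms);   // hypothetical wrapper for the snp instruction

    const uint16_t melody[] = { 440, 494, 523, 587 };  // made-up note frequencies (Hz)
    const uint16_t bass[]   = { 110, 147 };

    // called once per frame at 60 fps: each voice owns every other ~16 ms slice
    void audioTick(uint32_t frame) {
        if (frame % 2 == 0)
            snp(melody[(frame / 2) % 4], 16);  // even frames: melody voice
        else
            snp(bass[(frame / 2) % 2], 16);    // odd frames: bass voice
    }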