I am writing music for an emulated system (Chip16), which can output ADSR-enveloped sound on a single channel.
Furthermore, it can only play one sound at any given time, cutting off a playing sound if necessary.
If I wanted a beat or bass playing "behind" the melody, how would I go about doing that?
Are there any tricks to simulate polyphony?
I am aware of how it was done on IBM PC speakers -- but that relied on the physical/mechanical nature of the device, which is not possible here.
For reference, the available sound instructions (a usage mock follows the list):
sng 0xAD, 0xVTSR ; load Attack,Decay,Volume,Type,Sustain,Release params
snp rx, D ; play sound, with frequency at [rx], for D milliseconds
snd0 ; stop currently playing sound
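To make the constraint concrete, here is a minimal C++ mock of the channel behaviour (snp_mock is a stand-in for snp; the bass/melody interleaving is only a naive sketch of the kind of time-slicing trick I am asking about, not something I know works):

    // Minimal C++ mock of the single channel, for illustration only.
    // snp_mock() stands in for "snp rx, D": starting a new note cuts off
    // whatever is still sounding, exactly like the real instruction.
    #include <cstdint>
    #include <cstdio>

    void snp_mock(uint16_t freq_hz, uint16_t ms) {
        std::printf("play %u Hz for %u ms\n", (unsigned)freq_hz, (unsigned)ms);
    }

    int main() {
        const uint16_t bass[]   = { 110, 110, 147, 110 };  // made-up bass line
        const uint16_t melody[] = { 440, 494, 523, 440 };  // made-up melody
        const uint16_t beat_ms  = 250;                     // arbitrary beat length
        for (int i = 0; i < 4; ++i) {
            snp_mock(bass[i], beat_ms / 4);                // short bass "tick" first
            snp_mock(melody[i], beat_ms - beat_ms / 4);    // melody fills the rest
        }
    }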
Thanks!
I'm planning a micro-controller project on active noise cancellation.
The idea is:
Speaker_1 generates 100-200 Hz noise (constant frequency).
Microphone records Speaker_1.
Signal is passed into micro-controller for DSP.
Output from the micro-controller is a 180-degree phase shift of the input (see the sketch after this list).
Output signal goes to Speaker_2.
Sound from Speaker_2 cancels the sound from Speaker_1; the room is silent.
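For the phase-shift step, a toy sketch of the inversion I have in mind (plain C++; the arrays stand in for the mic input and Speaker_2 output, and processing delay is ignored):

    // Toy illustration of the 180-degree phase shift: for a pure inversion,
    // each output sample is just the negated input sample. A real ANC loop
    // would also have to compensate for the mic -> MCU -> speaker delay.
    #include <cstddef>

    void invert(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = -in[i];  // a 180-degree phase shift is a sign flip
    }

    int main() {
        float in[4] = { 0.0f, 0.7f, 1.0f, 0.7f };  // made-up samples
        float out[4];
        invert(in, out, 4);
    }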
My questions are:
Is this idea feasible? (I saw a demo here: https://www.youtube.com/watch?v=UyN1TACCbHE)
Once the noise cancellation starts to work, wouldn't the microphone then receive no input? And no input signal would mean no noise cancellation?
Before you waste too much of your time, try this: take two speakers. Reverse the speaker wires on one to flip the phase. Now play a mono signal through them. You'll find pretty quickly that the room is not silent. There will be some cancellation at some frequencies, but it will be highly dependent upon your listening position and the speaker locations.
I am a bit stuck: how can I make my Arduino record into .wav files?
The Arduino is connected to a microphone, and I am using the Arduino ADC.
Any ideas? Will I be able to play the files back on my PC?
Many questions cross my mind:
1- Is this possible using an Arduino Uno?
2- Is this possible using just a microphone connected to the Arduino ADC?
3- If yes, how can I get the WAV format?
The idea goes like this:
Arduino microphone --> Uno ADC --> Arduino (library producing the WAV data) --> storing the data on an SD card connected via SPI (or maybe connecting a Raspberry Pi as a storage device)
Also, another question:
4- Do I need an amplifier, given that the analog output from the microphone is very weak, so the ADC couldn't detect the variation?
In another blog I had seen that I should connect the microphone to a level shifter, because the analog output is AC: the most negative swing maps to 0, the zero point to 512, and the most positive swing to 1023 (for a 10-bit ADC). (Really, I'm not sure about this part.)
Doing some research I found this library, https://github.com/TMRh20/TMRpcm/wiki/Advanced-Features#recording-audio, which is supposed to do the job, i.e., produce a WAV file from the analog input.
So any help would be appreciated.
Thanks in advance,
Salah Laaroussi
Yes, although a bit complex, it is very possible to do this with an Uno.
The biggest hurdles to overcome are the limited amount of RAM and the clock speed. You will have to set up twin buffers to handle writing to the SD card. Make sure the card has a high enough write speed, or the entire program will come to a screeching halt when you run out of memory.
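The twin-buffer idea looks roughly like this (a sketch, not tested on hardware; readSample() and sdWrite() are placeholders for the real ADC read and SD library call):

    // Double ("twin") buffering: fill one buffer with ADC samples while the
    // other is flushed to the SD card, then swap.
    #include <cstdint>
    #include <cstddef>

    const std::size_t BUF_SIZE = 256;  // small, to fit the Uno's 2 KB of RAM
    uint16_t bufA[BUF_SIZE], bufB[BUF_SIZE];

    uint16_t readSample() { return 512; }            // placeholder ADC read
    void sdWrite(const uint16_t*, std::size_t) {}    // placeholder SD write

    int main() {
        uint16_t* filling  = bufA;  // being filled (in real code, by a timer ISR)
        uint16_t* draining = bufB;  // being flushed to the card
        for (int block = 0; block < 8; ++block) {
            for (std::size_t i = 0; i < BUF_SIZE; ++i)
                filling[i] = readSample();
            uint16_t* t = filling; filling = draining; draining = t;  // swap
            sdWrite(draining, BUF_SIZE);  // must finish before 'filling' is full
        }
    }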
APC Mag has a great article detailing the circuit and code:
http://apcmag.com/arduino-projects-digital-audio-recorder.htm/
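On question 3 (getting WAV): the canonical 44-byte PCM WAV header is simple enough to write by hand once the final data size is known. A minimal sketch for 16-bit mono (the 16 kHz rate is an arbitrary choice):

    // Writes a standard 44-byte RIFF/WAVE header for 16-bit mono PCM.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    void write_wav_header(std::FILE* f, uint32_t sample_rate, uint32_t data_bytes) {
        const uint16_t channels = 1, bits = 16;
        const uint32_t byte_rate = sample_rate * channels * bits / 8;
        const uint16_t block_align = channels * bits / 8;
        uint8_t h[44];
        std::memcpy(h, "RIFF", 4);
        uint32_t riff_size = 36 + data_bytes;
        std::memcpy(h + 4, &riff_size, 4);        // little-endian on AVR and x86
        std::memcpy(h + 8, "WAVEfmt ", 8);
        uint32_t fmt_size = 16; std::memcpy(h + 16, &fmt_size, 4);
        uint16_t pcm = 1;       std::memcpy(h + 20, &pcm, 2);
        std::memcpy(h + 22, &channels, 2);
        std::memcpy(h + 24, &sample_rate, 4);
        std::memcpy(h + 28, &byte_rate, 4);
        std::memcpy(h + 32, &block_align, 2);
        std::memcpy(h + 34, &bits, 2);
        std::memcpy(h + 36, "data", 4);
        std::memcpy(h + 40, &data_bytes, 4);
        std::fwrite(h, 1, 44, f);
    }

    int main() {
        std::FILE* f = std::fopen("test.wav", "wb");
        if (!f) return 1;
        write_wav_header(f, 16000, 0);  // patch the data size once recording ends
        std::fclose(f);
    }

The TMRpcm library you linked presumably handles this for you; writing the header by hand only matters if you roll your own recorder.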
There are many things you haven't prepared yet:
the output of the microphone (assuming you know about electronics) still requires a biasing circuit, e.g. a resistor + a capacitor.
the output of the microphone is also very weak (on the order of millivolts), which the Arduino is incapable of capturing, so you need a pre-amplifier.
the design of the pre-amplifier should also include a DC offset that shifts the whole microphone signal above 0 V DC, into the range of the Arduino ADC; otherwise the Arduino will capture only the portions of the signal above 0 V DC.
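Once the signal is biased to mid-scale as described, turning a 10-bit reading back into a signed 16-bit PCM sample is a small arithmetic step. A sketch (assumes the bias sits exactly at 512):

    // Convert a 0..1023 ADC reading (biased around mid-scale) to a signed
    // 16-bit PCM sample: subtract the offset, then scale 10 bits up to 16.
    #include <cstdint>

    int16_t adc_to_pcm(uint16_t adc) {
        return (int16_t)(((int)adc - 512) * 64);  // 512 assumed as exact mid-point
    }

    int main() {
        int16_t s = adc_to_pcm(700);  // a reading above the bias -> positive sample
        (void)s;
    }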
I have a program written in C++ that uses RtAudio (DirectSound) to capture and play back audio at a 48 kHz sample rate.
The input capture uses a callback option. The callback writes data to a ring buffer.
The output is a blocking write function in a separate thread that reads from the ring buffer.
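For reference, the textbook single-producer/single-consumer ring buffer that this pattern relies on looks roughly like this (an illustrative sketch, not my exact code):

    // SPSC ring buffer: the capture callback pushes, the playback thread pops.
    // A power-of-two size keeps the index math to a single mask.
    #include <atomic>
    #include <cstddef>

    template <std::size_t N>  // N must be a power of two
    struct RingBuffer {
        float data[N];
        std::atomic<std::size_t> head{0}, tail{0};

        bool push(float v) {                       // called from the capture callback
            std::size_t h = head.load(std::memory_order_relaxed);
            if (h - tail.load(std::memory_order_acquire) == N) return false;  // full
            data[h & (N - 1)] = v;
            head.store(h + 1, std::memory_order_release);
            return true;
        }
        bool pop(float& v) {                       // called from the playback thread
            std::size_t t = tail.load(std::memory_order_relaxed);
            if (head.load(std::memory_order_acquire) == t) return false;      // empty
            v = data[t & (N - 1)];
            tail.store(t + 1, std::memory_order_release);
            return true;
        }
    };

    int main() {
        RingBuffer<1024> rb;
        rb.push(0.5f);
        float v;
        rb.pop(v);
    }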
If the input and output devices are the same, the audio loops through perfectly.
Now I want to get audio from device 1 and play it back on device 2. Each device has its own sample clock set to 48 kHz, but they are not in sync. After a couple of seconds the input and output are out of sync.
Is it possible to sync two independent audio devices?
There are two challenges you face:
getting the two devices to start at the same time.
getting the two devices to stay in sync.
Both of these tasks are difficult. In the pro audio world, #2 is accomplished with special hardware to sync the word-clocks of multiple devices. It can also be done with a high quality video signal. I believe it can also be done with firewire devices, but I'm not sure how that works. In practice, I have used devices with no sync ("wild") and gotten very reasonable sync for up to an hour or two. Depending on what you are trying to do, the sync should not drift more than a few milliseconds over the course of a few minutes. If it does, you can consider your hardware broken (of course, cheap hardware is often broken).
As for #1, I'm not sure this is possible in any reliable sense with DirectSound. To the extent that it's possible with any audio API, it is difficult at best: both cards have streams that require some time to set up, open, and start playing. In general, the solution is to use an API where this time is super low (ASIO, for example). This works reasonably well for applications like video, but I don't know if it really solves the problem in general.
If you really need to solve this problem, you could open both cards, start playing silence, and use the timing information generated by the cards to establish the delay between putting data into a card and its eventual playback (this will be different for each card, and probably each time you run), then use that data to calculate when to start actual playback. I don't know if RtAudio supplies the necessary timing information, but PortAudio does. This document may help.
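To illustrate that last idea in the simplest terms: if each card can tell you how many frames it has actually processed, you can estimate the relative clock drift and how often to drop or repeat one sample. A rough sketch (frames_captured()/frames_played() are hypothetical stand-ins for whatever counters your API exposes):

    // Estimate relative clock drift of two "48 kHz" devices from their frame
    // counters, then derive how often a corrective sample is needed.
    #include <cstdint>
    #include <cstdio>

    uint64_t frames_captured() { return 48000123; }  // placeholder counters after
    uint64_t frames_played()   { return 48000000; }  // ~1000 s of streaming

    int main() {
        double ratio = (double)frames_captured() / (double)frames_played();
        // One corrective sample is needed every 1/(ratio-1) output frames.
        if (ratio != 1.0)
            std::printf("drop one input sample every %.0f frames\n",
                        1.0 / (ratio - 1.0));
    }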
I am trying to record what is currently being played out to the speaker, using the following ALSA APIs:
snd_pcm_mmap_writei()
snd_pcm_mmap_readi()
Both functions are called one after the other in the same thread. The writei() function returns quickly (I believe it returns once playback buffer space is available), while readi() does not return until the designated number of samples has been captured. But the captured samples are not what was just played out. I am guessing that ALSA is not in duplex mode, i.e., it has to finish playback first and then start recording, which captures nothing meaningful, just clicks. The speaker still plays the sound correctly.
All HW/SW parameters are set up correctly. If I do audio capture only, I get a good sound wave.
The PCM handles are opened in normal mode (not non-blocking, not async).
Does anybody have suggestions on how to make this work?
You do not need to use the mmap functions; the normal writei/readi calls suffice.
To handle two PCM streams at the same time, run them in separate threads, or use non-blocking mode so that the same event loop can handle both devices.
You need to fill the playback buffer before the data is played, and capture data can be read only after the capture buffer has been filled, so the overall latency is the playback buffer size plus the capture period size plus any hardware delays and sound propagation delays.
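A minimal sketch of the two-thread version with plain readi/writei (assumes 16-bit mono at 48 kHz on "default"; real code needs better error handling than snd_pcm_recover alone):

    // Full-duplex via two handles and two threads: one thread captures,
    // the other plays back, decoupled by a simple shared queue.
    // Build with: g++ duplex.cpp -lasound -pthread
    #include <alsa/asoundlib.h>
    #include <thread>
    #include <mutex>
    #include <deque>
    #include <vector>

    std::mutex m;
    std::deque<std::vector<int16_t>> q;  // queue of period-sized blocks

    int main() {
        snd_pcm_t *cap, *pb;
        snd_pcm_open(&cap, "default", SND_PCM_STREAM_CAPTURE, 0);
        snd_pcm_open(&pb,  "default", SND_PCM_STREAM_PLAYBACK, 0);
        // format, access, channels, rate, soft-resample, latency (us)
        snd_pcm_set_params(cap, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED, 1, 48000, 1, 100000);
        snd_pcm_set_params(pb,  SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED, 1, 48000, 1, 100000);

        std::thread capture([&] {
            std::vector<int16_t> buf(480);  // one 10 ms period
            for (;;) {
                snd_pcm_sframes_t n = snd_pcm_readi(cap, buf.data(), buf.size());
                if (n < 0) { snd_pcm_recover(cap, n, 0); continue; }
                std::lock_guard<std::mutex> lk(m);
                q.emplace_back(buf.begin(), buf.begin() + n);
            }
        });
        std::thread playback([&] {
            for (;;) {
                std::vector<int16_t> buf;
                { std::lock_guard<std::mutex> lk(m);
                  if (!q.empty()) { buf = std::move(q.front()); q.pop_front(); } }
                if (buf.empty()) continue;  // nothing captured yet; busy-wait (sketch!)
                snd_pcm_sframes_t n = snd_pcm_writei(pb, buf.data(), buf.size());
                if (n < 0) snd_pcm_recover(pb, n, 0);
            }
        });
        capture.join();   // both loops run forever in this sketch
        playback.join();
    }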
I have a DirectShow application that I built with Delphi 6 using the DSPACK component library. For two days I have been trying to solve a problem with audio playback. When I run the filter graph I create, I hear repetitive clicks in the playback. What was really confusing was that the audio file I created simultaneously with my filter graph had clean, continuous audio with no gaps. So I knew that the audio buffers were being delivered properly, but something I was doing was "jamming up" the "live" playback. Or so I thought. I spent two days diagnosing the problem, looking for semaphores being held too long (locks) or perhaps timestamp problems, which I documented in this other Stack Overflow post:
Getting stuttering during rendering of my DirectShow filter despite output file being "smooth"
A few minutes ago I decided to try a test with the Graph Edit utility. I created a dead simple graph consisting of just the capture device I was using (VOIP phone microphone), and the renderer device I was using (HD ATI Rear Audio output to headphones). Two filters total. Much to my surprise I heard the same clicking. So here was a case that did not involve my code at all and I heard clicking.
Then I changed the audio renderer in the Graph Edit created filter graph to the VOIP phone ear piece. The clicking went away.
Now I know there's a way to get smooth audio out of the ATI Rear Audio device, since it's the preferred audio output device and everything from videos I play on my PC to wave files sounds flawless on it. So are the other software programs doing something different than just connecting filters? I am wondering if perhaps the default mode for the HD ATI Rear Audio is without double-buffering, and perhaps those other software programs know how to enable that feature? Or are they doing something else, perhaps using another DirectShow or DirectSound filter or technique, to make the audio play smoothly on the HD ATI Rear Audio renderer?
What you are possibly hitting (it depends on the actual stuttering, though) is that when you use capture and playback devices backed by different hardware, their sampling rates slightly differ. For example, you capture 22050 Hz at an actual rate of (22050 - 2%) Hz, and you play it back with hardware consuming samples at (22050 + 2%) Hz.
Now obviously this won't work out smoothly: eventually the playback will experience data underflow. If you save to a file and play back from the file, it goes smoothly, because the file can supply data at whatever rate the playback device needs. If the capture and playback devices are the same hardware, they are likely to share a hardware clock, so the rates match.
The problem is known as "rate matching" and is discussed on MSDN in the Live Sources section.
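To put numbers on the 2% example above (the 500 ms buffer is an assumption):

    // How long until a buffer underruns with a 2% clock mismatch at 22050 Hz?
    #include <cstdio>

    int main() {
        const double produced = 22050.0 * 0.98;       // capture actually delivers this
        const double consumed = 22050.0 * 1.02;       // playback actually consumes this
        const double deficit  = consumed - produced;  // ~882 samples/s shortfall
        const double buffered = 22050.0 * 0.5;        // assume 500 ms of buffering
        std::printf("underrun after ~%.1f s\n", buffered / deficit);  // ~12.5 s
    }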