Can effects like fading, distortion, swing, reverb, and sustain be added to the music extracted from a MIDI file on the computer? Or does MIDI allow us to incorporate these features during its creation?
Going through the MIDI specification, I couldn't find a possible way to do the above things. Am I correct, or am I missing something?
Any suggestions?
MIDI files do not contain sounds; they are instructions that tell a synthesizer how to generate sounds.
The MIDI specification itself does not define any effects.
The General MIDI specification defines controller numbers for volume, reverb, and sustain effects; many synthesizers are General MIDI compatible.
While some synthesizers have distortion effects, there is no widely accepted standard.
You can simply add or modify controller change messages in the MIDI file. (Whether and how they work depends on your synthesizer.)
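For example, a control change event is just three bytes: a status byte (0xB0 plus the channel number), the controller number, and the value. A rough C++ sketch of building such messages (in an actual Standard MIDI File each event is also preceded by a variable-length delta time, which is omitted here; the controller numbers shown are the General MIDI assignments):
#include <cstdint>
#include <vector>
// Build the three bytes of a MIDI Control Change message.
// channel: 0-15; controller and value: 0-127.
std::vector<std::uint8_t> controlChange(std::uint8_t channel, std::uint8_t controller, std::uint8_t value) {
    return { static_cast<std::uint8_t>(0xB0 | (channel & 0x0F)),
             static_cast<std::uint8_t>(controller & 0x7F),
             static_cast<std::uint8_t>(value & 0x7F) };
}
int main() {
    auto volume  = controlChange(0, 7, 100);   // CC 7:  channel volume
    auto reverb  = controlChange(0, 91, 64);   // CC 91: reverb send (effects 1 depth)
    auto sustain = controlChange(0, 64, 127);  // CC 64: sustain pedal on (values >= 64)
    (void)volume; (void)reverb; (void)sustain; // write these into the track's event data
}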
"Swing" is based on the timing of the notes. You'd have to change the timestamps of the note-on and note-off messages to affect this.
I work in the field of phonetics and often need to record human speech for acoustic analysis. I have two questions that I couldn't find answers to:
If I record in stereo, I need to convert to mono later on to proceed with annotation, so in principle a mono signal is good enough. Are there reasons that stereo should be used (e.g. would the signal be better)?
Also, we were warned that the gain level should be kept small so that the recording level doesn't exceed the maximum, which leads to signal cutoff. However, I was also criticised when a recording shows too low an amplitude (even though it's still very clear), because that leads to a low SNR. How do people choose an appropriate gain level?
As the act of recording is involved, the Sound Design forum might be your best bet.
I can't think of anything that might be gained, in terms of frequency analysis, by having a stereo signal. Stereo is more about locating the source of a sound in 3D space. Does the source of the sound emit different frequency profiles in different directions? Does the environment filter the sound differently over the two paths to the stereo inputs? If the answer is "not significantly", then mono should be fine.
Choosing an appropriate gain level is mostly a matter of knowing your equipment. Ideally, your recording setup will provide feedback (usually a visual meter of some sort) that shows the signal strength. The "best" would be (theoretically) the loudest level that does not distort. So you have to know at what level distortion happens on all the elements of the recording chain.
There can be some fudging on this, given that the loudest peak on a recorded segment may be an outlier.
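To make "the loudest level that does not distort" concrete: digital peak level is usually expressed in dBFS, where 0 dBFS is the clipping point, so the usual aim is a few dB of headroom below that while staying well above the noise floor. A minimal sketch, assuming you can get at the recording as normalized floating-point samples:
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>
// Report the peak level of normalized samples (-1.0 .. 1.0) in dBFS.
double peakDbfs(const std::vector<double>& samples) {
    double peak = 0.0;
    for (double s : samples)
        peak = std::max(peak, std::fabs(s));
    return peak > 0.0 ? 20.0 * std::log10(peak) : -1e9;    // silence: effectively -infinity
}
int main() {
    std::vector<double> samples = {0.02, -0.35, 0.5, -0.49};  // placeholder data
    std::printf("peak: %.1f dBFS\n", peakDbfs(samples));      // prints about -6.0 dBFS
}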
I have recorded some audio.
I don't know how it happened, but only one side of the speech was recorded properly; the other side was recorded at a very low level.
Is there any way to amplify the other side's signal?
Any help would be much appreciated.
This question is probably more appropriately asked at a forum where recording and mixing are discussed. For example: https://sound.stackexchange.com/
The ideal would be to improve your recording situation, controlling factors so that the sounds are more closely matched (match the microphones, isolate the speakers from environmental sounds, optimize input levels, etc.).
After that, the next step is to pre-process your audio files with a tool like Audacity. Use this or another DAW (Digital Audio Workstation) to match amplitudes, apply noise filtering, or use a range of other tools.
Audio processing is both tricky (an "art") and CPU-intensive, so it's good to get as much of this handled as possible before the sounds are imported into a program.
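If you end up scripting this rather than doing it by hand in Audacity, here is a minimal sketch of the "match amplitudes" step: peak-matching the quieter channel of an interleaved stereo buffer. (The function name and the simple peak normalization are only illustrative; a DAW gives you better options such as RMS matching and limiting.)
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>
// Scale the quieter channel of an interleaved stereo buffer (samples in -1.0 .. 1.0)
// so that its peak matches the louder channel's peak.
void balanceChannels(std::vector<float>& interleaved) {
    float peakL = 0.0f, peakR = 0.0f;
    for (std::size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        peakL = std::max(peakL, std::fabs(interleaved[i]));
        peakR = std::max(peakR, std::fabs(interleaved[i + 1]));
    }
    if (peakL <= 0.0f || peakR <= 0.0f)
        return;                                   // one channel is silent; nothing sensible to do
    const bool rightIsQuieter = peakR < peakL;
    const float gain = rightIsQuieter ? peakL / peakR : peakR / peakL;
    for (std::size_t i = rightIsQuieter ? 1 : 0; i < interleaved.size(); i += 2)
        interleaved[i] = std::max(-1.0f, std::min(1.0f, interleaved[i] * gain));
}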
I've just finished an assessment on Data Structures where one of the questions given was regarding MIDI files. In the descriptor (a criteria for assessors) it states the following:
"Candidates will need to demonstrate their knowledge... by showing they can describe:
At least two Standard File types for images, sounds, video or compression."
Does a MIDI file fall under any of these categories? From what I have searched online, it doesn't. So I am somewhat bemused at a question regarding MIDI files being in this assessment.
Cheers
Not really, no. MIDI tells a synthesizer how to make sounds, and what you can make is constrained by the synthesizer. It cannot be used to represent arbitrary sound data, although it has been used, in a way not originally intended, to send small sound samples to a synthesizer.
Compressed sound formats are an entirely different thing from MIDI.
(By the way, the quoted question needs some commas to make sense.)
When I send a MIDI file to my synthesizer, I expect it to produce sounds.
I'm trying to make a video tutorial, so I decided to record the speech using an online TTS service.
I used Audacity to capture the sound, and the sound was clear!
After dinner, I wanted to finish the last speeches, but the sound wasn't the same anymore: there is a disturbing background noise (interference). I removed it with Audacity, but despite this, the voice isn't the same...
You can see here the difference between the soundtrack of the same speech before and after the occurrence of the problem.
The codec used by the stereo mix peripheral is "IDT High Definition Codec".
Thank you.
Perhaps some cable or plug got loose? Do check for this!
If you are using really cheap gear (a built-in soundcard and the like), it might very well also be a problem of electrical interference; anything from ...
Switching on some device emitting an electromagnetic field (e.g. another monitor close by)
Repositioning electrical devices on your desk
Changes in CPU load on your computer (yes, I'm serious!)
... could very well cause some kinds of noise with lo-fi sound hardware.
Generally, if you need help with audio that sounds wrong, make sure that you provide a way to LISTEN to the files, not just a visual representation.
Also, in your posted waveform graphics I can see that the latter signal is more compressed, which may point to some kind of automated levelling going on somewhere in the audio chain.
I'm wondering what audio library is recommended to use.
I'm attempting to make a small program that will aid in tuning instruments (piano, guitar, etc.). I've read about the ALSA and Marsyas audio libraries.
I'm thinking the idea is to sample data from the microphone and do analysis on chunks of 5-10 ms (from what I've read), then perform an FFT to figure out which frequency contains the largest peak.
This guide should help. Don't use ALSA for your application. Use a higher level API. If you decide you'd like to use JACK, http://jackaudio.org/applications has three instrument tuners you can use as example code.
Marsyas would be a great choice for doing this; it's built for exactly this kind of task.
For tuning an instrument, you need an algorithm that estimates the fundamental frequency (F0) of a sound. There are a number of algorithms to do this; one of the newest and best is the YIN algorithm, developed by Alain de Cheveigne. I recently added the YIN algorithm to Marsyas, and using it is dead simple.
Here's the basic code that you would use in Marsyas:
MarSystemManager mng;
// A series to contain everything
MarSystem* net = mng.create("Series", "series");
// Process the data from the SoundFileSource with AubioYin
net->addMarSystem(mng.create("SoundFileSource", "src"));
net->addMarSystem(mng.create("ShiftInput", "si"));
net->addMarSystem(mng.create("AubioYin", "yin"));
net->updctrl("SoundFileSource/src/mrs_string/filename",inAudioFileName);
while (net->getctrl("SoundFileSource/src/mrs_bool/notEmpty")->to<mrs_bool>()) {
    net->tick();
    realvec r = net->getctrl("mrs_realvec/processedData")->to<mrs_realvec>();
    cout << r(0,0) << endl;
}
This code first creates a Series object that we will add components to. In a Series, each component receives the output of the previous MarSystem. We then add a SoundFileSource, which you can feed a .wav or .mp3 file into, followed by the ShiftInput object, which outputs overlapping chunks of audio. These chunks are fed into the AubioYin object, which estimates the fundamental frequency of each one. We then tell the SoundFileSource that we want to read the file inAudioFileName. The while statement loops until the SoundFileSource runs out of data; inside the loop, we take the data that the network has processed and output the (0,0) element, which is the fundamental frequency estimate.
This is even easier when you use the Python bindings for Marsyas.
http://clam-project.org/
CLAM is a full-fledged software framework for research and application development in the Audio and Music Domain. It offers a conceptual model as well as tools for the analysis, synthesis and processing of audio signals.
They have a great API, nice GUI and a few finished apps where you can see everything.
ALSA is sort of the default standard for Linux now, by virtue of its drivers being included in the kernel and OSS being deprecated. However, there are alternatives to the ALSA userspace, such as JACK, which seems to be aimed at low-latency, professional-type applications. It also seems to have a nicer API; although I've not used it, my brief exposure to the ALSA API makes me think that almost anything would be better.
Audacity includes a frequency plot feature and has built-in FFT filters.
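If you want to prototype the "largest FFT peak" idea from the question without pulling in a framework, here is a minimal sketch using a naive DFT (for clarity, not speed). Note that 5-10 ms chunks give very coarse frequency resolution, and the loudest bin is not always the fundamental, which is why a real tuner would use longer windows or an F0 estimator such as YIN:
#include <cmath>
#include <cstddef>
#include <vector>
// Naive DFT over one chunk of samples: find the bin with the largest magnitude
// and convert it to a frequency in Hz. Quadratic in the chunk length, so only
// suitable for short windows or as a reference implementation.
double peakFrequency(const std::vector<double>& chunk, double sampleRate) {
    const double PI = 3.14159265358979323846;
    const std::size_t n = chunk.size();
    std::size_t bestBin = 0;
    double bestMag = 0.0;
    for (std::size_t k = 1; k < n / 2; ++k) {            // skip DC, go up to Nyquist
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t) {
            const double phase = 2.0 * PI * k * t / n;
            re += chunk[t] * std::cos(phase);
            im -= chunk[t] * std::sin(phase);
        }
        const double mag = re * re + im * im;            // squared magnitude is enough to compare
        if (mag > bestMag) { bestMag = mag; bestBin = k; }
    }
    return bestBin * sampleRate / n;
}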