I need to play .wav files on top of each other and at different times for an app that can return beats with drum samples. Is there a class or method to implement this?
All I have been able to do so far is play wav files in sequence.
I think you can do this by loading each sample into a SoundEffect instance and calling its Play method according to your timing.
http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.audio.soundeffect.aspx
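A minimal sketch of that approach (the asset names and the IsBeatDue check are placeholders for your own content and sequencer logic, not part of XNA):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;

// Inside your Game subclass: load each drum sample once, then fire-and-forget.
// "kick" and "snare" are hypothetical content-pipeline asset names.
SoundEffect kick, snare;

protected override void LoadContent()
{
    kick = Content.Load<SoundEffect>("kick");
    snare = Content.Load<SoundEffect>("snare");
}

protected override void Update(GameTime gameTime)
{
    // Each Play() call returns immediately and mixes with anything
    // already sounding, so overlapping hits come for free.
    if (IsBeatDue(gameTime))   // IsBeatDue: your own sequencer logic
        kick.Play();
    base.Update(gameTime);
}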
The other way is to use a MediaStreamSource and mix the raw WAV data together yourself before handing it back from your GetSampleAsync override.
http://msdn.microsoft.com/en-us/library/system.windows.media.mediastreamsource(v=vs.95).aspx
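The mixing step itself is just summing PCM samples and clamping; a rough sketch, assuming two 16-bit buffers with matching sample rate and channel layout:

using System;

static class PcmMixer
{
    // Mix two 16-bit PCM buffers by summing corresponding samples.
    public static short[] Mix(short[] a, short[] b)
    {
        int length = Math.Max(a.Length, b.Length);
        var mixed = new short[length];
        for (int i = 0; i < length; i++)
        {
            int sum = (i < a.Length ? a[i] : 0) + (i < b.Length ? b[i] : 0);
            // Clamp instead of letting the sum wrap around and distort.
            mixed[i] = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, sum));
        }
        return mixed;
    }
}

The mixed buffer is what you would wrap in a MediaStreamSample and hand back via ReportGetSampleCompleted.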
I am using moviepy to generate MP4 files from sets of shorter clips, each with its own audio. The problem is that the resulting MP4 often has a very high dynamic range from one clip to the next, and I would like to apply audio compression to make it easier on the ears. On Google I can only find results about audio data compression, not about dynamic range compression in the audio-engineering sense.
I would like to know if there is some way of doing this with moviepy, or with some other library. I have no issue with invoking (non-interactive) command-line utilities either.
Thank you.
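For anyone searching with the same problem: what is described here is dynamic range compression, and ffmpeg (non-interactive) has an acompressor filter for it. A hedged sketch, with placeholder file names and parameter values:

# Copy the video stream untouched, re-encode only the audio with
# dynamic range compression; threshold is linear (0.1 is roughly -20 dB),
# attack/release are in milliseconds.
ffmpeg -i moviepy_output.mp4 -c:v copy -af acompressor=threshold=0.1:ratio=4:attack=20:release=250 compressed.mp4

If the real goal is evening out loudness between clips rather than compressing within them, ffmpeg's loudnorm filter (EBU R128 loudness normalization) may fit even better.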
I am pretty new to processing audio files.
I want to build a web app that can take an audio file and turn it into a visualization for the user, like this: https://github.com/CrowdCurio/audio-annotator
Right now I am researching how to visualize audio data. The original data stored in S3 comes in two formats, .ts and .flac. That's why I want to ask if there is any visualization tool that can directly use .ts or .flac audio files.
The only solution I can think of right now is to first convert them into .wav or .mp3 so that most visualization tools can process them, but .wav files waste a lot of storage as far as I know.
If you know of any approach or tool to do this, please let me know!
Audio visualization requires audio data. Your compressed audio isn't usable until decoded. Therefore, you must decode the files to PCM before visualizing.
This doesn't require that you store the files as WAV, but you'll at least have to decode them on-the-fly.
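For example, a minimal sketch in Python, assuming the soundfile and matplotlib packages (file names are placeholders):

import soundfile as sf            # libsndfile decodes FLAC directly
import matplotlib.pyplot as plt

# Decode to PCM in memory; nothing WAV-sized ever hits disk.
samples, sample_rate = sf.read("clip.flac")

plt.figure(figsize=(10, 2))
plt.plot(samples)                 # crude waveform; downsample for long files
plt.xlabel("sample index")
plt.ylabel("amplitude")
plt.savefig("waveform.png")

# .ts is a container, not an audio codec; demux/decode it first
# (e.g. with ffmpeg) before visualizing.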
I am working on a film-analysis program which retrieves data in realtime from a movie that is playing in the same sketch. For analysing the sound I tried the Minim library, but I can't figure out how to get the audio signal from the movie. All I could do was access an audio file I had loaded into the sketch manually, or the line-in through the mic.
Thanks a lot!
Although GStreamer (used by the processing-video library) has access to audio, the processing-video library itself doesn't expose that at the moment.
For now you will need a workaround:
Extract the audio from your movie and load it straight into Minim (you can trigger audio playback at the same time as movie playback if you need to); see the ffmpeg one-liner after this list.
Or use a tool that routes the system audio output back in as an input (Minim's line-in). On OS X you can use Soundflower; another option is JACK and its patch interface.
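For the first workaround, assuming ffmpeg is installed (file names are placeholders), a one-liner like this extracts the audio track as a WAV that Minim can load:

# -vn drops the video; pcm_s16le gives a plain 16-bit 44.1 kHz WAV.
ffmpeg -i movie.mp4 -vn -acodec pcm_s16le -ar 44100 movie_audio.wav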
I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines):
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams, and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of that sample) and obtain the raw audio data: the waveform itself, the analogue "bitmap" of the audio.
You could also study this sample. This second example shows how to write a video/audio file. You don't want to write any video, but it makes it easy to understand how raw audio data packets work if you look at the functions get_audio_frame() and write_audio_frame().
You also need some knowledge about creating a bitmap; any platform has an easy way to do that.
So, the answer for you: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
UPDATE:
Sorry, there are ALSO built-in filters for this: showspectrum, showwaves, avectorscope.
You could use those filters instead of coding it all yourself.
Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
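For a single waveform image like the one you describe, a hedged one-liner, assuming a build of ffmpeg recent enough to include the showwavespic filter (file names and image size are placeholders):

# Render the audio track's waveform to one PNG frame.
ffmpeg -i input.mp4 -filter_complex "showwavespic=s=640x120" -frames:v 1 waveform.png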
The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. the audio tracks are played one after another?
I want to design an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework) the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track won't start playing until the first track has finished playing, and so on. In MP4 terms, that per-track start offset lives in the track's edit list (the elst box).
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with this, too.
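If you want to experiment with the second approach anyway: GPAC's MP4Box has a -delay option that sets a track's start offset through an edit list. A hypothetical sketch (the track ID and millisecond offset are placeholders, and player support is exactly the open question here):

# Keep both tracks as separate streams, but delay track 2 by the
# duration of track 1 (here assumed to be 180000 ms).
MP4Box -add track1.m4a -add track2.m4a -delay 2=180000 -new album.mp4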
Concatenating all streams would work and the individual tracks can be addressed by adding chapters. It works at least with VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.