I have two AVMutableCompositionTrack objects. The first contains video (with its own sound) and the second contains some audio, for example an MP3. I want to play both of them simultaneously. Everything is OK with the sound of the second object. Unfortunately, I can't hear the sound of the video. Why?
I can play the above-mentioned track in different players, and in that case everything is OK. But both items need to be played in the same player using the AVMutableCompositionTrack mechanism (because of the export feature).
Basically, the microphone has a USB port only.
The camera has what looks like a micro-USB port.
Is there any way to record video and audio from these devices together?
The only option I see is to capture them separately from one another and then add the sound on top of the video.
But I'm afraid it will be hard to get the video and voice to line up perfectly.
Does anyone have a solution for this? Or does anyone know an app that would help combine them, or edit the video and add the audio on top of it?
Hand sync the video and audio.
Essentially, start the video, then turn the mic on. Clap a few times so you know where the two recordings start when you go back to edit them.
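To find that clap afterwards without scrubbing by ear, one option is to look for the loudest transient in each recording. Here is a minimal pure-Python sketch of the idea; it assumes the audio has already been decoded to a list of sample amplitudes, and real material would need a more noise-robust threshold:

```python
# Locate a clap-style transient so the video and the separately recorded
# voice track can be lined up in the editor. Sketch only: assumes samples
# are already decoded to floats in the range [-1.0, 1.0].

def find_clap(samples, threshold_ratio=0.8):
    """Return the index of the first sample whose absolute amplitude
    exceeds threshold_ratio * the peak amplitude (the clap)."""
    peak = max(abs(s) for s in samples)
    threshold = threshold_ratio * peak
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return i
    return -1

# Synthetic example: quiet noise with a loud spike at sample 4800.
audio = [0.01] * 4800 + [0.9] + [0.01] * 4800
offset = find_clap(audio)
print(offset)  # 4800 in this synthetic example; divide by the sample rate for seconds
```

Running the same detector over both recordings and subtracting the two offsets gives the shift to apply in the editor.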
I am working on a film analysis program which retrieves data in real time from a movie that is playing in the same sketch. For analysing the sound I tried the minim library, but I can't figure out how to get the audio signal from the movie. All I could do was access an audio file I had loaded into the sketch manually, or the line-in through the mic.
Thanks a lot!
Although GStreamer (used by the processing-video library) has access to audio, the processing-video library itself doesn't expose it at the moment.
For now you will need a workaround:
Extract the audio from your movie and load it straight into minim. (You can trigger audio playback at the same time as movie playback if you need to.)
Or use a tool that turns the system audio output into an input (minim's line-in). On OS X you can use Soundflower; another option is JACK and its patch interface.
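As an aside on the first workaround: once the audio is extracted to a WAV file, simple level analysis doesn't strictly require minim at all. A sketch using only Python's standard library (the function and windowing scheme are my own, not part of processing-video or minim):

```python
# Sketch: compute the loudness envelope of audio extracted from the movie
# (e.g. exported to a WAV file beforehand). Only handles the simplest
# case: mono, 16-bit PCM.
import math
import struct
import wave

def rms_per_window(path, window=1024):
    """Yield the RMS level of each window of mono 16-bit samples."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        while True:
            frames = w.readframes(window)
            if not frames:
                break
            samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
            yield math.sqrt(sum(s * s for s in samples) / len(samples))
```

Run over the extracted WAV, this gives roughly the loudness envelope that minim's level readings would give inside Processing, which may be enough for film-analysis purposes.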
I'm in the middle of trying to buy an IPTV device, and of course different IPTV devices support different file formats, video codecs and audio codecs.
Can someone recommend a collection of videos encoded with different versions and different video and audio codecs - as many different combinations as possible?
I understand that supporting everything (all video and all audio codecs) is pretty much impossible - so it would be good if they were sorted in most-used to least-used order. For example:
.avi - xvid vx.xx video codec + yyy audio codec
.mkv - ....
YouTube .flv format ...
...
But of course which codec is used where depends on which movies you get and from where. I could do the ordering of the videos on my own.
Preferably the videos should be as small as possible - for example 20 seconds per clip - with video/audio that you can easily inspect and understand. (The language does not matter.)
I suspect that this kind of collection does not exist - in that case it's OK to post video clips for different codecs here, and I will gather them into one collection.
Eventually I want to put all these clips on a USB stick, go to a shop, and try out which clips can be played on which IPTV device.
Two collections of video test files are on the kodi.tv site: https://kodi.tv/media-samples/ (archived link - right click + save to download files) and http://kodi.wiki/view/Samples
Another one is on the MPlayer site: http://samples.mplayerhq.hu/
I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, which will be used to render audio from the video files that are playing. The maximum number of videos that can be played simultaneously is 3, so each audio device renders the audio of its corresponding video file, i.e. audio device 1 plays video 1, and so on. That's the requirement I have.
So far, I have come across two approaches. First, we use Dolby or some other API to channel audio to the corresponding device, i.e. the left channel is rendered to device 1, the middle/center to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10, but they've done the channelling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render one channel of the audio to a particular audio device? I don't want to merge the audio in any way.
Second, we use 3 sound cards and attach an audio device to each one. We choose the device to play audio on by providing its device ID. I've tried this approach with XAudio2 by calling CreateMasteringVoice() with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played.
Neither approach has met the core requirement yet. So, given this scenario, what is the best approach to fulfil the requirement?
I would say you can go with XAudio2, as in your second approach. Since you can pass a device ID to CreateMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.
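To make the one-player-per-device pattern concrete, here is a sketch in Python with hypothetical stand-in names; the real WinRT code would instead create one XAudio2 mastering voice per output device by passing that device's ID to CreateMasteringVoice():

```python
# Sketch of the one-player-per-device pattern from the answer above.
# Player and route_videos are illustrative stand-ins, not a real API.

class Player:
    """Stand-in for a player bound to a single audio output device."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.now_playing = None

    def play(self, video):
        self.now_playing = video

def route_videos(videos, device_ids):
    """Bind each video to its own device, so no audio is ever mixed."""
    players = {dev: Player(dev) for dev in device_ids}
    for video, dev in zip(videos, device_ids):
        players[dev].play(video)
    return players

players = route_videos(["video1.mp4", "video2.mp4", "video3.mp4"],
                       ["device-1", "device-2", "device-3"])
print(players["device-2"].now_playing)  # video2.mp4
```

The point of the structure is that each stream has exactly one output device for its whole lifetime, which matches the "don't merge audio in any way" requirement.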
The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. where the audio tracks are played one after another?
I want to create an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework), the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track won't start playing until the first track has finished playing, and so on.
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with this, too.
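The second idea boils down to giving track n a start offset equal to the summed durations of tracks 1 through n-1. A small sketch of that arithmetic (offsets in seconds here; a real MP4 would express them in each track's timescale, e.g. via an edit list):

```python
# Sketch: compute per-track start offsets so parallel streams play back
# sequentially. Track durations are hypothetical example values.

def sequential_offsets(durations):
    """Return each track's start offset, given its predecessors' durations."""
    offsets, total = [], 0.0
    for d in durations:
        offsets.append(total)
        total += d
    return offsets

tracks = [220.0, 185.5, 240.25]    # track lengths in seconds
print(sequential_offsets(tracks))  # [0.0, 220.0, 405.5]
```

Whether a given player honours such offsets on parallel audio tracks is exactly the compatibility question raised above.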
Concatenating all the streams works, and the individual tracks can be addressed by adding chapters. It works at least in VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
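For an album with many tracks, such a chapters.txt can be generated from the track durations instead of written by hand. A sketch (the timestamp format follows the example above; the track list is hypothetical):

```python
# Sketch: build an MP4Box-style chapters.txt from (title, duration) pairs,
# so each CHAPTERn timestamp lines up with the concatenated audio.

def chapter_file(tracks):
    """tracks: list of (title, duration_in_seconds). Returns chapters.txt text."""
    lines, start = [], 0.0
    for i, (title, dur) in enumerate(tracks, 1):
        h, rem = divmod(start, 3600)
        m, s = divmod(rem, 60)
        lines.append("CHAPTER%d=%02d:%02d:%05.2f" % (i, h, m, s))
        lines.append("CHAPTER%dNAME=%s" % (i, title))
        start += dur
    return "\n".join(lines)

print(chapter_file([("Track 1", 220.0), ("Track 2", 200.0)]))
```

With the example durations this reproduces the chapter file shown above (Track 2 starting at 00:03:40.00).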
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.