I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, each of which will be used to render the audio of one of the videos being played. The maximum number of videos that can be played simultaneously is 3, so each audio device would render the audio of its corresponding video file, i.e. audio device 1 would play video 1, and so on. That's the requirement I have.
So far, I've come across two approaches. First, use Dolby or some other API to route each audio channel to its corresponding device, i.e. the left channel is rendered to device 1, the middle/center channel to device 2, and the right channel to device 3. I've tried the Dolby Audio sample app for Windows 10, but they've done the channeling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render one channel of the audio to a particular audio device? And I don't want to merge the audio in any way.
Second, use 3 sound cards and attach an audio device to each one, choosing the device to play audio on by providing its device ID. I've tried this approach with XAudio2 by calling the createMasteringVoice() method with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played at the same time.
Neither approach has solved the core requirement yet. Considering this scenario, what is the best approach to follow to fulfill the requirement?
I would say you can go with XAudio2, as in your second approach. Since you can pass a deviceId to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.
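Here is a minimal sketch of the underlying XAudio2 calls that approach implies: one engine and one mastering voice per output device, each created with a different device ID. The names below (PerDeviceEngine, CreateEngineForDevice) are placeholders rather than the sample's UniversalAudioPlayer class, and the device ID strings would come from enumerating the system's audio render endpoints; each video's decoded audio would then feed a source voice created on its own engine.

```cpp
// Minimal sketch: one XAudio2 engine and one mastering voice per output
// device, so three videos can each send their decoded audio to a different
// endpoint via source voices created on "their" engine. Link with xaudio2.lib.
#include <xaudio2.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct PerDeviceEngine {
    ComPtr<IXAudio2> engine;
    IXAudio2MasteringVoice* masteringVoice = nullptr;
};

HRESULT CreateEngineForDevice(LPCWSTR deviceId, PerDeviceEngine& out) {
    HRESULT hr = XAudio2Create(out.engine.GetAddressOf(), 0, XAUDIO2_DEFAULT_PROCESSOR);
    if (FAILED(hr)) return hr;

    // Passing the device ID routes everything played through this engine's
    // graph to that specific endpoint; nullptr would mean "default device".
    return out.engine->CreateMasteringVoice(
        &out.masteringVoice,
        XAUDIO2_DEFAULT_CHANNELS,
        XAUDIO2_DEFAULT_SAMPLERATE,
        0,          // flags
        deviceId);
}

// Usage (deviceId1..3 obtained from audio render endpoint enumeration):
//   PerDeviceEngine engines[3];
//   CreateEngineForDevice(deviceId1, engines[0]);
//   CreateEngineForDevice(deviceId2, engines[1]);
//   CreateEngineForDevice(deviceId3, engines[2]);
```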
I am trying to encode 2 videos side by side, synced by the audio of the 2 clips. I can successfully encode the 2 videos side by side and select the audio from one of the input streams. However, the system we are using to record the 2 videos does not start and stop the recording at the same time (there could be up to a second's difference between cameras). Basically, we are using a CCTV system to capture what's going on in a room from multiple angles. We export the 2 cameras between 2 timestamps, and due to the way the system records the videos, the starts of the 2 clips are not the same point in time.
e.g. Export videos between 09:00:00:000 and 09:10:00:000
Video 1 - exports from 08:59:59:123 to 09:10:00:123
Video 2 - exports from 08:59:59:789 to 09:10:00:789
Therefore, when video 1 and video 2 are stitched together side by side, they are out of sync by 666ms (which is very noticeable in the encoded video).
Both input streams have (near) identical audio and are both in exactly the same format. We are currently placing these videos into Premiere Pro, syncing them by the audio, and exporting them side by side; however, we have a project where we need to do a lot of these in quick succession, so this is not really an option. We need to look at scripting this.
Does anyone know if FFMPEG can do this? Or anything else?
Any info would be greatly appreciated.
You can use audio-offset-finder in a bash file to calculate the offset, cut the head off one of the videos, and stitch them together (as stated here).
You would need to extract the audio streams into separate files and use the finder to calculate the offset.
offset=`audio-offset-finder --find-offset-of file1.wav --within file2.wav`
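If the whole thing needs to be scripted rather than run by hand, a rough sketch of the pipeline is below. It is written in C++ shelling out to the command-line tools purely for illustration; a plain shell script would do the same job. The file names are placeholders, the parsing assumes audio-offset-finder prints the offset in seconds somewhere in its output (check your version's format), and it assumes video 2 is the clip that starts earlier, so its head gets trimmed; swap the inputs if it is the other way around.

```cpp
// Sketch: extract both audio tracks, ask audio-offset-finder for the offset,
// then trim and stack the videos side by side with ffmpeg, keeping video 1's audio.
#include <cstdio>
#include <cstdlib>
#include <string>

int main() {
    // 1. Extract the audio streams so audio-offset-finder can compare them.
    std::system("ffmpeg -y -i video1.mp4 -vn file1.wav");
    std::system("ffmpeg -y -i video2.mp4 -vn file2.wav");

    // 2. Capture the tool's output and pull the first number out of it.
    FILE* pipe = popen("audio-offset-finder --find-offset-of file1.wav --within file2.wav", "r");
    if (!pipe) return 1;
    std::string output;
    char buf[256];
    while (fgets(buf, sizeof buf, pipe)) output += buf;
    pclose(pipe);
    double offset = std::stod(output.substr(output.find_first_of("0123456789")));

    // 3. Trim the head of video 2 by the offset and stack the two side by side.
    std::string cmd =
        "ffmpeg -y -i video1.mp4 -ss " + std::to_string(offset) +
        " -i video2.mp4 -filter_complex \"[0:v][1:v]hstack[v]\" "
        "-map \"[v]\" -map 0:a output.mp4";
    return std::system(cmd.c_str());
}
```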
Basically, the microphone only has a USB port, and the camera looks like it only has micro USB.
Is there any way to record video and audio from those devices together?
The only option I see is to capture them separately and then add the sound on top of the video afterwards.
But I'm afraid it will be hard to get the video and voice to line up perfectly.
Does anyone have a solution for this? Or does anyone know of an app that will help combine them, or edit the video and add the audio on top of it?
Hand sync the video and audio.
Essentially, start the video, then turn the mic on. Clap a few times so you know where the two start when you go back to edit it.
I'm making a player for Linux and I want to know the audio channel layout (stereo, 5.1ch, etc.) of the user's system (not the channels included in the media file).
For now, it's set by the user, but I want to implement auto-detection of the channel layout.
Is there any (de-facto) standard method to accomplish this?
If not, can I find a solution for ALSA at least?
In ALSA, the default device typically supports only stereo.
You can try to open a device named front, surround40, surround51, or surround71, but these devices do not have automatic sample format conversion or software mixing.
The best idea would be to use PulseAudio, and to ask the server for the channel map of the sink.
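If you do end up probing raw ALSA device names as described above, a minimal sketch (assuming libasound is installed; compile with -lasound) could just try to open each of the well-known surround devices and see which succeed. Note that opening surround51 successfully only means the card exposes 6 channels, not that the user actually has 6 speakers connected, which is why asking PulseAudio for the sink's channel map is the more reliable approach.

```cpp
// Probe the conventional ALSA surround device names on the default card.
// A device that opens is a candidate layout; it is not proof of what the
// user has physically connected.
#include <alsa/asoundlib.h>
#include <cstdio>

int main() {
    const char* names[] = { "front", "surround40", "surround51", "surround71" };
    for (const char* name : names) {
        snd_pcm_t* pcm = nullptr;
        int err = snd_pcm_open(&pcm, name, SND_PCM_STREAM_PLAYBACK, SND_PCM_NONBLOCK);
        std::printf("%-10s : %s\n", name, err < 0 ? snd_strerror(err) : "opened OK");
        if (err >= 0)
            snd_pcm_close(pcm);
    }
    return 0;
}
```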
I would like to hook up several piezos to an Arduino so that, when they are activated, each piezo plays/triggers a separate tone. For instance, I'll have five piezos connected to the Arduino; when I apply pressure to each one, they play a separate note, either through a software interface on a computer or from the piezos themselves. Basically an Arduino synth using piezos as keys.
I'm just not quite sure how to go about doing this. I'm sure it's possible, but I just need a push in the right direction. Any ideas? Thanks!
The practical difficulty of using one device as both an input sensor and an output device is that, once it is activated as an output (to make a sound), you would have to stop using it as an input for some fixed time. Something more responsive would be to use separate sensors for the keys and just one speaker for all sounds. The good folks who came up with the Arduino tutorials have a 3-key sensor player example here:
http://arduino.cc/en/Tutorial/Tone3
and another example of using a piezo as a sound sensor here:
http://www.arduino.cc/en/Tutorial/KnockSensor
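A rough combination of those two tutorials might look like the sketch below: five piezos wired as knock sensors on A0-A4, each with a large (~1 megohm) resistor to ground, and one speaker on pin 8. The pin numbers, threshold, and note frequencies are placeholders to adjust for your hardware.

```cpp
// Read five piezo "keys" as knock sensors and play one note per key on a
// single speaker, combining the Tone and KnockSensor tutorial ideas.
const int sensorPins[5] = { A0, A1, A2, A3, A4 };
const int notes[5]      = { 262, 294, 330, 349, 392 };   // C4 D4 E4 F4 G4 in Hz
const int speakerPin    = 8;
const int threshold     = 100;   // minimum analog reading that counts as a press

void setup() {
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  for (int i = 0; i < 5; i++) {
    if (analogRead(sensorPins[i]) >= threshold) {
      tone(speakerPin, notes[i], 200);   // play this key's note for 200 ms
      delay(200);                        // let the note finish before re-reading
    }
  }
}
```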
I can help you with the software interface: you can use your smartphone to play a sound for each piezo sensor.
See this app: https://play.google.com/store/apps/details?id=ram.mere.DoDuino
You can connect the Arduino to this app using Serial (Android 3.1 and higher) or Bluetooth.
To use the Sound Action, follow this tutorial:
https://www.youtube.com/watch?v=RQhx6qBElVk
So you specify which sound should be played on your Android phone, and when you detect which piezo was hit, you send data to the Android device and the specified sound is played.
For example, if the Android app receives #p1; then it will play the sound associated with piezo one,
and when you send #s1; then it will stop playing that sound, etc.
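As an illustration, the Arduino side of that protocol could be as simple as the sketch below (one piezo shown; the pin, threshold, and exact command strings are assumptions to verify against the app and the linked tutorial).

```cpp
// Send "#p1;" over Serial when the piezo on A0 is struck, and "#s1;" once
// the reading drops again, so the phone app starts and stops sound 1.
const int piezoPin = A0;
const int threshold = 100;   // minimum analog reading that counts as a hit
bool playing = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(piezoPin);
  if (level >= threshold && !playing) {
    Serial.print("#p1;");    // ask the app to start sound 1
    playing = true;
  } else if (level < threshold && playing) {
    Serial.print("#s1;");    // ask the app to stop sound 1
    playing = false;
  }
  delay(10);                 // small debounce/poll interval
}
```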
Hope this helps someone :D
I want a player (easy enough to put up) that plays back a directory of MP3s in such a way that if you join at 3:33:33 pm, you hear what others hear, not track one - like a pseudo broadcast/stream. How do I achieve that? What looks nice / is probably minimizable / is easy?
I am trying to use mirvling, but no such luck. Any ideas?
It's unlikely you're going to find something to drop in place. Plus, this isn't typically handled on the client side of things. You neglected to specify what languages and whatnot you are using, so I'll provide a general answer.
There are two methods to accomplish this.
Method 1: Encode the stream on the server
Basically with this, you create an audio stream on the server that is made up of the audio files being played back. The clients play an audio stream like any traditional "live" internet radio station, without knowledge of how the stream was created. You can use SHOUTcast/Icecast for the servers, and a number of different source stream encoders, such as Ices.
Method 2: Make the media available and let the clients figure it out
For this, you'll be starting from scratch. Have a JSON feed or similar served up that contains a playlist of the audio files that should be played and when. On the client side, you can use JWPlayer or similar, and seek to the desired position of the current track when it starts, and then play tracks in order from there.
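As a sketch of that client-side logic (shown here in C++ for clarity, though a real client would do this in JavaScript before calling the player's seek function), assuming the feed provides a shared broadcast start time and each track's duration in seconds:

```cpp
// Work out which track should currently be playing and where to seek
// within it, given a fixed broadcast start time and the track durations.
// JSON parsing and the actual player integration are omitted.
#include <cmath>
#include <cstddef>
#include <ctime>
#include <vector>

struct Position {
    std::size_t trackIndex;   // which file in the playlist to load
    double offsetSeconds;     // where to seek within that file
};

Position locate(const std::vector<double>& trackDurations, std::time_t broadcastStart) {
    double elapsed = std::difftime(std::time(nullptr), broadcastStart);

    double total = 0;
    for (double d : trackDurations) total += d;
    elapsed = std::fmod(elapsed, total);   // wrap around so the playlist loops

    for (std::size_t i = 0; i < trackDurations.size(); ++i) {
        if (elapsed < trackDurations[i]) return { i, elapsed };
        elapsed -= trackDurations[i];
    }
    return { 0, 0.0 };   // only reached if the playlist is empty or zero-length
}
```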