I am new to Python. I have coded an endless loop which gives me a string every 2 seconds. Now I have to use these strings to trigger a video player (a content management system where videos can be played from a playlist). This is essentially an integration of the media player and Python.
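Roughly, the loop looks like this (trigger_player is a placeholder for whatever call the player's interface turns out to need, e.g. an HTTP request):

    import time

    def get_string():
        # stand-in for the real source of strings
        return "PLAY_NEXT"

    def trigger_player(message):
        # placeholder: to be replaced with the actual call into the
        # media player's interface (HTTP request, serial write, etc.)
        print("would trigger player with:", message)

    while True:
        s = get_string()  # a new string arrives every 2 seconds
        trigger_player(s)
        time.sleep(2)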
For example, there are 10 videos saved in the media player, and each video should be played once the player receives the trigger from Python.
Is it possible to trigger the media software to play a video using the Python output? If yes, please advise me on the procedure.
Note: the media player has many interfaces.
Is there something I should check in the media player?
I am looking forward to a positive response to my query. Thanks in advance.
Best Regards,
I am trying to encode 2 videos side by side, synced by the audio of the 2 clips. I can successfully encode the 2 videos side by side and select the audio from one of the input streams. However, the system we are using to record the 2 videos does not start and stop the recordings at the same time (there can be up to a second's difference between cameras). Basically, we are using a CCTV system to capture what's going on in a room from multiple angles. We export the 2 cameras between 2 timestamps, and due to the way the system records the videos, the start of the 2 clips is not the same point in time.
e.g. Export videos between 09:00:00:000 and 09:10:00:000
Video 1 - exports from 08:59:59:123 to 09:10:00:123
Video 2 - exports from 08:59:59:789 to 09:10:00:789
Therefore, when video 1 and video 2 are stitched together side by side, they are out of sync by 666 ms (which is very noticeable in the encoded video).
Both input streams have (near) identical audio and are both in exactly the same format. We currently place these videos into Premiere Pro, sync them by the audio, and export them side by side; however, we have a project where we need to do a lot of these in quick succession, and this is not really an option. We need to look at scripting this.
Does anyone know if FFmpeg can do this? Or any other tool?
Any info would be greatly appreciated.
You can use audio-offset-finder in a bash script to calculate the offset, cut the head off one of the videos, and stitch them together (as stated here).
You would need to extract the audio streams into separate files and use the finder to calculate the offset.
offset=`audio-offset-finder --find-offset-of file1.wav --within file2.wav`
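A rough sketch of the whole pipeline driven from Python (the file names are examples, and the exact output format of audio-offset-finder varies between versions, so the parsing below may need adjusting):

    import subprocess

    def extract_audio(video, wav):
        # pull the audio track out as WAV so audio-offset-finder can read it
        subprocess.run(["ffmpeg", "-y", "-i", video, "-vn", wav], check=True)

    extract_audio("video1.mp4", "file1.wav")
    extract_audio("video2.mp4", "file2.wav")

    # ask audio-offset-finder how far file1's audio sits within file2's
    result = subprocess.run(
        ["audio-offset-finder", "--find-offset-of", "file1.wav",
         "--within", "file2.wav"],
        capture_output=True, text=True, check=True)
    offset = float(result.stdout.split()[-1])  # parsing is version-dependent

    # trim the head off the video that started earlier (which one it is
    # depends on how the tool reports the offset) and stack them side by side
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", "video1.mp4",
         "-ss", str(offset), "-i", "video2.mp4",
         "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
         "-map", "[v]", "-map", "0:a",
         "side_by_side.mp4"],
        check=True)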
I am working on a film analysis program which retrieves data in real time from a movie that is playing in the same sketch. For analysing the sound I tried the minim library, but I can't figure out how to get the audio signal from the movie. All I could do was access an audio file I had loaded into the sketch manually, or the line-in through the mic.
Thanks a lot!
Although GStreamer (used by the processing-video library) has access to audio, the processing-video library itself doesn't expose it at the moment.
For now you will need a workaround:
Extract the audio from your movie and load it straight into minim (you can trigger audio playback at the same time as movie playback if you need to); see the sketch below.
Or use a tool that routes the system audio output to an input (minim's getLineIn). On OSX you can use Soundflower; another option is JACK and its patch interface.
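For the first workaround, the extraction step can be scripted, e.g. by calling ffmpeg from Python (a minimal sketch, assuming ffmpeg is on the PATH; the file names are examples):

    import subprocess

    def extract_audio(movie, wav="movie_audio.wav"):
        # write the movie's audio track to a WAV file that minim can load
        subprocess.run(["ffmpeg", "-y", "-i", movie, "-vn",
                        "-acodec", "pcm_s16le", wav], check=True)
        return wav

    extract_audio("mymovie.mp4")
    # in the Processing sketch: minim.loadFile("movie_audio.wav"),
    # then start movie.play() and the AudioPlayer at the same time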
I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, which will be used distinctively to render the audio from the video file(s) being played. The maximum number of videos that can be played simultaneously is 3, so each audio device would render the audio of its corresponding video file, i.e. audio device 1 would play video 1 and so on. That's the requirement I have.
So far, I have come across two approaches. First, use Dolby or some other API to channel the audio to the corresponding device, i.e. the left channel is rendered to device 1, the middle/centre to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10; they've done the channelling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render one channel of audio to a particular audio device? I don't want to merge the audio in any way.
Second, use 3 sound cards and attach an audio device to each one. We choose the device we want to play audio on by providing a device ID. I've tried this approach with XAudio2 by calling the createMasteringVoice() method with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played.
Neither approach has fully solved the core requirement yet. Considering the scenario, what is the best approach to fulfil the requirement?
I would say you can go with XAudio2, as you mentioned in the second approach. Since you can pass a deviceId to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.
I need to develop an application which can make a continuous recording of some video or audio streams.
Besides recording, the application should:
let the user view the recordings by selecting a stream and a date/time
cut out parts of the recordings according to a schedule
let the user cut out a piece of a recording via the web interface
My doubt is how to record and save the pieces of video. I don't want to wait for the file to be closed before the user can play it, so I would use a progressive file format (see the sketch below).
The application must work on Linux.
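For the recording side I am thinking of something like this (a rough sketch; the stream URL is an example, and ffmpeg's segment muxer writes MPEG-TS files, which can be played while they are still being written):

    import subprocess

    stream_url = "rtsp://camera.example/stream"  # example source

    # record the stream in 60-second MPEG-TS segments; MPEG-TS is
    # progressive, so a segment is playable before the file is closed
    # (assumes the recordings/ directory already exists)
    subprocess.run([
        "ffmpeg", "-i", stream_url,
        "-c", "copy",              # no re-encoding, just remux
        "-f", "segment",
        "-segment_time", "60",
        "-segment_format", "mpegts",
        "-strftime", "1",
        "recordings/%Y-%m-%d_%H-%M-%S.ts",
    ])  # runs until the stream ends or the process is stopped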
Could you help me, please?
Thank you very much!
I am new to the J2ME development world.
I just want to know how to get the audio frequency from an audio recording application which stores its data in a .amr file.
Please help me; I have tried a lot, but I am stuck.
Any idea regarding this will be appreciated.
Thanks in advance.
I am going to add here what I have found on other sites that may be useful to you and to me (as a newbie):
http://www.developer.nokia.com/Community/Discussion/showthread.php?154169-Getting-Recorded-Audio-Frequency-in-J2ME
If you want the frequency of a sound in Hz, then it is actually not a single value but a series of values as a function of time.
You will have to calculate the Fourier transform of the sound samples, which will give you the frequency content.
Read on Wikipedia about how to calculate the Fourier transform and a frequency graph.
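To illustrate the calculation itself (this is not J2ME code; it's a Python/numpy sketch of the same idea, which on a phone you would have to implement yourself):

    import numpy as np

    def dominant_frequency(samples, sample_rate):
        # Fourier transform of the real-valued sound samples
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        # the bin with the most energy is the dominant frequency
        return freqs[np.argmax(spectrum)]

    # example: a 440 Hz sine wave sampled at 8000 Hz
    rate = 8000
    t = np.arange(rate) / rate
    tone = np.sin(2 * np.pi * 440 * t)
    print(dominant_frequency(tone, rate))  # ~440.0

Since the frequency is a function of time, you would apply this to short successive windows of samples rather than to the whole recording at once.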
http://www.developer.nokia.com/Community/Discussion/showthread.php?95262-Frequency-Analysis-in-J2ME-MMAPI
This forum thread says something about the FFT (fast Fourier transform) and analysing recorded AMR sound rather than processing a live stream, and it provides three links about the FFT; have a look at them.
Look at the site mobile-tuner.com. (I'm new too; in fact I know next to nothing about Java.)
But the site says that the tuner-enabled phones are S60 phones. I was trying to write a guitar tuner program; since my phone is a Nokia 5310 XpressMusic, which is S40, I gave up.
So good luck to you.
Note: javax.microedition.media.control.RecordControl
I don't know too much, but I have a hunch that the RecordControl class is related to the audio frequency functionality in J2ME, and that the frequency analysis itself falls under "sound processing".