How to apply a filter to a pre-recorded video file for only a certain duration? - gpuimage

Is it possible to apply a filter to a pre-recorded video for a specified duration by modifying your code or by using any of the existing APIs you have written? E.g. applying a filter to seconds x1 to x2 of an X-second video. Right now I am referring to your SimpleVideoFileFilter example.
I also need to save the video. This would save me the time of trimming the video and passing it to GPUImage as an NSURL.
Also, is it possible to record video at 120 fps (or 60 fps) using your code on the iPhone 5s, iPhone 4s, and older devices?

Related

What is the maximum number of channels in an audio file we can create with the FFmpeg amerge filter?

We have a requirement to merge multiple single-channel audio files into a single multi-channel audio file.
Each channel represents a speaker in the audio file.
I tried the amerge filter and could do it for up to 8 files. I get a blank audio file when I try it with 10 audio files, and the FFmpeg amerge command doesn't seem to produce any error either.
Can I create a multi-channel audio file from N files, where N may be 100+? Is that possible?
I am new to these audio APIs, so any guidance is appreciated.
Max inputs is 64. According to ffmpeg -h filter=amerge:
inputs <int> ..F.A...... specify the number of inputs (from 1 to 64) (default 2)
Or look at the source code at libavfilter/af_amerge.c and refer to SWR_CH_MAX.
Can I create a multi-channel audio file from N files, where N may be 100+? Is that possible?
Chain multiple amerge filters, with a maximum of 64 inputs per filter, or use the amix filter, which has a maximum of 32767 inputs.
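As a minimal illustration (filenames are placeholders), merging three mono inputs into one three-channel file looks like this:

    ffmpeg -i spk1.wav -i spk2.wav -i spk3.wav \
           -filter_complex "amerge=inputs=3" merged.wav

Note that amerge keeps each input as a separate channel in the output, while amix mixes them down into a single stream, so for one-channel-per-speaker output chained amerge filters are the right choice.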

Merge multiple audio files into one file

I want to merge two audio files and produce one final file. For example, if file1 is 5 minutes long and file2 is 4 minutes long, I want the result to be a single 5-minute file, because both files will start at 0:00 and play together (i.e. overlapping).
You can use the APIs in the Windows.Media.Audio namespace to create audio graphs for audio routing, mixing, and processing scenarios. For how to create audio graphs, please see this article.
An audio graph is a set of interconnected audio nodes. The two audio files you want to merge supply the audio input nodes, and the audio output node is the destination for the audio processed by the graph, in this case a single file.
Scenario 4 (Submix) of the official AudioCreation sample provides exactly the feature you want: given two files, it outputs the mixed audio. However, change the output node to an AudioFileOutputNode so the result is saved to a new file, since the sample creates an AudioDeviceOutputNode for playback.
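If you are not tied to UWP, FFmpeg's amix filter offers a quick sketch of the same overlapping merge (filenames are placeholders); with duration=longest the output lasts as long as the longer input, giving the 5-minute result described above:

    ffmpeg -i file1.mp3 -i file2.mp3 \
           -filter_complex "amix=inputs=2:duration=longest" merged.mp3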

How to calculate total conversion duration before converting with FFmpeg in Node.js

Using FFmpeg in Node.js, I would like to convert a video.
How can I calculate the total conversion duration before running the conversion?
Example: how long would a 1 GB AVI movie take to convert to MKV?
You can't know in advance the exact amount of time the conversion will take.
If you know the total number of frames of the target file, you can use this formula:
T_full_conversion_time = T_elapsed * N_total_frames / N_converted_frames
You can then compare T_full_conversion_time with T_elapsed to estimate the remaining time.
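A minimal sketch of this estimate in Node.js (the function name is hypothetical; the total frame count could come from ffprobe, and the converted-frame count from parsing FFmpeg's progress output):

    // Estimate the remaining conversion time from progress so far.
    // elapsedSec: seconds since the conversion started
    // totalFrames: frame count of the source (e.g. from ffprobe)
    // convertedFrames: frames processed so far (from FFmpeg's progress output)
    function estimateRemainingSeconds(elapsedSec, totalFrames, convertedFrames) {
      if (convertedFrames <= 0) return Infinity; // no progress data yet
      const fullTime = elapsedSec * totalFrames / convertedFrames;
      return fullTime - elapsedSec;
    }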

Joining mp3 files to form a stream

I'm wondering if it's possible to concatenate a series of mp3 files in real time to form a live stream.
For example, in some directory I have file1.mp3, file2.mp3, file3.mp3 - each file is 1 minute in duration.
I want to serve an mp3 stream, which I could load in a web browser or on a phone, that joins all these files together to form a 3-minute stream. However, say I'm 2 minutes into the stream and I upload another file, file4.mp3, also 1 minute long, to that directory. I would want it to be added automatically to the end of the live stream, so that when file3.mp3 finishes, file4.mp3 starts straight away.
I hope I have explained myself well. I am just keen to know:
1) Is there a name for what I am trying to achieve?
2) Is it possible with current technologies?
I think HTTP Live Streaming is what you're looking for. http://en.m.wikipedia.org/wiki/HTTP_Live_Streaming
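As a concrete way to experiment, FFmpeg's hls muxer can cut an mp3 into a playlist plus segments, and later uploads can be appended to the same playlist (a sketch with placeholder filenames; a production setup would manage the playlist as new files arrive):

    # create an HLS playlist and segments from the first file
    ffmpeg -i file1.mp3 -c copy -f hls -hls_time 10 -hls_list_size 0 stream.m3u8
    # append a later upload to the end of the same playlist
    ffmpeg -i file4.mp3 -c copy -f hls -hls_time 10 -hls_list_size 0 \
           -hls_flags append_list stream.m3u8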

Splitting a FLAC image into tracks

This is a follow-up question to Flac samples calculation.
Do I apply the offset generated by that formula from the beginning of the file, or from the point after the metadata where the stream starts (here)?
My goal is to divide the file programmatically myself, largely as a learning exercise. My thought is that I would write out my FLAC header and metadata blocks based on values taken from the image, and then the actual track data that I extract from the master image using my cue sheet.
Currently my code can parse each metadata block and end up at the point where the frames start.
Suppose you are trying to decode starting at M:S.F = 3:45.30. There are 75 frames (CDDA sectors) per second, and obviously there are 60 seconds per minute. To convert M:S.F from your cue sheet into a sample offset value, I would first calculate the number of CDDA sectors to the desired starting point: (((60 * 3) + 45) * 75) + 30 = 16,905. Since there are 75 sectors per second, assuming the audio is sampled at 44,100 Hz there are 44,100 / 75 = 588 audio samples per sector. So the desired audio sample offset where you will start decoding is 588 * 16,905 = 9,940,140.
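A small sketch of that calculation in JavaScript (the function name is hypothetical):

    // Convert a cue-sheet M:S.F index into a PCM sample offset.
    // There are 75 CDDA sectors per second, so at 44100 Hz each
    // sector holds 44100 / 75 = 588 samples.
    function cueIndexToSampleOffset(minutes, seconds, frames, sampleRate = 44100) {
      const sectors = (minutes * 60 + seconds) * 75 + frames;
      const samplesPerSector = sampleRate / 75;
      return sectors * samplesPerSector;
    }
    // cueIndexToSampleOffset(3, 45, 30) === 9940140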
The offset just calculated is an offset into the decompressed PCM samples, not into the compressed FLAC stream (nor in bytes). So for each FLAC frame, calculate the number of samples it contains and keep a running tally of your position. Skip FLAC frames until you find the one containing your starting audio sample. At this point you can start decoding the audio, throwing away any samples in the FLAC frame that you don't need.
FLAC also supports a SEEKTABLE block, the use of which would greatly speed up (and alter) the process just described. If you haven't already, you can look at the implementation of the reference decoder.
