DirectShow continuous capture - visual-c++

I have an MP4 capture application in DirectShow. In my application I need to capture 30 minutes (or some dynamic value) of video continuously.
For that I made a waitable-timer routine: after 30 minutes I want to stop the capture and then start it again. But after I stop the capture, on the next start the samples no longer get added to the file. To start the next capture I have to release all the capture variables, get the devices again, rebuild the graph, and only then can I start capturing.
Can't I simply stop the capture, rename the output file, and start capturing again? Is anything additional needed to do that?
Please help me with this.
Thanks
Edit:
Below is the graph I use for recording:
Video Source --> x264vfw - H.264/MPEG-4 AVC Codec --------> GDCL MPEG-4 Multiplexer --> File Writer
Audio Source --> ACM Wrapper --> Monogram AAC Encoder ------^

We do something similar to capture DV-AVIs. Have you tried to:
stop the graph
remove File-Writer
create new File-Writer (and configure)
connect File-Writer to mux
and start again
If this does not work, then there is something wrong with the muxer or one of the other filters. You can easily test this: just replace the muxer with an audio and a video renderer and try to play, stop, play.
You can also just try another MP4 mux filter, like the monogram mp4 mux.
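A rough sketch of the stop/replace/reconnect sequence above in C++ (not drop-in code: FindFirstPin is a hypothetical helper that returns the first pin of the given direction, and the variable names are assumptions about how your graph was built):

HRESULT RestartCapture(IGraphBuilder* pGraph, IMediaControl* pControl,
                       IBaseFilter* pMux, IBaseFilter** ppWriter,
                       LPCOLESTR szNewFile)
{
    HRESULT hr = pControl->Stop();                       // 1. stop the graph
    if (FAILED(hr)) return hr;

    pGraph->RemoveFilter(*ppWriter);                     // 2. remove the old File Writer
    (*ppWriter)->Release();
    *ppWriter = NULL;

    // 3. create and configure a new File Writer with the new file name
    IBaseFilter* pNewWriter = NULL;
    hr = CoCreateInstance(CLSID_FileWriter, NULL, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, (void**)&pNewWriter);
    if (FAILED(hr)) return hr;

    IFileSinkFilter* pSink = NULL;
    hr = pNewWriter->QueryInterface(IID_IFileSinkFilter, (void**)&pSink);
    if (SUCCEEDED(hr)) {
        hr = pSink->SetFileName(szNewFile, NULL);
        pSink->Release();
    }
    if (FAILED(hr)) { pNewWriter->Release(); return hr; }

    hr = pGraph->AddFilter(pNewWriter, L"File Writer");
    if (FAILED(hr)) { pNewWriter->Release(); return hr; }

    // 4. reconnect the mux output to the new writer's input
    IPin* pMuxOut   = FindFirstPin(pMux, PINDIR_OUTPUT);      // hypothetical helper
    IPin* pWriterIn = FindFirstPin(pNewWriter, PINDIR_INPUT); // hypothetical helper
    hr = pGraph->Connect(pMuxOut, pWriterIn);
    pMuxOut->Release();
    pWriterIn->Release();
    if (FAILED(hr)) { pNewWriter->Release(); return hr; }

    *ppWriter = pNewWriter;
    return pControl->Run();                              // 5. start capturing again
}

The point is that only the writer is rebuilt and renamed; the mux and the upstream filters stay connected.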

Related

Save StreamTitle tag in recording

Some streams have the StreamTitle tag set, and it changes from time to time. Is that information sent only when it changes in the stream? For example, in this stream StreamTitle is used to tell the artist and the name of the current song:
ffprobe -v error -show_entries format_tags http://st.downtime.fi/sun.mp3
Is the StreamTitle tag exclusive to MP3 streams? Can one have it in Opus streams as well?
Can such information be added to the output file created when recording the stream, with ffmpeg for example, so that one can see which song is playing when playing the recording back later? Could another tag be used to carry the programme name, which may also change over time?

Add movflags to top of mp4 file without using ffmpeg for a live RTSP stream

Update:
I have a video player in the browser which plays MP4 video over a WebSocket. The player only supports MP4 files. When I checked, normal MP4 files do not play in the player; only an MP4 file written with "-movflags faststart" will play there. For an already stored file this works properly.
But in the case of a live stream (RTSP), using ffmpeg only works once the RTSP connection has terminated, since "-movflags faststart" is applied only after the connection has terminated properly.
Hope the above statements make more sense.
Because of this behaviour, I am checking whether there is any way to get the moov data at the start of the file.
I have an RTSP live source and I need to convert the RTSP stream to an MP4 file which has the moov atom at the beginning of the file.
I have checked openRTSP to take an MP4 dump of the RTSP stream, but it only adds the moov atom and other info at the end of the MP4 (and only once openRTSP has closed the RTSP stream).
ffmpeg has "-movflags faststart" to move that trailing info to the front of the MP4 container.
Since I have an RTSP live source, the video data keeps coming back to back and there is no termination, so the above ffmpeg option only takes effect once the RTSP stream has terminated.
Is there any way to make an MP4 container which has that trailing info at the front, so that it can be used for a live source?
EDIT #1
I have a video player which plays MP4 video files; it only supports playback of a recorded MP4 file created using "-movflags faststart". Normal MP4 files do not play in it.
This is the player
https://github.com/sonysuqin/WasmVideoPlayer.
Since I am trying to stream live video to the player, it is not possible to use -movflags faststart.
The MP4 header cannot be added to the file before it is complete. It's not possible because of how MP4 files are structured. The header needs to know the frame type, timestamp, size, and file offset of every frame in the file. That can't be known until the file is complete. You cannot stream an MP4 while it is being created. You need to use a protocol such as HLS or DASH to accomplish this.
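For example, a rough sketch of repackaging the RTSP source as HLS with ffmpeg (the source URL, segment length and playlist size are placeholders, and -c copy assumes the stream is already in HLS-compatible codecs such as H.264/AAC):

ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments stream.m3u8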

SIM800C : Uploaded audio AMR file has noise when played during call

I am able to upload the AMR file to the SIM800C successfully.
When I play the uploaded audio file during the call using the command below:
#if CALL_RECORDED_AUDIO
Serial1.print("AT+CMEDPLAY=1,C:\\REC\\");
// "Command Media PLAY" -> to play an audio if it is a recorded audio
#else
Serial1.print("AT+CMEDPLAY=1,C:\\User\\");
// "Command Media PLAY" -> to play an audio if it is a uploaded audio
#endif
The played audio always has noise when it comes from C:\User\.
However, if I record the audio during a call and save it, then play the recorded audio during the next call, there is no noise (by defining CALL_RECORDED_AUDIO in the above code snippet).
According to the documentation of the SIM800, it is necessary to play a WAV sound during the call:
Note
. mode 2 and 3 are not supported when playing audio file during call.
. The audio file can not be played during incoming call or outgoing call.
. Only support WAV, PCM, AMR and MP3 format.
. Only support WAV format with 8K 16bit during call.
(Page 201/202 of the SIM800 guide.)
Personally I could not check this, as I do not have a SIM800.
I think the recording of a call must be in .WAV format.
Let me know if it works.
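If it is the format that matters, one thing worth trying (the file names here are placeholders) is converting the AMR file to 8 kHz 16-bit mono WAV before uploading it, for example with ffmpeg:

ffmpeg -i input.amr -ar 8000 -ac 1 -c:a pcm_s16le output.wav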

HandbrakeCLI command lines

I'm trying to convert DVD iso files to mp4 using HandbrakeCLI. I use the following line in a batch file:
D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width "720" --height "480" --preset "HQ 480p30 Surround" --encoder "mpeg2" --audio-lang-list "eng"
When I do this, I must then extract the audio from the file, using the following line:
D:\eac3to\eac3to.exe "D:\output.mp4" "D:\output.wavs" -down16
However, when I attempt to extract the audio, I get the error message
The format of the source file could not be detected.
Is there anything wrong with my former line of code that's causing the mp4 to get screwed up?
Minor side question: I'm also trying to get HandBrake to remove subtitles and keep only English audio. Do you know what options could be used for that? I started a bit there with the --audio-lang-list "eng", but I'm not sure what to do from there.
Thanks a lot in advance!
You need to use a valid audio format; .wavs is not valid. You have to use one of the available audio codecs listed below for --aencoder. The default output audio for MP4 is AAC.
av_aac
copy:aac
ac3
copy:ac3
eac3
copy:eac3
copy:truehd
copy:dts
copy:dtshd
mp3
copy:mp3
vorbis
flac16
flac24
copy:flac
opus
copy
Defaults for audio
av_mp4 = av_aac
av_mkv = mp3
You need to pass none for no subtitles
-s none
And define only the eng track like you were doing:
--audio-lang-list eng
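Putting that together with your original line (same paths, preset and encoder; treat this as a sketch rather than a tested command):

D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width 720 --height 480 --preset "HQ 480p30 Surround" --encoder mpeg2 --aencoder av_aac --audio-lang-list eng -s none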
Check out the Handbrake CLI Documentation for the command line code:
https://handbrake.fr/docs/en/latest/cli/cli-guide.html
You can also try using a different program once you extract the audio, such as XMedia Recode. It can also remux audio and video and convert other audio formats to WAV:
https://www.videohelp.com/software/XMedia-Recode

FFmpeg library: Muxing audio from external file

I have successfully changed the muxing.c sample to use video frames that I generate at runtime.
I am now trying to replace the get_audio_frame function with a function that decodes an existing audio file and writes its samples instead of the synthesized audio samples in the example code.
I've tried using the "audio decoding" example to decode the audio file, but I'm not sure how / when to write the decoded samples.
I suggest checking the source of my Karaoke Lyrics Editor, which does exactly what you need based on ffmpeg. See ffmpegvideoencoder.cpp, in particular the createFile and encodeImage functions.
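For reference, a rough sketch (in C, against the current FFmpeg API) of pulling decoded frames from an existing audio file so they can take the place of the synthesized frames from get_audio_frame(); error handling and the resampling step (swr_convert) are omitted, and the input file name is a placeholder:

AVFormatContext *in_fmt = NULL;
avformat_open_input(&in_fmt, "input.mp3", NULL, NULL);        // any audio file
avformat_find_stream_info(in_fmt, NULL);

int astream = av_find_best_stream(in_fmt, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
const AVCodec *dec = avcodec_find_decoder(in_fmt->streams[astream]->codecpar->codec_id);
AVCodecContext *in_dec = avcodec_alloc_context3(dec);
avcodec_parameters_to_context(in_dec, in_fmt->streams[astream]->codecpar);
avcodec_open2(in_dec, dec, NULL);

AVPacket *pkt  = av_packet_alloc();
AVFrame *frame = av_frame_alloc();

while (av_read_frame(in_fmt, pkt) >= 0) {
    if (pkt->stream_index == astream) {
        avcodec_send_packet(in_dec, pkt);
        while (avcodec_receive_frame(in_dec, frame) == 0) {
            // 'frame' now holds decoded samples. Resample/convert it to the
            // output encoder's sample format and rate if they differ, then
            // hand it to the same write path muxing.c uses for its own frames
            // (send it to the output encoder and mux the resulting packets).
        }
    }
    av_packet_unref(pkt);
}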
