I need to capture video clips during automated tests which run for over 12 hours.
My problem is that the movies get too big, and I only want small clips when an error occurs.
So my idea was to write a C# tool which buffers only the last few minutes of video, e.g. 3 minutes, and throws away captured frames older than that.
If an error occurs, I want to save the 3 minutes leading up to it, so I can find out what caused the error.
It would be nice if this happened in a compressed format. The recording session then continues, and if the next error occurs I want to save the next 3-minute clip, and so on.
That means I have to capture a stream and make sure that only the last x minutes are kept, so I can see where the error comes from.
It is also important that dual monitors are supported when video is captured.
It should be possible to set the frame rate.
The trigger will come from C# code.
What is the best way to do this? How can I achieve it in C#?
Bernhard
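A minimal sketch of that rolling-buffer idea, assuming GDI+ screen grabs via Graphics.CopyFromScreen; the class name, output folder and JPEG-per-frame storage are illustrative, and an encoder of your choice would turn the dumped frames into a compressed clip afterwards:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Windows.Forms;

class RollingScreenBuffer : IDisposable
{
    // Keeps only the last bufferSeconds worth of frames in memory; older frames are discarded.
    private readonly Queue<byte[]> _frames = new Queue<byte[]>();
    private readonly int _maxFrames;
    private readonly System.Threading.Timer _timer;
    private readonly object _lock = new object();

    public RollingScreenBuffer(int frameRate, int bufferSeconds)
    {
        _maxFrames = frameRate * bufferSeconds;                     // e.g. 5 fps * 180 s = 900 frames
        _timer = new System.Threading.Timer(_ => CaptureFrame(), null, 0, 1000 / frameRate);
    }

    private void CaptureFrame()
    {
        Rectangle bounds = SystemInformation.VirtualScreen;         // spans both monitors
        using (var bmp = new Bitmap(bounds.Width, bounds.Height))
        using (var g = Graphics.FromImage(bmp))
        using (var ms = new MemoryStream())
        {
            g.CopyFromScreen(bounds.X, bounds.Y, 0, 0, bounds.Size);
            bmp.Save(ms, ImageFormat.Jpeg);                         // JPEG keeps the in-memory buffer small
            lock (_lock)
            {
                _frames.Enqueue(ms.ToArray());
                while (_frames.Count > _maxFrames)                  // throw away frames older than the window
                    _frames.Dequeue();
            }
        }
    }

    // Call this from the test code when an error is detected.
    public void SaveTo(string folder)
    {
        Directory.CreateDirectory(folder);
        byte[][] snapshot;
        lock (_lock) { snapshot = _frames.ToArray(); }
        for (int i = 0; i < snapshot.Length; i++)
            File.WriteAllBytes(Path.Combine(folder, string.Format("frame{0:D5}.jpg", i)), snapshot[i]);
    }

    public void Dispose() { _timer.Dispose(); }
}
For example, new RollingScreenBuffer(5, 180) keeps roughly the last 3 minutes at 5 fps, and the error handler calls SaveTo(...) to dump the window to disk.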
I use Microsoft Expression Encoder 4 with Service Pack 2 (SP2) for recording my automated tests. Insert start and stop commands at the beginning and end of every major function of your test, and delete the previous file in the next major function. This way only the last video is stored on the hard drive, and you can examine it after your script terminates with an error.
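A rough sketch of that pattern, assuming the Expression Encoder 4 SDK's ScreenCaptureJob class (the helper class, folder and file naming are illustrative):
using System.IO;
using Microsoft.Expression.Encoder.ScreenCapture;

class StepRecorder
{
    private ScreenCaptureJob _job;
    private string _lastFile;

    // Call at the beginning of every major test function.
    public void StartStep(string name)
    {
        if (_lastFile != null && File.Exists(_lastFile))
            File.Delete(_lastFile);                         // drop the previous step's video
        Directory.CreateDirectory(@"C:\TestVideos");
        _lastFile = Path.Combine(@"C:\TestVideos", name + ".xesc");
        _job = new ScreenCaptureJob();
        _job.OutputScreenCaptureFileName = _lastFile;
        _job.Start();
    }

    // Call at the end of every major test function; the file stays on disk
    // until the next StartStep deletes it, so a crash leaves the last video behind.
    public void EndStep()
    {
        if (_job != null) _job.Stop();
    }
}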
Related
I have 2 AppleScripts (saved as apps) that make webhook calls in a loop to control the volume of my stereo. Each script displays a dialog asking for the number of ticks to tick the volume up or down, then loops, making the webhook call each time.
Background: I wrote a program called pi_bose that runs on my Raspberry Pi to send commands to my Bose Series 12 stereo. It sends codes on the 28 MHz band using a wire as an antenna plugged into one of the GPIO ports. Node-RED receives the webhook calls and runs that script. But there are various things that can make it fail: the antenna can be loose because the Pi has been bumped; Node-RED isn't running; the program has a small memory leak that causes problems after about 6 months of use; and sometimes there is background interference, so not every transmission works (I could probably use a longer antenna to address that, I guess). But sometimes whatever is playing on the stereo is just so soft that it's hard to detect the subtle change in volume. And sometimes the webhook calls seem to go through slowly and the volume does change, just over the course of 20-30 seconds. So...
I know I could do the loop on the Pi itself instead of repeating the webhook call, but I would like to see progress on the Mac itself.
I'd like some sort of cue that gives me some feedback to let me know each time the webhook call happens. Like, a red dot on the AppleScript app icon or something in the corner of the screen that appears for a fraction of a second each time the webhook call is made.
Alternatively, I could make the script make some sort of sound, but I would rather not disrupt audibly whatever is playing at the time.
Does anyone know how to do that? Is it even possible to display an icon without a dialog window in AppleScript?
As I said in the title, I need to record my screen from an Electron app.
My needs are:
high quality (720p or 1080p)
minimum size
record audio + screen + mic
low impact on PC hardware while recording
no long wait after the recorder is stopped
By minimum size I mean about 400 MB at 720p and 700 MB at 1080p for a 3 to 4 hour recording. We have already achieved this with Bandicam and OBS, so it is possible.
I already tried:
the plain MediaStreamRecorder API via RecordRTC.js; it produces huge files, around 1 GB per hour of 720p video.
compressing the output video with FFmpeg afterwards; this can take up to an hour for a 3-hour recording (see the example command after this list).
saving every chunk with the 'ondataavailable' event and, right after, running FFmpeg to compress it, then appending all the compressed files (also with FFmpeg); there are two problems here. First, the chunks have different PTS values, but that can be fixed by tuning the compression arguments. Second, and this is the main problem, the audio headers are only present in the first chunk, so this approach produces a video that has audio only for the first few seconds.
recording the video with FFmpeg itself; the end users need to change some things manually (Stereo Mix), the configuration is too complex, it slows the whole PC down while recording (FPS drops, even with -threads set to 1), and in some cases it takes a long time to finalize the file after recording is stopped.
searching the internet for applications that can be driven from the command line; I couldn't find much. Well-known applications like Bandicam and OBS do have command-line arguments, but there are not many to play with, and not being able to set many options leads to other problems.
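For reference, the kind of post-recording re-encode meant in the FFmpeg compression attempt looks roughly like this (file names, CRF value and preset are illustrative):
ffmpeg -i capture.webm -c:v libx264 -preset veryfast -crf 28 -c:a aac -b:a 128k capture-small.mp4
A higher CRF or a faster preset shortens the encode at the cost of quality, but re-encoding 3-4 hours of video after the fact is what makes this approach slow.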
I don't know what else I can do. Please tell me if you know a way, or a simple tool that can be used through a CLI, to achieve this.
I ended up using the portable mode of high-level third-party applications like OBS Studio and adding it to our final package. I also created a JS file to control the application from the command line.
This way I could pre-set my options (such as the CRF value, etc.), and now our average output size for a 3:30-hour recording at 1080p is about 700 MB, which is impressive.
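For context, launching a portable OBS Studio install from such a controller script looks roughly like this (the profile name is illustrative; the encoder, rate-control/CRF settings and output path live in the portable profile, because OBS only exposes a handful of command-line switches):
obs64.exe --portable --profile "TestRecording" --startrecording --minimize-to-tray
Stopping cleanly is the fiddly part; the controller can ask OBS to exit gracefully, or drive it through the obs-websocket plugin, so the recording is finalized rather than cut off.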
I have a very interesting problem.
I am running a custom movie player, based on an NDK/C++/CMake toolchain, that opens a streaming URL (MP4, H.264 video and stereo audio). In order to restart from a given position, the player opens the stream, buffers frames up to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and then follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and of the hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and therefore buffering. The behavior appears to have changed across Android versions (no firm data here). I do believe I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer, which I changed on the input side to 1000 and 10000, but it does not make much difference.
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22" and C++_static.
Can anyone share what timeouts they have used successfully, or anything else that would help avoid the starvation and the resulting buffering?
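For reference, the two dequeue calls in question, with the timeout values mentioned above (codec is the player's AMediaCodec handle; error handling omitted):
// Timeouts are in microseconds: 0 returns immediately, -1 blocks.
ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 1000);            // was 0
AMediaCodecBufferInfo info;
ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);  // was 0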
This is solved for now. The starvation was not caused by the decoding side; images were being consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended way, but it always ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
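In sketch form, the change amounts to swapping the clock id used for the presentation clock (the helper name is illustrative):
#include <time.h>
#include <stdint.h>

static int64_t nowUs()
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   // was CLOCK_MONOTONIC, which ran fast for a few minutes after reboot on this device
    return (int64_t)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}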
I am inserting timed metadata in an HLS (HTTP Live Streaming) stream using id3taggenerator and mediafilesegmenter. I have followed the instructions from Jake's Blog.
First, I create the id3tag using id3taggenerator:
id3taggenerator -o text.id3 -t "video"
Then add the tag to the id3macro file:
0 id3 /path/to/file/text.id3
And segment the video and insert the id3 tags with mediafilesegmenter:
mediafilesegmenter -M /path/to/id3macro -I -B "my_video" video.mp4
However, the timed metadata is inserted at the wrong point in time. Instead of showing up at the beginning of the video (time 0), it appears with a delay of 10 s (give or take 0.05 seconds, sometimes more, sometimes less).
I wrote a simple iOS player app that logs whenever it is notified of an ID3 tag in the video. The app is only notified of the ID3 tag after playing the video for around 10 seconds. I have also tried another id3macro file with multiple timed metadata entries (around 0 s, 5 s and 7 s), and all of them show up with the same approximate delay. I have also changed the segment duration to 5 s, but the result is the same each time.
The mediafilesegmenter I am using is Beta Version 1.1(140602).
Can anyone else confirm this problem, or point out what I am doing wrong here?
Cheers!
I can confirm that I experience the same issue, using the same version of mediafilesegmenter:
mediafilesegmenter: Beta Version 1.1(140602)
Moreover, I can see that the packet with the ID3 tag is inserted at the right moment in the stream. E.g. if I specify a 10-second delay, I can see that my ID3 is inserted at the end of the first 10-second segment.
However, it appears 10 seconds later in the iOS notifications.
I can see the following possible reasons:
mediafilesegmenter inserts the metadata packet in the right place, but its timestamp is delayed by 10 seconds for some reason. Therefore clients (e.g. the iOS player) show the tag 10 seconds later. Apple's tools are not well documented, so this is hard to verify.
Maybe the iOS player receives the metadata in time (I know the tag was included in the previous segment file) but issues the notification with a 10-second delay, for whatever reason.
I cannot dig further because I don't have any Flash/desktop HLS players that support in-stream ID3 tags. If I had one, I would check whether a desktop player displays/processes the ID3 in time, without the delay. If it did, that would mean the problem is in iOS, not mediafilesegmenter.
Another useful thing to do would be extracting the MPEG-TS packet with the ID3 tag from the segment file and checking its headers for anything strange (e.g. a wrong timestamp).
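One way to do that without writing a TS parser, assuming ffprobe is available (the segment file name is illustrative), is to dump the packets of the timed-metadata data stream and compare their pts_time against the video packets:
ffprobe -show_packets -select_streams d my_video0.ts
The 10-second offset on the ID3 packet's PTS should then be visible directly.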
Update:
I did some more research, including reverse-engineering the TS segments created with Apple's tools, and it seems that:
mediafilesegmenter starts the PTS (presentation timestamps) from 10 seconds, while, for example, ffmpeg starts from 0.
mediafilesegmenter adds the ID3 frame at the correct place in the TS file, but with a wrong PTS that is 10 seconds ahead of what was specified in the meta file.
While the first point doesn't seem to affect playback (as far as I understand, it matters more that the PTS increases continuously than where it starts), the second is definitely an issue and the reason you/we are experiencing this problem.
So the iOS player receives the ID3 frame in time, but since its PTS is 10 seconds ahead, it waits 10 seconds before issuing the notification. As far as I can tell, some other players simply ignore this ID3 frame because it is in the wrong place.
As a workaround, you can shift all the ID3 entries by 10 seconds in your meta file, but obviously you won't then be able to put anything in the first 10 seconds.
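For example, a tag that should fire 15 seconds into the video would be written 10 seconds early in the macro file (the path is a placeholder), and tags meant for the first 10 seconds simply cannot be expressed:
5 id3 /path/to/file/text.id3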
I have a setup where I open a connection to FreeSWITCH through the ESL and start exchanging commands.
In one specific scenario I want FreeSWITCH to call me and record a message. So I call a phone number with sofia and park the call:
originate {set some private variables and origination_caller_id_number}sofia/gateway// &park()
During the call I play a few messages
uuid_broadcast playback::
And I listen for events, waiting specifically for DTMF tones so I can take action: play another message or start recording.
To stop a playback and start recording:
uuid_break
uuid_record start
I also play the recorded file back to the user using the same playback command.
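For context, the full forms of those commands look roughly like this (the <uuid> and file paths are placeholders; the real values come from the originated call):
uuid_broadcast <uuid> playback::/tmp/prompt.wav aleg
uuid_break <uuid> all
uuid_record <uuid> start /tmp/message.wav
uuid_record <uuid> stop /tmp/message.wav
uuid_broadcast <uuid> playback::/tmp/message.wav aleg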
Now the issue: the first time a message is recorded it works fine and I can listen to it. After I record a new message on the same call, nothing is recorded in the file. I can download the file and listen to it directly, and there is still no sound. I can see that the file is created and its size is consistent with the length of the recording, but even when I look at it in Audacity there is no audio in it.
What could be causing this, and does anyone have an idea how to fix it?
Thanks for the help!
This looks like a bug, probably worth submitting to Jira. Do you specify a different file name for the new recording?
I have reproduced this issue as well. I believe this is a bug in FreeSWITCH, and I am not aware of any workaround.
If you want to hear audio in the second recording, you must make it LONGER than the first recording. I believe the reason is that the audio buffer from the first recording is still associated with the recording session and is full of silence. That stale buffer gets written at the beginning of the second recording, so if you want to hear anything from the second recording, make it longer than the first.