New to Linux and having problems.
I am trying to set up a system that will allow me to start multiple ffmpegs to convert live TV so I can archive certain programs. The source is a few TV cards, which means I can encode multiple streams at the same time. The PC is an 8-core i7.
I have tried to write a program that uses threads to start multiple ffmpeg processes and capture all of ffmpeg's messages so I can watch the elapsed time; when this hits a predetermined limit the program stops that ffmpeg and waits for the next scheduled recording. I'm stuck on capturing the ffmpeg output.
See popen function.
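A minimal sketch of that approach in Python, using subprocess.Popen (the Python counterpart of popen()). It relies on ffmpeg's -progress pipe:1 option, which emits newline-terminated key=value progress lines (out_time=..., progress=...) on stdout and is easier to parse from a thread than the carriage-return stats on stderr. The source, output name and codec options are placeholders.

    import re
    import subprocess

    def record(source, outfile, max_seconds):
        """Copy one live stream with ffmpeg and stop it after max_seconds."""
        # -progress pipe:1 sends machine-readable progress lines to stdout;
        # -nostats silences the usual \r-terminated status line on stderr.
        proc = subprocess.Popen(
            ["ffmpeg", "-y", "-nostats", "-i", source, "-c", "copy",
             "-progress", "pipe:1", outfile],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            m = re.match(r"out_time=(\d+):(\d+):(\d+)", line)
            if m:
                h, mnt, s = (int(x) for x in m.groups())
                if h * 3600 + mnt * 60 + s >= max_seconds:
                    proc.stdin.write("q\n")   # ask ffmpeg to finish cleanly
                    proc.stdin.flush()
                    break
        proc.wait()

Each recording thread can then call record() with its own card's stream and duration.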
Related
I am trying to play a video using omxplayer (but I could use a different player to solve the problem) on both HDMI outputs of a Raspberry Pi 4, without much success. So far I have tried starting 2 processes or threads, but the output is not synchronized. The test code is quite simple: it starts 2 processes (or 2 threads) calling a Python wrapper over omxplayer, both load the video and put it on pause, then I send the play command to both processes, but there is a delay between the reception of the command and the start of the video on the second process/thread. Any idea or help is very welcome.
OK, I found the problem and the solution. My simple Python wrapper for omxplayer pauses the player right after creating the process by sending the toggle_pause command, but if I pass the --no-keys argument to omxplayer it ignores the toggle_pause command entirely, so the two processes end up out of sync. The solution is very simple: do not use the --no-keys argument...
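For reference, a minimal sketch of such a wrapper, assuming the pause is sent through omxplayer's keyboard interface on stdin (which is exactly what --no-keys disables). The file name and the --display numbers are placeholders; check omxplayer --help for the ids of the Pi 4's two HDMI ports on your setup.

    import subprocess

    class Player:
        """Minimal omxplayer wrapper: control it via its keyboard interface on stdin."""
        def __init__(self, video, display):
            # No --no-keys here, otherwise omxplayer ignores anything sent on stdin
            self.proc = subprocess.Popen(
                ["omxplayer", "--display", str(display), video],
                stdin=subprocess.PIPE)

        def toggle_pause(self):
            self.proc.stdin.write(b"p")   # 'p' toggles pause/play
            self.proc.stdin.flush()

    # Load the same video on both outputs and pause them immediately
    a = Player("video.mp4", 2)   # 2 and 7 are placeholder display ids for HDMI0/HDMI1
    b = Player("video.mp4", 7)
    a.toggle_pause()
    b.toggle_pause()
    # ... later, resume both as close together as possible ...
    a.toggle_pause()
    b.toggle_pause()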
As I said in the title, I need to record my screen from an Electron app.
My needs are:
high quality (720p or 1080p)
minimum size
record audio + screen + mic
low impact on PC hardware while recording
no long wait after the recorder stops
By minimum size I mean about 400 MB at 720p and 700 MB at 1080p for a 3 to 4 hour recording. We have already achieved this with Bandicam and OBS, so it's possible.
I already tried:
the simple MediaStreamRecorder API using RecordRTC.js; it produces huge file sizes, like 1 GB per hour for 720p video
compressing the output video with FFmpeg afterwards; it can take up to an hour for a 3 hour recording
saving every chunk with the 'ondataavailable' event, compressing each chunk with FFmpeg right away, and then appending all the compressed files (also with FFmpeg); there are two problems: 1) the chunks have different PTS values, which can be fixed by tuning the compression command arguments, and 2) the main problem: the audio headers are only present in the first chunk, so this approach produces a video that only has audio for the first few seconds
recording the video with FFmpeg itself (a sketch of the kind of command involved follows this question); end users need to change some things manually (enabling Stereo Mix), the configuration is too complex, it slows down the whole PC while recording (fps drops, even if I set -threads to 1), and in some cases it takes a long time to wrap the file up after recording is finished
searching the internet for applications that can be used from the command line; I couldn't find much, and the well-known applications like Bandicam and OBS have command-line args, but there aren't many to play with and I can't set many options, which leads to other problems
I don't know what else I can do. Please tell me if you know a way or a simple tool that can be used through the CLI to achieve this, and guide me through it.
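For context, a rough sketch (Python driving the CLI) of the kind of FFmpeg invocation meant in the "recording with FFmpeg itself" item above, assuming Windows with gdigrab for the screen and dshow for the audio devices. The device names are placeholders and have to be discovered per machine with ffmpeg -list_devices true -f dshow -i dummy, which is exactly the manual Stereo Mix setup complained about.

    import subprocess

    cmd = [
        "ffmpeg", "-y",
        "-f", "gdigrab", "-framerate", "30", "-i", "desktop",        # screen
        "-f", "dshow", "-i", "audio=Stereo Mix (Realtek Audio)",     # system audio (placeholder name)
        "-f", "dshow", "-i", "audio=Microphone (USB Audio Device)",  # mic (placeholder name)
        "-filter_complex", "[1:a][2:a]amix=inputs=2[a]",             # mix system audio + mic
        "-map", "0:v", "-map", "[a]",
        "-c:v", "libx264", "-preset", "veryfast", "-crf", "28",
        "-c:a", "aac",
        "output.mp4",
    ]
    recorder = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    # ... later, stop the recording cleanly by sending 'q' on stdin:
    recorder.communicate(input=b"q")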
I ended up using the portable mode of high-level third-party applications like obs-studio and adding them to our final package. I also created a JS file to control the application through the CLI.
This way I could pre-set my options (such as the CRF value), and now our average output size for a 3:30-hour recording at 1080p is about 700 MB, which is impressive.
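A minimal Python sketch of the same idea (the answer above used a JS controller, but the launch parameters are the same). The install path and profile name are placeholders; --portable, --profile, --minimize-to-tray and --startrecording are OBS Studio launch parameters, but verify them against the version you bundle.

    import os
    import subprocess
    import time

    # Placeholder path to the portable OBS copy bundled with the final package
    OBS_DIR = r"C:\myapp\obs-studio\bin\64bit"

    # Launch OBS in portable mode with a pre-configured profile (encoder,
    # rate control/CRF and audio devices are stored in the profile) and
    # start recording immediately.
    obs = subprocess.Popen(
        [os.path.join(OBS_DIR, "obs64.exe"),
         "--portable", "--minimize-to-tray",
         "--profile", "MyRecordingProfile", "--startrecording"],
        cwd=OBS_DIR)   # OBS expects to be started from its bin directory

    time.sleep(3.5 * 3600)   # stand-in for the real start/stop logic

    # terminate() is a hard stop; recording to MKV (remuxed afterwards) or
    # sending a stop-recording request via the obs-websocket plugin is the
    # cleaner way to end the recording.
    obs.terminate()
    obs.wait()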
I have a Python 3 script running on a Raspberry Pi (Buster) which writes some instrument data to my Nextion Display over the serial/UART interface. For now I have set up my code to sleep for 5 minutes after the current data are displayed. This is working.
The Nextion Display is touch sensitive, so if I touch it, it sends a serial data string which my script can read and which tells me where on the screen it was touched.
Now, I would like to modify my code so that it reacts to the touch screen even during the sleep period. I could put the program into a tight loop instead of using time.sleep(300), checking the elapsed time and reading the serial port on each pass, but that sounds like it would overwork the Pi and waste CPU cycles. Is there a better way to pause certain sections of code while allowing others to continue?
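One common answer is to let the serial port do the waiting: select() (or a blocking read with a timeout) sleeps in the kernel until either touch data arrives or the interval expires, so nothing spins. A sketch assuming pyserial, with update_display() and handle_touch() standing in for the existing code; the port name and baud rate are placeholders.

    import select
    import time
    import serial   # pyserial

    ser = serial.Serial("/dev/serial0", 9600, timeout=0)   # port/baud are placeholders
    UPDATE_INTERVAL = 300    # seconds between instrument updates

    def update_display():
        pass                              # stand-in for the code that writes the data

    def handle_touch(data):
        print("touched:", data)           # stand-in for the real touch handler

    while True:
        update_display()
        deadline = time.monotonic() + UPDATE_INTERVAL
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            # Block until the Nextion sends touch data or the interval runs out;
            # the process sleeps in the kernel instead of spinning.
            readable, _, _ = select.select([ser], [], [], remaining)
            if readable:
                handle_touch(ser.read(ser.in_waiting))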
I'm creating a computer vision project which requires me to capture images on a Raspberry Pi and send them over the network to a server for processing. The software that does the processing only accepts pictures, not videos, but for a good user experience the faster the photos arrive, the better the response time of the system. Currently I'm struggling to capture multiple images quickly: I've tried software such as fswebcam, motion and pygame.Camera, and all have a delay of roughly 1 second, resulting in <=1 fps. I would like to increase this to around 10 fps. In my current setup I run a bash script which takes a picture from a USB webcam and saves it to the Pi's disk, and a separate piece of C code transfers the images over UDP sockets. Is there a way to capture frames faster and save them to disk?
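One way to get there is to keep the camera device open for the whole session instead of launching a capture tool per picture; the roughly one-second delay is mostly device re-initialisation rather than the capture itself. A sketch using OpenCV's VideoCapture (an assumption, not part of the current setup), writing JPEGs where the existing C sender can pick them up; the device index and output directory are placeholders.

    import os
    import time
    import cv2   # OpenCV

    OUT_DIR = "/tmp/frames"          # placeholder; the UDP sender reads from here
    os.makedirs(OUT_DIR, exist_ok=True)

    # Open the webcam once and keep it open for the whole run.
    cap = cv2.VideoCapture(0)        # /dev/video0
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    frame_no = 0
    while True:
        ok, frame = cap.read()       # grab the next frame from the already-open device
        if not ok:
            break
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{frame_no:06d}.jpg"), frame)
        frame_no += 1
        time.sleep(0.1)              # aim for ~10 fps; remove to run at camera speed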
I have a very complicated audio setup for a project. Here's what we have:
3 applications playing sound
2 applications recording sound
2 sound cards
I don't really have the code to any of these applications. All I want to do is monitor and control the audio streams. Here are a few examples of operations I'd like to do while the applications are running:
Mute one of the incoming audio streams.
Have one of the incoming audio streams do a "solo" (be the only stream that can "talk").
Get a graph (about 30 seconds worth) of the audio that each stream produced.
Send one of the audio streams to soundcard #1, but all three audio streams to soundcard #2.
I would likely switch audio streams every 2 minutes or so with one of the operations listed above. A GUI would be preferred. I started looking at the sound systems in Linux and it gets extremely complex and I feel like there have been many new advances in the past few years. I see jack, pulseaudio, artsd, and several other packages. They all have some promise but where should I start? Is there something someone already built that can help?
PulseAudio should let you do all of that. You'll need to configure a custom pipeline to split the app's audio for task 4, and I'm not exactly certain how you'd accomplish task 3, but I do know it's capable of all sorts of audio stream handling via its volume control (pavucontrol).
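For the scripted side of this, pactl (PulseAudio's command-line client) covers muting, re-routing and combining sinks. A sketch in Python that just shells out to pactl; the sink-input index and sink names are placeholders that have to be read from the listing commands at run time.

    import subprocess

    def pactl(*args):
        """Thin wrapper around the pactl command-line tool."""
        return subprocess.run(["pactl", *args], check=True,
                              capture_output=True, text=True).stdout

    # Each playing application shows up as a "sink input" with a numeric index
    print(pactl("list", "short", "sink-inputs"))

    # Mute one stream (index 42 is a placeholder taken from the listing above)
    pactl("set-sink-input-mute", "42", "1")

    # Route a stream to a particular sound card (sink names come from
    # `pactl list short sinks`)
    pactl("move-sink-input", "42", "alsa_output.pci-0000_00_1b.0.analog-stereo")

    # For sending audio to both cards at once, load a combined sink and move
    # the streams onto it (slave names are placeholders)
    pactl("load-module", "module-combine-sink",
          "slaves=alsa_output.card1,alsa_output.card2")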
I use Jack, which is quite simple to install and use, even if it requires more effort to configure with Flash and Firefox...
You can try the latest Ubuntu Studio distribution and see if it solves your problem (for the GUI, look at "patchage").