Multiple ffmpeg in parallel, same input - multithreading

I'm running multiple instances of ffmpeg in parallel to see if the CPU can handle them. Each instance does something very simple: it takes 60 seconds of an RTSP stream as input and copies it into an MP4 file.
ffmpeg -i rtsp://10.28.0.52/axis-media/media.amp -c copy -r 15 -t 60 \
"video02.mp4" < /dev/null > /dev/null 2>&1 &
I want to run 40 of these exact instances in parallel, but some of them just do not run. For example, if I run that command 10 times, at least 2 or 3 do nothing or are dropped instantly. If I run 30 instances, four or five of them are still dropped. Does ffmpeg limit multiple operations on the same input? What is happening?
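To narrow it down, I could give each run its own output file and keep the logs instead of throwing them away (the file names here are just examples):
for i in $(seq -w 1 40); do
    ffmpeg -i rtsp://10.28.0.52/axis-media/media.amp -c copy -r 15 -t 60 \
        "video$i.mp4" < /dev/null > "ffmpeg$i.log" 2>&1 &   # one log per instance
done
wait   # wait for all background ffmpeg processes to finish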
Thanks in advance

Related

How to run several nohup jobs at the same time?

I have a tool that trims content from a file, and I need to use it on 33 files. One file takes about 2 hours to process.
I want to run it 33 times at the same time, because one instance uses only one core and my machine has 128 cores.
So I wrote script:
#!/bin/bash
FILES=/home/ab/raw/*
for f in $FILES
do
base = ${f##*/}
nohup /home/ab/trimmer -a /home/ab/trimmer/adapters.fa -o "OUT$base" $f
done
About the main line: I run trimmer, where -a is the file with patterns to delete, -o is the new output file (OUT + basename), and the last argument, $f, is the file being processed.
My intention was that the script would run a separate task for each file.
But unfortunately, after running it, only one nohup is launched. In htop it's still only one core working at 100%.
How can I fix it?
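A likely fix, as a sketch: remove the spaces around the assignment and put each nohup in the background with &, then wait for all of them:
#!/bin/bash
FILES=/home/ab/raw/*
for f in $FILES
do
    base=${f##*/}                       # no spaces around '=' in a shell assignment
    nohup /home/ab/trimmer -a /home/ab/trimmer/adapters.fa \
        -o "OUT$base" "$f" > "OUT$base.log" 2>&1 &    # & runs each job in the background
done
wait                                    # wait for all 33 jobs to finish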

How can I play an audio file (.mp3, .flac, .wav) and loop another audio file (.wav) over it, mixing it in every few seconds, using ffmpeg?

I got two different commands.
ffmpeg -i input.mp3 -i second.mp3 -filter_complex "[0:a]atrim=end=10,asetpts=N/SR/TB[begin];[0:a]atrim=start=10,asetpts=N/SR/TB[end];[begin][1:a][end]concat=n=3:v=0:a=1[a]" -map "[a]" output
This command inserts second.mp3 into input.mp3. It seems to always keep the parameters of input.mp3, and it inserts second.mp3 at exactly 10 seconds into input.mp3.
Here is the second command:
ffmpeg -i input.mp3 -i second.mp3 -filter_complex "[1:a]adelay=10000|10000[1a];[0:a][1a]amix=duration:first" output
This command is closer to my final goal. It plays input.mp3 and, at exactly 10 seconds in, it plays second.mp3 on top without stopping input.mp3's sound. (I think that's called mixing?)
My final goal is to create final.mp3.
Its duration must always equal input.mp3's duration, and it must keep the sample rate, channel count, etc. of input.mp3.
When playing final.mp3, it must play the whole input.mp3.
But every 10-15 seconds, it must play second.mp3 on top without stopping input.mp3 (mix).
You could say that I need the second command, but in a loop.
It would be great if there is one-line command for that in ffmpeg.
I am working with flac, mp3 and wav, and both of the commands were suitable for that.
For example:
input.mp3 could be 40 seconds long.
second.mp3 could be 2 seconds long.
When I play final.mp3, it will be 40 seconds long, but every 10-15 seconds (at random) it will play second.mp3 at the same time as input.mp3.
Sadly I have no experience with ffmpeg; both of the commands I got are answers to questions here on Stack Overflow. I hope somebody can help me. Thank you!
I ended up generating a long ffmpeg command using PHP:
-i input_mp3.mp3 -i second.wav -filter_complex "[1:a]adelay=2000|2000[1a];[1:a]adelay=19000|19000[2a];[1:a]adelay=34000|34000[3a];[1:a]adelay=51000|51000[4a];[1:a]adelay=62000|62000[5a];[1:a]adelay=72000|72000[6a];[1:a]adelay=85000|85000[7a];[1:a]adelay=95000|95000[8a];[1:a]adelay=106000|106000[9a];[1:a]adelay=123000|123000[10a];[1:a]adelay=139000|139000[11a];[1:a]adelay=154000|154000[12a];[1:a]adelay=170000|170000[13a];[1:a]adelay=184000|184000[14a];[1:a]adelay=197000|197000[15a];[1:a]adelay=212000|212000[16a];[1:a]adelay=224000|224000[17a];[1:a]adelay=234000|234000[18a];[1:a]adelay=248000|248000[19a];[1:a]adelay=262000|262000[20a];[1:a]adelay=272000|272000[21a];[1:a]adelay=288000|288000[22a];[0:a][1a][2a][3a][4a][5a][6a][7a][8a][9a][10a][11a][12a][13a][14a][15a][16a][17a][18a][19a][20a][21a][22a]amix=23:duration=first,dynaudnorm" output_mp3_dynaudnorm.mp3
-i input_wav.wav -i second.wav -filter_complex "[1:a]adelay=1000|1000[1a];[0:a][1a]amix=2:duration=first,dynaudnorm" output_wav_dynaudnorm.wav
-i input_flac.flac -i second.wav -filter_complex "[1:a]adelay=1000|1000[1a];[1:a]adelay=11000|11000[2a];[1:a]adelay=27000|27000[3a];[0:a][1a][2a][3a]amix=4:duration=first,dynaudnorm" output_flac_dynaudnorm.flac
That kind of syntax seems to work. I also added dynaudnorm to counteract the negative side effect of amix (see "FFMPEG amix filter volume issue with inputs of different duration").
Even though it is claimed that dynaudnorm fixes the amix problem, that is not completely true, at least not in my case, where I am using it ~30 times...
But the final command works. I will ask a new question about how to improve the results.
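A shell sketch of the same idea, generating the adelay/amix chain instead of writing it out by hand (the 40-second duration and the file names are just the example values from above):
#!/bin/bash
# Mix second.wav into input.mp3 every 10-15 seconds (at random).
DURATION=40                              # length of input.mp3 in seconds
t=10                                     # first insert at 10 s
n=0
filters=""
labels=""
while [ "$t" -lt "$DURATION" ]; do
    n=$((n + 1))
    ms=$((t * 1000))
    filters+="[1:a]adelay=${ms}|${ms}[a${n}];"
    labels+="[a${n}]"
    t=$((t + 10 + RANDOM % 6))           # next insert 10-15 s later
done
ffmpeg -i input.mp3 -i second.wav -filter_complex \
    "${filters}[0:a]${labels}amix=$((n + 1)):duration=first,dynaudnorm" final.mp3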

WGET - Simultaneous connections are SLOW

I use the following command to append the response from each URL in a list to a single output file:
wget -i /Applications/MAMP/htdocs/data/urls.txt -O - \
>> /Applications/MAMP/htdocs/data/export.txt
This works fine and when finished it says:
Total wall clock time: 1h 49m 32s
Downloaded: 9999 files, 3.5M in 0.3s (28.5 MB/s)
In order to speed this up I used:
cat /Applications/MAMP/htdocs/data/urls.txt | \
tr -d '\r' | \
xargs -P 10 $(which wget) -i - -O - \
>> /Applications/MAMP/htdocs/data/export.txt
Which opens simultaneous connections making it a little faster:
Total wall clock time: 1h 40m 10s
Downloaded: 3943 files, 8.5M in 0.3s (28.5 MB/s)
As you can see, it somehow omits more than half of the files and takes approximately the same time to finish. I cannot guess why. What I want to do here is download 10 files at once (parallel processing) using xargs and move on to the next URL as soon as one finishes. Am I missing something, or can this be done another way?
On the other hand, is there a sensible limit to the number of connections? It would really help to know how many connections my machine can handle without slowing the system down too much, or even risking some kind of system failure.
My API rate limiting is as follows:
Number of requests per minute: 100
Number of mapping jobs in a single request: 100
Total number of mapping jobs per minute: 10,000
Have you tried GNU Parallel? It will be something like this:
parallel -a /Applications/MAMP/htdocs/data/urls.txt wget -O - > result.txt
You can use this to see what it will do without actually doing anything:
parallel --dry-run ...
And either of these to see progress:
parallel --progress ...
parallel --bar ...
As your input file seems to be a bit of a mess, you can strip carriage returns like this:
tr -d '\r' < /Applications/MAMP/htdocs/data/urls.txt | parallel wget {} -O - > result.txt
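By default parallel runs one job per CPU core; to get the 10 simultaneous downloads you were aiming for, you can cap it with -j:
parallel -j 10 -a /Applications/MAMP/htdocs/data/urls.txt wget -O - > result.txt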
A few things:
I don't think you need the tr, unless there's something weird about your input file. xargs expects one item per line.
man xargs advises you to "Use the -n option with -P; otherwise chances are that only one exec will be done."
You are using wget -i -, which tells wget to read URLs from stdin. But xargs will be supplying the URLs as arguments to wget.
To debug, substitute echo for wget and check how it's batching the parameters.
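For example, something like this only prints the batched URLs instead of downloading them:
cat urls.txt | xargs --max-procs=10 --max-args=100 echo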
So this should work:
cat urls.txt | \
xargs --max-procs=10 --max-args=100 wget --output-document=-
(I've preferred the long options: --max-procs is -P, and --max-args is -n.)
See wget download with multiple simultaneous connections for alternative ways of doing the same thing, including GNU parallel and some dedicated multi-threading HTTP clients.
However, in most circumstances I would not expect parallelising to significantly increase your download rate.
In a typical use case, the bottleneck is likely to be your network link to the server. During a single-threaded download, you would expect to saturate the slowest link in that route. You may get very slight gains with two threads, because one thread can be downloading while the other is sending requests. But this will be a marginal gain.
So this approach is only likely to be worthwhile if you're fetching from multiple servers, and the slowest link in the route to some servers is not at the client end.

How to output system state into multiple text files from a Linux script?

I have multiple Linux servers on which I need to test the performance of my program, and I want to record the system state while the programs are running. In my shell script I use the following:
top -b -d 5 > System.txt
iostat -d 8 > IO.txt
But unfortunately, only System.txt is produced; there is no IO.txt file. What do I need to add to the script so that both files are written?
Use the iostat command like this:
iostat -d 8 1 > IO.txt
Here the 1 is the count: how many times you want the command to run and report the result.
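If the goal is to capture both at the same time, another sketch (assuming the real problem is that top never exits and so blocks the script) is to background both collectors and bound how long they run:
top -b -d 5 -n 12 > System.txt &    # 12 samples, 5 seconds apart, in the background
iostat -d 8 8 > IO.txt &            # 8 device reports, 8 seconds apart, in the background
wait                                # block until both collectors finish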

How to make a process run slower in linux?

I would like to keep a process running for a long time (e.g., more than half an hour). My program is gpg. If I encrypt a 500 MB file using gpg's ElGamal encryption, it takes around 1-2 minutes (compression is turned off). The only way I can see to increase the running time is to create a file of a few GB, which is not desirable. Is there any other way to make this gpg program run for a longer time?
I believe, by default, gpg takes its input from standard input and sends output to standard output.
So you could make it run forever with something like:
gpg --encrypt </dev/urandom >/dev/null
To use that for consuming CPU for an hour (for example), you could create a script like:
gpg --encrypt </dev/urandom >/dev/null &   # start encrypting an endless stream in the background
pid=$!                                     # remember its process ID
sleep 3600                                 # let it run for an hour
kill -9 ${pid}                             # then kill it
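If GNU coreutils' timeout is available, the same thing fits in one line (it sends SIGTERM rather than SIGKILL by default):
timeout 3600 gpg --encrypt </dev/urandom >/dev/null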
https://unix.stackexchange.com/questions/18869/slow-down-a-process-without-affecting-other-processes
CPULimit might do what you need without affecting other processes:
http://www.cyberciti.biz/faq/cpu-usage-limiter-for-linux/
You start the program, then run cpulimit against the program name or PID, specifying the percentage you want it limited to. Note that the percentage is of all cores, so if you have 4 cores you could use 400%.
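For example, assuming cpulimit is installed and gpg is the process to slow down:
cpulimit -e gpg -l 50   # hold gpg to roughly 50% of one core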
