Why is the 'read' command in a shell script missing initial characters? [duplicate] - linux

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 3 years ago.
I have the following shell script, and it is missing some initial characters (a couple of characters at the start of each line, as far as I have observed) from every line except the first.
This happens only when I use the ffmpeg command; otherwise it is fine. But that command does the actual work in this script.
Why is this, and what is the fix?
#!/bin/bash
while read line; do
    printf "%s\n" "$line"
    ifile=$line
    printf "%s\n" "$ifile"
    ofile=abc_$line
    printf "%s\n" "$ofile"
    ############### Problem is the following command: ##########
    ffmpeg -y -i $ifile -c:v libx264rgb -b:v 512k -bf 0 -pix_fmt rgb24 -r 25 -strict -2 $ofile
    ##########rest is fine##########
    echo $ifile
done < file_list

This is well explained in the post I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!. When reading a file line by line, if a command inside the loop also reads stdin, it can consume the rest of the input file. In your case, ffmpeg reads from stdin.
The most common symptom of this is a while read loop that runs only once even though the input contains many lines, because the remaining lines are swallowed by the offending command. The most common fix is to redirect ffmpeg's stdin from /dev/null with < /dev/null:
ffmpeg -y -i "$ifile" -c:v libx264rgb -b:v 512k -bf 0 -pix_fmt rgb24 -r 25 -strict -2 "$ofile" < /dev/null
or use a file descriptor other than standard input:
while read -r line <&3; do
    ifile="$line"
    ofile="abc_${line}"
    ffmpeg -y -i "$ifile" -c:v libx264rgb -b:v 512k -bf 0 -pix_fmt rgb24 -r 25 -strict -2 "$ofile"
done 3<file_list
Or your problem could instead be caused by the input file having DOS-style line endings carried over from a DOS/Windows environment. You can check for that by running the file command on the input file (file file_list), which would report CRLF line terminators. In that case, clean up the input file with dos2unix file_list and re-run your script.
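For example, the check and clean-up might look like this (the output shown is typical for a DOS-format file, not taken from yours):
file file_list        # e.g. "file_list: ASCII text, with CRLF line terminators" indicates the problem
dos2unix file_list    # convert to Unix line endings in place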

Related

Is there a problem with the 'read' command in Bash, or in Bash itself when using multiprocessing, or am I making a mistake? [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 2 years ago.
I have three .wav files in my folder and I want to convert them into .mp3 with ffmpeg.
I wrote this bash script, but when I execute it, only the first file is converted to mp3.
What should I do to make the script keep going through all my files?
This is the script:
#!/bin/bash
find . -name '*.wav' | while read f; do
ffmpeg -i "$f" -ab 320k -ac 2 "${f%.*}.mp3"
done
Use the -nostdin flag to disable interactive mode,
ffmpeg -nostdin -i "$f" -ab 320k -ac 2 "${f%.*}.mp3"
or have ffmpeg read its standard input from the controlling terminal instead of the loop's stdin:
ffmpeg -i "$f" -ab 320k -ac 2 "${f%.*}.mp3" </dev/tty
See the -stdin/-nostdin flags in the ffmpeg documentation.
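Applied to the script above, the whole loop would look something like this (a minimal sketch of the same script with the fix):
#!/bin/bash
find . -name '*.wav' | while read -r f; do
    # -nostdin keeps ffmpeg from consuming the rest of find's output
    ffmpeg -nostdin -i "$f" -ab 320k -ac 2 "${f%.*}.mp3"
done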
If you do need find (for looking in subdirectories or performing more advanced filtering), try this:
find ./ -name "*.wav" -exec sh -c 'ffmpeg -i "$1" -ab 320k -ac 2 "${1%.*}.mp3"' _ {} \;
Piping the output of find to the while loop has two drawbacks:
It fails in the (probably rare) situation where a matched filename contains a newline character.
ffmpeg reads from standard input (it listens for interactive keyboard commands), which interferes with the read command. This is easy to fix by redirecting its standard input from /dev/null, i.e. find ... | while read f; do ffmpeg ... < /dev/null; done.
In any case, don't store commands in variables and run them with eval. It's dangerous and a bad habit to get into. Use a shell function if you really need to factor out the actual command line, as sketched below.
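A minimal sketch of that approach (the function name convert_to_mp3 is just illustrative):
# Wrap the ffmpeg invocation in a function instead of storing it in a string for eval
convert_to_mp3() {
    ffmpeg -i "$1" -ab 320k -ac 2 "${1%.*}.mp3" < /dev/null
}

find . -name '*.wav' | while read -r f; do
    convert_to_mp3 "$f"
done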
There's no reason for find here; just use Bash wildcard globbing:
#!/bin/bash
for name in *.wav; do
ffmpeg -i "$name" -ab 320k -ac 2 "${name%.*}.mp3"
done

ffmpeg doesn't accept input in script

This is a beginner's question, but I can't figure out the answer after looking into it for several days:
I want ffmpeg to extract the audio portion of a video and save it in an .ogg container. If I run the following command in a terminal, it works as expected:
ffmpeg -i example.webm -vn -acodec copy example.ogg
For convenience, I want to do this in a script. However, if I pass a variable to ffmpeg, it apparently considers only the first word and produces the error "No such file or directory".
I noticed that my terminal escapes spaces with a \, so I included this in my script. This doesn't solve the problem, though.
Can someone please explain why ffmpeg doesn't consider the whole variable passed to it in a script, while it works correctly when given the same content in the terminal?
This is my script that passes the filename with spaces escaped by \ to ffmpeg:
#!/bin/bash
titelschr=$(echo $# | sed "s/ /\\\ /g")
titelohne=$(echo $titelschr | cut -d. -f 1)
titelogg=$(echo -e ${titelohne}.ogg)
ffmpeg -i $titelschr -vn -acodec copy $titelogg
Thank you very much in advance!
You need to quote variable expansions. Try this:
#!/usr/bin/env bash
titelschr=$1
titelogg="${titelschr%.*}.ogg"
ffmpeg -i "$titelschr" -vn -acodec copy "$titelogg"
Call it with:
bash test.sh "Some video file.mp4"
This way, you don't need to escape spaces: quoting prevents the shell from word-splitting the expanded value, whereas backslashes stored inside a variable are passed along literally instead of being interpreted as escape characters.
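You can see the difference with a quick test (the filename here is just an example):
f="Some video file.mp4"
printf '<%s>\n' $f    # unquoted: split into three separate arguments
printf '<%s>\n' "$f"  # quoted: passed as a single argument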

Using ffmpeg to split an MP3 file into multiple sound files of equal length

How do I use the command-line tool ffmpeg on Windows to split a sound file into multiple sound files, without changing any of the sound properties, so that each one has a fixed length of 30 seconds? I got this manual example from here:
ffmpeg -i long.mp3 -acodec copy -ss 00:00:00 -t 00:00:30 half1.mp3
ffmpeg -i long.mp3 -acodec copy -ss 00:00:30 -t 00:00:30 half2.mp3
But is there a way to tell it to split the input file into equal-length files, each 30 seconds long, with the last one being whatever length remains?
You can use the segment muxer.
ffmpeg -i long.mp3 -acodec copy -vn -f segment -segment_time 30 half%d.mp3
Add -segment_start_number 1 to start segment numbering from 1.
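Put together, a version that numbers the segments starting from 1 would be:
ffmpeg -i long.mp3 -acodec copy -vn -f segment -segment_time 30 -segment_start_number 1 half%d.mp3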

How to tell ffmpeg to loop through all files in directory in order

ffmpeg has a concat option for this, but all streams start behaving badly and the sound breaks up after a day of streaming.
I tried looking at loops, but I couldn't figure out how to run the ffmpeg command in a loop so that it transcodes all the files in one directory.
/lely/ffmpeg -y -re -i /home/ftp/kid1.mp4 -vcodec copy -acodec copy -dts_delta_threshold 1000 -ar 44100 -ab 32k -f flv rtmp://10.0.0.17:1935/live/kid
In the folder /home/ftp/ there are the files kid1, kid2, kid3 - all *.mp4 files.
So basically I would like a loop that changes the input to the next file every time the previous one ends.
Maybe you could use find and xargs to feed the files to ffmpeg:
find /home/ftp -name "*.mp4" | xargs -I $ /lely/ffmpeg -y -re -i $ -vcodec copy -acodec copy -dts_delta_threshold 1000 -ar 44100 -ab 32k -f flv rtmp://10.0.0.17:1935/live/kid
Here you first ask find to look for all mp4 files in /home/ftp.
Then you feed the results to xargs, telling it (with -I $) to substitute each filename for the $ token in the ffmpeg command, so ffmpeg is run once per file.
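If you prefer a plain loop, an equivalent sketch (assuming the files glob in the order you want them played) would be:
for f in /home/ftp/kid*.mp4; do
    /lely/ffmpeg -y -re -i "$f" -vcodec copy -acodec copy -dts_delta_threshold 1000 -ar 44100 -ab 32k -f flv rtmp://10.0.0.17:1935/live/kid
done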
You can concatenate the video files to a "named pipe" and use the pipe as a source for ffmpeg.
For example:
mkfifo pipeFile # create a FIFO file (named pipe)
cat $(find /home/ftp -name "*.mp4") > pipeFile & # concatenate the video files to the pipe (do not forget the "&" to run it in the background)
/lely/ffmpeg -y -re -i pipeFile -vcodec copy -acodec copy -dts_delta_threshold 1000 -ar 44100 -ab 32k -f flv rtmp://10.0.0.17:1935/live/kid # run ffmpeg with the pipe as the input
Notes:
The order of the files in the input will be whatever order find generates them in. You can add a sort command after the find to feed the files in sorted order, as in the sketch below.
I have not tested this, since I do not have ffmpeg installed. However, it should work :-)
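For example, the sorted variant of the concatenation step might look like this (same caveat: untested):
cat $(find /home/ftp -name "*.mp4" | sort) > pipeFile &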

How to script this, so output is used as input?

I would like to script this command
ffmpeg -i concat:file1.mp3\|file2.mp3 -acodec copy output.mp3
which merges file1.mp3 and file2.mp3 to become output.mp3.
The problem is that I have a lot more than 2 files that I would like to merge.
Example
ffmpeg -i concat:file1.mp3\|file2.mp3 -acodec copy output1.mp3
ffmpeg -i concat:output1.mp3\|file3.mp3 -acodec copy output2.mp3
ffmpeg -i concat:output2.mp3\|file4.mp3 -acodec copy output3.mp3
ffmpeg -i concat:output3.mp3\|file5.mp3 -acodec copy output4.mp3
output4.mp3 is the result I am looking for.
The files are not actually nicely called "file" and then a number, but ls lists them in the order they should be merged in.
Question
How can this be scripted, so I can execute it in a directory with either an even or odd number of files?
If ffmpeg supports more than two files, no filename contains |, and there are not too many files, you can do:
ffmpeg -i concat:"$(ls|tr '\n' '|')" -acodec copy out.mp3
If not:
for cfile in *.mp3; do
    if [ ! -f myout.mp3tmp1 ]; then
        cp "$cfile" myout.mp3tmp1    # the first file seeds the running output
        continue
    fi
    ffmpeg -i concat:"myout.mp3tmp1|$cfile" -acodec copy myout.mp3tmp2
    mv myout.mp3tmp2 myout.mp3tmp1
done
mv myout.mp3tmp1 <your final file name>
If you can just concatenate all the files in one go, that'd be best. But as a generic answer to your Bash question:
ffmpeg -i concat:file1.mp3\|file2.mp3 -acodec copy output1.mp3
for i in $(seq 1 10); do
    ffmpeg -i concat:output${i}.mp3\|file$((i + 2)).mp3 -acodec copy output$((i + 1)).mp3
done
Here 10 is two less than your total number of input files.
