Audio Encoding software or script

Does a program exist that will allow me to batch convert files and do the following:
a) Search for MP3s on my drive
b) transcode from 128 kbit/s to 64 kbit/s
c) switch from Stereo to Mono
d) save and overwrite the previous file
Or is there any way I could write a script to perform this task on a Windows desktop?

if you are on un*x, the tool of choice for batch-processing is find:
find /path/to/foo -name "*.mp3"
will give you all files matching "*.mp3" in /path/to/foo and all its subdirectories.
since find is a beast, you probably want to check its manpage.
you can pass a script to find that is called for each match. e.g. the following will do an "ls -l" on all files (excluding directories named e.g. "mysounds.mp3/"; that's what the -type f is for):
find /path/to/foo -type f -name "*.mp3" -exec ls -l \{\} \;
note that the curly braces and the semicolon are escaped with backslashes, in order to prevent the shell from interpreting those special characters.
if you have a script (named convertmp3.sh and which is sitting in your current working directory) that does an in-place conversion (overwriting the old file), you can do:
find /path/to/foo -type f -name "*.mp3" -exec ./convertmp3.sh \{\} \;
such a script could look like:
#!/bin/sh
INFILE="$1"
TMPFILE="${INFILE}.mp3"
ffmpeg -i "${INFILE}" -ac 1 -b:a 64k "${TMPFILE}" && mv "${TMPFILE}" "${INFILE}"
note that i'm using a temporary file here, because otherwise ffmpeg would start overwriting the source file before it had fully read it, and would stop with an error.
also do not forget the quotes around the filenames, to protect against filenames with spaces and the like.
also note that this script does not check whether the input file is already in the desired format, thus potentially re-encoding it.
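the convert-to-a-temp-file-then-rename step is the part worth internalizing, so here is a minimal self-contained sketch of it; the convert function below is only a stand-in for the ffmpeg call, so the mechanics can be tried without any audio files:

```shell
#!/bin/sh
# "convert" stands in for: ffmpeg -i "$1" -ac 1 -b:a 64k "$2"
convert() {
    tr 'a-z' 'A-Z' < "$1" > "$2"
}

convert_in_place() {
    infile=$1
    tmpfile="${infile}.tmp"
    # write to a temp file first; only replace the original if the
    # conversion succeeded, so a failure leaves the source intact
    if convert "$infile" "$tmpfile"; then
        mv "$tmpfile" "$infile"
    else
        rm -f "$tmpfile"
        return 1
    fi
}

# convert every argument in place
for f in "$@"; do
    convert_in_place "$f"
done
```

with the real ffmpeg call dropped into convert, this plugs into the find -exec line above unchanged.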

Related

Batch file convert to Linux script

Due to migrating a batch job to a Linux server, I have a problem finding the equivalent of the following commands in Linux:
The Y drive is a mapped drive to the NAS drive, which is also connected to the Ubuntu server at /NAS/CCTV. I need to search every subfolder for all .264 files.
The Z drive is on the Ubuntu server itself. Just move every .mp4 file here, no folders here. The path on Ubuntu is /Share/CCTV/.
It's just a simple script to convert the CCTV capture .264 format to .mp4, move it to the server to be processed, and delete any h264 files and any folder that's older than 1 day; the script will be scheduled to run every 3 minutes.
I have ffmpeg installed on the Ubuntu server; I'm just unable to find the "for each file in the folders" equivalent to do the same.
Also for the last forfiles command, which deletes folders older than 1 day:
FOR /r y:\ %%F in (*.h264) do c:\scripts\ffmpeg -i %%F %%F.mp4
FOR /r y:\ %%F in (*.h264) do del %%F
FOR /r y:\ %%G in (*.mp4) do move %%G Z:\
forfiles -p "Y:\" -d -1 -c "cmd /c IF #isdir == TRUE rd /S /Q #path"
I'd appreciate any form of help, or a pointer to the right guide, so I can rewrite it on the Linux server. I did try to search for "for loop", but everything showed me how to count numbers; maybe I searched wrongly.
Find all .h264 files (recursively)
find /NAS/CCTV -type f -name '*.h264'
Convert all such files to .mp4
while IFS= read -d '' -r file ; do
ffmpeg -i "$file" "$file".mp4
done < <(find /NAS/CCTV -type f -name '*.h264' -print0)
Note that this will create files called like filename.h264.mp4. This matches your batch file behavior. If you would prefer to replace the extension use ffmpeg -i "$file" "${file%.*}".mp4 instead and you will get a name like filename.mp4.
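The two naming schemes can be compared directly with shell parameter expansion (the path below is just an illustrative example):

```shell
file='/NAS/CCTV/cam1/clip0001.h264'

# appending, as the batch file does:
echo "$file".mp4        # /NAS/CCTV/cam1/clip0001.h264.mp4

# stripping the old extension first with ${file%.*}:
echo "${file%.*}".mp4   # /NAS/CCTV/cam1/clip0001.mp4
```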
Also move those mp4 files to another directory
while IFS= read -d '' -r file ; do
ffmpeg -i "$file" "$file".mp4
if [[ -f $file.mp4 ]] ; then
mv -f -- "$file".mp4 /Share/CCTV
fi
done < <(find /NAS/CCTV -type f -name '*.h264' -print0)
Delete old directories (recursively)
find /NAS/CCTV -mindepth 1 -type d -not -newermt '1 day ago' -prune -exec rm -rf {} +
Documentation.
The find command recursively lists files according to criteria you specify. Any time you need to deal with files in multiple directories or very large numbers of files, it is probably what you want to use. For safety against malicious file names, it's important to use -print0 so file names are delimited by null rather than newline, which then requires using the IFS= read -d '' construct to interpret them later.
The while read variable ; do ... done construct reads data from input and assigns each record to the named variable. This allows each matching file to be handled one at a time inside the loop. The insides of the loop should be fairly obvious.
Again find is used to select files, but in this case the files are directories. The switches -not -newer select files which are not newer (in other words, files which are older) according to their m time, the modification time, compared against t, which in this case means that the next argument is text describing a time. Here you can use any expression understood by GNU date's -d switch, so I can write in plain English and it will work as expected.
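A way to sanity-check that timestamp test before letting rm -rf loose is to fabricate an old file with GNU touch -d in a scratch directory (the paths here are throwaway):

```shell
# scratch tree with one stale and one fresh file
mkdir -p /tmp/agedemo
touch -d '3 days ago' /tmp/agedemo/old.h264
touch /tmp/agedemo/new.h264

# select only entries last modified more than one day ago
find /tmp/agedemo -type f -not -newermt '1 day ago'
# prints: /tmp/agedemo/old.h264
```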
As you embark on your shell scripting journey you should keep two things by your side:
shellcheck - Always run scripts you write through shellcheck to catch basic errors.
Bash FAQ - The bash FAQ at wooledge.org. Most of the answers to questions you have not thought of yet will be here. For example FAQ 15 is highly relevant to this question.
for f in /NAS/CCTV/*.h264; do ffmpeg -i "$f" "$f".mp4; done
rm /NAS/CCTV/*.h264
mv /NAS/CCTV/*.mp4 /Share/CCTV
find /NAS/CCTV/ -mindepth 1 -type d -ctime +1 -exec rm -rf {} \;

Best way to tar and zip files meeting specific name criteria?

I'm writing a shell script on a Linux machine to be run via a crontab which is meant to move all files older than the current day to a new folder, and then tar and zip the entire folder. Seems like a simple task but for some reason, I'm running into all kinds of roadblocks. I'm new to this and self-taught so any help or redirection would be greatly appreciated.
Specific criteria for which files to archive:
All log files are in /home/tech/logs/ and all pdfs are in /home/tech/logs/pdf
All files are over a day old as indicated by the file name (file name does not include $CURRENT_DATE)
All files must be *.log or *.pdf (i.e. don't archive files that don't include $CURRENT_DATE if they aren't log or pdf files).
Filename formatting specifics:
All the log file names are in /home/tech/logs in the format NAME 00_20180510.log, and all the pdf files are in a "pdf" subdirectory (/home/tech/logs/pdf) with the format NAME 00_20180510_00000000.pdf ("20180510" would be whenever the file was created and the 0's would be any number). I need to use the name rather than the file metadata for the creation date, and all files (pdf/log) whose names do not include the current date are "old". I also can't just move all files that don't contain $CURRENT_DATE in the name, because that would take any non-*.pdf or non-*.log files with it.
Right now the script creates a new folder with a new pdf subdir for the old files (mkdir -p /home/tech/logs/$ARCHIVE_NAME/pdf). I then want to move the old logs into $ARCHIVE_NAME, and move all old pdfs from the original pdf subdirectory into $ARCHIVE_NAME/pdf.
Current code:
find /home/tech/logs -maxdepth 1 -name ( "*[^$CURRENT_DATE].log" "*.log" ) -exec mv -t "$ARCHIVE_NAME" '{}' ';'
find /home/tech/logs/pdf -maxdepth 1 -name ( "*[^$CURRENT_DATE]*.pdf" "*.pdf" ) -exec mv -t "$ARCHIVE_NAME/pdf" '{}' ';'
This hasn't been working because it treats the numbers in $CURRENT_DATE as a list of numbers to exclude rather than a literal string.
I've considered just using tar's exclude options like this:
tar -cvzPf "$ARCHIVE_NAME.tgz" --directory /home/tech/logs --exclude="$CURRENT_DATE" --no-unquote --recursion --remove-files --files-from="/home/tech/logs/"
But a) it doesn't work, and b) it would theoretically include all files that weren't *.pdf or *.log files, which would be a problem.
Am I overcomplicating this? Is there a better way to go about this?
I would go about this using bash's extended glob features, which allow you to negate a pattern:
#!/bin/bash
shopt -s extglob
mv /home/tech/logs/!(*"$CURRENT_DATE"*).log "$ARCHIVE_NAME"
mv /home/tech/logs/pdf/!(*"$CURRENT_DATE"*).pdf "$ARCHIVE_NAME"/pdf
With extglob enabled, !(pattern) expands to everything that doesn't match the pattern (or list of pipe-separated patterns).
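extglob negations are easy to get subtly wrong, so it is worth checking the pattern against sample names with [[ ... == pattern ]] before pointing mv at real files (the names below are made up to match the question's format):

```shell
#!/bin/bash
shopt -s extglob
CURRENT_DATE=20180510

for name in 'NAME 00_20180509.log' 'NAME 00_20180510.log'; do
    # !(*"$CURRENT_DATE"*).log matches .log names NOT containing the date
    if [[ $name == !(*"$CURRENT_DATE"*).log ]]; then
        echo "would archive: $name"
    else
        echo "would keep:    $name"
    fi
done
# would archive: NAME 00_20180509.log
# would keep:    NAME 00_20180510.log
```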
Using find it should also be possible:
find /home/tech/logs -name '*.log' -not -name "*$CURRENT_DATE*" -exec mv -t "$ARCHIVE_NAME" {} +
Building on @tom-fenech's answer, optimized to avoid many mv invocations:
find /home/tech/logs -maxdepth 1 -name '*.log' -not -name "*_${CURRENT_DATE?}.log" -print0 | \
xargs -0 mv -t "${ARCHIVE_NAME?}"
An interesting feature of processing the files through pipes is the ability to filter them with extra tools (e.g. grep), which can (arguably) be more readable, i.e. ->
find /home/tech/logs -maxdepth 1 -name '*.log' | grep -vF "_${CURRENT_DATE?}" | \
xargs -d '\n' mv -t "${ARCHIVE_NAME?}"
Then do similarly for the pdf ones. BTW, you can "dry-run" the above by just replacing mv with echo mv.
--jjo

Command Linux to copy files from a certain weekday

I am trying to figure out a command to copy files that were modified on a Saturday.
find -type f -printf '%Ta\t%p\n'
This way the line starts with the weekday.
When I combine this with an egrep command using a regular expression (line starts with "za"), it shows only the lines which start with "za".
find -type f -printf '%Ta\t%p\n' | egrep "^(za)"
("za" is a Dutch abbreviation for "zaterdag", which means Saturday,
This works just fine.
Now I want to copy the files with this command:
find -type f -printf '%Ta\t%p\n' -exec cp 'egrep "^(za)" *' /home/richard/test/ \;
Unfortunately it doesn't work.
Any suggestions?
The immediate problem is that -printf and -exec are independent of each other. You want to process the result of -printf to decide whether or not to actually run the -exec part. Also, of course, passing an expression in single quotes simply passes a static string, and does not evaluate the expression in any way.
The immediate fix to the evaluation problem is to use a command substitution instead of single quotes, but the problem that the -printf function's result is not available to the command substitution still remains (and anyway, the command substitution would happen before find runs, not while it runs).
A common workaround would be to pass a shell script snippet to -exec, but that still doesn't expose the -printf function to the -exec part.
find whatever -printf whatever -exec sh -c '
case $something in za*) cp "$1" "$0"; esac' "$DEST_DIR" {} \;
so we have to figure out a different way to pass the $something here.
(The above uses a cheap trick to pass the value of $DEST_DIR into the subshell so we don't have to export it. The first argument to sh -c ... ends up in $0.)
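A different way out of that, sketched here under the assumption of GNU date (and with "za" or the C locale's "Sat" as the weekday abbreviations to match), is to skip -printf entirely and compute the weekday inside the -exec shell with date -r:

```shell
#!/bin/sh
# sketch: copy every file under the current directory whose mtime
# falls on a Saturday into $DEST_DIR (placeholder default below)
DEST_DIR=${DEST_DIR:-/tmp/saturday-files}
mkdir -p "$DEST_DIR"

find . -type f -exec sh -c '
    for f in "$@"; do
        # date -r FILE +%a prints the abbreviated weekday of the
        # file mtime ("za" in a Dutch locale, "Sat" in the C locale)
        case $(date -r "$f" +%a) in
            za|Sat) cp -- "$f" "$0" ;;
        esac
    done' "$DEST_DIR" {} +
```

The "$DEST_DIR"-as-$0 trick is the same one described above; this runs date once per file, so it is slower than a pure find expression, but the file names never pass through a pipe.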
Here is a somewhat roundabout way to accomplish this. We create a format string which can be passed to sh for evaluation. In order to avoid problems with pesky file names, we print the inode numbers of matching files, then pass those to a second instance of find for performing the actual copying.
find \( -false $(find -type f \
-printf 'case %Ta in za*) printf "%%s\\n" "-o -inum %i";; esac\n' |
sh) \) -exec cp -t "$DEST_DIR" {} +
Using the inode number means any file name can be processed correctly (including one containing newlines, single or double quotes, etc) but may increase running time significantly, because we need two runs of find. If you have a large directory tree, you will probably want to refactor this for your particular scenario (maybe run only in the current directory, and create a wrapper to run it in every directory you want to examine ... thinking out loud here; not sure it helps actually).
This uses features of GNU find which are not available e.g. in *BSD (including OSX). If you are not on Linux, maybe consider installing the GNU tools.
What you can do is use a command substitution. Something like
cp $(find -type f -printf '%Ta\t%p\n' | egrep "^(za)" | cut -f2-) "$DEST_DIR"
Assuming that the result of your find and grep is just the filenames (and full paths, at that), this will copy all the files that match your criteria to whatever you set $DEST_DIR to.
EDIT As mentioned in the comments, this won't work if your filenames contain spaces. If that's the case, you can do something like this:
find -type f -printf '%Ta\t%p\n' | egrep "^(za)" | while IFS=$'\t' read -r day file; do cp "$file" "$DEST_DIR"; done

Manipulating strings (file extensions) in bash using find [duplicate]

This question already has answers here:
How do I rename the extension for a bunch of files?
(28 answers)
Closed 6 years ago.
I'm having trouble manipulating strings in bash. I wish to re-write extensions.
I have the following 2 files in a Downloads directory
Example 001.mkv
Example 002.mkv
Using the script below I always get the same filenames returned without .mkv rewritten into .mp4.
find /Downloads -name \*.mkv -execdir echo $(file={}; echo ${file/mkv/mp4};) \;
I understand this isn't all you need to re-format a file but this script is part of a larger script that is passed to FFMPEG.
Here is the full command with FFMPEG.
find /Downloads -name \*.mkv -execdir ffmpeg -i {} -vcodec copy -acodec copy $(file={}; echo ${file/mkv/mp4};) \;
The -exec and -execdir actions are generally intended to actually execute commands, not to echo/print info about the files found (that is what -print/-printf are for).
There are several ways to do this and here's one.
You could first try using the rename command, which can use substitution for renaming files. This would require all the files to be renamed to be in the same folder, /Downloads: (syntax may vary according to the implementation of rename that ships with your distro)
ls *.mkv
a.mkv b.mkv
rename .mkv .mp4 *.mkv
ls *.mp4
a.mp4 b.mp4
Let's suppose that the mkv files are also present in subdirectories of /Downloads:
find . -type f -name "*.mkv"
./sundir/d.mkv
./sundir/c.mkv
./a.mkv
./b.mkv
find -type f -name "*.mkv" -exec rename .mkv .mp4 {} \;
find . -type f -name "*.mp4"
./sundir/c.mp4
./sundir/d.mp4
./b.mp4
./a.mp4
You can try using bash -c to execute your command
find /Downloads -name \*.mkv -execdir bash -c 'file=$1; echo "${file/.mkv/.mp4}"' bash {} \;
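The same bash -c idea extends to the actual ffmpeg command from the question; as a dry run, echo is put in front so the generated commands can be inspected first (the scratch directory below just mimics /Downloads):

```shell
# scratch directory standing in for /Downloads
mkdir -p /tmp/dl
touch '/tmp/dl/Example 001.mkv' '/tmp/dl/Example 002.mkv'

# dry run: the filename arrives as $1 instead of being spliced into
# the command string, so names with spaces survive intact
find /tmp/dl -name '*.mkv' -execdir bash -c '
    echo ffmpeg -i "$1" -vcodec copy -acodec copy "${1%.mkv}.mp4"' bash {} \;
# e.g.: ffmpeg -i ./Example 001.mkv -vcodec copy -acodec copy ./Example 001.mp4
# drop the echo to actually run ffmpeg
```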

Linux: how to replace all instances of a string with another in all files of a single type

I want to replace for example all instances of "123" with "321" contained within all .txt files in a folder (recursively).
I thought of doing this
sed -i 's/123/321/g' | find . -name \*.txt
but before possibly screwing all my files I would like to ask if it will work.
You have the sed and the find back to front. With GNU sed and the -i option, you could use:
find . -name '*.txt' -type f -exec sed -i s/123/321/g {} +
The find finds files with extension .txt and runs the sed -i command on groups of them (that's the + at the end; it's standard in POSIX 2008, but not all versions of find necessarily support it). In this example substitution, there's no danger of misinterpretation of the s/123/321/g command so I've not enclosed it in quotes. However, for simplicity and general safety, it is probably better to enclose the sed script in single quotes whenever possible.
You could also use xargs (and again using GNU extensions -print0 to find and -0 and -r to xargs):
find . -name '*.txt' -type f -print0 | xargs -0 -r sed -i 's/123/321/g'
The -r means 'do not run if there are no arguments' (so the find doesn't find anything). The -print0 and -0 work in tandem, generating file names ending with the C null byte '\0' instead of a newline, and avoiding misinterpretation of file names containing newlines, blanks and so on.
Note that before running the script on the real data, you can and should test it. Make a dummy directory (I usually call it junk), copy some sample files into the junk directory, change directory into the junk directory, and test your script on those files. Since they're copies, there's no harm done if something goes wrong. And you can simply remove everything in the directory afterwards: rm -fr junk should never cause you anguish.
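Such a rehearsal might look like this (GNU sed assumed for -i; the file names and contents are made up):

```shell
# throwaway tree
mkdir -p junk/sub
printf 'value=123\n'   > junk/a.txt
printf '123 and 123\n' > junk/sub/b.txt
printf '123\n'         > junk/c.md    # not .txt: must stay untouched

# the command under test
find junk -name '*.txt' -type f -exec sed -i 's/123/321/g' {} +

cat junk/a.txt        # value=321
cat junk/sub/b.txt    # 321 and 321
cat junk/c.md         # 123 (unchanged)

rm -fr junk
```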
