I've looked at this post and this post, among a few others. They provided some information on how to get started; unfortunately, my use case is slightly more complex.
I have pairs of video files that I run ffmpeg commands on. The command I run transfers metadata from the original files to the converted files and looks as follows:
christian$ ffmpeg -i file_1.mov -i file_1.mp4 -map 1 -map_metadata 0 -c copy file_1_fixed.mp4
This post explains what the command does, should an explanation be required. Basically, the use case is that the original file and the converted file share the same name but have different extensions. How would I go about writing a shell script that finds all such pairs in a directory and composes and runs the command above for each pair?
I assume, from a logical point of view, that I would need to loop through all the pairs, get the file names, do some sort of string manipulation (if that is even possible in shell scripting), compose the command, and run it.
Any resources you could point me towards would be deeply appreciated. Just to clarify, here is some pseudocode:
for (file in directory) {
    string name = file.getname
    string command = "ffmpeg -i " + name + ".mov -i " + name + ".mp4" +
                     " -map 1 -map_metadata 0 -c copy " + name + "_fixed.mp4"
    run(command)
}
Hope this makes sense, please let me know if I should clarify more. Thank you for reading.
As you tagged this question with bash, here is a sketch of a bash script. It should work in general, but you may need to adjust it to your actual needs:
#!/usr/bin/bash
# for debugging, remove the hash from the next line:
#set -x
# loop over all .mov files
for FILE in *.mov; do
    # strip the .mov suffix, then append the other names;
    # ${FILE%.mov} is safer than ${FILE/.mov/...}, which would
    # replace the first ".mov" anywhere in the name
    FILE_MP4="${FILE%.mov}.mp4"
    FILE_FIXED="${FILE%.mov}_fixed.mp4"
    ffmpeg -i "$FILE" -i "$FILE_MP4" -map 1 -map_metadata 0 -c copy "$FILE_FIXED"
done
Notes/Hints:
for FILE in *.mov loops over all files with the extension .mov and no other files. This is good because it will still work if the script is run multiple times in the same directory.
The for loop searches the current directory. You may use cd to change to a specific directory first. (Handling absolute or relative file paths instead of bare names is also possible.)
The quotes are chosen with care. Quoting in bash is very powerful but definitely not easy.
To check this script, you may prefix the ffmpeg command with echo. This works like a dry run: you will see what would be executed without actually running it, as in the example below.
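For example, changing the ffmpeg line in the script above to the following prints each composed command instead of running it:

echo ffmpeg -i "$FILE" -i "$FILE_MP4" -map 1 -map_metadata 0 -c copy "$FILE_FIXED"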
For copying large files, I would like to try replacing cp with cpio in my shell scripts, because I hope to be able to fine-tune performance by specifying explicit buffer sizes. My current development environment is Zsh on Cygwin on Windows 7, but my solution should also run on Linux.
My original copy command is
cp //some.share/some/file local_dir
I replaced it with
cpio -null -pd local_dir <<<//some.share/some/file
and I get the error message
//some.share/some/file: Invalid cross-device link
This surprises me. Why does cpio attempt to create a hard link here? After all, it is supposed to copy the file, not link to it!
I'm aware that I'm misusing cpio somewhat, in that cpio is supposed to copy archives, not individual files, but from my understanding of the -p command-line switch, I thought I could also use it for just copying files.
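For what it's worth, a likely explanation (assuming GNU cpio, where single-dash options are bundled short options): -null is not parsed as --null but as -n -u -l -l, and -l (--link) tells cpio to link files instead of copying them when possible, which fails across filesystems. A minimal sketch that avoids the accidental -l:

# copy-pass mode (-p) with -d to create leading directories;
# the name list on stdin is newline-separated, so --null is not needed
cpio -pd local_dir <<<'//some.share/some/file'

(--null / -0 only makes sense when the name list is NUL-terminated, e.g. produced by find -print0.)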
I'm using iwatch to monitor changes in a directory: as soon as a new video file is added to the directory, I grab that file and use ffmpeg to add an overlay sound to it. Here is the script:
iwatch -e close_write -c "/root/bin/ffmpeg -i %f -i /var/www/video/sound.mp3 -map 0 -map 1 -codec copy -shortest /var/www/new/video/${%f:15}" /var/www/video
But I have a problem. I move the newly created file to a different directory and need to save it with the same file name, but iwatch only offers the %f variable, which returns the full path. Knowing that the first part of the path will always be "/var/www/video/", I tried to use ${%f:15} to get the substring containing just the file name.
But this doesn't work - bash says "bad substitution". So the problem is this part, where I try to write the file into the new directory:
/var/www/new/video/${%f:15}
What is the correct syntax, or another way to achieve my goal?
You can use the output of the basename command:
/var/www/new/video/`basename %f`
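Applied to your iwatch invocation, that might look like the sketch below. Single quotes keep your interactive shell from expanding the backticks too early; this assumes iwatch hands the -c string to a shell after substituting %f, so basename runs on the full path:

iwatch -e close_write -c '/root/bin/ffmpeg -i %f -i /var/www/video/sound.mp3 -map 0 -map 1 -codec copy -shortest /var/www/new/video/`basename %f`' /var/www/video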
I am trying to migrate a Perforce depot from Windows to Linux and am having issues accessing files on Linux after the migration. These are the steps I followed to migrate:
On Windows, I ran the following commands:
p4d -r P4ROOT -xv
p4d -r P4ROOT -jc finalcheckpoint
Then I copied the depot onto Ubuntu and ran the following commands:
p4d -r P4ROOT -jr finalcheckpoint
p4d -r . -p localhost:1666
p4 verify -q //...
I didn't get any errors from p4 verify, but when I try to check out files I get the error "Path not found".
Am I missing a step here? If anyone has migrated from Windows to Linux, could you please share the steps you took to migrate?
Thanks,
Vijay
There is a big difference when moving from Windows to Linux: you are typically moving from a case-insensitive platform to a case-sensitive one. There is a very good and detailed knowledge base article on this on the main perforce.com web site: http://kb.perforce.com/article/75/cross-platform-perforce-server-migration
Your steps look more or less correct, though you never updated the internal line endings of the versioned files. The KB article recommends this short shell + Perl one-liner:
find . -type f -name '*,v' -print -exec perl -p -i -e 's/\r\n/\n/' {} \;
Your "Path not found" error is likely a mismatch in your client workspace definition. I would suggest creating a new one to test with and ensuring the paths you are using are correctly specified, paying special attention to any upper- or lowercase characters; see the quick check below.
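As a rough way to spot case trouble in the copied depot (a hedged sketch; such paths coexist happily on Windows but become distinct entries on Linux):

# list versioned-file paths that differ only in case
find . -type f -name '*,v' | tr '[:upper:]' '[:lower:]' | sort | uniq -d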
You will need to use the p4migrate tool:
ftp://ftp.perforce.com/perforce/tools/p4-migrate/p4migrate.html
Check the section "Migrating from Windows to Unix".
I am trying to work out the proper process for downloading and installing software from source with wget; in this example I'll use Nginx.
# Download nginx to the /tmp directory
wget http://nginx.org/download/nginx-1.3.6.tar.gz -P /tmp
# Extract nginx into the /tmp/nginx directory
mkdir -p /tmp/nginx
tar xzf /tmp/nginx-1.3.6.tar.gz -C /tmp/nginx
# Configure it to be installed in /opt
cd /tmp/nginx/nginx-1.3.6
./configure --prefix=/opt/nginx
# Make it
make
# Make install
make install
# Clean up the temp folder
rm -r /tmp/*
Is this the idealised process? Is there anything I can improve on?
First of all, you definitely seem to be reinventing the wheel: if the problem you want to solve is automated packaging / building of software on target systems, then there are myriad solutions available, in the form of various package management systems, port builders, etc.
As for your shell script, there are a couple of things you should consider fixing:
Strings like http://nginx.org/download/nginx-1.3.6.tar.gz or nginx-1.3.6.tar.gz are constants. Try to extract all constants into separate variables to make maintaining this script a little easier, for example:
NAME=nginx
VERSION=1.3.6
FILENAME=$NAME-$VERSION.tar.gz
URL=http://nginx.org/download/$FILENAME
TMP_DIR=/tmp
INSTALL_PREFIX=/opt
# note: no -r (recursive mirroring) - plain wget saves the file into $TMP_DIR
wget "$URL" -P "$TMP_DIR"
mkdir -p "$TMP_DIR/$NAME"   # tar -C needs an existing directory
tar xzf "$TMP_DIR/$FILENAME" -C "$TMP_DIR/$NAME"
You generally can't be 100% sure that wget exists on the target deployment system. If you want to maximize portability, you can try to detect popular networking utilities, such as wget, curl, fetch, or even lynx, links, w3m, etc., as in the sketch below.
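A minimal sketch of such detection, covering just wget and curl (the fetch_url name is illustrative):

# wrap whichever downloader is available in a single function
if command -v wget >/dev/null 2>&1; then
    fetch_url() { wget -O "$2" "$1"; }
elif command -v curl >/dev/null 2>&1; then
    fetch_url() { curl -L -o "$2" "$1"; }
else
    echo "error: neither wget nor curl found" >&2
    exit 1
fi
fetch_url "$URL" "$TMP_DIR/$FILENAME"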
Proper use of a temporary directory is a long, separate question, but generally you'll need to adhere to three things:
One should somehow find out the temporary directory location. Generally, it's wrong to assume that /tmp is always the temporary directory: it can be not mounted, it can be non-writable, it can be a tmpfs filesystem that is full, etc. Unfortunately, there's no portable and universal way to detect the temporary directory. The very least one should do is check the contents of $TMPDIR, to make it possible for a user to point the script to a proper temporary dir. Another possibly bright idea is a set of heuristic checks to make sure that it's possible to write to the desired location (checking at least $TMPDIR, $HOME/tmp, /tmp, /var/tmp), that there's a decent amount of space available, etc.
One should create the temporary directory in a safe manner. On Linux systems, mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX is usually enough. On BSD-based systems, much more manual work is needed, as the default mktemp implementation is not particularly race-resistant. (A portability note follows after the example below.)
One should clean up the temporary directory after use. Cleaning should be done not only on a successful exit, but also in case of failure. This can be handled with a signal trap and a special cleanup callback, for example:
# Cleanup: remove temporary files
cleanup()
{
local rc=$?
trap - EXIT
# Generally, it's best to remove only the files that we
# know that we have created ourselves. Removal using recursive
# rm is not really safe.
rm -f "$LOCAL_TMP/some-file-we-had-created"
[ -d "$LOCAL_TMP" ] && rmdir "$LOCAL_TMP"
exit $rc
}
trap cleanup HUP PIPE INT QUIT TERM EXIT
# Create a local temporary directory
LOCAL_TMP=$(mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX)
# Use $LOCAL_TMP here
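One portability note on the mktemp call above: --tmpdir is a GNU extension. A hedged fallback that also works with BSD mktemp is to build the template path yourself:

# honor $TMPDIR if set, fall back to /tmp; both GNU and BSD mktemp
# accept an explicit template path with -d
TMP_ROOT=${TMPDIR:-/tmp}
LOCAL_TMP=$(mktemp -d "$TMP_ROOT/some-unique-identifier.XXXXXXXXX") || exit 1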
If you really want to use recursive rm, then using * to glob files is a bad practice. If your directory has more than several thousand files, * will expand to too many arguments and exceed the system's argument-length limit (rm will fail with "Argument list too long"). I might even say that using any globbing without a good excuse is generally a bad practice. The rm line from the question should be rewritten at least as:
rm -f /tmp/nginx-1.3.6.tar.gz
rm -rf /tmp/nginx
Removing all subdirectories in /tmp (as in rm -r /tmp/*) is a very bad practice on a multi-user system: you'll either get permission errors (you won't be able to remove other users' files) or you'll potentially heavily disrupt other people's work by removing actively used temporary files.
Some minor polishing:
POSIX-standard tar uses normal short UNIX options nowadays, i.e. tar -xvz, not tar xvz.
Modern GNU tar (and, AFAIR, BSD tar too) doesn't really need any of the "uncompression" flags, such as -z, -j, -y, etc. It detects the archive/compression format itself, and tar -xf is sufficient to extract any of .tar / .tar.gz / .tar.bz2 tarballs.
That's the basic idea. You'll have to run the make install command as root (or the whole script, if you want). Your rm -r /tmp/* should be rm -r /tmp/nginx, because other commands might have files they're working on in the /tmp directory.
It should also be noted that the chances that building from source like this will work with no modifications for a decently sized project are fairly low. Generally you will find that you need to specify a path to a library explicitly, or that some code doesn't quite compile correctly on your distribution.
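For instance, nginx's configure script lets you point at dependency sources explicitly when the defaults don't match your distribution (the paths and versions here are purely illustrative):

# build against a local OpenSSL source tree and a non-default PCRE tree
./configure --prefix=/opt/nginx \
    --with-openssl=/usr/local/src/openssl-1.0.1 \
    --with-pcre=/usr/local/src/pcre-8.31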