"batch" files in bash - linux

I want to make a "batch" file so to say for some bash commands (convert.sh). I think it would be best to describe the situation. i have a $#!^ ton of mp4 videos that i want converted into mp3's. it would take me an unreasonable amount of time to convert them using ffmpeg -i /root/name\ of\ video.mp4 /name\ of\ video.mp3 for every single video. not to mention the fact that all the file names are long and complicated so typos are a possibility. so i want to know how to make a shell script (for bash) that will take every file with the extension .mp4 and convert it to a .mp3 with the same name one by one. as in it converts one then when it done it moves on to the next one. im using a lightweight version of linux so any 3rd part soft probably wont work so i need to use ffmpeg...
Many thanks in advance for any assistance you can provide.
PS: I can't seem to get the formatting syntax on the website to work right, so if someone could format this for me and maybe post a link to a manual on how it works, that would be much appreciated. =)
PPS: I understand that questions about using the ffmpeg command should be asked on Super User; however, since I don't really have a question about the specific command, and this relates more to writing a bash script, I figure this is the right place.

A bash for loop should do it for you in no time:
SRC_DIR=/root
DST_DIR=/somewhereelse
for FILE in "${SRC_DIR}"/*.mp4
do
    ffmpeg -i "${FILE}" "${DST_DIR}/$(basename "${FILE}" .mp4).mp3"
done
Sorry - I don't know the ffmpeg command line options, so I just copied exactly what's in your post.

1) use find (with -print0 and xargs -0 so file names containing spaces survive the pipe):
find . -name '*.mp4' -print0 | xargs -0 -n 1 ./my_recode_script.sh
2) my_recode_script.sh - see this question
so you can easily change the extension for the output file name. The rest is a trivial scripting job:
ffmpeg -i "$name" "$new_name" # in my_recode_script.sh, after changing the extension
This is enough for a one-time script. If you want something reusable, wrap it with yet another script which receives the path to the directory and the extensions to recode from and to, and calls the other parts. :)
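The extension change is pure shell; no external tool is needed. A minimal sketch with a made-up path, where `${name%.mp4}` strips the suffix before `.mp3` is appended:

```shell
# hypothetical input path, just for illustration
name='/root/name of video.mp4'
# drop the trailing .mp4 and append .mp3
new_name="${name%.mp4}.mp3"
echo "$new_name"
```

This is the line you would put in my_recode_script.sh before invoking ffmpeg.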

Related

Interactive quiz in Bash (Multiple Q's)

I'm teaching an introductory Linux course and have abandoned the paper-based multiple-choice quizzes and have created interactive quizzes in Bash. My quiz script is functional, but kind of quick-and-dirty, and now I'm in the improvement phase and looking for suggestions.
First off, I'm not looking to automate the grading, which certainly simplifies things.
Currently, I have a different script file for each quiz, and the questions are hard-coded. That's obviously terrible, so I created a .txt file holding the questions, delimited by lines with "question 01" etc. I can loop through and use sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/p", but this prints the delimiter lines. I can pipe through sed "/^q/d" or head -n-1|tail -n+2 to get rid of them, but is there a better way?
Second issue: For questions where the answer is an actual command, I'm printing a [user]$ prompt, but for short-answer, I'm using a >. In my text file, for each question, the last line is the prompt to use. Initially, I was thinking I could store the question in a variable and |tail -1 it to get the prompt, but duh, when you store it it strips newlines. I want the cursor to immediately follow the prompt, so I either need to pass it to read -p or strip the final newline from the output. (Or create some marker in the file to differentiate between the $ and > prompt.) One thought I had was to store each question in a separate file and just cat it to display it, making sure there was no newline at the end. That might be kind of a pain to maintain, but it would solve both problems. Thoughts?
Now to how I'm actually running the quiz. This is a Fedora 20 box, and I tried copying bash and setuid-ing it to me so that it would be able to read the quiz script that the students couldn't normally read, but I couldn't get that to work. After some trial and error, I ended up copying touch and setuid-ing it to me, then using that to create their answer file in a "submit" directory with an ACL so new files have o=w so they can write to their answer file (in the quiz with >> echo) but not read it back or access the directory. The only major loophole I see with this is that they can delete their file by name and start the quiz over with no record of having done so. Since I'm not doing any automatic grading, I'm not terribly concerned with the students being able to read the script file, although if I'm storing the questions separately, I suppose I could make a copy of cat and setuid it to read in files that they can't access.
Also, I realize that Bash is not the best choice for this, and learning the required simple input/output for Python or something better would not take much effort. Perhaps that's my next step.
1) You could use
sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }"
Here // repeats the last attempted pattern, which is the opening pattern in the first line of the range and the closing pattern for the rest.
...by the way, if you really want to do this with sed, you better be damn sure that i is a number, or you'll run into code injection problems.
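To see the empty-regex trick in action, here is a self-contained sketch. The questions.txt contents and delimiter format are just assumptions modeled on the question:

```shell
# toy questions file with "question NN" delimiter lines
cat > questions.txt <<'EOF'
question 1
What does pwd print?
[user]$
question 2
Name the superuser account.
>
EOF

i=1
# // reuses the range pattern, so the delimiter lines themselves are suppressed
BODY=$(sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt)
echo "$BODY"
```

Only the body of question 1 is printed; neither "question 1" nor "question 2" appears in the output.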
2) You can store multiline command output in a variable without problems. You just have to make sure you quote the variable ever after, to avoid shell expansion on it. For example,
QUESTION=$(sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt)
echo -n "$QUESTION" # <-- the double quotes are important here.
The -n option to echo tells echo to not append a newline at the end, which should take care of your prompt problem.
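A tiny sketch of the prompt behaviour: the prompt is the last line of the stored question, printed without a trailing newline so the typed answer lands right after it. The question text is invented, and the here-document stands in for the student typing (interactively you would just run read -r ANSWER):

```shell
QUESTION='What command prints the working directory?
[user]$ '
printf '%s' "$QUESTION"    # no trailing newline: the cursor stays on the prompt line
# simulated student input for this demo
IFS= read -r ANSWER <<EOF
pwd
EOF
echo "$ANSWER"
```

printf '%s' behaves like echo -n here but is more portable across shells.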
3) Yes, well, hackery breeds more hackery. If you want to lock this down, the first order of business would be to not give students a shell on the test machine. You could put your script behind inetd and have the students fill it out with telnet or something, I suppose, but...really, why bash? If it were me, I'd knock something together with a web server and one of the several gazillion php web quiz frameworks. Although I also have to wonder why it's a problem if students can see the questions and the answers they gave. It's not like all students use the same account and can see each other's answers, is it? (is it?) Don't store an answer key on the same machine and you shouldn't have a problem.

How can I create a file in Linux such that opening it actually runs a process?

I have a set of .sph files which are actually audio .wav files plus some header.
I have a program called sph2pipe which converts these .sph files to normal audio .wav files.
I want to create some kind of symbolic link to these .sph files so that when I read the links I am actually reading the converted version of them.
Something like this:
ln -s "sph2pipe a.sph |" "a.wav"
ln -s "sph2pipe b.sph |" "b.wav"
So this way, I don't have to convert all audio files to .wav files and instead I just create links to .sph files and I want them to get converted on the fly.
I hope I made myself clear. I was thinking what I am looking for is a Named pipe (https://en.wikipedia.org/wiki/Named_pipe) but this would not be useful in my case since I need to read the .wav files several times.
EDIT-1: I don't have to use named pipes. I just thought this could be the solution.
Actually, in my case, these .wav files need to be read several times.
EDIT-2: I was wondering how Samba (or gvfs-smb) works. So the files are in the network but there is also a path available for them in my system like: /run/user/1000/gvfs/smb-share:server=10.100.98.54,share=db/4a0a010a.sph. Can I do something like this? (I read .sph files from a specific path and .wav files come out :) )
EDIT-3: I came up with this so far:
keep_running.py:
#!/usr/bin/python3
import subprocess
cmd = 'mkfifo 4a0a010a.wav'
subprocess.call(cmd, shell=True)
cmd = 'sph2pipe -f wav 4a0a010a.wv1 > 4a0a010a.wav'
while True:
    subprocess.call(cmd, shell=True)
And in shell:
./keep_running.py &
play 4a0a010a.wav
play 4a0a010a.wav
play 4a0a010a.wav
I can use the audio file as many times as I want.
What do you think would be the limitations of this implementation?
Would I be limited by the number of the processes that I can spawn? Because it looks like I need to spawn a process for each file.
Don't do it, it's a bad idea.
If you insist anyway, perhaps just out of curiosity, here's a proof of concept.
mkfifo a.wav
sph2pipe a.sph >a.wav &
Now, the results are available once in a.wav but once you have consumed them, they are gone, and a new instance of the background process has to be started if you need to do it again.
Sounds to me like a simple Makefile would serve your use case better (create missing files, recreate files which need to be updated, potentially remove temporary targets when the main target has successfully been made).
No, a named pipe, or fifo(7), wants some existing process to write it (and another to read it). There is no magic that will start the writing process when some other process opens that fifo for reading.
You could provide your FUSE filesystem (whose actions would produce the sound data). I am not sure that it worth the effort in your case.
Remember that several processes can read or write the same file at once.
EDITED ANSWER
Or, if you don't have more than a couple of thousand files, you can spawn a process for each fifo that keeps resending the file to it, like this:
for f in *.sph; do
    mkfifo "${f}.wav"
    (while :; do sph2pipe "$f" > "${f}.wav"; done) &
done
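The repeat-writer pattern above can be demonstrated without sph2pipe at all; here echo stands in for the converter, and the fifo name is made up. Unlike a one-shot fifo, a second read still gets the data:

```shell
mkfifo demo.fifo
# writer: serve the same payload to every reader that opens the fifo
(while :; do echo 'fake wav data' > demo.fifo; sleep 0.1; done) &
WRITER=$!
first=$(cat demo.fifo)     # first "play"
second=$(cat demo.fifo)    # a second read works too, because the writer loops
kill "$WRITER" 2>/dev/null
rm -f demo.fifo
echo "$first"
echo "$second"
```

The sleep between writes gives each reader time to see end-of-file before the writer reopens the fifo.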
ORIGINAL ANSWER
I am not at my computer, but can you generate the WAV files automatically, use them, then delete them?
for f in *.sph; do
sph2pipe "$f" > "${f}.wav"
done
Then use them, and when you are done, delete them:
rm *.wav

Add comments next to files in Linux

I'm interested in simply adding a comment next to my files in Linux (Ubuntu). An example would be:
info user ... my_data.csv Raw data which was sent to me.
info user ... my_data_cleaned.csv Raw data with duplicates filtered.
info user ... my_data_top10.csv Cleaned data with only top 10 values selected for each ID.
So, sort of the way you can comment commits in Git. I don't particularly care about searching on these tags, filtering them, etc.; just seeing them when I list files in a directory. Bonus if the comments/tags follow the document around as I copy or move it.
Most filesystem types support extended attributes where you could store comments.
So for example to create a comment on "foo.file":
xattr -w user.comment "This is a comment" foo.file
The attributes can be copied and moved with the file; just be aware that many utilities require special options to copy extended attributes (e.g. cp --preserve=xattr, rsync -X).
Then to list files with comments use a script or program that grabs the extended attribute. Here is a simple example to use as a starting point, it just lists the files in the current directory:
#!/bin/sh
ls -1 | while IFS= read -r FILE; do
    comment=$(xattr -p user.comment "$FILE" 2>/dev/null)
    if [ -n "$comment" ]; then
        echo "$FILE Comment: $comment"
    else
        echo "$FILE"
    fi
done
The xattr command is really slow and poorly written (it doesn't even return error status) so I suggest something else if possible. Use setfattr and getfattr in a more complex script than what I have provided. Or maybe a custom ls command that is aware of the user.comment attribute.
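Here is a sketch of the same listing loop using getfattr and setfattr from the attr package instead of xattr. The demo directory, file names, and comment text are all invented; the fallback to an empty comment keeps the loop working on files (or systems) where the attribute is missing or unsupported:

```shell
mkdir -p xattr_demo && cd xattr_demo
touch my_data.csv notes.txt
# attach a comment; ignore failure if the tools or filesystem lack xattr support
setfattr -n user.comment -v 'Raw data which was sent to me.' my_data.csv 2>/dev/null || true

listing=$(
    for FILE in *; do
        # --only-values prints just the attribute value, with no header lines
        comment=$(getfattr -n user.comment --only-values "$FILE" 2>/dev/null) || comment=''
        if [ -n "$comment" ]; then
            printf '%s\tComment: %s\n' "$FILE" "$comment"
        else
            printf '%s\n' "$FILE"
        fi
    done
)
echo "$listing"
```

On a filesystem with user xattrs enabled, my_data.csv is listed with its comment and notes.txt without one.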
This is a moderately serious challenge. Basically, you want to add attributes to files, keep the attributes when the file is copied or moved, and then modify ls to display the values of these attributes.
So, here's how I would attack the problem.
1) Store the information in a SQLite database. You can probably get away with one table. The table should contain the complete path to the file, and your comment. I'd name the database something like ~/.dirinfo/dirinfo.db. I'd store it in a subfolder, because you may find later on that you need other information in this folder. It'd be nice to key on inodes rather than pathnames, but inodes change too frequently (many programs save a file by writing a new one and renaming it over the old). Still, you might be able to store both the inode and the pathname, and retrieve by pathname only if the retrieval by inode fails, in which case you'd then update the inode information.
2) write a bash script to create/read/update/delete the comment for a given file.
3) Write another bash function or script that works with ls. I wouldn't call it "ls" though, because you don't want to mess with all the command line options that are available to ls. You're going to be calling ls always as ls -1 in your script, possibly with some sort options, such as -t and/or -r. Anyway, your script will call ls -1 and loop through the output, displaying the file name, and the comment, which you'll look up using the script from 2). You may also want to add file size, but that's up to you.
4) write functions to replace mv and cp (and ln??). These would be wrapper functions that update the information in your table and then call the regular Unix versions of these commands, passing along any arguments received by the functions (i.e. "$@"). If you're really paranoid, you'd also do it for things like scp, which can be used (inefficiently) to copy files locally. Still, it's unlikely you'll catch all the possibilities. What if someone else does a mv on your file, someone who doesn't have your function? What if some script moves the file by calling /bin/mv? You can't easily get around these kinds of issues.
Or if you really wanted to get adventurous, you'd write some C/C++ code to do this. It'd be faster, and honestly not all that much more challenging, provided you understand fork() and exec(). SQLite does have a C API (it's a C library at heart), so you'd have to tangle with that, too, but since you only have one database and one table, that shouldn't be too challenging.
You could do it in perl, too, but I'm not sure that it would be that much easier in perl, than in bash. Your actual code isn't that complex, and you're not likely to be doing any crazy regex stuff or string manipulations. There are just lots of small pieces to fit together.
Doing all of this is much more work than should be expected for a person answering a question here, but I've given you the overall design. Implementing it should be relatively easy if you follow the design above and can live with the constraints.

QT-FastStart Windows how to run it?

So I have a lot of mp4 files on my computer, and I read that QT-FastStart is for moving the metadata from the end of a file to the beginning. But how do I use or run it?
Every time I drag and drop a file into qt-faststart, nothing seems to happen?
I downloaded the 64bit version from here:
https://web.archive.org/web/20140206214912/http://ffmpeg.zeranoe.com/blog/?p=59
Do I need a batch file or something or a specific command line parameter to make it run?
Note, QT-FastStart is described here https://manpages.debian.org/stretch/ffmpeg/qt-faststart.1.en.html
qt-faststart is a utility that rearranges a Quicktime file such that
the moov atom is in front of the data, thus facilitating network
streaming.
It can be used (perhaps among other purposes) for making a sample file when demonstrating an issue: take a large file, fix it with QT-FastStart, then use dd to cut a sample, and the sample should play. If you ran dd without doing that first, the result wouldn't (or might not) play.
See Neil's answer: qt-faststart infile.mp4 outfile.mp4
However, QT-FastStart has now been integrated into ffmpeg.
ffmpeg -i original.3gp -codec copy -movflags +faststart fixed.3gp
Simple: in a CMD prompt, run qt-faststart infile.mp4 outfile.mp4

Linux untar command shows file names as question marks

A while ago I had compressed an application using Linux "tar -cf" command. At that time some of the file names were in a different language.
Now when I uncompress using "tar -xf" it shows the file names in the other language as question marks.
Is there a way that when I uncompress it keeps the original file names as they were?
Your help is highly appreciated.
Good question! As with any Unix command, you would expect tar to be able to pipe its output to another program, ideally including the filename data. A quick googling reveals that this is the case: as described in this blog post, GNU tar supports the --to-command option to write the extracted output to a pipe, instead of operating directly on the directory.
http://osmanov-dev-notes.blogspot.com.br/2010/07/how-to-handle-filename-encodings-in.html
So it's a matter of writing a script to convert the filenames to UTF-8, as is done in the cited post. Another option, also described there, is to simply extract everything and then run a script that converts the name of every file in the directory. There's a trivial PHP script at the link that does this.
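If you go the extract-then-rename route in shell rather than PHP, iconv can do the conversion. A sketch that assumes the archive's names were Latin-1 (adjust -f to whatever encoding the archive actually used); the demo directory and file name are invented, with the Latin-1 name created up front to simulate what tar extracted:

```shell
mkdir -p recode_demo && cd recode_demo
# simulate an extracted file whose name is Latin-1 encoded (\351 = e-acute)
touch "$(printf 'caf\351.txt')"

# rename every file, converting its name from Latin-1 to UTF-8
for f in *; do
    utf8=$(printf '%s' "$f" | iconv -f LATIN1 -t UTF-8) || continue
    [ "$f" = "$utf8" ] || mv "$f" "$utf8"
done
ls
```

Note that this blindly assumes every name is Latin-1; names that are already valid UTF-8 would be garbled, so in practice you would only run it on the freshly extracted directory.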
Finally, you can always write your own custom tar with the help of a scripting language, and that's easy. Python, for example, has the tarfile module built into the standard library:
http://docs.python.org/2/library/tarfile.html#examples
You could use TarFile.extractfile(), shutil.copyfileobj() and str.decode() in a loop to manually extract the files while changing the file name encoding.
References:
http://www.gnu.org/software/tar/manual/tar.html#SEC84
http://docs.python.org/2/library/tarfile.html
http://www.manpagez.com/man/1/iconv/
