I am very much a beginner at this and have searched for answers to my question, but have not found any that I understand how to implement. Any help would be greatly appreciated.
I have a script:
FILE$=`ls ~/Desktop/File_Converted/`
mkdir /tmp/$FILE
mv ~/Desktop/File_Converted/* /tmp/$FILE/
So I can use AppleScript to say: when a file is dropped into this desktop folder, create a temp directory, move the file there, and then do other stuff. I then delete the temp directory. This is fine as far as it goes, but the problem is that if another file is dropped into the File_Converted directory before I am done doing stuff to the file I am currently working with, it will change the value of the $FILE variable before the script has finished operating on the current file.
What I'd like to do is use a variable set-up where the variable is, say, $FILE1. I check to see if $FILE1 is defined and, if not, use it. If it is defined, I try $FILE2, and so on. In the end, when I am done, I want to reclaim the variable, so $FILE1 gets set back to null and the next file dropped into the File_Converted folder can use it again.
Any help would be greatly appreciated. I'm new to this so I don't know where to begin.
Thanks!
Dan
Your question is a little difficult to parse, but I think you're not really understanding shell globs or looping constructs. Globs are expanded based on what's in the directory now, not on what might be there earlier or later.
DIR=$(mktemp -d)
mv ~/Desktop/File_Converted/* "$DIR"
cd "$DIR"
for file in *; do
    : # whatever you want to do to "$file"
done
You don't need a LIFO -- multiple copies of the script run for different events won't conflict over their variable names. What they will conflict over is shared temporary directories, and you should use mktemp -d to create a temporary directory with a new, unique, guaranteed-nonconflicting name every time your script runs.
tempdir=$(mktemp -t -d mytemp.XXXXXX)
mv ~/Desktop/File_Converted/* "$tempdir"
cd "$tempdir"
for f in *; do
    : # ...whatever...
done
What you describe is a classic race condition, in which it is not clear that one operation will finish before a conflicting operation starts. These are not easy to handle, but you will learn so much about scripting and programming by handling them that it is well worth the effort to do so, even just for learning's sake.
I would recommend that you start by reviewing the lockfile or flock manpage. Try some experiments. It looks as though you probably have the right aptitude for this, for you are asking exactly the right questions.
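For instance, here is a minimal locking sketch using flock (from util-linux); the lock-file path and the processing step are assumptions, so adapt both to your setup:

(
    flock -x 200                       # block until we hold the exclusive lock
    dir=$(mktemp -d)                   # unique work directory, per the other answers
    mv ~/Desktop/File_Converted/* "$dir"
    # ...process the files in "$dir", then clean up...
    rm -rf "$dir"
) 200>/tmp/file_converted.lock         # assumed lock-file location

With the lock held, a second copy of the script started by another drop simply waits until the first one finishes.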
By the way, I suspect that you want to kill the $ in
FILE$=`ls ~/Desktop/File_Converted/`
Incidentally, @CharlesDuffy correctly observes that "using ls in scripts is indicative of something being done wrong in and of itself. See mywiki.wooledge.org/ParsingLs and mywiki.wooledge.org/BashPitfalls." One suspects that the suggested lockfile exercise will clear up both points, though it will probably take you several hours to work through it.
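For example, a glob-based loop (a sketch; the directory path is taken from the question) handles filenames that ls output would mangle:

shopt -s nullglob                      # empty directory -> loop body never runs
for f in ~/Desktop/File_Converted/*; do
    printf 'Found: %s\n' "$f"          # "$f" is a full, unmangled pathname
done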
Related
I am writing a bash script that looks at each file in a directory and does some sort of action to it. It's supposed to look something like this (maybe?).
for file in "$dir"/*; do
    something   # process "$file"
done
Cool, right? The problem is, this directory is being updated frequently with new files. At some point I will technically be done with all the files currently in the dir (and therefore exit the for loop), but not actually done, because the directory is still being fed extra files. And there is no guarantee that I will ever be done feeding the directory (well... take that with a grain of salt).
I do NOT want to process the same file more than once.
I was thinking of making a while loop that runs forever and keeps updating some file list A, while maintaining another file list B of all the files I have already processed; the first file in list A that is not in list B gets processed (rough sketch below).
Is there a better method? Does this method even work? Thanks
Edit: Mandatory "I am bash newb"
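Roughly, what I have in mind is something like this (untested sketch; processed.txt is just a name I made up):

touch processed.txt                    # list B: files already handled
while true; do
    for file in "$dir"/*; do           # list A: what's in the dir right now
        if ! grep -qxF "$file" processed.txt; then
            something                  # process "$file"
            printf '%s\n' "$file" >> processed.txt
        fi
    done
    sleep 5                            # don't spin at full speed
done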
@Barmar has a good suggestion. One way to handle this is using inotify to watch for new files. After installing the inotify-tools on your system, you can use the inotifywait command to feed new-file events into a loop.
You may start with something like:
inotifywait -m -e MOVED_TO,CLOSE_WRITE myfolder |
while read -r dir events file; do
    echo "Processing file $file"
    # ...do something with "$dir/$file"...
    mv "$dir/$file" /some/place/for/processed/files
done
This inotifywait command will generate events for (a) files that are moved into the directory and (b) files that are closed after being opened for writing. This will generally get you what you want, but there are always corner cases that depend on your particular application.
The output of inotifywait looks something like:
tmp/work/ CLOSE_WRITE,CLOSE file1
tmp/work/ MOVED_TO file2
I have a Red Hat Linux box, and I had written a script in the past to move files with specific text in the body from one location to another.
I typically only write scripts once a year, so every year I forget more and more... That being said,
Last year I wrote this script and used it, and it worked.
For some reason I cannot get it to work today. I know it's a simple issue and I shouldn't even be asking for help, but I'm just not looking at it correctly today.
Here is the script.
ls -1 /var/text.old | while read file
do
grep -q "to.move" $file && mv $file /var/text.old/TBD
done
I'm listing all the files inside the /var/text.old directory.
I'm reading each file
then I'm grep'ing for "to.move" and holding the results
then I'm moving the resulting found files to the folder /var/text.old/TBD
I am an admin and I have rights to the above files and folders.
I can see the data in each file
I can mv them manually
I have used pwd to grab the correct spelling of the directory.
If anyone can just help me to see what the heck I'm missing here that would really make my day.
Thanks in advance.
UPDATE:
The files I need to move do not have whitespace in their names.
The Error I'm getting is as follows:
grep: 9829563.msg: No such file or directory
NOTE: the file "982953.msg" is one of the files I need to move.
Also note: I'm getting this error for every file in the directory that I'm listing.
You didn't post any error, but I'm gonna take a guess and say that you have a filename with a space or special shell character.
Let's say you have 3 files, and ls -1 gives us:
hello
world
hey there
Now, read splits its input on the value of the special $IFS variable, which is set to <space><tab><newline> by default.
So instead of looping over the 3 values you expect (hello, world, and hey there), you loop over 4 values (hello, world, hey, and there).
To fix this, we can do 2 things:
Set IFS to only a newline:
IFS="
"
ls -1 /var/text.old | while read file
...
In general, I like setting IFS to a newline at the start of the script, since I consider this to be slightly "safer", but opinions on this probably vary.
But much better is to not parse the output of ls, and use for:
for file in /var/text.old/*; do
This won't fork any external processes (the pipe from ls into while starts two), and behaves less surprisingly in other ways. See the ParsingLs page mentioned earlier for some examples.
The second problem is that you're not quoting $file. You should always quote pathnames with double quotes: "$file". If $file has a space (or a special shell character, such as *), the meaning of your command changes:
file=hey\ *
mv $file /var/text.old/TBD
Becomes:
mv hey * /var/text.old/TBD
Which is obviously very different from what you intended! What you intended was:
mv "hey *" /var/text.old/TBD
I have a custom linux prompt which displays various useful nuggets of information. As I use SVN in my daily job I thought it would be nice to further customize my prompt with information as to the current workspace URL. This was mostly prompted by a recent case where I had switched to a branch then forgot I had done so. Confusion abounded so, in a bid to avoid this happening again, I thought this seemed like a good idea.
This has already been achieved by others, so I could just follow their examples, but I also like to work things out from basic principles. One thing that I observed about other people's solutions was that they tended to execute 'svn info' with no regard to context. Not a problem in and of itself, but I thought it might be nice to test for the presence of the ubiquitous '.svn' directory before invoking 'svn info'.
I arrived at this partial solution:
if [ -d './.svn' ] ; then svn info | sed -n -e '/^URL/ s/.*svn//p' ; fi;
In the presence of a '.svn' directory I invoke 'svn info' then use sed to spit out the portion of the URL in which I am interested.
The problem comes however from the fact that, since svn 1.7, '.svn' is not ubiquitous!
I had thought that I might replace the test for the directory with a call to 'find' to perform a reverse search up the directory tree ... except there doesn't appear to be such an ability.
Other than dropping the test for '.svn' entirely, can anybody suggest how I might test for the presence of said folder in the current location and all parent folders?
Many thanks.
First of all: don't use one working copy for multiple branches/trunk. There's simply no reason for it. It usually takes less than five minutes to check out a particular branch of a particular project. And, in this day and age of gigabyte- and terabyte-sized hard drives, there's just no reason to save the room. (My first hard drive was 40 megabytes, and I used to lord it over my coworkers who had mere 10 and 20 megabyte hard drives.)
What little time and disk space you save will be lost the first time you accidentally use the wrong branch because you forgot that you'd switched.
The best way to check to see if you're in a Subversion working copy is to run svn info and see what the exit value is. If it's not zero, you're not in a Subversion working directory.
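For example, a minimal check along those lines (a sketch):

if svn info >/dev/null 2>&1; then
    echo "inside a Subversion working copy"
else
    echo "not inside a working copy"
fi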
If you really want to have the repo root (or something similar) in your prompt, I suggest a sequence like this in your prompt command:
PS1="\u@\h:\w (\$(svn info --xml 2> /dev/null | sed -n '/<relative-url>/s/<.*>\(.*\)<.*>/\1/p'))\n$ "
This function will walk up the tree from your current directory looking for a ".svn" directory:
is_svn () {
local dir=$PWD
while [[ $dir != "/" ]]; do
[[ -d "$dir/.svn" ]] && return 0
dir=$(dirname "$dir")
done
return 1
}
Then, your prompt can include something like
$( is_svn && svn info | ... )
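For example, one way to wire it together (a sketch that reuses the sed expression from the partial solution above):

PS1='\u@\h:\w $(is_svn && svn info | sed -n -e "/^URL/ s/.*svn//p")\$ '

Because the $(...) sits inside single quotes, it is re-evaluated every time the prompt is drawn, and it prints nothing when is_svn fails.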
Disclaimer: I am very new to Bash scripting (and Linux in general), so forgive me for a stupid question.
A friend of mine gave me a script which makes a backup copy of certain files onto Dropbox. Here's the code in full:
#!/bin/sh
DATE=`date +%Y-%m-%d`
tarname='backup-'$DATE'.tar.gz'
cd ~/
directoriesToBack='.bashrc Desktop/School/ Desktop/Research\ Project'
tar -X ~/Desktop/My\ Programs/scripts/crons/exclude.txt -zcvf $tarname $directoriesToBack
mv $tarname ~/Dropbox
The variable directoriesToBack obviously contains the directories to be copied. The file exclude.txt lists the files that are not to be backed up.
If I try to run this script, I get an error because of Desktop/Research Project: my computer looks for the directory Desktop/Research instead. I've tried using double quotes instead of single quotes, and replacing the \ with an ordinary space, but neither attempt worked. Does anyone know how I can make a backup of a directory with spaces in its name?
Don't try to do this with strings. It will not work and it will cause pain. See I'm trying to put a command in a variable, but the complex cases always fail! for various details and discussion.
Use an array instead.
#!/bin/bash
DATE=$(date +%Y-%m-%d)
tarname=backup-$DATE.tar.gz
cd ~/
directoriesToBack=(.bashrc Desktop/School "Desktop/Research Project")
tar -X ~/Desktop/My\ Programs/scripts/crons/exclude.txt -zcvf "$tarname" "${directoriesToBack[@]}"
I also fixed the quoting of variables/etc. and used $() instead of backticks for the date command execution (as $() can be nested and generally has better semantics and behaviour).
Please run the script and show the EXACT error message. I suspect that what is going wrong is not what you think it is. I suspect that the variable directoriesToBack is not what you think it is.
cd Desktop/"Research Project" (with quotation marks)
You'll find that a lot of code in many languages uses quotes to handle spaces in names.
Much like a similar SO question, I am trying to monitor a directory on a Linux box for the addition of new files and would like to immediately process these new files when they arrive. Any ideas on the best way to implement this?
Look at inotify.
With inotify you can watch a directory for file creation.
First make sure inotify-tools is installed.
Then use them like this:
logOfChanges="/tmp/changes.log.csv" # Set your file name here.
# Lock and load
inotifywait -mrcq "$DIR" > "$logOfChanges" & # -m monitor, -r recursive, -c CSV output, -q quiet
IN_PID=$! # $! holds the PID of the background job (not $$, which is this script itself)
# Do your stuff here
...
# Kill and analyze
kill $IN_PID
while read -r entry; do
   # Split your CSV, but beware that file names may contain spaces too.
   # Just look up how to parse CSV with bash. :)
   path=...
   event=...
   ... # Other stuff like time stamps?
   # Depending on the event…
   case "$event" in
      SOME_EVENT) myHandlingCode "$path" ;;
      ...
      *) myDefaultHandlingCode "$path" ;;
   esac
done < "$logOfChanges"
Alternatively, you could use --format instead of -c on inotifywait.
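For example, something like this (a sketch; the | separator is an arbitrary choice):

inotifywait -m -q --format '%w|%e|%f' "$DIR" |
while IFS='|' read -r dir events file; do
    printf 'Event(s) %s on %s%s\n' "$events" "$dir" "$file"
done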
Just man inotifywait and man inotifywatch for more info.
You can also use incron and have it call a handling script.
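A sketch of an incrontab entry (the paths are assumptions; $@ expands to the watched directory and $# to the file name):

/home/me/incoming IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/process.sh $@/$#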
One solution I thought of is to create a "file listener" coupled with a cron job. I'm not crazy about this, but I think it could work.
fschange (Linux File System Change Notification) is a perfect solution, but it requires patching your kernel.