run a script with $(cat filename.txt) - linux

So I'm running a script called backup.sh. It creates a backup of a site. Now I have a file called sites.txt that has a list of sites that I need to back up. I don't want to run the script manually for every site, so what I'm trying to do is run it like this:
backup.sh $(cat sites.txt)
But it only backs up the first site on the list and then stops. Any suggestions on how I could make it go through the whole list?

Most likely backup.sh only looks at its first argument ($1) and ignores the rest, which is why only the first site gets backed up. To iterate over the lines of a file, use a while loop with the read command:
# IFS= preserves leading/trailing whitespace; -r keeps backslashes literal
while IFS= read -r file_name; do
    backup.sh "$file_name"
done < sites.txt

The proper fix is to refactor backup.sh so that it accepts a list of sites on its command line, as you expect. If you are not allowed to change it, you can write a small wrapper script:
#!/bin/sh
for site in "$@"; do
    backup.sh "$site"
done
Save this as maybe backup_sites, do a chmod +x, and run it with the list of sites. (I would perhaps recommend xargs -a sites.txt over $(cat sites.txt) but both should work if the contents are one token per line.)
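That is, something like this (assuming GNU xargs, whose -a option reads the argument list from a file):
xargs -a sites.txt ./backup_sites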

I think this should do, provided that sites.txt has one site per line (not tested):
xargs -L 1 backup.sh < sites.txt
If you are permitted to modify backup.sh, I would enhance it so that it accepts a list of sites, not a single one. Of course, if sites.txt is very, very large, the xargs way would still be the better one (but then without the -L switch).
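A sketch of what that rework could look like, assuming the existing per-site logic can be wrapped in a function (this is just the shape of it, not your actual script):
#!/bin/sh
backup_one() {
    site=$1
    # ... existing single-site backup logic goes here ...
    echo "backing up $site"
}

# process every site given on the command line
for site in "$@"; do
    backup_one "$site"
done
With that in place, xargs backup.sh < sites.txt batches the sites into as few invocations as possible.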

Related

Linux command select specific directory

I have only two folders under a given directory. Is there any method to choose the second directory based on the order and not on the folder name?
Example: (I want to enter under doc2)
#ls
doc1 doc2
If you really want to use ls,
cd "$(ls -d */ | sed -n '2p')"
enters the second directory listed, regardless of how many directories ls prints.
Parsing ls output is not a good idea generally, although it will work in most cases and will cause no harm if you are just using it in your interactive shell for fast navigation. You should not use this for serious programming.
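If you want to avoid parsing ls altogether, the shell's own globbing can do the same thing; for example, in bash:
dirs=(*/)          # all subdirectories, sorted lexicographically
cd "${dirs[1]}"    # bash arrays are zero-indexed, so [1] is the second one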
You can use the tail command to get the last line:
ls | tail -1

Recursive Text Substitution and File Extension Rename

I am using an application that creates a text file on a Linux server. I then have the ability to execute a shell script (BASH 3.2.57) in which I need to convert the text file from Unix line endings to DOS and also change the extension of the file from .txt to .log.
I currently have a sed-based command to do this. This command is rewritten by the application at run time to point to the specific folder and file name; in this example, where you see ABC, any three capital letters in my examples are a placeholder that can be any three letters.
pushd /rootfolder/parentfolder/ABC/
sed 's/$/\r/' prABC.txt > prABC.log
popd
The problem with this is that if a user runs the application for 2 different groups, say ABC and DEF, at nearly the same time, the script gets overwritten with the DEF values before the ABC version has had a chance to fire off and do its thing with the file. Additionally, the .txt is left in the folder regardless, and I would like that to be removed.
A friend of mine came up with the following code that seems to work, if it's determined to be our best solution, but I would think and hope we have a cleaner, more dynamic way to do this. Also, this current method requires that when my user decides to add a GHI directory and file, I have to update the code. I can program my application to do that for me, but I don't want this script to have to be rewritten every time the application wants to use it.
pushd /rootfolder/parentfolder/ABC
if [[ -f prABC.txt ]]
then
    sed 's/$/\r/' prABC.txt > prABC.log
    rm prABC.txt
fi
popd
pushd /rootfolder/parentfolder/DEF
if [[ -f prDEF.txt ]]
then
    sed 's/$/\r/' prDEF.txt > prDEF.log
    rm prDEF.txt
fi
popd
I would like to call this script at anytime from my application and it find any file named pr*.txt below the /rootfolder/parentfolder/ directory (if that has to include the parentfolder in its search that won't be a problem) and convert the line endings from LF to CRLF and change the extension of the file from .txt to .log.
I've done a ton of searching and have found near-solutions, but not exactly what I need, and I want to be sure it's as safe as possible (there are known issues with using find with a for loop). I don't know what utilities are installed on this build, so I would like to keep it as basic/supportable as possible. Thanks in advance :)
You should almost never need pushd and popd in scripts. In fact, you rarely need cd, either.
#!/bin/bash
for d in /rootfolder/parentfolder/ABC /rootfolder/parentfolder/DEF
do
    f="$d/pr${d##*/}.txt"    # e.g. /rootfolder/parentfolder/ABC/prABC.txt
    if [[ -f "$f" ]]
    then
        sed 's/$/\r/' "$f" > "${f%.txt}.log" &&
        rm "$f"
    fi
done
Recall that a && b is shorthand for
if a; then
    b
fi
In other words, if sed fails (because the source file can't be read, or the destination can't be written) we don't rm the source file. There should be an error message already so we don't add another one.
Not only is this more succinct, it is also easier to change if you decide that the old file should be renamed instead of removed, or you want to filter out all lines which contain "beef" in the sed script. Generally you should avoid repeated code; see also the DRY principle on Wikipedia.
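And if you want it fully dynamic, as the question asks, a glob over the group directories avoids hardcoding ABC and DEF entirely (an untested sketch, assuming the files always sit one level below parentfolder):
#!/bin/bash
for f in /rootfolder/parentfolder/*/pr*.txt
do
    [[ -f $f ]] || continue              # skip if the glob matched nothing
    sed 's/$/\r/' "$f" > "${f%.txt}.log" &&
    rm "$f"
done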
Something is seriously wrong somewhere if you require DOS line endings in your files on Unix.

Bash script for moving and renaming application log files on Linux

I'm relatively new to coding on linux.
I have the below script for moving my ERP log file.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
_now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa_$now.log
The code runs, but it does not rename the log file with the date when moving it.
I would also like to check whether the file exceeds 90M in size so it is moved automatically at the end of every day, via a cron job of some kind.
Help Please
After editing this is my new code.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa$now.log
I wish to add code to check whether the hansa.log file is over 90M and only then move it. If it is not, leave it as it is.
cd /u
find . -name '*hansa.log*' -size +90000k -exec mv '{}' /u/HansaLogs \;
In addition to the other comments, there are a few other things to consider. tgo's logrotate suggestion is a good one. In Linux, if you are ever stuck on the use of a utility, the man pages (while a bit cryptic at first) provide concise usage information. To see the man pages available for a given topic, use man -k name (equivalent to the apropos command), e.g.:
$ man -k logrotate
logrotate (8) - rotates, compresses, and mails system logs
logrotate.conf (5) - rotates, compresses, and mails system logs
Then if you want the logrotate page:
$ man 8 logrotate
or the conf page
$ man 5 logrotate.conf
There are several things you may want to change/consider regarding your script. First, while there is nothing wrong with a variable named now, it is easy to confuse with the date command's built-in now keyword. There is no actual conflict, but it would look strange to write now=$(date -d "now + 24 hours" "+%F %T"). I recommend a name like tstamp, short for timestamp, instead.
For maintainability, readability, etc., you may consider assigning your path components to variables that will help with readability later on (example below).
Finally, before moving, copying, deleting, etc... it is always a good idea to validate that the target file exists and to provide an error message if something is out of whack. A rewrite could be:
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
if [ -f "$logname" ]; then
    mv "$logname" "$logdir/hansa_${tstamp}.log"
else
    printf "error: file not found '%s'.\n" "$logname" >&2
    exit 1
fi
Note: the >&2 simply redirects the output of printf to stderr rather than stdout.
As for the find command, there is no need to cd and find .; the find command takes the path as its first argument. Additionally, the -size option has built-in support for megabytes (M). A rewrite here could look like:
find /u -name "*hansa.log*" -size +90M -exec mv '{}' /u/HansaLogs \;
All in all, it looks like you will pick up shell programming without any problem. Just develop good habits early, they will save you a lot of grief later.
Hi guys, thanks for the help. So far I have come up with this code. I am stuck at creating a cron job to run this periodically, say, every 22 hours.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to Check if log file exists before moving:
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
minimumsize=90000    # note: wc -c counts bytes; 90M would be 94371840
actualsize=$(wc -c <"$logname")
if [ "$actualsize" -ge "$minimumsize" ]; then
    mv "$logname" "$logdir/hansa_${tstamp}.log"
else
    echo "size is under $minimumsize bytes"
    exit 1
fi

RH Linux Bash Script help. Need to move files with specific words in the file

I have a RedHat linux box, and I had written a script in the past to move files containing specific text in the body from one location to another.
I typically only write scripts once a year so every year I forget more and more... That being said,
Last year I wrote this script and used it and it worked.
For some reason, I cannot get it to work today. I know it's a simple issue and I shouldn't even be asking for help, but I'm just not looking at it correctly today.
Here is the script.
ls -1 /var/text.old | while read file
do
    grep -q "to.move" $file && mv $file /var/text.old/TBD
done
I'm listing all the files inside the /var/text.old directory.
I'm reading each file
then I'm grep'ing for "to.move" and holding the results
then I'm moving the resulting found files to the folder /var/text.old/TBD
I am an admin and I have rights to the above files and folders.
I can see the data in each file
I can mv them manually
I have used pwd to grab the correct spelling of the directory.
If anyone can just help me to see what the heck I'm missing here that would really make my day.
Thanks in advance.
UPDATE:
The files I need to move do not have Whitespaces.
The Error I'm getting is as follows:
grep: 9829563.msg: No such file or directory
NOTE: the file "9829563.msg" is one of the files I need to move.
Also note: I'm getting this error for every file in the directory that I'm listing.
You didn't post any error, but I'm gonna take a guess and say that you have a filename with a space or special shell character.
Let's say you have 3 files, and ls -1 gives us:
hello
world
hey there
Now, the shell splits unquoted expansions on the value of the special $IFS variable, which is set to <space><tab><newline> by default.
So wherever $file is used unquoted, a name like hey there expands to 2 words (hey and there) instead of the 1 value you expect.
To fix this, we can do 2 things:
Set IFS to only a newline:
IFS="
"
ls -1 /var/text.old | while read file
...
In general, I like setting IFS to a newline at the start of the script, since I consider this to be slightly "safer", but opinions on this probably vary.
But much better is to not parse the output of ls, and use for:
for file in /var/text.old/*; do
This won't fork any external processes (the ls | while pipeline starts 2), and behaves less surprisingly in other ways. It also expands to full pathnames (/var/text.old/hello rather than just hello), which explains the errors in your update: ls prints bare filenames, so grep looked for 9829563.msg in your current working directory instead of in /var/text.old.
The second problem is that you're not quoting $file. You should always quote pathnames with double quotes: "$file". If $file has a space (or a special shell character, such as *), the meaning of your command changes:
file=hey\ *
mv $file /var/text.old/TBD
Becomes:
mv hey * /var/text.old/TBD
Which is obviously very different from what you intended! What you intended was:
mv "hey *" /var/text.old/TBD

How to directly overwrite with 'unexpand' (spaces-to-tabs conversion)?

I'm trying to use something along the lines of
unexpand -t 4 *.php
but am unsure how to write this command to do what I want.
Weirdly,
unexpand -t 4 file.php > file.php
gives me an empty file. (i.e. overwriting file.php with nothing)
I can specify multiple files okay, but don't know how to then overwrite each file.
I could use my IDE, but there are ~67,000 instances to be replaced across 200 files, and this will take a while.
I expect that the answers to my question(s) will be standard unix fare, but I'm still learning...
You can very seldom use output redirection to replace the input: the shell truncates file.php before unexpand even starts reading it, which is why you got an empty file. In-place replacement works with commands that support it internally (they do the temp-file-and-rename steps themselves). From the shell level, it's far better to work in two steps, like so:
Do the operation on foo, creating foo.tmp
Move (rename) foo.tmp to foo, overwriting the original
This will be fast. It will require a bit more disk space, but if you do both steps before continuing to the next file, you will only need as much extra space as the largest single file, so this should not be a problem.
Sketch script:
for a in *.php
do
    unexpand -t 4 "$a" > "$a-notab"
    mv "$a-notab" "$a"
done
You could do better (error-checking, and so on), but that is the basic outline.
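For instance, with && the original is only overwritten when unexpand actually succeeded:
for a in *.php
do
    unexpand -t 4 "$a" > "$a.tmp" &&
    mv "$a.tmp" "$a"
done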
Here's the command I used:
for p in $(find . -iname "*.js")
do
    unexpand -t 4 $(dirname $p)/"$(basename $p)" > $(dirname $p)/"$(basename $p)-tab"
    mv $(dirname $p)/"$(basename $p)-tab" $(dirname $p)/"$(basename $p)"
done
This version changes all files within the directory hierarchy rooted at the current working directory.
In my case, I only wanted to make this change to .js files; you can omit the iname clause from find if you wish, or use different args to cast your net differently.
My version wraps filenames in quotes, but it doesn't use quotes around 'interesting' directory names that appear in the paths of matching files.
To get it all on one line, add a semi after lines 1, 3, & 4.
This is potentially dangerous, so make a backup or use git before running the command. If you're using git, you can verify that only whitespace was changed with git diff -w.
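If some of your paths do contain spaces or other special characters, one robust alternative (a sketch, with the same caveat about making a backup first) is to let find hand each file to a small sh snippet, which quotes everything properly:
find . -iname '*.js' -exec sh -c '
    unexpand -t 4 "$1" > "$1.tmp" && mv "$1.tmp" "$1"
' _ {} \;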
