I have a script named bckp2 and a text file named variables.txt. When I run the script from the terminal as ./bckp2 it works exactly as I want, but when I schedule it with at, like at -f bckp2 19:30, it doesn't do the same thing. With the at command it seems like the script doesn't read from the file at all, and I have no idea why.
The script bckp2 reads three words from the text file variables.txt and uses them as the source and destination for the tar command. (Ignore the first word; it's only there for testing.)
let count=0
while read line; do
    for word in $line; do
        if [ $count -eq 0 ]
        then
            username=$word
        elif [ $count -eq 1 ]
        then
            source=$word
        elif [ $count -eq 2 ]
        then
            destination=$word
        fi
        let count=count+1
    done
done < variables.txt
echo $destination $source > test.txt
tar -cvf $destination.tar $source
The text file variables.txt contains:
valkon fake faketar
In the script bckp2 I save the file's content into variables just to see whether at works, and it doesn't: it writes blank text to the file, so I assume the script doesn't read from the file at all. But, as I said, when I run it as ./bckp2 it works.
at and batch run the commands of the job with /bin/sh, not /bin/bash, and your script uses bash-only constructs such as let, which sh does not understand; that is most likely why the variables come out empty. If you want the script interpreted by bash, add #!/bin/bash as the first line, make the script executable, and have the at job run the script itself rather than its contents, e.g. echo /full/path/to/bckp2 | at 19:30 (with at -f the shebang line is just a comment, because at hands the file's lines to sh).
Also, your unnecessarily complicated loop for reading variables could be simply replaced with
read username source destination < variables.txt
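Putting both suggestions together, a minimal sketch of the whole script could look like this; the relative path to variables.txt is kept from the question, so the job has to be submitted from the directory that contains it:
#!/bin/bash
# read the three whitespace-separated words from the first line of variables.txt
read username source destination < variables.txt
echo "$destination" "$source" > test.txt    # kept from the original, for testing
tar -cvf "$destination.tar" "$source"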
Related
I have written a small script that takes the user's input and then generates the md5sum values for it:
count=0
echo "Enter number of records"
read number
while [ $count -lt $number ]
do
    echo "Enter path"
    read path
    echo "file name"
    read file_name
    md5sum "$path/$file_name"   # shows the md5sum value and path+filename
    ((count++))
done
How can I pass these values (path, file name, and md5sum) to a CSV file, assuming the user chooses to enter more than one record?
The output should look like this:
/c/training,sample.txt,34234435345346549862123454651324 #placeholder values
/c/file,text.sh,4534534534534534345345435342342
Interactively prompting for the number of files to process is just obnoxious. Change the script so it accepts the files you want to process as command-line arguments.
#!/bin/sh
md5sum "$#" |
sed 's%^\([0-9a-f]*\) \(\(.*\)/\)?\([^/]*\)$%\3,\4,\1%'
There are no Bash-only constructs here, so I switched the shebang to /bin/sh; obviously, you are still free to use Bash if you like.
There is a reason md5sum prints the checksum before the path name. The reordered output will be ambiguous if you have file names which contain commas (or newlines, for that matter). Using CSV format is actually probably something you should avoid if you can; Unix tools generally work better with simpler formats like tab-delimited (which of course also breaks if you have file names with tabs in them).
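If you go the tab-delimited route instead, the same pipeline only needs a different separator in the replacement. This is a sketch that assumes GNU sed, which understands \t in the replacement text:
#!/bin/sh
md5sum "$@" |
sed 's%^\([0-9a-f]*\)  \(\(.*\)/\)\?\([^/]*\)$%\3\t\4\t\1%'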
Rather than prompting the user for both a path to a directory and the name of a file in that directory, you could prompt for a full path to the file. You can then extract what you need from that path using bash string manipulations.
#!/bin/bash
set -euo pipefail
function calc_md5() {
    local path="${1}"
    if [[ -f "${path}" ]] ; then
        echo "${path%/*}, ${path##*/}, $(md5sum "${path}" | awk '{ print $1 }')"
    else
        echo "
x - Script requires path to file.
Usage: $0 /path/to/file.txt
"
        exit 1
    fi
}

calc_md5 "$@"
Usage example:
$ ./script.sh /tmp/test/foo.txt
/tmp/test, foo.txt, b05403212c66bdc8ccc597fedf6cd5fe
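One caveat with the parameter expansions: if the argument has no directory component (just foo.txt), ${path%/*} has nothing to strip and echoes the file name again in the first column. A small guard for that case, purely as a sketch, could be:
#!/bin/bash
# Sketch: fall back to "." when the given path contains no slash.
path="$1"
if [[ "${path}" == */* ]] ; then
    dir="${path%/*}"
else
    dir="."
fi
echo "${dir}, ${path##*/}"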
I am trying to write a small bash script, using macOS TextEdit, that catches the user's input in the terminal and, based on the file type given (jpg or gif), iterates through a directory on my desktop named dir1, pulls all files of that file type, and places them in a new directory called dir2.
i.e. the user types jpg into the terminal, the script kicks into life, pulls all of the jpg files in dir1 and places them in dir2.
What is the leanest and least convoluted way of achieving this, given that I am new to shell scripting?
I am about to reach for the meds. What can I do to the code below to get it to work?
#!/bin/bash
echo “Good Morning, Please enter your file type for sorting [ENTER]:”
read $FILE
if [[ $file == *.jpg ]]; then
mv ~/DIR1/*jpg* ~/Users/christopherdorman/desktop/dir2/
echo “your files have been successfully processed”
fi
There are a couple of confusions here regarding bash variables. read takes the name of a variable, so it should be read FILE rather than read $FILE. Also, bash variables are case sensitive, so the $file in your if statement is not the FILE you read into; keep the capitalization consistent. I believe this is what you are looking for (assuming your input is "jpg" or "gif"):
#!/bin/bash
echo "Good Morning, Please enter your file type for sorting [ENTER]:"
read FILE
if [[ $FILE == "jpg" ]]; then
    mv ~/DIR1/*jpg* ~/Users/christopherdorman/desktop/dir2/
    echo "your files have been successfully processed"
fi
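A side note on the paths: ~ already expands to /Users/christopherdorman, so ~/Users/christopherdorman/desktop/dir2/ points at the wrong place. If you also want the same script to handle gif, a slightly more general sketch (assuming dir1 and dir2 both live on the Desktop, as described in the question) could be:
#!/bin/bash
read -p "Good Morning, please enter your file type for sorting (jpg or gif): " ext
if [[ $ext == "jpg" || $ext == "gif" ]]; then
    mkdir -p ~/Desktop/dir2                      # create dir2 if it does not exist yet
    mv ~/Desktop/dir1/*."$ext" ~/Desktop/dir2/
    echo "your files have been successfully processed"
else
    echo "unsupported file type: $ext" >&2
    exit 1
fi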
I am new to bash and am writing a script to read variables that are stored one per line in a text file (there are thousands of these variables). So I tried to write a script that reads the lines and automatically outputs the solution to the screen, which I then save into another text file:
./reader.sh > solution.text
The problem I encounter is that currently I have only one variable stored in Sheetone.txt for testing purposes, which should take about two seconds to output everything, but the script is stuck in the while loop and is not outputting the solution.
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
echo sh /usr/local/test/bin/test -ID $line -I
done
As indicated in the comments, you need to feed something into your while loop. The while construct keeps executing as long as its condition (here, the read) succeeds; if a file is redirected into it, it will proceed until read exhausts the file.
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
echo sh /usr/local/test/bin/test -ID $line -I
done < "$file"
# -----^^^^^^^ a file!
Otherwise, it was like cycling without wheels...
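Once the echoed command lines look right, you can drop the dry-run echo so the commands actually execute; here is a sketch (the /usr/local/test/bin/test path and the -ID/-I flags are taken from the question as-is):
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
    sh /usr/local/test/bin/test -ID "$line" -I
done < "$file"
Run it as before with ./reader.sh > solution.text to capture the output.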
I am very new to programming scripts.
I have a lot of zip files in a directory. I want to extract each of them, naming the extracted file after the zip file (with the correct extension), and report an error if an archive contains more than one file, except when the extra file is "remora.txt".
The file "remora.txt" was an ini file for the zip; I won't use it any more, but it is inside a lot of my zip files.
Example 1.
Zip file: maths.zip
Inside it has:
- "tutorial in maths.doc"
- "remora.txt"
Action:
The script should discard "remora.txt" and extract "tutorial in maths.doc" under the name maths.doc.
Example 2.
Zip file: geo.zip
Inside it has:
- "excersices for geometry.doc"
- "geometry.doc"
- "remora.txt"
Action:
It should output "I found more than a file in geo.zip".
I am using Linux (Ubuntu 12).
I have written this script, but it is not working:
#!/bin/bash
#
# Linux Shell Scripting Tutorial 1.05r3, Summer-2002
#
for archive in *.zip # First I read the zip file
do
((i++))
unzip -Z1 $archive | while read line; # I read all the files in the ZIP
do
line=( ${line//,/ } )
inside[$a]=("${line[#]}") # Here I assigne the name of the file to an array
((a++))
done
If ( $a > 2) then
echo " Too much files in file $archive "
fi
If ($a <= 2)
then
if (inside[0]!= "remora.txt")
then unzip -p $archive > $(printf "%s" $archive).doc
fi
if (inside[1]!= "remora.txt")
then unzip -p $archive > $(printf "%s" $archive).doc
fi
fi
done
Try writing scripts incrementally. Instead of writing 20 statements and then trying to debug them all at once, write one statement at a time and test to make sure it works before writing the next one.
If you run e.g.
If ( $a > 2) then
echo " Too much files in file $archive "
fi
by itself, you'll see that it doesn't work. You then know more specifically what the problem is, and you can look up something like "bash if variable greater than" on Google or Stackoverflow.
Check out the bash tag wiki for more helpful tips on debugging and asking about code.
Things you'll find include the following (see the sketch after this list for how they fit together):
if has to be lower case
You need a line feed or semicolon before then
To see if a variable is greater than, use [[ $a -gt 2 ]].
To see if an array element does not equal, use [[ ${inside[0]} != "remora.txt" ]]
Pipelines cause subshells. Use while read ...; do ...; done < <(somecommand) instead.
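Putting those pieces together, here is one possible sketch; the remora.txt rule, the .doc naming and the "I found more than a file" message are taken from the question, everything else is just one arrangement among many:
#!/bin/bash
for archive in *.zip; do
    files=()
    # collect member names, skipping remora.txt; process substitution avoids
    # the subshell problem mentioned above
    while IFS= read -r name; do
        [[ $name == "remora.txt" ]] || files+=("$name")
    done < <(unzip -Z1 "$archive")

    if [[ ${#files[@]} -eq 1 ]]; then
        # extract the single remaining member under the archive's name
        unzip -p "$archive" "${files[0]}" > "${archive%.zip}.doc"
    else
        echo "I found more than a file in $archive"
    fi
done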
I wrote the following function in my Linux .bashrc:
open()
{
echo enter file name
read fname
locate $fname> /home/vvajendla/Desktop/backup/loc;
cat loc
exec < /home/vvajendla/Desktop/backup/loc;
value=0
while read line
do
value=`expr $value + 1`;
echo $value
echo $line
if [ $value -le 6 ]
then
gedit $line;
else
echo too many files to open
fi
done
}
The above function searches all directories for files matching the given string and opens them with gedit if there are no more than six of them.
Whenever I run this function in the terminal, the terminal gets closed.
Can you please tell me what I can do to keep it open?
The exec causes the standard input of the calling shell to be permanently redirected from the file. Once the file closes, the shell runs out of input, and exits. I assume you import this function with source; running it standalone should work.
The usual way to write this sort of function would be to make it accept an argument, so you would invoke it like "open fnord" instead of run "open" and enter "fnord" at the prompt.
open () {
    local fname
    fname=$1                 # notice this arrangement instead of read
    local value
    value=0
    locate "$fname" |        # notice double quotes
    tee /dev/stderr |        # as a superior alternative to using a temporary file
    while read line
    do
        value=`expr $value + 1`
        if [ $value -le 6 ]
        then
            gedit "$line"    # notice double quotes
        else
            echo too many files to open >&2    # notice redirection to stderr
        fi
    done
}
The diagnostic is misleading; this code will still open the first six files, then print the error message for each file after the sixth. Is that what you intend? Or should it count the number of matches, and refuse to run if there are more than six?
If you don't care for the other improvements, the minimal fix is to remove the exec and read the while loop's input from your temporary file. (You should take care to properly clean up; if you can avoid a temporary file, that's basically always a better solution.)
while read line; do
....
done <tempfile
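For completeness, here is a sketch of that minimal fix, using mktemp for the temporary file and removing it when the function is done; the prompt, the six-file limit and the expr counter are kept from the original:
open () {
    local fname tmp value line
    echo enter file name
    read fname
    tmp=$(mktemp) || return 1          # temporary file instead of the exec redirection
    locate "$fname" > "$tmp"
    cat "$tmp"
    value=0
    while read line; do
        value=`expr $value + 1`
        if [ $value -le 6 ]; then
            gedit "$line"
        else
            echo too many files to open >&2
        fi
    done < "$tmp"
    rm -f "$tmp"                       # clean up the temporary file
}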
I would be tempted to add line numbers with nl to get rid of the unattractive expr, but this might break file names with a space at the beginning. (On the other hand, locate always produces a full path name, right?)
As an alternative, and assuming gedit can read multiple file name arguments, try this:
locate "$fname" | head -n 6 | xargs gedit
This fails to produce a warning if there are more than six files, but I would actually consider that a feature.
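If you do want a warning, one possible sketch counts the matches first; it assumes GNU xargs for the -d option and that none of the file names contain newlines:
open () {
    local fname matches count
    read -r -p "enter file name: " fname
    matches=$(locate "$fname")
    count=$(printf '%s\n' "$matches" | grep -c .)   # number of non-empty result lines
    if [ "$count" -gt 6 ]; then
        echo "too many files to open ($count matches)" >&2
    elif [ "$count" -gt 0 ]; then
        printf '%s\n' "$matches" | xargs -d '\n' gedit
    fi
}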