Creating multiple copies of a file in bash with a script - Linux

I am starting to learn how to use bash shell commands and scripting in Linux.
I want to create a script that will take a source file and create a chosen number of numbered copies.
For example, with the source testFile and 15 copies chosen, it creates testFile1, 2, 3 ... 14, 15 in the same location.
To achieve this I tried the following command:
for LABEL in {$X..$Y}; do cp $INPUT $INPUT$LABEL; done
However, instead of creating files numbered X to Y, it makes just one file with (for example) {1..5} appended, rather than files 1, 2, 3, 4 and 5.
How can I change it so it properly uses the variable as a number for the loop?

The brace expansion mechanism is a bit limited; it doesn't work with variables, only literals.
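You can see the limitation directly (a quick sketch, assuming bash):
X=1; Y=3
echo {$X..$Y}   # prints "{1..3}" literally: brace expansion runs before variable expansion
echo {1..3}     # prints "1 2 3": literals expand as expected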
For what you want, you probably have the seq command, and could write:
INPUT=testFile
for num in $(seq 1 15)
do
cp "$INPUT" "$INPUT$num"
done

Using a C-style for loop:
$ x=0 y=15
$ for ((i=x; i<=y; i++)); do cp "$INPUT" "$INPUT$i"; done
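For completeness, here's a minimal sketch of a whole script built around that loop; the file name makecopies.sh and the argument check are my own additions, not from the question:
#!/bin/bash
# Hypothetical usage: ./makecopies.sh FILE COUNT
input=$1
copies=$2
[ -f "$input" ] || { echo "no such file: $input" >&2; exit 1; }
for ((i=1; i<=copies; i++)); do
    cp "$input" "$input$i"
done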

Related

How can I stop my script from overwriting existing files

I have been learning bash for 6 days and I think I've got some of the basics.
Anyway, for the wallpapers downloaded from Variety I've written two scripts. One of them moves downloaded photos older than 12 days to a folder and renames them all as "Aday 1,2,3...", and the other lets me select some of these, moves them to another folder, and removes the photos I didn't select. The 1st script works just as I intended; my question is about the other.
I think I should write the script down to better explain my problem.
Script:
#!/bin/bash
#Move victors of 'Seçme-Eleme' to 'Kazananlar'
cd /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme
echo "Select victors"
read vct
for i in $vct; do
mv -i "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.png" ;
mv -i "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $RANDOM.jpg" ;
done
#Now let's remove the rest
rm /home/eurydice/Bulunur\ Bir\ Şeyler/Dosyamsılar/Seçme-Eleme/*
In this script I originally intended to define another variable (let's call it "n") and increment it, copying and adapting the variable handling from the first script. It was something like this:
n=1
for i in $vct; do
mv "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.png" ;
mv "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.jpg" ;
n=$((n+1))
done
When I did that the first time, the script worked just as I intended. However, in my 2nd test run the script overwrote the files that already existed. I mean, for example, in the 1st run I had 5 files named "Bahar 1,2,3,4,5", and the 2nd time I chose 3 files to add. I wanted their names to be "Bahar 6,7,8" but instead my script made them the new 1, 2 and 3. I tried many solutions, and when I couldn't fix it I just assigned random numbers to them.
Is there a way to make this script work as I intended?
This command finds the biggest filename number amongst the files in the current directory. If no file is found, the biggest number is set to 0.
biggest_number=$(ls -1 | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ ! -z "$biggest_number" ]] || biggest_number=0
The regex in the sed command assumes that there are no digits in the filename before the trailing number intended for incrementing.
As soon as you have found the biggest number, you can use it to start your loop to prevent overwrites.
n=$((biggest_number+1))
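Putting it together with the loop from the question, a sketch (the paths are the asker's own; note that the ls must run in the destination folder, Kazananlar, since that's where the numbered "Bahar" files live, which is an assumption on my part):
biggest_number=$(ls -1 /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar | sed -n 's/^[^0-9]*\([0-9]\+\)\(\.[a-zA-Z]\+\)\?$/\1/p' | sort -r -g | head -n 1)
[[ ! -z "$biggest_number" ]] || biggest_number=0
n=$((biggest_number+1))
for i in $vct; do
    mv "Aday $i.png" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.png"
    mv "Aday $i.jpg" /home/eurydice/"Bulunur Bir Şeyler"/Dosyamsılar/Kazananlar/"Bahar $n.jpg"
    n=$((n+1))
done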

Is it possible to make a list of disks in bash?

I'm a beginner and not a native English speaker, so please excuse my clumsiness.
I'm trying to make a Linux install script for personal use (and to learn more about Linux and bash scripting) but I'm struggling to find a way to create a disk selection menu:
I wish to make a list which would look like this:
NAME SIZE DEVICES
sda 256gib intel-ssdx
sdb 1000gib TLxxxxxxxx
nvme0n1 128gib WDxxxxxxxx
So far I've tried echoing fdisk -l and lsblk output into a text file and using cat to display it.
Code:
lsblk
DiskLayout=("Automatic Install" "Manual Install" "Check pending change" "Quit")
select DiskLayoutopt in "${DiskLayout[@]}"
do
case $DiskLayoutopt in
"Automatic Install")
read -p "Select drive" Sdsk
;;
"Manual Install")
parted -a optimal
;;
"Check pending change")
echo ""
;;
"Quit")
exit 1
;;
*) echo "invalid option $REPLY";;
esac
done
The following code will get your menu:
#!/usr/bin/env bash
disk=()
size=()
name=()
while IFS= read -r -d $'\0' device; do
    device=${device/\/dev\//}
    disk+=($device)
    name+=("`cat "/sys/class/block/$device/device/model"`")
    size+=("`cat "/sys/class/block/$device/size"`")
done < <(find "/dev/" -regex '/dev/sd[a-z]\|/dev/vd[a-z]\|/dev/hd[a-z]' -print0)
for i in `seq 0 $((${#disk[@]}-1))`; do
    echo -e "${disk[$i]}\t${name[$i]}\t${size[$i]}"
done
This is some tough bash scripting... Hope you'll learn quickly.
Here's some help:
The first line is a shebang that tells your system which interpreter is needed for the script. Indeed, this script only works with bash.
Try running it with bash myscript.sh on systems where it doesn't work (e.g. BSD).
variable=() declares an array.
Adding something to that array is done with variable+=("my value").
The while loop reads the variable device from what it gets from the find command:
while read device; do
something
done < <(find)
The find command uses a regular expression that says anything like /dev/sdX where X goes from a to z, or anything like /dev/vdX or anything like /dev/hdX (where X still goes from a to z).
The "or" operator is |, which has to be escaped with a backslash, hence giving \|.
The devices read by the while loop look like '/dev/sda', so we need to strip '/dev/' out of them using the following:
device=${device/\/dev\//}
This is a bash substitution which works the following way:
variable="my foo function"
echo ${variable/foo/bar}
This outputs my bar function.
Indeed, we still need to escape /, since it is the separator character for the substitution, so it becomes \/.
Getting the disk name via
"`cat "/sys/class/block/$device/device/model"`"
cat "/sys/class/block/sda/device/model" gives the disk model.
In order to get the result into a variable, we need to wrap the command in ` (backtick) signs, e.g.:
myvar=`cat /var/file`
Last but not least, the for loop part:
for i in `seq 0 $((${#disk[@]}-1))`; do
    echo -e "${disk[$i]}\t${name[$i]}\t${size[$i]}"
done
${#disk[@]} is the number of elements in the array disk.
Actually ${#var} is the length of var, which for a plain string is the number of characters. ${var[@]} means all elements of an array.
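A tiny sketch of those expansions side by side:
arr=(one two three)
echo "${#arr[@]}"   # 3 -> number of elements
echo "${arr[@]}"    # one two three -> all elements
s=hello
echo "${#s}"        # 5 -> length of a plain string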
seq 0 X returns the sequence of numbers from 0 to X, used here to drive the for loop.
Using echo -e translates escape sequences into literal characters. In our case '\t' becomes a tab.
Finally, ${disk[$i]} is the disk array value at index $i, where $i is an integer.
Btw, bash is quite limited for these tasks, but really fun to learn in the first place.
Harder tasks might be better accomplished in a higher-level scripting language like Python. Anyway, have fun learning bash, it's a life saver in a sysadmin's career.
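Side note: since the question already mentions lsblk, it's worth knowing that on most Linux systems it can print that exact table by itself; this bypasses the exercise but is handy in real life:
lsblk -d -o NAME,SIZE,MODEL   # -d skips partitions, -o selects the columns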

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked up the issue on Google, but can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems not to be executing. It just skips it, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just one time, and the module just launches (and crashes because it doesn't have the required arguments...).
Really don't know what to do, here... I hope someone here have an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As #that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
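A quick bash sketch of that scoping, using a made-up line in the args.conf format:
line='LHOST;true;The IP the remote payload will connect to.'
IFS=';' read -r arg requ description <<< "$line"
echo "$arg"          # LHOST
echo "$description"  # The IP the remote payload will connect to.
# IFS itself is unchanged after the read finishes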
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
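Putting those recommendations together, here's a rough sketch of how the whole loop could look; module_root, and the assumption that $interpreter is on PATH and $file lives in the module directory, are mine rather than your framework's actual layout:
module_root=$(pwd)   # assumption: the script is started from the framework root
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
"$interpreter" "$module_root/modules/$name/$file" "${arglist[@]}"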

Difference between `exec n<&0 < file` and `exec n<file` commands and some general questions regarding exec command

As I am a newbie in shell scripting, the exec command always confuses me, and exploring this topic with the while loop triggered the following 4 questions:
What is the difference between the below syntax 1 and 2
syntax 1:
while read LINE
do
: # manipulate file here
done < file
syntax 2:
exec n<&0 < file
while read LINE
do
: # manipulate file here
done
exec 0<&n n<&-
Kindly elaborate on the operation of exec n<&0 < file lucidly.
Is this exec n<&0 < file command equivalent to exec n<file? (If not, then what's the difference between the two?)
I had read somewhere that in the Bourne shell and older versions of ksh, a problem with the while loop is that it is executed in a subshell. This means that any changes to the script environment, such as exporting variables and changing the current working directory, might not be present after the while loop completes.
As an example, consider the following script:
#!/bin/sh
if [ -f “$1” ] ; then
i=0
while read LINE
do
i=`expr $i + 1`
done < “$1”
echo $i
fi
This script tries to count the number of lines in the file specified to it as an argument.
Executing this script on the file
$ cat dirs.txt
/tmp
/usr/local
/opt/bin
/var
can produce the following incorrect result:
0
Although you are incrementing the value of $i using the command
i=`expr $i + 1`
when the while loop completes, the value of $i is not preserved.
In this case, you need to change a variable’s value inside the while loop and then use that value outside the loop.
One way to solve this problem is to redirect STDIN prior to entering the loop and then restore STDIN after the loop completes.
The basic syntax is
exec n<&0 < file
while read LINE
do
: # manipulate file here
done
exec 0<&n n<&-
My question here is:
In the Bourne shell and older versions of ksh, ALTHOUGH THE WHILE LOOP IS EXECUTED IN A SUBSHELL, how does the exec command here help retain a variable's value even after the while loop completes? That is, how does the exec command accomplish the task of changing a variable's value inside the while loop and then using that value outside the loop?
The difference should be nothing in modern shells (both should be POSIX compatible), with some caveats:
There are likely thousands of unique shell binaries in use, some of which are missing common features or simply buggy.
Only the first version will behave as expected in an interactive shell, since the shell will close as soon as standard input gets EOF, which will happen once it finishes reading file.
The while loop reads from FD 0 in both cases, making the exec pointless if the shell supports < redirection to while loops. To read from FD 9 you have to use done <&9 (POSIX) or read -u 9 (in Bash).
exec (in this case, see help exec/man exec) applies the redirections following it to the current shell, and they are applied left-to-right. For example, exec 9<&0 < file points FD 9 to where FD 0 (standard input) is currently pointing, effectively making FD 9 a "copy" of FD 0. After that file is sent to standard input (both FD 0 and 9).
Run a shell within a shell to see the difference between the two (commented to explain):
$ echo foo > file
$ exec "$SHELL"
$ exec 9<&0 < file
$ foo # The contents of the file is executed in the shell
bash: foo: command not found
$ exit # Because the end of file, equivalent to pressing Ctrl-d
$ exec "$SHELL"
$ exec 9< file # Nothing happens, simply sends file to FD 9
This is a common misconception about *nix shells: Variables declared in subshells (such as created by while) are not available to the parent shell. This is by design, not a bug. Many other answers here and on USE refer to this.
So many questions... but all of them seem variants of the same one, so I'll go on...
exec without a command is used to do redirection in the current process. That is, it changes the files attached to different file descriptors (FD).
Question #1
I think it should be this way. On my system the {} are mandatory:
exec {n}<&0 < file
This line dups FD 0 (standard input) and stores the new FD into the n variable. Then it attaches file to the standard input.
while read LINE ; do ... done
This line reads lines into variable LINE from the standard input, that will be file.
exec 0<&n {n}<&-
And this line dups the FD from n back into 0 (the original standard input), which automatically closes file, and then closes n (the dupped original stdin).
The other syntax:
while read LINE; do ... done < file
does the same, but in a less convoluted way.
Question #2
exec {n}<&0 < file
These are redirections, and they are executed left to right. The first one, {n}<&0, does a dup(0) (see man dup) and stores the resulting new FD in the variable n. Then <file does an open("file"...) and assigns it to FD 0.
Question #3
No. exec {n}<file opens the file and assigns the new FD to variable n, leaving the standard input (FD 0) untouched.
Question #4
I don't know about older versions of ksh, but the usual problem is when doing a pipe.
grep whatever | while read LINE; do ... done
Then the while command is run in a subshell. The same is true if it is to the left of the pipe.
while read LINE ; do ... done | grep whatever
But for simple redirects there is no subshell:
while read LINE ; do ... done < aaa > bbb
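A sketch that makes the difference visible, in bash and other shells that run pipeline segments in subshells (the file name is arbitrary; any small text file works):
i=0
printf 'a\nb\nc\n' | while read LINE; do i=$((i+1)); done
echo "$i"   # prints 0: the loop ran in a subshell, so the increment was lost

i=0
while read LINE; do i=$((i+1)); done < /etc/hostname
echo "$i"   # prints the file's line count: no subshell, the value survives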
Extra unnumbered question
About your example script, it works for me once I've changed the typographic quotes to normal double quotes ;-):
#!/bin/sh
if [ -f "$1" ] ; then
i=0
while read LINE
do
i=`expr $i + 1`
done < "$1"
echo $i
fi
For example, if the file is test:
$ ./test test
9
And about your latest question, the subshell is not created by while but by the pipe | or maybe in older versions of ksh, by the redirection <. What the exec trick does is to prevent that redirection so no subshell is created.
Let me answer your questions out-of-order.
Q #2
The command exec n<&0 < file is not valid syntax. Probably the n stands for "some arbitrary number". That said, for example
exec 3<&0 < file
executes two redirections, in sequence: it duplicates/copies the standard input file descriptor, which is 0, as file descriptor 3. Next, it "redirects" file descriptor 0 to read from file file.
Later, the command
exec 0<&3 3<&-
first copies back the standard input file descriptor from the saved file descriptor 3, redirecting standard input back to its previous source. Then it closes file descriptor 3, which has served its purpose to backup the initial stdin.
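Here's a compact, self-contained illustration of that save/restore dance (the temp file is just for demonstration):
printf 'one\ntwo\n' > /tmp/demo.txt
exec 3<&0 < /tmp/demo.txt   # back up stdin as FD 3, then point stdin at the file
while read LINE; do
    echo "got: $LINE"
done
exec 0<&3 3<&-              # restore stdin from FD 3, then close the backup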
Q #1
Effectively, the two examples do the same: they temporarily redirect stdin within the scope of the while loop.
Q #3
Nope: exec 3<filename opens file filename using file descriptor 3. exec 3<&0 <filename I described in #2.
Q #4
I guess those older shells mentioned effectively executed
while ...; do ... ; done < filename
as
cat filename | while ...
thereby executing the while loop in a subshell.
Doing the redirections beforehand with those laborious exec commands avoids the redirection of the while block, and thereby the implicit sub-shell.
However, I never heard of that weird behavior, and I guess you won't have to deal with it unless you're working with truly ancient shells.

Using two variables in a loop to access two diff file types in the same folder shell scripting linux

I am trying to call two different file types in a loop.
I have a1.in-a10.in files and b1.out-b10.out files.
I want to access both files simultaneously. I don't want to use nested loops; I want to pair the files up in a single loop.
for f1,f2 `ls *.in` `ls *.out`;do
echo "$f1 $f2"
done
I get a "not a valid identifier" error for f1,f2.
You can process this with essentially the same command as you did with your last question. Just remove the extra arguments and the Java command.
for num in $(seq 1 10); do
    echo a$num.in b$num.out   # processing command here
done
One way is this (here assuming bash):
$ touch a{1..10}.in b{1..10}.in
$ ls
a10.in a2.in a4.in a6.in a8.in b10.in b2.in b4.in b6.in b8.in
a1.in a3.in a5.in a7.in a9.in b1.in b3.in b5.in b7.in b9.in
$ for i in {1..10}; do echo a$i.in b$i.in; done
a1.in b1.in
a2.in b2.in
a3.in b3.in
a4.in b4.in
a5.in b5.in
a6.in b6.in
a7.in b7.in
a8.in b8.in
a9.in b9.in
a10.in b10.in
Here I'm just echoing the strings, but you can use any command you like (diff, cat, etc.) instead of echo.
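If the numbering ever stops being contiguous, here's a sketch that derives each b file from the matching a file with parameter expansion instead of a counter (assuming the aN.in / bN.out naming from the question):
for f1 in a*.in; do
    num=${f1#a}      # a1.in -> 1.in
    num=${num%.in}   # 1.in  -> 1
    f2="b$num.out"
    echo "$f1 $f2"   # replace echo with your real processing command
done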
