I have a variable containing filenames that end with a vowel. I need to delete all of these files at once. I have tried using
rm "$vowels"
but that only prints the contents of the variable back to me and reports "No such file or directory".
It's your use of quotes: they tell rm that your variable's contents are to be interpreted as a single argument (filename). Without quotes, the contents are broken into multiple arguments using the shell's word-splitting rules.
Be aware that this can be risky if your filenames contain spaces, as there's no way to tell the difference between the spaces between filenames and the spaces in filenames.
You can get around this by using an array instead, together with quoted array expansion: rm "${array[@]}", where each element of the array is passed to rm as a separate, quoted argument.
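A minimal sketch of that array approach (assuming the goal is to remove files in the current directory whose names end in a vowel):
shopt -s nullglob                            # a glob that matches nothing expands to an empty list
files=( *[aeiou] )                           # collect matching names, one array element per file
(( ${#files[@]} )) && rm -v -- "${files[@]}" # each element becomes one quoted argument to rm
Because each array element stays a single word under quoted expansion, filenames containing spaces survive intact.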
SOLUTION
Assigning the variable:
vowel=$(find . -type f | grep "[aeiou]$")
Removing all the files named in the variable:
echo $vowel | xargs rm -v
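Note that echo $vowel | xargs rm -v still breaks on filenames containing whitespace, since xargs splits its input on blanks and newlines. A sketch of a safer alternative that skips the intermediate variable entirely:
find . -type f -name '*[aeiou]' -exec rm -v -- {} + # find hands the names straight to rm, no word-splitting involved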
Related
I have a set of files in /home/user/source. One file in this set is named 'e e' (the single quotes are part of the filename). When I tried to loop over this set of files and print all the filenames, this file was printed as e e (the quotes disappeared). How do I write this loop so that the quotes are preserved in the output? Here is the code:
#!/bin/bash
for existedFile in "$(ls /home/user/source)"
do
echo $existedFile
done
The confusing part is that when I just run ls /home/user/source, the output is correct.
Don't Parse the Output of ls
The output of ls can contain anything. It can contain whitespace, newlines, commas, pipe symbols, etc. This can be extremely harmful in a script.
Instead, iterate over a glob: *. An asterisk is shorthand for "everything in this directory". Bash will take care of the iteration over files. If you need to match a particular file type/pattern, you can use *.java, file*.f90, etc.
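For example, the loop from the question could be rewritten with a glob like this (a sketch using the path from the question):
#!/bin/bash
for existedFile in /home/user/source/*
do
    echo "$existedFile" # quoted, so names containing spaces or quotes print intact
done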
I'm creating a PATH environment variable that begins with /foo and ends with /bar; and in between I "sandwich" in a collection of related bin paths that I programmatically concatenate together, as shown:
export PATH=/foo:$(ls -1d /path/to/*/bin | xargs | sed -e 's/ /:/'g):/bar
The issue I'm having is that the in-between collection of paths are interpreted as one giant string, rather than as tokenized paths separated by a colon (":").
I tried variations in an attempt to get a tokenized result, such as:
export PATH=/foo:$(echo $(ls -1d /path/to/*/bin | xargs | sed -e 's/ /:/'g)):/bar
but no luck. I also tried variants that include an eval(1), and so on. I suppose I can next try using find(1) and friends.
What does work is pasting in the literal long string, but that's a bummer because it's not programmatic.
Incidentally, this issue isn't specific to PATH because substituting in export BLAH=... results in the same issue.
Any ideas for modifying the above so that the result is tokenized? You can try this yourself by creating, say, user$ mkdir -p /tmp/{1,2,3,4}/bin and playing around.
Whatever the solution is, it's just not coming to me today. =:)
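For what it's worth, one way to build the middle section without parsing ls at all is to let the shell expand the glob and have printf join the results (a sketch; it assumes at least one /path/to/*/bin directory exists, since an unmatched glob would be left in place literally):
export PATH="/foo:$(printf '%s:' /path/to/*/bin)/bar" # printf emits each path followed by a colon
Here printf appends a colon after every expanded path, and the literal /bar follows the trailing separator, yielding /foo:/path/to/1/bin:/path/to/2/bin:...:/bar. You can test it against the /tmp/{1,2,3,4}/bin layout suggested above.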
I would like to find some file types in my pictures folder, so I have created the following bash script in the /home/user/pictures folder:
for i in *.pdf *.sh *.txt;
do
echo 'all file types with extension' $i;
find /home/user/pictures -type f -iname $i;
done
But when I execute the script, it does not work as expected for files located in the base directory /home/user/pictures. Instead of echoing 'all file types with extension *.sh', the pattern is replaced by the matching filenames from the script's current directory:
all file types with extension file1.sh
/home/user/pictures/file1.sh
all file types with extension file2.sh
/home/user/pictures/file2.sh
all file types with extension file3.sh
/home/user/pictures/file3.sh
I would like to know why the echo command does not print "all file types with extension *.sh".
Revised code:
for i in '*.pdf' '*.sh' '*.txt'
do
echo "all file types with extension $i"
find /home/user/pictures -type f -iname "$i"
done
Explanation:
In bash, a string containing *, or a variable which expands to such a string, may be expanded as a glob pattern unless that string is protected from glob expansion by putting it inside quotes (although if the glob pattern does not match any files, then the original glob pattern will remain after attempted expansion).
In this case, it is not wanted for the glob expansion to happen - the string containing the * needs to be passed as a literal to each of the echo and the find commands. So the $i should be enclosed in double quotes - these will allow the variable expansion from $i, but the subsequent wildcard expansion will not occur. (If single quotes, i.e. '$i' were used instead, then a literal $i would be passed to echo and to find, which is not wanted either.)
In addition to this, the initial for line needs to use quotes to protect against wildcard expansion in the event that any files matching any of the glob patterns exist in the current directory. Here, it does not matter whether single or double quotes are used.
Separately, the revised code here also removes some unnecessary semicolons. Semicolons in bash are a command separator and are not needed merely to terminate a statement (as in C etc).
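To see the difference concretely (a hypothetical session, assuming a file named file1.sh exists in the current directory):
i='*.sh'
echo $i      # unquoted: glob expansion runs, printing file1.sh
echo "$i"    # double quotes: variable expands, glob does not; prints *.sh
echo '$i'    # single quotes: no expansion at all; prints the characters $i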
Observed behaviour with original code
What seems to be happening here is that one of the patterns used in the initial for statement is matching files in the current directory (specifically, *.sh is matching file1.sh, file2.sh, and file3.sh). It is therefore replaced by a list of these filenames (file1.sh file2.sh file3.sh) in the expression, and the for statement iterates over these values. (Note that the current directory might not be the same as either the directory where the script is located or the top-level directory used for the find.)
It would also still be expected that the *.pdf and *.txt would be used in the expression -- either substituted or not, depending on whether any matches are found. Therefore the output shown in the question is probably not the whole output of the script.
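This is easy to confirm by hand. Given a directory containing only file1.sh, file2.sh, and file3.sh, the unquoted list expands as follows (hypothetical session):
for i in *.pdf *.sh *.txt; do echo "$i"; done
# *.pdf      <- no match, so the pattern is left as-is
# file1.sh   <- *.sh replaced by the three matching names
# file2.sh
# file3.sh
# *.txt      <- no match, so the pattern is left as-is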
Such expressions (*.blabla) change the value of $i in the loop. Here is the trick I would use:
for i in pdf sh txt;
do
echo 'all file types with extension *.'$i;
find /home/user/pictures -type f -iname '*.'$i;
done
TL;DR:
Why isn't invoking ./myscript foo* when myscript has var=$1 the same as invoking ./myscript with var=foo* hardcoded?
Longer form
I've come across a weird issue in a bash script I'm writing. I am sure there is a simple explanation, but I can't figure it out.
I am trying to pass a command line argument to be assigned as a variable in the script.
I want the script to allow 2 command line arguments as follows:
$ bash my_bash_script.bash args1 args2
In my script, I assigned variables like this:
ARGS1=$1
ARGS2=$2
Args 1 is a string descriptor to add to the output file.
Args 2 is a group of directories: "dir1, dir2, dir3", which I am passing as dir*
When I assign dir* to ARGS2 in the script it works fine, but when I pass dir* as the second command line argument, it only includes dir1 in the wildcard expansion of dir*.
I assume this has something to do with how the shell handles wildcards (even when passed as args), but I don't really understand it.
Any help would be appreciated.
Environment / Usage
I have a group of directories:
dir_1_y_map, dir_1_x_map, dir_2_y_map, dir_2_x_map,
... dir_10_y_map, dir_10_x_map...
Inside these directories I am trying to access a file with extension ".status" via *.status, and ".report.txt" via *report.txt.
I want to pass dir_*_map as the second argument to the script and store it in the variable ARGS2, then use it to search within each of the directories for the ".status" and ".report" files.
The issue is that passing dir_*_map from the command line doesn't give the list of directories, but rather just the first item in the list. If I assign the variable ARGS2=dir_*_map within the script, it works as I intend.
Workaround: Quoting
It turns out that passing the second argument in quotes allowed the wildcard expansion to work appropriately for "dir_*_map":
#!/usr/bin/env bash
ARGS1=$1
ARGS2=$2
touch $ARGS1".extension"
for i in /$ARGS2/*.status
do
grep -e "string" $i >> $ARGS1".extension"
done
Here is an example invocation of the script:
sh ~/path/to/script descriptor "dir_*_map"
I don't fully understand when/why some arguments must be passed in quotes, but I assume it has to do with the wildcard expansion in the for loop.
Addressing the "why"
Assignments, as in var=foo*, don't expand globs -- that is, when you run var=foo*, the literal string foo* is put into the variable var, not the list of files matching foo*.
By contrast, unquoted use of foo* on a command line expands the glob, replacing it with a list of individual names, each of which is passed as a separate argument.
Thus, running ./yourscript foo* doesn't pass foo* as $1 unless no files matching that glob expression exist; instead, it becomes something like ./yourscript foo01 foo02 foo03, with each argument in a different spot on the command line.
The reason running ./yourscript "foo*" functions as a workaround is that the unquoted expansion inside the script allows the glob to be expanded at that later time. However, this is bad practice: glob expansion happens together with string-splitting (meaning that relying on this behavior removes your ability to pass filenames containing characters found in IFS, typically whitespace), and it also means that you can't pass literal filenames when they could also be interpreted as globs (if you have a file named [1] and a file named 1, passing [1] would always be replaced with 1).
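A short demonstration of the difference (assuming a directory containing only the files foo01 and foo02):
var=foo*       # assignment context: no glob expansion; var holds the literal string foo*
echo "$var"    # prints: foo*
set -- foo*    # command-line context: the glob expands before the words are assigned
echo "$1" "$2" # prints: foo01 foo02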
Idiomatic Usage
The idiomatic way to build this would be to shift away the first argument, and then iterate over subsequent ones, like so:
#!/bin/bash
out_base=$1; shift
shopt -s nullglob # make a glob that matches nothing expand to an empty list, rather than passing the literal *.status pattern to grep
for dir; do # iterate over directories passed in $2, $3, etc
for file in "$dir"/*.status; do # iterate over files ending in .status within those
grep -e "string" "$file" # match a single file
done
done >"${out_base}.extension"
If you have many .status files in a single directory, all this can be made more efficient by using find to invoke grep with as many arguments as possible, rather than calling grep individually on a per-file basis:
#!/bin/bash
out_base=$1; shift
find "$#" -maxdepth 1 -type f -name '*.status' \
-exec grep -h -- /dev/null '{}' + \
>"${out_base}.extension"
Both scripts above expect the globs to be expanded by the invoking shell, which means they must not be quoted on the command line. Thus, usage is of the form:
# being unquoted, this expands the glob into a series of separate arguments
your_script descriptor dir_*_map
This is considerably better practice than passing globs to your script (which is then required to expand them to retrieve the actual files to use); it works correctly with filenames containing whitespace (which the other practice doesn't) and with files whose names are themselves glob expressions.
Some other points of note:
Always put double quotes around expansions! Failing to do so results in the additional steps of string-splitting and glob expansion (in that order) being applied. If you want globbing, as in the case of "$dir"/*.status, then end the quotes before the glob expression starts.
for dir; do is precisely equivalent to for dir in "$@"; do, which iterates over arguments. Don't make the mistake of using for dir in $*; do or for dir in $#; do instead! These latter invocations join the elements of the list using the first character of IFS (which, by default, contains the space, the tab and the newline, in that order), then split the resulting string on any IFS characters found within, then expand each component of the resulting list as a glob.
Passing /dev/null as an argument to grep is a safety measure: it ensures that you don't have different behavior between the single-argument and multi-argument cases (as an example, grep defaults to printing filenames in its output only when passed multiple file arguments; see the illustration after this list), and ensures that grep can't hang trying to read from stdin if it's passed no additional filenames at all (which find won't do here, but xargs can).
Using lower-case names for your own variables (as opposed to system- and shell-provided variables, which have all-uppercase names) is in accordance with POSIX-specified convention; see fourth paragraph of the POSIX specification regarding environment variables, keeping in mind that environment variables and shell variables share a namespace.
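To illustrate the /dev/null point (a hypothetical session, where a.status contains the word string):
grep -e "string" a.status            # one file argument: prints only the matching line
grep -e "string" a.status /dev/null  # two file arguments: the match is prefixed with a.status: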
I have a directory with many files with really long, repetitive names and I would like to remove the first 16 characters from each file name.
So I would like to rename files like this:
0123456789012345file1.fits
0123456789012345file2.fits
to this:
file1.fits
file2.fits
I would like to be able to do this from the command line in the terminal.
In bash, you can run
for f in *; do mv "$f" "${f:16}"; done
to rename all files stripping off the first 16 characters of the name.
You can change the * to a more restrictive pattern such as *.fits if you don't want to rename all files in the current directory. The quotes around the parameters to mv are necessary if any filenames contain whitespace.
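Since mv is destructive, it can be worth previewing the renames before running them for real; prefixing the mv with echo prints each command instead of executing it:
for f in *.fits; do echo mv "$f" "${f:16}"; done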
bash's ${var:pos:len} syntax also supports more advanced usage than the above. You can take only the first five characters with ${f::5}, or the first five characters after removing the first 16 characters with ${f:16:5}. Many other variable substitution expressions are available in bash; see a reference such as TLDP's Bash Parameter Substitution for more information.
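A quick demonstration of those substring expansions:
f=0123456789012345file1.fits
echo "${f:16}"   # file1.fits (everything after the first 16 characters)
echo "${f::5}"   # 01234      (the first five characters)
echo "${f:16:5}" # file1      (five characters starting at offset 16)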