Is there an option to "ls" that limits filename characters? - linux

Syntax question: if I have a number of subdirectories within a target dir, and I want to output the names of the subs to a text file I can easily run:
ls > filelist.txt
on the target. But say all of my subs are named with a 7 character prefix like:
JR-5426_mydir
JR-5487_mydir2
JR-5517_mydir3
...
and I just want the prefixes. Is there an option to "ls" that will only output n characters per line?

Don't use ls in any programmatic context; it should be used strictly for presentation to humans -- ParsingLs gives details on why.
On bash 4.0 or later, the below will provide a deduplicated list of filename prefixes:
declare -A prefixes_seen=( ) # create an associative array -- aka "hash" or "map"
for file in *; do # iterate over all non-hidden directory entries
prefixes_seen[${file:0:2}]=1 # add the first two chars of each as a key in the map
done
printf '%s\n' "${!prefixes_seen[@]}" # print all keys in the map separated by newlines
That said, if instead of wanting a 2-character prefix you want everything before the first -, you can write something cleaner:
declare -A prefixes_seen=( )
for file in *-*; do
prefixes_seen[${file%%-*}]=1 # "${file%%-*}" cuts off "$file" at the first dash
done
printf '%s\n' "${!prefixes_seen[@]}"
...and if you don't care about deduplication:
for file in *-*; do
printf '%s\n' "${file%%-*}"
done
...or, sticking with the two-character rule:
for file in *; do
printf '%s\n' "${file:0:2}"
done
That said -- if you're trying to Do It Right, you shouldn't be using newlines to separate lists of filenames either, because newlines are valid inside filenames on POSIX filesystems. Think about a file named f$'\n'oobar -- that is, with a literal newline as the second character; code written carelessly would see f as one prefix and oo as a second one, from this single name. Iterating over associative-array keys, as done in the deduplicating answers, is safer in this case, because it doesn't rely on any delimiter character.
To demonstrate the difference -- if instead of writing
printf '%s\n' "${!prefixes_seen[@]}"
you wrote
printf '%q\n' "${!prefixes_seen[@]}"
it would emit the prefix of the hypothetical file f$'\n'oobar as
$'f\n'
instead of
f
...with an extra newline below it.
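As a quick sketch of that difference, assuming bash 4+, you can create such a file in a scratch directory and watch the %q-quoted prefix come out:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"                        # scratch dir so the glob is predictable
touch f$'\n'oobar                        # a filename whose second character is a newline

declare -A prefixes_seen=( )
for file in *; do
  prefixes_seen[${file:0:2}]=1           # first two chars: "f" plus the newline
done
printf '%q\n' "${!prefixes_seen[@]}"     # prints: $'f\n'
```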
If you want to pass lists of filenames (or, as here, filename prefixes) between programs, the safe way to do it is to NUL-delimit the elements -- as NULs are the single character which can't possibly exist in a valid UNIX path. (A filename also can't contain /, but a path obviously can).
A NUL-delimited list can be written like so:
printf '%s\0' "${!prefixes_seen[@]}"
...and read back into an identical data structure on the receiving end (should the receiving code be written in bash) like so:
declare -A prefixes_seen=( )
while IFS= read -r -d '' prefix; do
prefixes_seen[$prefix]=1
done
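Putting the two halves together, a minimal round trip looks like the below (the prefixes here are hypothetical stand-ins for real ones):

```shell
#!/usr/bin/env bash
declare -A prefixes_seen=( [JR]=1 [XK]=1 )   # hypothetical sender-side map

# Receive the NUL-delimited stream into a fresh map.
declare -A prefixes_read=( )
while IFS= read -r -d '' prefix; do
  prefixes_read[$prefix]=1
done < <(printf '%s\0' "${!prefixes_seen[@]}")

echo "read back ${#prefixes_read[@]} prefixes"   # read back 2 prefixes
```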

No, you use the cut command:
ls | cut -c1-7
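A quick sketch of that pipeline against the sample names from the question (the scratch directory is just for demonstration):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"                                 # scratch dir
mkdir JR-5426_mydir JR-5487_mydir2 JR-5517_mydir3 # the example subdirectories
ls | cut -c1-7                                    # JR-5426, JR-5487, JR-5517 (one per line)
```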

Related

IFS and command substitution

I am writing a shell script to read input csv files and run a java program accordingly.
#!/usr/bin/ksh
CSV_FILE=${1}
myScript="/usr/bin/java -version"
while read row
do
$myScript
IFS=$"|"
for column in $row
do
$myScript
done
done < $CSV_FILE
csv file:
a|b|c
Interestingly, $myScript outside the for loop works but the $myScript inside the for loop says "/usr/bin/java -version: not found [No such file or directory]". I have come to know that it is because I am setting IFS. If I comment IFS, and change the csv file to
a b c
It works ! I imagine the shell using the default IFS to separate the command /usr/bin/java and then apply the -version argument later. Since I changed the IFS, it is taking the entire string as a single command - or that is what I think is happening.
But this is my requirement: I have a csv file with a custom delimiter, and the command has arguments in it, separated by space. How can I do this correctly?
IFS indicates how to split the values of variables in unquoted substitutions. It applies to both $row and $myScript.
If you want to use IFS to do the splitting, which is convenient in plain sh, then you either need to change the value of IFS back and forth, or arrange for both to need the same value. In this particular case, you can easily arrange to need the same value, by defining myScript as myScript="/usr/bin/java|-version". Alternatively, you can change the value of IFS just in time. In both cases, note that an unquoted substitution doesn't just split the value using IFS, it also interprets each part as a wildcard pattern and replaces it by the list of matching file names if there are any. This means that if your CSV file contains a line like
foo|*|bar
then the row won't be foo, *, bar but foo, each file name in the current directory, bar. Unless you want the data processed like this, you need to turn off globbing with set -f. Also remember that read reads continuation lines when a line ends with a backslash, and strips leading and trailing IFS characters. Use IFS= read -r to turn off these two behaviors.
myScript="/usr/bin/java -version"
set -f
while IFS= read -r row
do
$myScript
IFS='|'
for column in $row
do
IFS=' '
$myScript
done
done < "$CSV_FILE"
However there are better ways that avoid IFS-splitting altogether. Don't store a command in a space-separated string: it fails in complex cases, like commands that need an argument that contains a space. There are three robust ways to store a command:
Store the command in a function. This is the most natural approach. Running a command is code; you define code in a function. You can refer to the function's arguments collectively as "$@".
myScript () {
/usr/bin/java -version "$@"
}
…
myScript extra_argument_1 extra_argument_2
Store an executable command name and its arguments in an array.
myScript=(/usr/bin/java -version)
…
"${myScript[@]}" extra_argument_1 extra_argument_2
Store a shell command, i.e. something that is meant to be parsed by the shell. To evaluate the shell code in a string, use eval. Be sure to quote the argument, like any other variable expansion, to avoid premature wildcard expansion. This approach is more complex since it requires careful quoting. It's only really useful when you have to store the command in a string, for example because it comes in as a parameter to your script. Note that you can't sensibly pass extra arguments this way.
myScript='/usr/bin/java -version'
…
eval "$myScript"
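To see why eval matters here, a small sketch (echo stands in for the java command; the quoting behavior is the point):

```shell
#!/usr/bin/env bash
# A stored command whose argument contains a space.
myScript='echo "hello world"'

# Unquoted expansion only word-splits; echo receives the literal quote marks.
$myScript          # prints: "hello world"

# eval re-parses the string as shell code, so the quotes do their job.
eval "$myScript"   # prints: hello world
```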
Also, since you're using ksh and not plain sh, you don't need to use IFS to split the input line. Use read -A instead to directly split into an array.
#!/usr/bin/ksh
CSV_FILE=${1}
myScript=(/usr/bin/java -version)
while IFS='|' read -r -A columns
do
"${myScript[@]}"
for column in "${columns[@]}"
do
"${myScript[@]}"
done
done <"$CSV_FILE"
The simplest solution is to avoid changing IFS and do the splitting with read -d <delimiter> like this:
#!/usr/bin/ksh
CSV_FILE=${1}
myScript="/usr/bin/java -version"
while read -A -d '|' columns
do
$myScript
for column in "${columns[@]}"
do
echo next is "$column"
$myScript
done
done < $CSV_FILE
IFS tells the shell which characters separate "words", that is, the different components of a command. So when you remove the space character from IFS and run foo bar, the script sees a single argument "foo bar" rather than "foo" and "bar".
The IFS assignment should be placed after the while, as a prefix to the read command:
#!/usr/bin/ksh
CSV_FILE=${1}
myScript="/usr/bin/java -version"
while IFS="|" read row
do
$myScript
for column in $row
do
$myScript
done
done < $CSV_FILE

basename command confusion

Given the following command:
$(basename "/this-directory-does-not-exist/*.txt" ".txt")
it outputs not only txt files but other files as well. On the other hand if I change ".txt" to something like "gobble de gook" it returns:
*.txt
I'm confused with regard to why it returns the other extension types.
Your problem doesn't stem from basename, but from inadvertent use of the shell's pathname expansion (globbing) feature due to lack of quoting:
If you use the result of your command substitution ($(...)) unquoted:
$ echo $(basename "/this-directory-does-not-exist/*.txt" ".txt")
you effectively execute the following:
$ echo * # unquoted '*' expands to all files and folders in the current dir
because basename "/this-directory-does-not-exist/*.txt" ".txt" returns a literal * (it strips the .txt extension from the filename *.txt; the reason the pattern *.txt didn't expand to an actual filename is that, by default, the shell leaves globbing patterns that match nothing unmodified).
If you double-quote the command substitution, the problem goes away:
$ echo "$(basename "/this-directory-does-not-exist/*.txt" ".txt")" # -> *
However, even with this problem resolved, your basename command will only work correctly if the glob expands to one matching file, because the syntax form you're using only supports one filename argument.
GNU basename and BSD basename support the non-POSIX -s option, which allows for multiple file operands from which to strip the extension:
basename -s .txt /some-dir/*.txt
Assuming you use bash, you can put it all together robustly as follows:
#!/usr/bin/env bash
names=() # initialize result array
files=( *.txt ) # perform globbing and capture matching paths in an array
# Since the shell by default returns a pattern as-is if there are no matches,
# we test the first array item for existence; if it refers to an existing
# file or dir., we know that at least 1 match was found.
if [[ -e ${files[0]} ]]; then
# Apply the `basename` command with suffix-stripping to all matches
# and read the results robustly into an array.
# Note that just `names=( $(basename ...) )` would NOT work robustly.
readarray -t names < <(basename -s '.txt' "${files[@]}")
# Note: `readarray` requires Bash 4; in Bash 3.x, use the following:
# IFS=$'\n' read -r -d '' -a names < <(basename -s '.txt' "${files[@]}")
fi
# "${names[@]}" now contains an array of suffix-stripped basenames,
# or is empty, if no files matched.
printf '%s\n' "${names[@]}" # print names line by line
Note: The -e test comes with a tiny caveat: if there are matches and the first match is a broken symlink, the test will mistakenly conclude that there are no matches.
A more robust option is to use shopt -s nullglob to make the shell expand non-matching globs to the empty string, but note that this is a shell-global option, and it is good practice to return it to its previous value afterward, which makes that approach more cumbersome.
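A sketch of that more cumbersome nullglob dance, saving and restoring the option (assumes bash; the scratch directory and sample files are just for demonstration):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"                        # scratch dir
touch a.txt b.txt                        # two sample matches

restore=$(shopt -p nullglob)             # command line that restores the current setting
shopt -s nullglob
files=( *.txt )                          # with nullglob, no matches -> empty array
eval "$restore"                          # put nullglob back the way it was

echo "matched ${#files[@]} file(s)"      # matched 2 file(s)
```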
Try putting quotes around the whole thing. What you are seeing is globbing: your command substitution produces *, which is then expanded to all files in the current directory. This does not happen inside single or double quotes.

Get numeric value from file name

I am a new guy of Linux. I have a question:
I have a bunch of files in a directory, like:
abc-188_1.out
abc-188_2.out
abc-188_3.out
how can I get the number 188 from those names?
Assuming (since you are on linux and are working with files), that you will use a shell / bash-script... (If you use something different (say, python, ...), the solution will, of course, be a different one.)
... this will work
for file in `ls *`; do out=`echo "${file//[!0-9]/ }"|xargs|cut -d' ' -f1`; echo $out; done
Explanation
The basic problem is to extract a number from a string in bash script (search stackoverflow for this, you will find dozens of different solutions).
This is done in the command above as (the string from which numbers are to be extracted being saved in the variable file):
${file//[!0-9]/ }
or, without spaces
${file//[!0-9]/}
It is complicated here by two things:
Do this recursively on the contents of a directory. This is done here with a bash for loop (note that the variable file takes as value the name of each of the files on the current working directory, one after another)
for file in `ls *`; do (commands you want done for every file in the CWD, separated by ";"); done
There are multiple numbers in the filenames, you just want the first one.
Therefore, we leave the spaces in, and pipe the result (that being only numbers and spaces from the current file name) into two other commands: xargs (removes leading and trailing whitespace) and cut -d' ' -f1 (returns only the part of the string before the first remaining space, i.e. the first number in our filename).
We save the resulting string in a variable out and print it with echo $out:
out=`echo "${file//[!0-9]/ }"|xargs|cut -d' ' -f1`; echo $out
Note that the number is still a string. You can transform it to an integer if you want by using arithmetic expansion: out_int=$((out)).
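For comparison, a sketch that skips ls, xargs, and cut entirely and uses bash's =~ operator, whose leftmost match lands in BASH_REMATCH (sample name from the question):

```shell
#!/usr/bin/env bash
file='abc-188_1.out'                # sample filename
num=''
if [[ $file =~ [0-9]+ ]]; then      # =~ finds the leftmost run of digits
  num=${BASH_REMATCH[0]}
fi
echo "$num"                         # 188
```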

Understanding sed expression 's/^\.\///g'

I'm studying Bash programming and I find this example but I don't understand what it means:
filtered_files=`echo "$files" | sed -e 's/^\.\///g'`
In particular the argument passed to sed after '-e'.
It's a bad example; you shouldn't follow it.
First, understanding the sed expression at hand.
s/pattern/replacement/flags is a sed command, described in detail in man sed. In this case, pattern is a regular expression; replacement is what that pattern gets replaced with when/where found; and flags describe details about how that replacement should be done.
In this case, the s/^\.\///g breaks down as follows:
s is the sed command being run.
/ is the sigil used to separate the sections of this command. (Any character can be used as a sigil, and the person who chose to use / for this expression was, to be charitable, not thinking about what they were doing very hard).
^\.\/ is the pattern to be replaced. The ^ means that this replaces anything only at the beginning; \. matches only a period, vs . (which is regex for matching any character); and \/ matches only a / (vs /, which would go on to the next section of this sed command, being the selected sigil).
The next section is an empty string, which is why there's no content between the two following sigils.
g in the flags section indicates that more than one replacement can happen each line. In conjunction with ^, this has no meaning, since there can only be one beginning-of-the-line per line; further evidence that the person who wrote your example wasn't thinking much.
Using the same data structures, doing it better:
All of the below are buggy when handling arbitrary filenames, because storing arbitrary filenames in scalar variables is buggy in general.
Still using sed:
# Use printf instead of echo to avoid bugginess if your "files" string is "-n" or "-e"
# Use "#" as your sigil to avoid needing to backslash-escape all the "\"s
filtered_files=$(printf '%s\n' "$files" | sed -e 's#^[.]/##g')
Replacing sed with a bash builtin:
# This is much faster than shelling out to any external tool
filtered_files=${files//.\//}
Using better data structures
Instead of running
files=$(find .)
...instead:
files=( )
while IFS= read -r -d '' filename; do
files+=( "$filename" )
done < <(find . -print0)
That stores files in an array; it looks complex, but it's far safer -- works correctly even with filenames containing spaces, quote characters, newline literals, etc.
Also, this means you can do the following:
# Remove the leading ./ from each name; don't remove ./ at any other position in a name
filtered_files=( "${files[@]#./}" )
This means that a file named
./foo/this directory name (which has spaces) ends with a period./bar
will correctly be transformed to
foo/this directory name (which has spaces) ends with a period./bar
rather than
foo/this directory name (which has spaces) ends with a periodbar
...which would have happened with the original approach.
man sed. In particular:
-e script, --expression=script
add the script to the commands to be executed
And:
s/regexp/replacement/
Attempt to match regexp against the pattern space. If success-
ful, replace that portion matched with replacement. The
replacement may contain the special character & to refer to that
portion of the pattern space which matched, and the special
escapes \1 through \9 to refer to the corresponding matching
sub-expressions in the regexp.
In this case, it replaces any occurrence of ./ at the beginning of a line with the empty string, in other words removing it.
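A quick sketch of that behavior on two sample paths (only the leading ./ is removed):

```shell
#!/usr/bin/env bash
printf '%s\n' './foo/bar' 'baz/./qux' | sed -e 's/^\.\///g'
# foo/bar
# baz/./qux
```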

Extracting sub-strings in Unix

I'm using cygwin on Windows 7. I want to loop through a folder consisting of about 10,000 files and perform a signal processing tool's operation on each file. The problem is that the files names have some excess characters that are not compatible with the operation. Hence, I need to extract just a certain part of the file names.
For example if the file name is abc123456_justlike.txt.rna I need to use abc123456_justlike.txt. How should I write a loop to go through each file and perform the operation on the shortened file names?
I tried the cut -b1-10 command but that doesn't let my tool perform the necessary operation. I'd appreciate help with this problem.
Try some shell scripting, using the ${NAME%TAIL} parameter substitution: the contents of variable NAME are expanded, but any suffix material which matches the TAIL glob pattern is chopped off.
$ NAME=abc12345.txt.rna
$ echo ${NAME%.rna}
abc12345.txt
# process all files in the directory, taking off their .rna suffix
$ for x in *; do signal_processing_tool ${x%.rna} ; done
If there are variations among the file names, you can classify them with a case:
for x in * ; do
case $x in
*.rna )
# do something with .rna files
;;
*.txt )
# do something else with .txt files
;;
* )
# default catch-all-else case
;;
esac
done
Try sed:
echo a.b.c | sed 's/\.[^.]*$//'
The s command in sed performs a search-and-replace operation, in this case it replaces the regular expression \.[^.]*$ (meaning: a dot, followed by any number of non-dots, at the end of the string) with the empty string.
If you are not yet familiar with regular expressions, this is a good point to learn them. I find manipulating strings using regular expressions much more straightforward than using tools like cut (or their equivalents).
If you are trying to extract the list of filenames from a directory use the below command.
ls -ltr | awk -F " " '{print $9}' | cut -c1-10

Resources