I'm having trouble with Cygwin and quote marks.
This works:
grep FOO /path/to/files\ with\ spaces/*
grep FOO "/path/to/files with spaces/file1.txt"
But this does not:
grep FOO "/path/to/files with spaces/*"
grep FOO '/path/to/files with spaces/*'
The error message is: grep: /path/to/files with spaces/*: No such file or directory
Is the asterisk interpreted in some special way, am I missing something completely obvious, or is something weird going on?
Are you running inside bash? See man bash for the full details.
Basically, wildcard expansion is done by the shell on Unix, not by the commands themselves. Let's say I have four files, a, b, c and d, in my folder. set -x tells bash to echo the command it is actually going to run after it has munged what you typed, so we'll use that here.
$ set -x
$ echo *
+ echo a b c d
a b c d
The line starting with + is printed by bash: bash actually passes a b c d to echo. echo never sees the * you typed.
$ echo "*"
+ echo '*'
*
This time you told bash not to do filename expansion on the * by quoting it. Thus echo now sees the *.
As for your original query, try
grep FOO '/path/to/files with spaces/'*
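Quoting only the part of the path that contains spaces leaves the * outside the quotes, so the shell still expands it. A quick sketch with set -x (file1.txt is from your example, the other file name is made up for illustration):
$ set -x
$ grep FOO '/path/to/files with spaces/'*
+ grep FOO '/path/to/files with spaces/file1.txt' '/path/to/files with spaces/file2.txt'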
Related
I ran the following command to create a file called * (asterisk):
echo > '*'
Now I'm supposed to remove this file without using any quotations.
I know how to remove this by using quotations, but not sure how without using quotations.
I tried the following commands, which I was sure wouldn't work because of command-line expansion:
rm ./*
rm /*
If someone can help me with this, I would greatly appreciate it.
I think you're supposed to work this out yourself :-)
The simplest solution not involving quoting is to use the pattern [*]. Bracket expressions in a shell work much like character classes in regular expressions so that will match a file whose name is the single character *. Thus, you can delete your file with
rm [*]
Note that you cannot use that pattern to create a file named * because the shell substitutes words containing patterns with the name(s) of the files which match the pattern; if no such file exists, then the pattern is not matched and no substitution is performed. So if there is no file named *, then touch [*] will create a file named [*].
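To see that behaviour, here is a sketch you could run in an empty scratch directory (the exact ls output may differ between coreutils versions):
$ touch [*]    # nothing matches the pattern, so a file literally named [*] is created
$ echo > '*'   # now create a file whose name really is *
$ rm [*]       # this time the pattern matches the single-character name *, so only that file is removed
$ ls
[*]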
You could use history expansion. If the rm command directly follows the echo command, you can use !$:
echo > '*'
rm !$
!$ is shorthand for !!:$: repeat the last word ($) of the last command (!!).
If there are commands between the echo and the rm command, you can find the history number using fc -l:
$ echo > '*'
$ cmd1
$ cmd2
$ cmd3
$ fc -l
[...]
27628 echo > '*'
27629 cmd1
27630 cmd2
27631 cmd3
$ rm !27628:$
!27628 expands to the command with that number in the history, and $ is again the last word of that command.
If you have to run this in a script, you can't really look up the command number and insert it, but you can count the number of commands between the echo and the rm and use a relative event designator:
echo > '*'
cmd1
rm !-2:$
where !-2 refers to the command two entries back. Note that history expansion is disabled by default in non-interactive shells; use
set -o history -o histexpand
to enable both history recording and ! expansion.
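Putting that together, a self-contained sketch of such a script might look like this (ls merely stands in for the intervening command):
#!/bin/bash
set -o history -o histexpand   # both are off by default in non-interactive shells
echo > '*'
ls
rm !-2:$                       # !-2 is the echo command, :$ its last word: '*'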
You could use rm -i * if the number of files is not too big. This will ask for confirmation for every single file. Confirm deletion only for the file * and reject it for all others.
Hi… Need a little help here…
I tried to emulate DOS's dir command in Linux using a bash script. Basically it's just a wrapped ls command with some parameters plus summary info. Here's the script:
#!/bin/bash
# default to current folder
if [ -z "$1" ]; then var=.;
else var="$1"; fi
# check file existence
if [ -a "$var" ]; then
    # list contents with color, folder first
    CMD="ls -lgG $var --color --group-directories-first"; $CMD;
    # sum all files size
    size=$(ls -lgGp "$var" | grep -v / | awk '{ sum += $3 }; END { print sum }')
    if [ "$size" == "" ]; then size="0"; fi
    # create summary
    if [ -d "$var" ]; then
        folder=$(find $var/* -maxdepth 0 -type d | wc -l)
        file=$(find $var/* -maxdepth 0 -type f | wc -l)
        echo "Found: $folder folders "
        echo " $file files $size bytes"
    fi
# error message
else
    echo "dir: Error \"$var\": No such file or directory"
fi
The problem is that when the argument contains an asterisk (*), the ls within the script acts differently compared to the same ls command given directly at the prompt. Instead of returning the whole file list, the script only returns the first file. See the video below for the comparison in action. I don't know why it behaves like that.
Anyone knows how to fix it? Thank you.
Video: problem in action
UPDATE:
The problem has been solved. Thank you all for the answers. Now my script works as expected. See the video here: http://i.giphy.com/3o8dp1YLz4fIyCbOAU.gif
The asterisk * is expanded by the shell when it parses the command line. In other words, your script doesn't get a parameter containing an asterisk, it gets a list of files as arguments. Your script only works with $1, the first argument. It should work with "$@" instead.
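A minimal sketch of looping over all of the arguments instead of just the first one (the script name args.sh and the loop body are only illustrative):
#!/bin/bash
# print every argument the shell passed in, one per line
for arg in "$@"; do
    printf '%s\n' "$arg"
done
Called as ./args.sh test*, it prints one line per file that the shell's expansion of test* produced.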
This is because, when you retrieve $1, you assume the shell does NOT expand *.
In fact, when * (or another glob) matches, the shell expands it into one word per matching filename and passes those words to your script as $1, $2, etc.
You were lucky to simply get the first file; everything after $1 is silently ignored. And since the script also uses $var unquoted in places (the CMD string and the find commands), a path containing spaces will still break it.
Seriously, read this and especially this. Really.
And please don't do things like
CMD=whatever you get from user input; $CMD;
You are begging for trouble. Don't execute arbitrary strings built from user input.
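If you do need to build a command up before running it, a bash array is a safer pattern than a string. A sketch based on the ls line from the script (the options are just carried over from it):
# one array element per argument; spaces in $var stay inside a single argument
cmd=(ls -lgG --color --group-directories-first -- "$var")
"${cmd[@]}"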
Both of the answers above already answer your question, so I'm going to be a bit more verbose.
Your terminal is (probably) running the bash interpreter. This is the program that parses your input line(s) and does things based on your input.
When you enter a line, bash works through the following steps:
parsing and lexical analysis
expansion
brace expansion
tilde expansion
variable expansion
arithmetic and other substitutions
command substitution
word splitting
filename generation (globbing)
quote removal
Only after all of the above does bash
execute an external command, like ls or dir.sh, etc.,
or carry out the "internal" action for known keywords and builtins like echo, for, if, etc.
As you can see, the second-to-last step is filename generation (globbing). So, in your case, if test* matches some files, bash expands the wildcard characters (i.e. it does the globbing).
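You can watch that ordering with set -x. For instance, in a directory containing just two made-up files a1 and b2, brace expansion runs first and each resulting word is then globbed:
$ set -x
$ echo {a,b}*
+ echo a1 b2
a1 b2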
So,
when you enter dir.sh test*,
and test* matches some files,
bash does the expansion first,
and only afterwards executes the command dir.sh with the already-expanded filenames,
e.g. the script gets executed (in your case) as: dir.sh test.pas test.swift
BTW, it works exactly the same way for your ls example:
bash expands ls test* to ls test.pas test.swift,
then executes ls with those two arguments,
and ls prints the result for the two arguments it got.
In other words, ls never even sees the test* argument: whenever possible, bash expands the wildcard characters (* and ?).
Now back to your script: add the following line after the shebang:
echo "the $0 got these arguments: $@"
and you will immediately see the real arguments your script was executed with.
Also, in such cases it is good practice to try running the script in debug mode, e.g.
bash -x dir.sh test*
and you will see exactly what the script does.
You can do the same for your current interpreter, too: just enter
set -x
into the terminal and try running dir.sh test* again, and you will see how bash executes the dir.sh command. (To stop debug mode, just enter set +x.)
Everybody is giving you valuable advice which you should definitely follow!
But here is the real answer to your question.
To pass unexpanded arguments to any executable you need to single quote them:
./your_script '*'
The best solution I have is to use the eval command, in this way:
#!/bin/bash
cmd="some command \"with_quotes_and_asterisk_in_it*\""
echo "$cmd"
eval $cmd
The eval command concatenates its arguments into a single command line and has the shell read and execute it, so all of the usual expansions apply.
This solves my problem when I need to call a command with asterisk '*' in it from a script.
It has been a long time since I did much bash script writing.
This is a bash script to copy and rename files by deleting all before the first period delimiter:
#!/bin/bash
mkdir fullname
mv *.audio fullname
cd fullname
for x in * ;
do
cp $x ../`echo $x | cut -d "." -f 2-`
done
cd ..
ls
It works well for file names with no embedded spaces but not for those with spaces.
How can I change the code to fix this simple Linux bash script? Any suggestions for improving the code for other reasons would also be welcome.
Example filenames, some with embedded spaces and some not (from link)
http://www.homenetvideo.com/demo/index.php?/Radio%20%28VLC%29
Ambient.A6.SOMA Space Station.audio
Blues.B9.Blues Radio U.K.audio
Classical.K3.Radio Stephansdom - Vienna.audio
College.CI.KDVS U of California, Davis.audio
Country.Q1.K-FROG.audio
Easy.G4.WNYU.audio
Eclectic.M2.XPN.audio
Electronica.E2.Rinse.audio
Folk.F1.Radionomy.audio
Hiphop.H1.NPR.audio
Indie.I4.WAUG.audio
Jazz.J6.KCSM.audio
Latin.L3.Mega.audio
Misc.X7.Gaydio.audio
News.N9.KQED.audio
Oldies.O1.Lonestar.audio
OldTime.Y1.Roswell.audio
Progressive.P1.Aural Moon.audio
Rock.R8.WXRT.audio
Scanner.Z3.Montreal.audio
Soul.S1.181.FM.audio
Talk.T2.TWiT.audio
World.W3.Persian.audio
http://lh5.googleusercontent.com/-QjLEiAtT4cw/U98_UFcWvvI/AAAAAAAABv8/gyPhbg8s7Bw/w681-h373-no/homenet-radio.png
Whenever you deal with file names that might have spaces in them, you must reference them as "$x" rather than just $x. That's what's causing your cp command to fail.
Your echo command is also problematic. Although echo $x happens to do the right thing for simple spaces - it echoes a file named A B C as A B C - the unquoted expansion is word-split and re-joined with single spaces, so it will still fail if the name contains more than one consecutive space, or whitespace that isn't a simple space character.
Instead of passing the file names to external programs for processing, which always requires getting them through the whitespace-hostile command line, you should use bash built-in functions for string manipulations wherever possible, e.g. ${x%%foo}, ${x#bar} and similar functions. The man page describes them under "Parameter expansion".
Here's my suggestion:
#!/bin/bash
shopt -s nullglob
mkdir fullname
mv *.audio fullname
(
cd fullname || exit
for x in *; do
cp "$x" "../${x#*.}"
done
)
ls
nullglob prevents * from expanding to itself when no file matches it; it is optional here.
( ... ) runs the enclosed commands in a subshell, which saves you from having to change back to the original directory.
|| exit terminates the subshell if cd fails to change directory.
${x#*.} expands to $x with everything up to and including the first . removed.
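For example, with one of the names from the list above:
$ x='Ambient.A6.SOMA Space Station.audio'
$ echo "${x#*.}"
A6.SOMA Space Station.audio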
I understand that one technique for dealing with spaces in filenames is to enclose the file name in single quotes (').
Why is it that the following code, called "echo.sh", works on a directory containing filenames with spaces, but the program "ls.sh" does not work, where the only difference is 'echo' replaced with 'ls'?
echo.sh
#!/bin/sh
for f in *
do
echo "'$f'"
done
Produces:
'a b c'
'd e f'
'echo.sh'
'ls.sh'
But, "ls.sh" fails:
#!/bin/sh
for f in *
do
ls "'$f'"
done
Produces:
ls: cannot access 'a b c': No such file or directory
ls: cannot access 'd e f': No such file or directory
ls: cannot access 'echo.sh': No such file or directory
ls: cannot access 'ls.sh': No such file or directory
You're actually adding redundant ' characters (which your echo invocation shows).
try this:
#!/bin/sh
for f in *
do
ls "$f"
done
change the following line from
ls "'$f'"
into
ls "$f"
Taking a closer look at the output of your echo.sh script, you might notice the result is probably not quite what you expected, as every line printed is surrounded by ' characters, like:
'file-1'
'file-2'
and so on.
Files with those names don't exist on your system. When you pass them to ls, it tries to look up a file named 'file-1' (quotes included) instead of file-1, and a file with such a name just doesn't exist.
In your example you just added one pair of 's too many. A single pair of double quotes " is enough to take care of spaces that might be contained in the file names:
#!/bin/sh
for f in *
do
ls "$f"
done
This will work fine even with file names containing spaces. The problem you are trying to avoid would only arise if you didn't use the double quotes around $f, like this:
#!/bin/sh
for f in *
do
ls $f # you might get into trouble here
done
What about this ? =)
#!/bin/sh
for f in *; do
printf -- '%s\n' "$f"
done
I want to loop through a path list that I have gotten from an echo $VARIABLE command.
For example:
echo $MANPATH will return
/usr/lib:/usr/sfw/lib:/usr/info
So that is three different paths, each separated by a colon. I want to loop though each of those paths. Is there a way to do that? Thanks.
Thanks for all the replies so far, it looks like I actually don't need a loop after all. I just need a way to take out the colon so I can run one ls command on those three paths.
You can set the Internal Field Separator:
( IFS=:
for p in $MANPATH; do
echo "$p"
done
)
I used a subshell so the change in IFS is not reflected in my current shell.
The canonical way to do this, in Bash, is to use the read builtin appropriately:
IFS=: read -r -d '' -a path_array < <(printf '%s:\0' "$MANPATH")
This is the only robust solution: it will do exactly what you want, namely split the string on the delimiter : while being safe with respect to spaces, newlines, and glob characters like *, [ ], etc. (unlike the other answers, which are all broken).
After this command, you'll have an array path_array, and you can loop on it:
for p in "${path_array[@]}"; do
printf '%s\n' "$p"
done
You can use Bash's pattern substitution parameter expansion to populate your loop variable. For example:
MANPATH=/usr/lib:/usr/sfw/lib:/usr/info
# Replace colons with spaces to create list.
for path in ${MANPATH//:/ }; do
echo "$path"
done
Note: Don't enclose the substitution expansion in quotes. You want the expanded values from MANPATH to be interpreted by the for-loop as separate words, rather than as a single string.
This way you can safely loop through $PATH in a single pass, and $IFS remains unchanged inside and outside the loop.
while IFS=: read -d: -r path; do # `$IFS` is only set for the `read` command
echo $path
done <<< "${PATH:+"${PATH}:"}" # append an extra ':' if `$PATH` is set
You can check the value of $IFS,
IFS='xxxxxxxx'
while IFS=: read -d: -r path; do
echo "${IFS}${path}"
done <<< "${PATH:+"${PATH}:"}"
and the output will be something like this.
xxxxxxxx/usr/local/bin
xxxxxxxx/usr/bin
xxxxxxxx/bin
Reference to another question on StackExchange.
for p in $(echo $MANPATH | tr ":" " ") ;do
echo $p
done
IFS=:
arr=(${MANPATH})
for path in "${arr[@]}" ; do # <- quotes required
echo $path
done
... it does take care of spaces :o) but it also adds empty elements if you have something like:
:/usr/bin::/usr/lib:
... then indexes 0 and 2 will be empty (''). There is no index 4, because a trailing delimiter does not produce an extra empty field during word splitting.
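You can check what actually ended up in the array with declare -p (a sketch; the exact formatting of the output varies a little between bash versions, and note that IFS stays ':' in your shell afterwards):
$ MANPATH=':/usr/bin::/usr/lib:'
$ IFS=:
$ arr=($MANPATH)
$ declare -p arr
declare -a arr=([0]="" [1]="/usr/bin" [2]="" [3]="/usr/lib")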
This can also be solved with Python, on the command line:
python -c "import os,sys;[os.system(' '.join(sys.argv[1:]).format(p)) for p in os.getenv('PATH').split(':')]" echo {}
Or as an alias:
alias foreachpath="python -c \"import os,sys;[os.system(' '.join(sys.argv[1:]).format(p)) for p in os.getenv('PATH').split(':')]\""
With example usage:
foreachpath echo {}
The advantage of this approach is that {} will be replaced by each path in succession. This can be used to construct all sorts of commands, for instance to list the size of all files and directories in the directories in $PATH, including directories with spaces in the name:
foreachpath 'for e in "{}"/*; do du -h "$e"; done'
Here is an example that shortens the length of the $PATH variable by creating symlinks to every file and directory in the $PATH in $HOME/.allbin. This is not useful for everyday usage, but may be useful if you get the too many arguments error message in a docker container, because bitbake uses the full $PATH as part of the command line...
mkdir -p "$HOME/.allbin"
python -c "import os,sys;[os.system(' '.join(sys.argv[1:]).format(p)) for p in os.getenv('PATH').split(':')]" 'for e in "{}"/*; do ln -sf "$e" "$HOME/.allbin/$(basename $e)"; done'
export PATH="$HOME/.allbin"
This should also, in theory, speed up regular shell usage and shell scripts, since there are fewer paths to search for every command that is executed. It is pretty hacky, though, so I don't recommend that anyone shorten their $PATH this way.
The foreachpath alias might come in handy, though.
Combining ideas from:
https://stackoverflow.com/a/29949759 - gniourf_gniourf
https://stackoverflow.com/a/31017384 - Yi H.
code:
PATHVAR='foo:bar baz:spam:eggs:' # demo path with space and empty
printf '%s:\0' "$PATHVAR" | while IFS=: read -d: -r p; do
echo $p
done | cat -n
output:
1 foo
2 bar baz
3 spam
4 eggs
5
You can use Bash's ${parameter//pattern/string} substitution in a for loop to accomplish this:
for p in ${PATH//:/$'\n'} ; do
echo $p;
done
OP's update wants to ls the resulting folders, and has pointed out that ls only requires a space-separated list.
ls $(echo $PATH | tr ':' ' ') is nice and simple and should fit the bill nicely.