I have to find files with selected permissions and list them as well as their number. Therefore I would like to pipe the result of the find command to the terminal and to the next command, whose output I want to store in a variable so I can display it nicely later. I would like to have something like:
for i in "$#"
do
find $filename -perm $i | tee /dev/tty | var=${wc -l}
echo "number of files with $i permission: $var"
done
but the var=${wc -l} part doesn't work. Please help.
EDIT
I'm aware that I can put entire output of the command to a variable like
var=$(find $filename -perm $i | tee /dev/tty | wc -l)
but then I need only the result of wc -l. How would I get just this number out of that variable? And would it be possible to display it in reverse order, the number first and then the list?
Copying To A TTY (Not Stdout!)
Pipeline components run in subshells, so even if they do assign shell variables (and the syntax for that was wrong), those shell variables are unset as soon as the pipeline exits (since the subshells only live as long as the pipeline does).
Thus, you need to capture the output of the entire pipeline into your variable:
var=$(find "$filename" -perm "$i" | tee /dev/tty | wc -l)
Personally, btw, I'd be teeing to /dev/stderr or /dev/fd/2 to avoid making behavior dependent on whether a TTY is available.
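For example, the same capture with the copy going to stderr instead of a TTY:

var=$(find "$filename" -perm "$i" | tee /dev/stderr | wc -l)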
Actually Piping To Stdout
With bash 4.1 or newer, automatic file descriptor allocation lets you do the following:
exec {stdout_copy}>&1 # make the FD named in "$stdout_copy" a copy of FD 1
# tee over to "/dev/fd/$stdout_copy"
var=$(find "$filename" -perm "$i" | tee /dev/fd/"$stdout_copy" | wc -l)
exec {stdout_copy}>&- # close that copy previously created
echo "Captured value of var: $var"
With an older version of bash, you'd need to allocate a FD yourself -- in the below example, I'm choosing file descriptor number 3 (as 0, 1 and 2 are reserved for stdin, stdout and stderr, respectively):
exec 3>&1 # make copy of stdout
# tee to that copy with FD 1 going to wc in the pipe
var=$(find "$filename" -perm "$i" | tee /dev/fd/3 | wc -l)
exec 3>&- # close copy of stdout
Related
I have been trying to get a bash script to output different things to the terminal and to a logfile, but am unsure of what command to use.
For example,
#!/bin/bash
freespace=$(df -h / | grep -E "/" | awk '{print $4}')
greentext="\033[32m"
bold="\033[1m"
normal="\033[0m"
logdate=$(date +"%Y%m%d")
logfile="$logdate"_report.log
exec > >(tee -i $logfile)
echo -e $bold"Quick system report for "$greentext"$HOSTNAME"$normal
printf "\tSystem type:\t%s\n" $MACHTYPE
printf "\tBash Version:\t%s\n" $BASH_VERSION
printf "\tFree Space:\t%s\n" $freespace
printf "\tFiles in dir:\t%s\n" $(ls | wc -l)
printf "\tGenerated on:\t%s\n" $(date +"%m/%d/%y") # US date format
echo -e $greentext"A summary of this info has been saved to $logfile"$normal
I want to omit the last output (the echo "A summary...") from the logfile while still displaying it in the terminal. Is there a command to do so? It would be great if a general solution could be provided instead of a specific one, because I want to apply this to other scripts.
EDIT 1 (after applying >&6):
Files in dir: 7
A summary of this info has been saved to 20160915_report.log
Generated on: 09/15/16
One option:
exec 6>&1 # save the existing stdout
exec > >(tee -i $logfile) # like you had it
#... all your outputs
echo -e $greentext"A summary of this info has been saved to $logfile"$normal >&6
# writes to the original stdout, saved in file descriptor 6 ------------^^^
The >&6 sends echo's output to the saved file descriptor 6 (the terminal, if you're running this from an interactive shell) rather than to the output path set up by tee (which is on file descriptor 1). Tested on bash 4.3.46.
References: "Using exec" and "I/O Redirection"
Edit: As the OP found, the >&6 message is not guaranteed to appear after the lines printed by tee off stdout. One option is to use script (e.g., as in the answers to this question) instead of tee, and then print the final message outside of script. Per the docs, the stdbuf answers to that question won't work with tee.
Try a dirty hack:
#... all your outputs
echo >&6 # <-- New line
echo -e $greentext ... >&6
Or, equally hackish (note that, per the OP, this worked):
#... all your outputs
sleep 0.25s # or whatever time you want <-- New line
echo -e ... >&6
I have to test whether a pathname is a regular file and whether its length is greater than 50 bytes. For this reason I do this:
if [[ -f $path && `wc -c < $path` -gt 50 ]]; then ......
and it works, but out of curiosity I also tried this:
if [[ -f $path && `$path > wc -c` -gt 50 ]]; then ......
but it doesn't work and I don't understand why.
For this reason I'm asking about the difference between the < and > operators in Bash.
< is "read from" -- redirecting input, while > is "write to" -- redirecting output. Both are followed by the name of the file to use. So
wc -c < $path
runs the wc command, reading from the file $path
$path > wc -c
runs the $path command, writing to the file wc
These operators are not commutative (the positions aren't swappable).
wc -c < $path means launch wc and use the file at $path as the input.
$path > wc -c means launch the executable at $path (which in your case isn't an executable) and send its output to the file named wc.
As you can see the second one doesn't really make sense. Always make the executable the first operand (argument), and the file you are reading from or writing to the second operand.
< instructs the shell to take the contents of the file on the right side of the operator and provide them as input to the command on the left side.
> instructs the shell to take the output of the command on the left side and store it in the file named on the right side.
Accordingly, the command wc -c < $path is equivalent to cat $path | wc -c. $path > wc -c would mean "run the command $path with the argument -c and store the output in a file named wc."
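A quick illustration of the direction of each operator (file names are hypothetical):

wc -c < notes.txt          # wc reads its input FROM notes.txt
echo hello > greeting.txt  # echo's output is written TO greeting.txt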
There is an input like:
folder1
folder2
folder3
...
foldern
I would like to iterate over it, taking multiple lines at once, and process each line: remove the first / (and more later, but for now this is enough) and echo the result. Iterating in bash with a single thread can be slow sometimes. The alternative would be to split the input file into N pieces and run the same script N times with different input and output, merging the results at the end.
I was wondering if this is possible with xargs.
Update 1:
Input:
/a/b/c
/d/f/e
/h/i/j
Output:
mkdir a/b/c
mkdir d/f/e
mkdir h/i/j
Script:
for i in $(<test); do
    echo mkdir $(echo $i | sed 's/\///')
done
Doing it with xargs does not work as I would expect:
xargs -a test -I line --max-procs=2 echo mkdir $(echo $line | sed 's/\///')
Obviously I need a way to execute the sed on the input for each line, but using $() does not work.
You probably want:
--max-procs=max-procs, -P max-procs
Run up to max-procs processes at a time; the default is 1. If
max-procs is 0, xargs will run as many processes as possible at
a time. Use the -n option with -P; otherwise chances are that
only one exec will be done.
http://unixhelp.ed.ac.uk/CGI/man-cgi?xargs
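For instance, a sketch combining -P with -n, using a small inline shell to do the per-line substitution (xargs cannot run the sed itself; ${1#/} strips a leading slash, matching the example input):

xargs -a test -n 1 -P 2 sh -c 'echo mkdir "${1#/}"' _

Here _ becomes $0 of the inline shell and each input line arrives as $1.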
With GNU Parallel you can do:
cat file | perl -pe s:/:: | parallel mkdir -p
or:
cat file | parallel mkdir -p {= s:/:: =}
I found that a piece of my bash script causes a hang-up. I've extracted it here:
#!/bin/bash
cat << EndOfFspreadFile >> ./myscript.sh
echo Enter Source Path :
read SRCPATH
FILECNT=`find $SRCPATH/* 2>/dev/null | wc -l`
FILECNTERR=`find $SRCPATH/* 2>&1 | grep "find:" | wc -l`
echo count : $FILECNT
echo problems : $FILECNTERR
EndOfFspreadFile
echo done
This script is expected to just append the script part in the here-doc block to the myscript.sh file. But it just HANGS!
Thanks!
Your $ variables and back quotes get expanded when the here-doc is read, so you need to escape them in the script.
Right now you end up searching the entire filesystem: since $SRCPATH is empty at the time the here-doc is expanded, find $SRCPATH/* 2>/dev/null | wc -l gets executed as find /* 2>/dev/null | wc -l, which is why the script appears to hang.
Here is how you can rewrite it (just one line example):
FILECNT=\$(find \$SRCPATH/* 2>/dev/null | wc -l)
By the way, it's easy to see what's actually being executed if you run bash -x <your script>.
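Alternatively (a standard shell feature, not specific to this script): quote the here-doc delimiter, and nothing inside the here-doc will be expanded, so no escaping is needed at all:

cat << 'EndOfFspreadFile' >> ./myscript.sh
FILECNT=$(find $SRCPATH/* 2>/dev/null | wc -l)
EndOfFspreadFile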
Being quite a newbie in Linux, I have the following question.
I have a list of files (this time resulting from svn status) and I want to create a script to loop over them all and replace tabs with 4 spaces.
So I want from
....
D HTML/templates/t_bla.tpl
M HTML/templates/t_list_markt.tpl
M HTML/templates/t_vip.tpl
M HTML/templates/upsell.tpl
M HTML/templates/t_warranty.tpl
M HTML/templates/top.tpl
A + HTML/templates/t_r1.tpl
....
to something like
for i in <files>; expand -t4;do cp $i /tmp/x;expand -t4 /tmp/x > $i;done;
but I don't know how to do that...
You can use this command:
svn st | cut -c8- | xargs ls
This drops the leading status columns (everything before character 8), leaving only a list of file names without the Subversion flags. You can also add grep before cut to filter only some types of changes, like ^M. xargs will pass the list of files as arguments to a given command (ls in this case).
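For example, to restrict this to modified files only, the grep filter mentioned above would look like:

svn st | grep '^M' | cut -c8- | xargs ls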
I would use sed, like so:
for i in $files
do
    sed -i 's/\t/    /g' "$i"
done
That big block in there is four spaces. ;-)
I haven't tested that, but it should work. And I'd back up your files just in case. The -i flag means that it will do the replacements on the files in-place, but if it messes up, you'll want to be able to restore them.
This assumes that $files contains the filenames. However, you can also use Adam's approach to grabbing the filenames; just feed them to the sed command above via xargs instead of the loop.
Not asking for any votes, but for the record I'll post the combined answer from #Adam Byrtek and #Dan Fego:
svn st | cut -c8- | xargs sed -i 's/\t/    /g'
I could not test it with real subversion output, but this should do the job:
svn st | cut -c8- | while read file; do expand -t4 "$file" > "$file-temp"; mv "$file-temp" "$file"; done
svn st | cut -c8- will generate a list of files without subversion flags. read will then save each entry in the variable $file and expand is used to replace the tabs with four spaces in each file.
Not quite what you're asking, but perhaps you should be looking into commit hooks in subversion?
You could create a hook to block check-ins of any code that contains tabs at the start of a line, or contains tabs at all.
In the repo directory on your subversion server there'll be a directory called hooks. Put something in there which is executable called 'pre-commit' and it'll be run before anything is allowed to be committed. It can return a status to block the commit if you wish.
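A rough, untested sketch of such a tab check (modeled on the PHP-lint hook below; the svnlook path is an assumption):

#!/bin/bash
# Sketch: reject the commit if any changed file contains a tab
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook

for FILE in $($SVNLOOK changed -t "$TXN" "$REPOS" | awk '{print $2}')
do
    # svnlook cat fails on directories and deletions; ignore those here
    if $SVNLOOK cat -t "$TXN" "$REPOS" "$FILE" 2>/dev/null | grep -q $'\t'
    then
        echo "Tabs are not allowed: $FILE" 1>&2
        exit 1
    fi
done
exit 0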
Here's what I have to stop php files with syntax errors being checked in:
#!/bin/bash
REPOS="$1"
TXN="$2"
PHP="/usr/bin/php"
SVNLOOK=/usr/bin/svnlook
$SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" > /dev/null
if [ $? -ne 0 ]
then
    echo 1>&2
    echo "You must enter a comment" 1>&2
    exit 1
fi

CHANGED=`$SVNLOOK changed -t "$TXN" "$REPOS" | awk '{print $2}'`
for LINE in $CHANGED
do
    FILE=`echo $LINE | egrep '\.php$'`
    if [ $? == 0 ]
    then
        MESSAGE=`$SVNLOOK cat -t "$TXN" "$REPOS" "${FILE}" | $PHP -l`
        if [ $? -ne 0 ]
        then
            echo 1>&2
            echo "***********************************" 1>&2
            echo "PHP error in: ${FILE}:" 1>&2
            echo "$MESSAGE" | sed "s| -| $FILE|g" 1>&2
            echo "***********************************" 1>&2
            exit 1
        fi
    fi
done
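One more note: the hook must be executable, or subversion will skip it silently:

chmod +x /path/to/repo/hooks/pre-commit  # adjust the path to your repository's hooks directory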