How do we use the piped command in Perl? [closed] - linux

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 1 year ago.
This is the command -
$ find /var/opt/ -type f -mtime -1 -print0 | du -s |cut -f1
498172
When I run it from the command line on Linux, it prints the size.
I want to run the same command from Perl and need to capture the output in a variable.
I tried this:
my $cmd = "find /var/opt/ -type f -mtime -1 -print0 | du -s |cut -f1";
my @output = `$cmd`;
I am receiving an entirely different output - '\20' instead of 498172.
Can someone help me with what I am missing?

You can also calculate the size in the Perl script without needing to call the external command du:
use feature qw(say);
use strict;
use warnings;
use File::Find;
my $size = 0;
my $dir = '/var/opt';
find(sub {-f $_ && -M _ < 1 && do {$size += -s _ }}, $dir);
say int($size/1024), " KiB";
Note this reports the apparent size, not the disk usage. See How to get the actual directory size (out of du)? for more information.
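As an aside, the number from the original pipeline is suspect for another reason: du does not read file names from standard input, so `find … | du -s` just reports the size of the current directory and ignores find's output entirely. A minimal sketch of a corrected pipeline (assuming GNU find/xargs/du, and a throwaway directory for the demo) that is then easy to wrap in Perl backticks:

```shell
# du ignores stdin, so the NUL-separated list from find must be passed
# as arguments, e.g. via xargs -0. The -c flag adds a grand-total line.
dir=$(mktemp -d)                      # throwaway directory for the demo
touch "$dir/a" "$dir/b"
total=$(find "$dir" -type f -mtime -1 -print0 | xargs -0 du -ck | tail -1 | cut -f1)
echo "$total"                         # grand total in KiB
rm -rf "$dir"
```

In Perl, `my $total = `find … -print0 | xargs -0 du -ck | tail -1 | cut -f1`;` followed by `chomp $total;` would then capture a single number.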


How to print output twice in Linux? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
Which command is used to print the file name twice on the output?
I want to write a pipe that lists all the files beginning with the character 'P' on the screen twice in succession.
Something like:
ls -1 P* | while read -r i ; do echo "$i" "$i" ; done
ā€¦ should do the trick.
ls | sed -E 's/^(P.*)/\1 \1/'
ls, when its output goes to a pipe, prints one file per line.
We use sed with extended RE support (-E).
We capture the name of any file beginning with P: ^(P.*)
and replace it with itself, a space, then itself again; \1 is a back-reference to what is captured in the parentheses ( ... ).
I suggest using the find utility:
find . -maxdepth 1 -type f -name 'P*' -print -print
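As a quick sanity check, the find variant really does emit each matching name exactly twice, and only for names starting with P (a sketch using a throwaway directory):

```shell
dir=$(mktemp -d)
touch "$dir/Pfile" "$dir/other"
cd "$dir" || exit 1
# -print -print: each matched path is printed twice in succession
out=$(find . -maxdepth 1 -type f -name 'P*' -print -print)
echo "$out"
```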

shell - faster alternative to "find"

I'm writing a shell script which should output the oldest file in a directory.
This directory is on a remote server and has (worst case) between 1000 and 1500 (temporary) files in it. I have no access to the server and no influence on how the files are stored. The server is connected through a stable but not very fast line.
The result of my script is passed to a monitoring system which in turn alerts the staff if there are too many (= unprocessed) files in the directory.
Unfortunately the monitoring system only allows a maximum execution time of 30 seconds for my script before a timeout occurs.
This wasn't a problem when testing with small directories; testing with the target directory over the remote-mounted directory (approx. 1000 files), it is.
So I'm looking for the fastest way to get things like "the oldest / newest / largest / smallest" file in a directory (not recursive) without using find or sorting the output of ls.
Currently I'm using this statement in my sh script:
old)
# return oldest file (age in seconds)
oldest=`find $2 -maxdepth 1 -type f | xargs ls -tr | head -1`
timestamp=`stat -f %B $oldest`
curdate=`date +%s`
echo `expr $(($curdate-$timestamp))`
;;
and I tried this one:
gfind /livedrive/669/iwt.save -type f -printf "%T# %P\n" | sort -nr | tail -1 | cut -d' ' -f 2-
which are two of many variants of statements one can find using Google.
Additional information:
I'm writing this on a FreeBSD box with sh and bash installed. I have full access to the box and can install programs if needed. For reference: gfind is the GNU find utility as known from Linux, as FreeBSD ships a different find by default.
any help is appreciated
with kind regards,
dura-zell
For the oldest/newest file issue, you can use the -t option of ls, which sorts the output by modification time.
-t Sort by descending time modified (most recently modified first).
If two files have the same modification timestamp, sort their
names in ascending lexicographical order. The -r option reverses
both of these sort orders.
For the size issue, you can use -S to sort files by size.
-S Sort by size (largest file first) before sorting the operands in
lexicographical order.
Notice that for both cases, -r will reverse the order of the output.
-r Reverse the order of the sort.
Those options are available on FreeBSD and Linux; and must be pretty common in most implementations of ls.
Let us know if it's fast enough.
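A quick sketch of the approach (the `touch -d` timestamp is a GNU-ism used only to set up the demo; on FreeBSD you would use `touch -t`):

```shell
dir=$(mktemp -d)
touch -d '2 days ago' "$dir/old"      # GNU touch, demo setup only
touch "$dir/new"
oldest=$(ls -tr "$dir" | head -1)     # -t: newest first; -r reverses, so oldest first
echo "$oldest"
```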
In general, you shouldn't be parsing the output of ls. In this case, it's just acting as a wrapper around stat anyway, so you may as well just call stat on each file, and use sort to get the oldest.
old) now=$(date +%s)
read name timestamp < <(stat -f "%N %B" "$2"/* | sort -k2,2n)
echo $(( $now - $timestamp ))
The above is concise, but doesn't distinguish between regular files and directories in the glob. If that is necessary, stick with find, but use a different form of -exec to minimize the number of calls to stat:
old ) now=$(date +%s)
read name timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f "%N %B" '{}' + | sort -k2,2n)
echo $(( $now - $timestamp ))
(Neither approach works if a filename contains a newline, although since you aren't using the filename in your example anyway, you can avoid that problem by dropping %N from the format and just sorting the timestamps numerically. For example:
read timestamp < <(stat -f %B "$2"/* | sort -n)
# or
read timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f %B '{}' + | sort -n)
)
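On a GNU/Linux system the same idea reads slightly differently, since GNU stat spells the format option -c rather than BSD's -f, and the mtime-in-epoch-seconds field is %Y. A pipeline sketch (again, `touch -d` is only demo setup):

```shell
dir=$(mktemp -d)
touch -d '2000-01-01' "$dir/old"      # GNU touch, demo setup only
touch "$dir/new"
# epoch mtime and name per file, numerically sorted; first line is the oldest
oldest=$(stat -c '%Y %n' "$dir"/* | sort -n | head -1 | cut -d' ' -f2-)
echo "$oldest"
```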
Can you try creating a shell script that resides on the remote host and, when executed, provides the required output? Then from your local machine just use ssh or something like that to run it. That way the script will run locally there. Just a thought :-)

listing files in unix and saving the output in a variable (oldest file fetching for a particular extension)

This might be a very simple thing for a shell scripting programmer, but I am pretty new to it. I was trying to execute the below command in a shell script and save the output into a variable:
inputfile=$(ls -ltr *.{PDF,pdf} | head -1 | awk '{print $9}')
The command works fine when I fire it from the terminal but fails when executed through a shell script (sh). Why is it that the command fails? Does it mean the shell script doesn't support the command, or am I doing it wrong? Also, how do I know whether a command will work in a shell script or not?
Just to give you a glimpse of my requirement, I was trying to get the oldest file from a particular directory (I also want to make sure upper case and lower case extensions are handled). Is there any other way to do this ?
The above command will work correctly only if BOTH *.pdf and *.PDF files exist in the directory you are currently in.
If you would like to execute it in a directory with only one of those you should consider using e.g.:
inputfiles=$(find . -maxdepth 1 -type f \( -name "*.pdf" -or -name "*.PDF" \) | xargs ls -1tr | head -1 )
NOTE: The above command doesn't work with files with new lines, or with long list of found files.
Parsing ls is always a bad idea. You need another strategy.
How about you make a function that gives you the oldest file among the ones given as arguments? The following works in Bash (adapt to your needs):
get_oldest_file() {
# get oldest file among files given as parameters
# return is in variable get_oldest_file_ret
local oldest f
for f do
[[ -e $f ]] && [[ ! $oldest || $f -ot $oldest ]] && oldest=$f
done
get_oldest_file_ret=$oldest
}
Then just call as:
get_oldest_file *.{PDF,pdf}
echo "oldest file is: $get_oldest_file_ret"
Now, you probably don't want to use brace expansions like this at all. In fact, you very likely want to use the shell options nocaseglob and nullglob:
shopt -s nocaseglob nullglob
get_oldest_file *.pdf
echo "oldest file is: $get_oldest_file_ret"
If you're using a POSIX shell, it's going to be a bit trickier to have the equivalent of nullglob and nocaseglob.
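A rough stand-in for both options in a plain POSIX-ish shell is to spell out the two globs and skip any pattern that didn't match (note: the -ot test is widely supported by shell builtins, but not strictly required by POSIX):

```shell
dir=$(mktemp -d); cd "$dir" || exit 1
touch old.PDF
sleep 1                                # ensure the mtimes differ
touch new.pdf
oldest=
for f in ./*.pdf ./*.PDF; do
    [ -e "$f" ] || continue            # skip unmatched literal pattern (nullglob stand-in)
    if [ -z "$oldest" ] || [ "$f" -ot "$oldest" ]; then
        oldest=$f
    fi
done
echo "$oldest"
```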
Is perl an option? It's ubiquitous on Unix.
I would suggest:
perl -e 'print ((sort { -M $b <=> -M $a } glob ( "*.{pdf,PDF}" ))[0]);';
Which:
uses glob to fetch all files matching the pattern;
sorts them using -M, which is the relative modification time (in days);
fetches the first element ([0]) off the sorted list;
prints that.
As @gniourf_gniourf says, parsing ls is a bad idea, as is leaving globs unquoted and generally not accounting for funny characters in file names.
find is your friend:
#!/bin/sh
get_oldest_pdf() {
#
# echo path of oldest *.pdf (case-insensitive) file in current directory
#
find . -maxdepth 1 -mindepth 1 -iname "*.pdf" -printf '%T# %p\n' \
| sort -n \
| tail -1 \
| cut -d' ' -f2-
}
whatever=$(get_oldest_pdf)
Notes:
find has numerous ways of formatting the output, including
things like access time and/or write time. I used '%T# %p\n',
where %T# is the last write time in UNIX time format, including the fractional part.
This will never contain a space, so it's safe to use as a separator.
Numeric sort and tail get the last item, sorting by the time,
cut removes the time from the output.
I used the (IMO) much easier to read/maintain pipe notation, with the help of \.
The code should run on any POSIX shell.
You could easily adjust the function to parametrize the pattern,
the time used (access/write), the search depth, or the starting dir.

what is the working of this command ls . | xargs -i -t cp ./{} $1 [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I am a newbie to bash scripting. While studying Advanced Bash Scripting I came across this command. I don't understand how the command works or what the use of the curly braces is. Thanks in advance.
Your command:
ls . | xargs -i -t cp ./{} $1
could be divided into the following parts:
ls .
List the current directory (this will list all the files/directories but the hidden ones)
| xargs -i -t cp ./{} $1
Basically xargs breaks up the piped output (from ls in this case) and provides each element of the list as input to the following command (cp in this case). The -t option shows on stderr what xargs is actually executing. The -i is used for string replacement; in this case, since no replacement string has been provided, it substitutes {} with the input. $1 is the destination where your files will be copied (I guess in this case it should be a directory for the command to make sense; otherwise you will be copying all the files to the same destination).
So for example, if you have, let's say, a directory that has files called a, b, c, when you run this command it will perform the following:
cp ./a $1
cp ./b $1
cp ./c $1
NOTE:
The -i option is deprecated, -I (uppercase i) should be used instead
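A sketch of the same command with the non-deprecated -I spelling, copying between two throwaway directories:

```shell
src=$(mktemp -d); dest=$(mktemp -d)
touch "$src/a" "$src/b"
cd "$src" || exit 1
# -I {} names the replacement token explicitly; -t echoes each cp to stderr
ls . | xargs -I {} -t cp ./{} "$dest"
ls "$dest"
```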

How to color ls -l command's columns [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers.
Closed 9 years ago.
I wonder if it is possible to have ls -l colored. I'm not talking about --color, of course.
I found a useful alias to display octal permissions in an ls -l listing; now, is it possible to color it? In the same way, when I do ls -l, is it possible to display only the permissions in red or something?
I don't know how to use color codes, but grep has a --color option.
If the first line of ls -l is not important to you, you can consider using grep:
ls -l | grep --color=always '[d-][r-][w-][x-][r-][w-][x-][r-][w-][x-]'
or in shorter form:
ls -l | grep --color=always '[d-]\([r-][w-][x-]\)\{3\}'
You can use several utilities to do it, like piping the output of ls (OPTIONS...) to supercat (after defining the rules), or to highlight (after defining the rules).
Or use awk/sed to pretty-print based on regexes. E.g. with gensub in awk, you can insert ANSI color codes into the output...
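For instance, here is a small awk sketch that wraps the permissions column in red ANSI escape codes (plain printf escapes, no gensub needed; the NR > 1 guard skips the "total" line):

```shell
dir=$(mktemp -d); touch "$dir/f"
out=$(ls -l "$dir" | awk 'NR > 1 {
    printf "\033[31m%s\033[0m", $1      # permissions column in red
    for (i = 2; i <= NF; i++) printf " %s", $i
    print ""
}')
printf '%s\n' "$out"
```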
The first thing that came into my mind is that you can use --color=auto for this:
ls -l --color=auto
And it can be handy to create an alias:
alias lls='ls -l --color=auto'
However, I see you don't want that. For that we have to create a more complex function that uses echo -e with color codes:
print_line () {
red='\e[0;31m'
endColor='\e[0m'
first=${1%% *}
rest=${1#* }
echo -e "${red}$first${endColor} $rest"
}
lls () {
IFS=$'\n'; while read line;
do
# echo "$line"
print_line "$line"
done <<< "$(find $1 -maxdepth 1 -printf '%M %p\n')"
}
If you store them in ~/.bashrc and source it (. ~/.bashrc) then whenever you do lls /some/path it will execute these functions.
If you're asking whether there is an option to specify custom column-specific colors in ls, I don't think so. But you can do something like:
> red() { red='\e[0;31m'; echo -ne "${red}$1 "; tput sgr0; echo "${*:2}"; }
> while read -r line; do red $line; done < <(ls -l)
