How can I format the output of a stat expression in Linux GNOME Terminal? - linux

I am really a newbie in Linux (Fedora 20) and I am trying to learn the basics.
I have the following command:
echo "`stat -c "The file "%n" was modified on ""%y" *Des*`"
This command returns this output:
The file Desktop was modified on 2014-11-01 18:23:29.410148517 +0000
I want to format it like this:
The file Desktop was modified on 2014-11-01 at 18:23
How can I do this?

You can't really do that with stat (unless you have a smart version of stat I'm not aware of).
With date
Very likely, your date is smart enough and handles the -r switch.
date -r Desktop +"The file Desktop was modified on %F at %R"
Because of your glob, you'll need a loop to handle all files that match *Des* (in Bash):
shopt -s nullglob
for file in *Des*; do
date -r "$file" +"The file ${file//%/%%} was modified on %F at %R"
done
With find
Very likely your find has a rich -printf option:
find . -maxdepth 1 -name '*Des*' -printf 'The file %f was modified on %TY-%Tm-%Td at %TH:%TM\n'
I want to use stat
(because your date doesn't handle the -r switch, you don't want to use find, or just because you like using as many tools as possible to impress your little sister). Well, in that case, the safest thing to do is:
date -d "#$(stat -c '%Y' Desktop)" +"The file Desktop was modified on %F at %R"
and with your glob requirement (in Bash):
shopt -s nullglob
for file in *Des*; do
date -d "#$(stat -c '%Y' -- "$file")" +"The file ${file//%/%%} was modified on %F at %R"
done

stat -c "The file "%n" was modified on ""%y" *Des* | awk 'BEGIN{OFS=" "}{for(i=1;i<=7;++i)printf("%s ",$i)}{print "at " substr($8,0,6)}'
I have use here awk modify your code. what i have done in this code, from field 1,7 i printed it using for loop, i need to modify field 8, so i used substr to extract 1st 5 character.
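On the sample output above, this prints:
The file Desktop was modified on 2014-11-01 at 18:23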

Related

How do I create a recursive file list with md5sum in Linux and output to csv

I would like to list the files (ideally with an md5sum) within a directory and subdirectories in Ubuntu and output the results to a csv file. I would like the output to be in the following format.
File Name, File Path, File Size (bytes), Created Date Time (dd/mm/yyyy hh:mm:ss), Modified Date Time (dd/mm/yyyy hh:mm:ss), md5sum
I have played around with the ls command but can't seem to get the output correct. Is there a better way to do this?
Thanks
Create the following script that outputs a CSV line for a given filepath argument:
#!/bin/bash
set -eu
filepath=$1
qfilepath=${filepath//\\/\\\\} # Quote backslashes.
qfilepath=${qfilepath//\"/\\\"} # Quote doublequotes.
file=${qfilepath##*/} # Remove the path.
stats=($(stat -c "%s %W %Y" "$filepath"))
size=${stats[0]}
ctime=$(date --date "@${stats[1]}" +'%d/%m/%Y %H:%M:%S')
mtime=$(date --date "@${stats[2]}" +'%d/%m/%Y %H:%M:%S')
md5=$(md5sum < "$filepath")
md5=${md5%% *} # Remove the dash.
printf '"%s","%s",%s,%s,%s,%s\n' \
"$file" "$qfilepath" "$size" "$ctime" "$mtime" $md5
Now call it with
find /path/to/dir -type f -exec ~/csvline.sh {} \;
Note that the creation time is often not supported by the file system.
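If you need to detect that case, here is a minimal sketch, assuming GNU stat, which prints 0 for %W when the birth time is unknown:
btime=$(stat -c %W -- "$filepath")
if [ "$btime" -eq 0 ]; then
    ctime="unknown"   # this file system does not record a creation time
else
    ctime=$(date --date "@$btime" +'%d/%m/%Y %H:%M:%S')
fi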

Going through files recursively and receiving file information

I am fairly new to bash scripts and right now I am trying to go through files recursively in order to receive some info about these files (name, size, ...).
My attempt so far:
for i in *.txt; do
stat -c '%n' "$i" >> $2
wc -l -w >> $2
stat -c '%a %A %U' "$i" >> $2
done
$2 is the file where I want to log this info...
Thanks in advance!
EDIT: I should have posted the problem as well, sorry.
I am receiving this error message:
stat: cannot stat '*.txt': No such file or directory
But the file should be in the $i variable, shouldn't it?
If there are no files matching the glob, Bash will by default return just the glob itself. So you are trying to process a file named literally "*.txt" (which by the way is actually a valid file name). What you probably want is shopt -s nullglob to expand a non-matching glob to nothing.
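For example, a minimal sketch of the loop from the question with nullglob enabled (the wc call is also given its input here, which the original omitted; that part is an assumption about the intent):
#!/bin/bash
shopt -s nullglob                   # a non-matching glob now expands to nothing
for i in *.txt; do
    stat -c '%n' "$i" >> "$2"
    wc -l -w < "$i" >> "$2"         # line and word counts of the file itself
    stat -c '%a %A %U' "$i" >> "$2"
done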
The go-to tool for recursive file operations is find - it is very powerful, so go make a cup of tea, settle back and type man find.
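As a hedged sketch of the find route (stat -c is GNU coreutils syntax):
find . -type f -name '*.txt' -exec stat -c '%n %a %A %U' {} + >> "$2"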

Linux: batch filename change adding creation date

I have a directory with a lot of sub-directories including files.
For each WAV file I would like to rename the WAV file by adding the creation date (the date when the WAV file was first created) at the beginning of the file name (without changing the timestamps of the file itself).
The next step would be to convert the WAV file to an MP3 file, so I will save hard drive space.
For that purpose, I'm trying to create a bash script but I'm having some issues.
I want to keep the same structure as the original directory and therefore I was thinking of something like:
for file in `ls -1 *.wav`
do name=`stat -c %y $file | awk -F"." '{ print $1 }' | sed -e "s/\-//g" -e "s/\://g" -e "s/[ ]/_/g"`.wav
cp -r --preserve=timestampcp $dir_original/$file $dir_converted/$name
done
Don't use ls to generate a list of file names, just let the shell glob them (that's what ls *.wav does anyway):
for file in ./*.wav ; do
I think you want the timestamp in the format YYYYMMDD_HHMMSS?
You could use GNU date with stat to have a somewhat neater control of the output format:
epochtime=$(stat -c %Y "$file")
name=$(date -d "@$epochtime" +%Y%m%d_%H%M%S).wav
stat -c %Y (or %y) gives the last modification date, but you can't really get the date of the file creation on Linux systems.
That cp looks ok, except for the stray cp at the end of timestampcp, but that must be a typo. If you do *.wav, the file names will be relative to current directory anyway, so no need to prefix with $dir_original/.
If you want to walk through a whole subdirectory, use Bash's globstar feature, or find. Something like this:
shopt -s globstar
cd "$sourcedir"
for file in ./**/*.wav ; do
epochtime=$(stat -c %Y "$file")
name=$(date -d "@$epochtime" +%Y%m%d_%H%M%S).wav
dir=$(dirname "$file")
mkdir -p "$target/$dir"
cp -r --preserve=timestamp "$file" "$target/$dir/$name"
done
The slight inconvenience here is that cp can't create the directories in the path, so we need to use mkdir there. Also, I'm not sure if you wanted to keep the original filename as part of the resulting one; this would remove it and just replace the file names with the timestamp.
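If you do want to keep the original name after the timestamp, a small variant (base is just an illustrative helper variable):
base=$(basename "$file" .wav)
name=$(date -d "@$epochtime" +%Y%m%d_%H%M%S)_$base.wav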
I did some experimenting with the calculation of name to see if I could get it more succinctly, and came up with this:
name=$(date "+%Y%m%d_%H%M%S" -r "$file")
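The question also asks about converting the WAV files to MP3 afterwards; a hedged sketch with ffmpeg (assuming it is installed, and with illustrative paths taken from the loop above):
ffmpeg -i "$target/$dir/$name" "$target/$dir/${name%.wav}.mp3"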
I wanted to append all file names in that folder with the date they were created, and the below works perfectly.
#############################
#!/bin/sh
for file in *.JPG;
do
mv -f "$file" "$(date -r "$file" +"%Y%m%d_%H_%M_%S")_"$file".jpg"
done
##############################

bash loop file echo to each file in the directory

I searched for a while and tried it by myself but have been unable to get this sorted so far. My folder contains these six files:
1.txt, 2.txt, 3.txt, 4.txt, 5.txt, 6.txt
I want to print the file modified time and echo the timestamp into each file:
#!/bin/bash
thedate=`ls | xargs stat -s | grep -o "st_mtime=[0-9]*" | sed "s/st_mtime=//g"` # get file modified time
files=$(ls | grep -Ev '(5.txt|6.txt)$') # exclude 5.txt and 6.txt
for i in $thedate; do
echo $i >> $files
done
I want to insert each timestamp into each file, but I am getting an "ambiguous redirect" error. Am I doing it incorrectly? Thanks
In this case, files is a "list" of files, so you probably want to add another loop to handle them one by one.
Your description is slightly confusing but, if your intent is to append the last modification date of each file to that file, you can do something like:
for fspec in [1-4].txt ; do
stat -c %y ${fspec} >>${fspec}
done
Note I've used stat -c %y to get the modification time, such as 2017-02-09 12:21:22.848349503 +0800. I'm not sure which variant of stat you're using, but mine doesn't have a -s option. You can still use your option; you just have to ensure it's done on each file in turn, probably something like (in the for loop above):
stat -s ${fspec} | grep -o "st_mtime=[0-9]*" | sed "s/st_mtime=//g" >>${fspec}
You cannot redirect the output to several files at once as in > $files.
To process several files you need something like:
#!/bin/bash
for f in ./[0-4].txt ; do
# get file modified time (in seconds)
thedate="$(stat --printf='%Y\n' "$f")"
echo "$thedate" >> "$f"
done
If you want a human readable time format change %Y by %y:
thedate="$(stat --printf='%y\n' "$f")"

shell - faster alternative to "find"

I'm writing a shell script which should output the oldest file in a directory.
This directory is on a remote server and has (worst case) between 1000 and 1500 (temporary) files in it. I have no access to the server and I have no influence on how the files are stored. The server is connected through a stable but not very fast line.
The result of my script is passed to a monitoring system which in turn alerts the staff if there are too many (= unprocessed) files in the directory.
Unfortunately the monitoring system only allows a maximum execution time of 30 seconds for my script before a timeout occurs.
This wasn't a problem when testing with small directories, but with the target directory over the remote mount (approx. 1000 files) it is.
So I'm looking for the fastest way to get things like "the oldest / newest / largest / smallest" file in a directory (not recursive) without using 'find' or sorting the output of 'ls'.
Currently I'm using this statement in my sh script:
old)
# return oldest file (age in seconds)
oldest=`find $2 -maxdepth 1 -type f | xargs ls -tr | head -1`
timestamp=`stat -f %B $oldest`
curdate=`date +%s`
echo `expr $(($curdate-$timestamp))`
;;
and I tried this one:
gfind /livedrive/669/iwt.save -type f -printf "%T# %P\n" | sort -nr | tail -1 | cut -d' ' -f 2-
which are two of many variants of statements one can find using Google.
Additional information:
I'm writing this on a FreeBSD box with sh and bash installed. I have full access to the box and can install programs if needed. For reference: gfind is the GNU "find" utility as known from Linux, as FreeBSD has a different "find" installed by default.
Any help is appreciated.
With kind regards,
dura-zell
For the oldest/newest file issue, you can use -t option to ls which sorts the output using the time modified.
-t Sort by descending time modified (most recently modified first).
If two files have the same modification timestamp, sort their
names in ascending lexicographical order. The -r option reverses
both of these sort orders.
For the size issue, you can use -S to sort file by size.
-S Sort by size (largest file first) before sorting the operands in
lexicographical order.
Notice that for both cases, -r will reverse the order of the output.
-r Reverse the order of the sort.
Those options are available on FreeBSD and Linux; and must be pretty common in most implementations of ls.
Let us know if it's fast enough.
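As a concrete sketch of that approach (the path is illustrative, and parsing ls output breaks on file names containing newlines):
oldest=$(ls -tr /path/to/dir | head -1)     # -t sorts newest first, -r reverses
newest=$(ls -t /path/to/dir | head -1)
largest=$(ls -S /path/to/dir | head -1)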
In general, you shouldn't be parsing the output of ls. In this case, it's just acting as a wrapper around stat anyway, so you may as well just call stat on each file, and use sort to get the oldest.
old) now=$(date +%s)
read name timestamp < <(stat -f "%N %B" "$2"/* | sort -k2,2n)
echo $(( $now - $timestamp ))
The above is concise, but doesn't distinguish between regular files and directories in the glob. If that is necessary, stick with find, but use a different form of -exec to minimize the number of calls to stat:
old ) now=$(date +%s)
read name timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f "%N %B" '{}' + | sort -k2,2n)
echo $(( $now - $timestamp ))
(Neither approach works if a filename contains a newline, although since you aren't using the filename in your example anyway, you can avoid that problem by dropping %N from the format and just sorting the timestamps numerically. For example:
read timestamp < <(stat -f %B "$2"/* | sort -n)
# or
read timestamp < <(find "$2" -maxdepth 1 -type f -exec stat -f %B '{}' + | sort -n)
)
Can you try creating a shell script that resides on the remote host and, when executed, provides the required output? Then from your local machine just use ssh or something like that to run it. That way the script runs locally there. Just a thought :-)
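A hedged sketch of that idea (the user, host and path are hypothetical, it assumes a BSD-style stat on the remote side, and it requires the SSH access the asker says they lack):
ssh user@remotehost 'ts=$(stat -f %B /path/to/dir/* | sort -n | head -1); echo $(( $(date +%s) - ts ))'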
