linux-shell: renaming files to creation time

Good morning everybody,
for a website I'd like to rename files (pictures) in a folder from "1.jpg, 2.jpg, 3.jpg ..." to "yyyymmdd_hhmmss.jpg" - so I'd like to read out the creation times and set these times as the names for the pics. Does anybody have an idea how to do that, for example with a Linux shell or with ImageMagick?
Thank you!

Naming based on file system date
In the linux shell:
for f in *.jpg
do
mv -n "$f" "$(date -r "$f" +"%Y%m%d_%H%M%S").jpg"
done
Explanation:
for f in *.jpg
do
This starts the loop over all jpeg files. A feature of this is that it will work with all file names, even ones with spaces, tabs or other difficult characters in the names.
mv -n "$f" "$(date -r "$f" +"%Y%m%d_%H%M%S").jpg"
This renames the file. It uses the -r option, which tells date to display the date of the file rather than the current date. The format +"%Y%m%d_%H%M%S" tells date to produce the output in the form you specified.
The file name, $f, is placed in double quotes wherever it is used. This ensures that odd file names will not cause errors.
The -n option tells mv never to overwrite an existing file.
done
This completes the loop.
For interactive use, you may prefer that the command is all on one line. In that case, use:
for f in *.jpg; do mv -n "$f" "$(date -r "$f" +"%Y%m%d_%H%M%S").jpg"; done
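One caveat: mv -n silently skips the rename when two photos share the same second (e.g. burst shots), leaving those files with their original names. A minimal sketch that appends a counter on collisions instead (assuming bash):
for f in *.jpg
do
ts=$(date -r "$f" +"%Y%m%d_%H%M%S")
new="$ts.jpg"
i=1
# append _1, _2, ... while the target name is already taken
while [ -e "$new" ]; do
new="${ts}_${i}.jpg"
i=$((i + 1))
done
mv -n "$f" "$new"
done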
Naming based on EXIF Create Date
To name the file based on the EXIF Create Date (instead of the file system date), we need exiftool or equivalent:
for f in *.jpg
do
mv -n "$f" "$(exiftool -d "%Y%m%d_%H%M%S" -CreateDate "$f" | awk '{print $4".jpg"}')"
done
Explanation:
The above is quite similar to the commands for the file date but with the use of exiftool and awk to extract the EXIF image Create Date.
The exiftool command provides the date in a format like:
$ exiftool -d "%Y%m%d_%H%M%S" -CreateDate sample.jpg
Create Date : 20121027_181338
The actual date that we want is the fourth field in the output.
We pass the exiftool output to awk so that it can extract the field that we want:
awk '{print $4".jpg"}'
This selects the date field and also adds on the .jpg extension.
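Alternatively, exiftool can perform the rename itself, which avoids parsing its output with awk. A sketch using exiftool's standard rename syntax (%%-c adds a copy number on name collisions, %%e keeps the original extension; the same idea appears in a later answer using FileCreateDate):
exiftool "-FileName<CreateDate" -d "%Y%m%d_%H%M%S%%-c.%%e" .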

Thanks to @John1024!
I needed to rename files with different extensions at the same time, according to the last modification date:
for f in *; do
fn=$(basename "$f")
mv "$fn" "$(date -r "$f" +"%Y-%m-%d_%H-%M-%S")_$fn"
done
"DSC_0189.JPG" ➜ "2016-02-21_18-22-15_DSC_0189.JPG"
"MOV_0131.avi" ➜ "2016-01-01_20-30-31_MOV_0131.avi"
If you don't want to keep the original filename:
mv "$fn" "$(date -r "$f" +"%Y-%m-%d_%H-%M-%S")"
Hope it helps noobs like me!

Try this:
for file in *.jpg; do name=$(stat -c %y "$file" | awk -F'.' '{ print $1 }' | sed -e 's/-//g' -e 's/://g' -e 's/ /_/g').jpg; mv "$file" "$name"; done
Though there might be an easier way.

I created a shell script; I think it's Mac-only, Linux might need other arguments.
#!/bin/bash
BASEDIR=$1
for path in "$BASEDIR"/*; do
file=$(basename "$path")
TIMESTAMP=$(stat -f "%B" "$path")   # %B = file birth (creation) time on BSD/macOS
DATENAME=$(date -r "$TIMESTAMP" +'%Y%m%d-%H%M%S')-$file
mv -v "$path" "$BASEDIR/$DATENAME"
done
When called with a directory path, it renames every file in that directory, prepending the creation date of that file, like:
../camera/P1210232.JPG -> ../camera/20220121-103456-P1210232.JPG
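For reference, stat -f "%B" and date -r <timestamp> are the BSD/macOS forms. A rough Linux equivalent for the two middle lines, assuming GNU coreutils (stat -c %W prints the birth time as epoch seconds, or 0 when the filesystem doesn't record one; %Y, the modification time, is the usual fallback):
TIMESTAMP=$(stat -c %W "$BASEDIR/$file")   # use %Y for the modification time instead
DATENAME=$(date -d "@$TIMESTAMP" +'%Y%m%d-%H%M%S')-$file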

Change filename based on file creation time:
exiftool "-filename<FileCreateDate" -d %Y%m%d_%H%M%S%z%%-c.%%le input.jpg

Related

How to touch all files that are returned by a sorted ls?

If I have the following:
ls|sort -n
How would I touch all those files in the order of the sorted files? Something like:
ls|sort -n|touch
What would be the proper syntax? Note that I need to touch the files in the exact order they're sorted in, as I'm trying to sort these files for a FAT reader with minimal metadata reading.
ls -1tr | while read file; do touch "$file"; sleep 1; done
If you want to preserve distance in modification time from one file to the next then call this instead:
upmodstamps() {
# offset = seconds elapsed since the oldest file was last modified
oldest_elapsed=$(( $(date +%s) - $(stat -c %Y "$(ls -1tr | head -1)") ))
for file in *; do
oldstamp=$(stat -c %Y "$file")
newstamp=$(( oldstamp + oldest_elapsed ))   # shift each mtime forward by the same offset
newstamp_fmt=$(date --date=@${newstamp} +'%Y%m%d%H%M.%S')
touch -t ${newstamp_fmt} "$file"
done
}
Note: date usage assumes GNU
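A usage sketch (hypothetical path): define the function in your shell, then run it from inside the directory whose timestamps you want to shift:
cd /path/to/sorted/files && upmodstamps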
You can use this command:
ls | sort -n > list.txt
touch $(cat list.txt)
OR
touch $(ls /path/to/dir | sort -n)
OR, if you want to copy files instead of creating empty files:
cp list.txt ./DirectoryWhereYouWantToCopy
Try it like this:
touch $(ls | sort -n)
Can you give a few file names?
If you have file names with numbers such as 1file, 10file, 11file .. 20file, then you need to use --general-numeric-sort:
ls | sort --general-numeric-sort --output=../workingDirectory/sortedFiles.txt
cat sortedFiles.txt
1file
10file
11file
12file
20file
and move sortedFiles.txt into your working directory or wherever you want.
touch $(cat ../workingDirectory/sortedFiles.txt)
This will create empty files with the exact same names.
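If the one-second sleep per file in the first answer is too slow for large directories, the timestamps can also be assigned explicitly. A sketch, assuming GNU touch and date (which accept @<epoch-seconds> dates):
i=0
ls | sort -n | while IFS= read -r file; do
touch -d "@$(( $(date +%s) + i ))" "$file"
i=$((i + 1))
done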

Search, match and copy directories into another based on names in a txt file

My goal is to copy a bunch of specific directories whose names are in a txt file, as follows:
$ cat names.txt
raw1
raw2
raw3
raw4
raw5
These directories have subdirectories, hence it is important to copy all the contents. When I list them in my terminal it looks like this:
$ ls -l
raw3
raw7
raw1
raw8
raw5
raw6
raw2
raw4
To perform this task, I have tried the following:
cat names.txt | while read line; do grep -l '$line' | xargs -r0 cp -t <desired_destination>; done
But I get this error:
cp: cannot stat No such file or directory
I suppose it's because the names in the file list (names.txt) don't match the sort order of the ones in the terminal. Notice that they are unsorted, and using while read line doesn't work. Thank you for taking the time and commitment to help me.
I'm having problems following the logic of the current code, so in the name of K.I.S.S. I propose:
tgtdir=/my/target/directory
while read -r srcdir
do
[[ -d "${srcdir}" ]] && cp -rp "${srcdir}" "${tgtdir}"
done < <(tr -d '\r' < names.dat)
NOTES:
the < <(tr -d '\r' < names.dat) is used to remove Windows/DOS line endings from names.dat (per comments from the OP); if names.dat is updated to remove the \r characters then the tr -d will be a no-op (i.e., a bit of overhead to spawn the subprocess, but the script should still read names.dat correctly)
assumes the script is run from the directory where the source directories reside; otherwise the code can be modified to either cd to said directory or preface the ${srcdir} references with said directory
OP can add/modify the cp flags as needed, but I'm assuming at a minimum -r will be needed in order to recursively copy the directories
UUoC (Useless Use of Cat).
cat names.txt | while read line; do ...; done
is better written
while read line; do ...; done < names.txt
The grep -l '$line' | in your loop is eating your input:
printf "%s\n" 1 2 3 |while read line; do echo "Read: [$line]"; grep . | cat; done
Read: [1]
2
3
In your case, it is likely finding no lines that match the literal string $line, which you have embedded in single-quote marks; single quotes prevent the variable from being expanded. Use "$line" (and avoid capitals). Even if it did match, -l wouldn't be helpful:
$: printf "%s\n" 1 2 3 | grep -l .
(standard input)
You didn't tell it what to read from, so -l is pointless since it's reading the same stdin stream that the read is.
I think what you want is a little simpler -
xargs cp -Rt /your/desired/target/directory/ < names.txt
Assuming you wanted to leave the originals where they were.
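One caveat: xargs splits its input on whitespace by default, so directory names containing spaces would break. With GNU xargs, splitting on newlines instead avoids that (a sketch, assuming GNU findutils):
xargs -d '\n' cp -Rt /your/desired/target/directory/ < names.txt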

Linux batch copy files into directories based on filename pattern

I have a list of almost 500 pdf files with the following filename structure:
XXXX-YYYY-MM-DD.pdf
where XXXX is a variable-length numeric code (1 to 4 digits), always delimited by "-", for example:
51-2016-08-22.pdf
776-2016-08-22.pdf
3881-2016-08-22.pdf
4-2016-08-22.pdf
2860-2016-08-22.pdf
The goal is to copy each file into its own directory, naming the directories like the pattern (i.e. file 776-2016-08-22.pdf goes to directory 776). How can I use awk or sed to delimit the variable-length field?
Here's my code:
for f in *.pdf
do
FOLDERNAME=`echo $f| awk (awk or sed missing code here)`
mkdir /my/dir/structure/$FOLDERNAME
cp $f /my/dir/structure/$FOLDERNAME/
done
Thanks for your support.
You can use:
for f in *.pdf; do
d="${f%%-*}"
mkdir -p "$d" && cp "$f" "$d"
done
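The ${f%%-*} expansion deletes the longest suffix matching -*, i.e. everything from the first - onward, leaving just the leading numeric code:
f=776-2016-08-22.pdf
echo "${f%%-*}"   # prints: 776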
As rightly pointed out by ed-morton, this is NOT a recommended solution as it fails in many cases. Please follow https://stackoverflow.com/a/39089589/3834860 instead.
Keeping this answer for reference.
awk -F '-' specifies the delimiter, and '{print $1}' prints the first element before the delimiter:
for f in *.pdf
do
FOLDERNAME=`echo $f| awk -F '-' '{print $1}'`
mkdir /my/dir/structure/$FOLDERNAME
cp $f /my/dir/structure/$FOLDERNAME/
done

Ordering a loop in bash

I've a bash script like this:
for d in /home/test/*
do
echo $d
done
Which outputs this:
/home/test/newer dir
/home/test/oldest dir
I'd like to order the folders by creation time so that the 'oldest dir' directory appears first in the list. I've tried ls and tree variations to no avail.
For example,
for d in `ls -d -c -1 $PWD/*`
Returns:
/home/test/oldest
dir
/home/test/newer
dir
Very close, but it does not respect the spaces in the directory names. My question: how would I get oldest dir on top and support the whitespace?
ls -d -c $PWD/* | while read line
do echo "$line"
done
Another technique, kind of a Schwartzian transform:
stat -c $'%Z\t%n' /home/test/* | sort -n | cut -f2- |
while IFS= read -r filename; do
# ...
This solution is fragile with filenames containing newlines.
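A NUL-delimited variant sidesteps that problem. A sketch, assuming GNU coreutils (cut -z requires a reasonably recent version):
stat --printf '%Z\t%n\0' /home/test/* | sort -zn | cut -z -f2- |
while IFS= read -r -d '' filename; do
# ...
done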

Command line sorcery

I have a directory full of .xls files that I want to convert to .csv. I'm using xls2csv. This command only prints the csv to the screen, so I believe you have to do xls2csv (xls file) > (new file).csv. So for this I need to write a loop.
for f in `ls`; do xls2csv > `rev $f` | cut -d "." | rev | echo ".csv"
That's what I have so far and it doesn't work. I'm just hoping you can understand exactly what I want to do from the above example.
for f in *.xls; do
basename="${f%.xls}"
csvname="$basename.csv"
xls2csv "$f" > "$csvname"
done
[update] fixed the typo, so that $basename is actually used. Thanks.
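One small robustness note: if the directory contains no .xls files, the unmatched pattern is passed to xls2csv literally as *.xls. A sketch guarding against that with bash's nullglob option:
shopt -s nullglob   # unmatched *.xls expands to nothing instead of itself
for f in *.xls; do
xls2csv "$f" > "${f%.xls}.csv"
done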
GNU Parallel has a feature for this: {.} which is the original string but with the .extension removed:
ls | parallel xls2csv {} ">" {.}.csv
Plus you get the added bonus that xls2csv will be run in parallel if you have multiple CPUs. It also deals correctly with file names like:
My Brother's 12" records.xls
To learn more watch the intro video: http://www.youtube.com/watch?v=OpaiGYxkSuQ
for f in *; do
c=$(echo "$f" | sed 's/\.xls$/.csv/')
xls2csv "$f" > "$c"
done
You should check out the basename command, using the -s switch.
(I think you're using rev to reverse the filename - is that right? I removed it.)
for f in *; do
xls2csv "$f" > "$(basename -s xls "$f")csv"
done
Try that. I don't know if xls2csv is destructive (like sed), so back up your directory.
Try this:
type=".csv"
for f in *; do
file=$(echo "$f" | cut -d '.' -f1)
file=${file}${type}
xls2csv "$f" > "$file"
done
