I am trying to get the capture time of a JPG image, in the format 14:25:38 (hours:minutes:seconds).
I tried with the commands:
$ stat -c %y DSC_0002.JPG | sed 's/^\([0-9\-]*\).*/\1/'
=> 2017-05-19 (not what I want)
$ file DSC_0002.JPG
=> DSC_0002.JPG: JPEG image data, Exif standard: [TIFF image data, little-endian, direntries=11, manufacturer=NIKON CORPORATION, model=NIKON D5200, orientation=upper-left, xresolution=180, yresolution=188, resolutionunit=2, software=Ver.1.01 , datetime=2017:05:19 13:30:34, GPS-Data], baseline, precision 8, 6000x4000, frames 3
This last command (file DSC_0002.JPG) displays datetime=2017:05:19 13:30:34, but I need only 13:30:34.
Preferably without using add-ons or programs external to Linux bash.
Thank you very much for any help.
My 2 cents...
As Exif handling is a big part of my job, and because I have already spent a lot of time building my own scripts, it is time for a little benchmark.
Getting the result of jhead using bash:
This uses jhead because it is my preferred tool. See further down to know why...
DateTime=$(jhead -nofinfo DSC_0002.JPG)
DateTime=${DateTime#*Date/Time*: }
DateTime=${DateTime%%$'\n'*}
This is the quickest way (a lot quicker than using bash regexes)!
echo $DateTime
2011:02:27 14:53:32
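To go from that value to just the time the question asks for, one more parameter expansion strips the date; a minimal sketch, assuming DateTime holds the usual "YYYY:MM:DD HH:MM:SS" string:

```shell
# Sample value, as produced by the jhead parsing above.
DateTime='2017:05:19 13:30:34'
TimeOnly=${DateTime#* }   # drop everything up to the first space
echo "$TimeOnly"          # -> 13:30:34
```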
Different ways
Querying a JPEG file for its date is possible with at least 4 different tools:
file The magic-based file recognition command makes a light inspection of any file to determine its nature and prints some additional information.
file DSC_0002.JPG
DSC_0002.JPG: JPEG image data, Exif standard: [TIFF image data, big-endian,
direntries=10, manufacturer=CANIKON, model=CANIKON AB12, orientation=upper-
left, xresolution=168, yresolution=176, resolutionunit=2, software=Ver.1.00
, datetime=2011:02:27 14:53:32], baseline, precision 8, 3872x2592, frames 3
file prints the size, resolution and datetime of a JPEG file.
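If you only have the file output to work with, sed can isolate the time from the datetime= field; a sketch where the echo stands in for the real file call:

```shell
# In practice, replace the echo with: file DSC_0002.JPG
echo 'software=Ver.1.00 , datetime=2011:02:27 14:53:32], baseline' |
sed 's/.*datetime=[0-9:]* \([0-9:]*\).*/\1/'
# -> 14:53:32
```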
jhead is a tool dedicated to JPEG headers:
jhead DSC_0002.JPG
File name : DSC_0002.JPG
File size : 4940925 bytes
File date : 2011:02:27 14:53:32
Camera make : CANIKON
Camera model : CANIKON AB12
Date/Time : 2011:02:27 14:53:32
Resolution : 3872 x 2592
Flash used : No
Focal length : 55.0mm (35mm equivalent: 82mm)
Exposure time: 0.0080 s (1/125)
Aperture : f/5.6
ISO equiv. : 180
Whitebalance : Auto
Metering Mode: pattern
JPEG Quality : 97
identify is part of the ImageMagick package, which is a kind of all-purpose tool for bitmap images... (Due to some past security bugs and its overall performance, it's not my personal choice.)
identify DSC_0002.JPG
DSC_0002.JPG JPEG 3872x2592 3872x2592+0+0 8-bit sRGB 4.941MB 0.010u 0:00.020
identify -format "%[EXIF:DateTime]\n" DSC_0002.JPG
2011:02:27 14:53:32
exiftool is a dedicated tool built on the Perl Image::ExifTool library.
exiftool DSC_0002.JPG
ExifTool Version Number : 9.74
File Name : DSC_0002.JPG
Directory : .
File Size : 4.7 MB
File Modification Date/Time : 2011:02:27 14:53:32+01:00
File Access Date/Time : 2017:06:05 08:40:26+02:00
File Inode Change Date/Time : 2017:06:05 08:40:04+02:00
File Permissions : rw-r--r--
File Type : JPEG
MIME Type : image/jpeg
Exif Byte Order : Big-endian (Motorola, MM)
...
Modify Date : 2011:02:27 14:53:32
...
Thumbnail Image : (Binary data 8965 bytes, use -b option to extract)
Circle Of Confusion : 0.020 mm
Depth Of Field : 15.07 m (8.58 - 23.65)
Field Of View : 24.7 deg (5.50 m)
Focal Length : 55.0 mm (35 mm equivalent: 82.0 mm)
Hyperfocal Distance : 26.80 m
Light Value : 11.1
The default output here is 185 lines long; I've dropped a lot.
pure bash As requested, and even though Toby Speight recommends not doing this:
FDate=;while IFS= read -d $'\0' -n 1024 raw;do
[ "$raw" ] && \
[ -z "${raw#[0-9][0-9][0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]}" ] &&
FDate=$raw && break
done <DSC_0002.JPG
echo $FDate
2011:02:27 14:53:32
OK, this is far from a perfect function: the first date found in the file is used, regardless of the field name.
A little benchmark
As the goal is to extract the datetime part of the Exif info, the benchmark does only that.
time for i in {1..100};do ... done >/dev/null
export TIMEFORMAT="R:%4R u:%4U s:%4S"
Like this (from fastest to slowest):
# fastest: jhead
time for i in {1..100};do jhead DSC_0002.JPG ;done >/dev/null
R:0.115 u:0.000 s:0.028
# 2nd: file
time for i in {1..100};do file DSC_0002.JPG; done >/dev/null
R:0.226 u:0.000 s:0.044
# 3rd: pure bash
time for i in {1..100};do
while IFS= read -d $'\0' -n 1024 raw ;do
[ "$raw" ] &&
[ -z "${raw#[0-9][0-9][0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]}" ] &&
ftim=$raw && break
done <DSC_0002.JPG
done >/dev/null
R:0.393 u:0.380 s:0.012
# 4th: best dedicated tool: exiftool
time for i in {1..100};do exiftool -time:CreateDate DSC_0002.JPG ;done >/dev/null
R:14.921 u:13.064 s:0.956
# slowest: imagemagick's identify
time for i in {1..100};do identify -format "%[EXIF:DateTime]\n" DSC_0002.JPG ;done >/dev/null
R:21.609 u:15.712 s:5.060
Sorry, I agree with Toby Speight: doing this in pure bash is not such a good idea!
Setting file date
Date manipulation is very easy in pure bash, but it operates on the file's date, not the Exif date. So I had my system set each photo file's datetime to the creation date found in its Exif data.
For this, exiftool offers a specific syntax:
exiftool '-FileModifyDate<DateTimeOriginal' /path/to/myphotos
and jhead too:
jhead -q -ft /path/to/myphotos/*.JPG
This sets each file's modification date from the creation date in the Exif info. Once done, you can use standard tools for filesystem queries:
ls -l DSC_0002.JPG
stat DSC_0002.JPG
find /path/to/myphotos -type f -name 'DSC*JPG' -mtime +400 -mtime -410 -ls
And so on...
Not all images have metadata, but for those that do, you can get it in the following ways:
identify -format "%[EXIF:DateTime]\n" image.jpg | awk '{print $2}'
As Toby Speight said:
exiftool -time:CreateDate -a -G0:1 -s image.jpg | awk '{print $5}'
and
jhead image.jpg | awk '/^Date\/Time/{print $4}'
I'm sure there are other options, but I have not tried them.
You can use the cut command to parse the output.
In your case stat gives the output
2017-05-05 06:12:37.228033281 -0500
So to get the desired output you can use stat -c %y DSC_0002.JPG | cut -f2 -d' ' | cut -f1 -d'.'
Refer: man cut
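Applied to the timestamp above, the two cut passes behave like this (the echo stands in for the stat call):

```shell
# Field 2 (space-delimited) is the time; a second cut drops the fractional part.
echo '2017-05-05 06:12:37.228033281 -0500' | cut -f2 -d' ' | cut -f1 -d'.'
# -> 06:12:37
```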
I would like to list the files (ideally with an md5sum) within a directory and subdirectories in Ubuntu and output the results to a csv file. I would like the output to be in the following format.
File Name, File Path, File Size (bytes), Created Date Time (dd/mm/yyyy hh:mm:ss), Modified Date Time (dd/mm/yyyy hh:mm:ss), md5sum
I have played around with the ls command but can't seem to get the output correct. Is there a better way to do this?
Thanks
Create the following script that outputs a CSV line for a given filepath argument:
#!/bin/bash
set -eu
filepath=$1
qfilepath=${filepath//\\/\\\\} # Quote backslashes.
qfilepath=${qfilepath//\"/\\\"} # Quote doublequotes.
file=${qfilepath##*/} # Remove the path.
stats=($(stat -c "%s %W %Y" "$filepath"))
size=${stats[0]}
ctime=$(date --date "@${stats[1]}" +'%d/%m/%Y %H:%M:%S')
mtime=$(date --date "@${stats[2]}" +'%d/%m/%Y %H:%M:%S')
md5=$(md5sum < "$filepath")
md5=${md5%% *} # Remove the dash.
printf '"%s","%s",%s,%s,%s,%s\n' \
"$file" "$qfilepath" "$size" "$ctime" "$mtime" "$md5"
Now call it with
find /path/to/dir -type f -exec ~/csvline.sh {} \;
Note that the creation time is often not supported by the file system.
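The two quoting expansions at the top of the script can be checked in isolation; a small sketch with a made-up path:

```shell
filepath='dir\sub/my "file".txt'
qfilepath=${filepath//\\/\\\\}   # backslash -> escaped backslash
qfilepath=${qfilepath//\"/\\\"}  # doublequote -> escaped doublequote
printf '%s\n' "$qfilepath"       # -> dir\\sub/my \"file\".txt
```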
I have 4000 .jpeg image files to which I want to add Latitude & Longitude using exiftool. I have a text file having :
First Column = Image filenames serial-wise from 1 to 4000
Second Column = Latitude
Third Column = Longitude
How do I add the longitudes and latitudes to the images with a script?
I have not tested the script, so there is obviously some scope for optimization; for the time being you can try this. Assuming the file named coordinates has the columns described in the question, I have written the script accordingly.
#!/bin/bash
# Loop over every line in the coordinates file
while IFS= read -r line
do
# Get the image name from the 1st column, latitude from the 2nd and longitude from the 3rd.
IMAGE=$(echo "$line" | awk '{print $1}')
LAT=$(echo "$line" | awk '{print $2}')
LONG=$(echo "$line" | awk '{print $3}')
# Substitute the variables into the "exiftool" command
exiftool -exif:gpslatitude="$LAT" -exif:gpslatituderef=S -exif:gpslongitude="$LONG" -exif:gpslongituderef=E "$IMAGE"
done < coordinates
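As a side note, the three awk calls per line can be avoided by letting read split the columns itself; a sketch that only echoes what would be passed to exiftool (a here-document stands in for the coordinates file):

```shell
# read fills IMAGE, LAT and LONG from the whitespace-separated columns.
while read -r IMAGE LAT LONG; do
    echo "would tag $IMAGE with lat=$LAT lon=$LONG"
done <<'EOF'
img0001.jpg 12.34 56.78
EOF
```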
Some points to consider:
If Exif tags don't work for you, you may use XMP tags. In that case the command line would look like this:
exiftool -XMP:GPSLatitude="$LAT" -XMP:GPSLongitude="$LONG" -GPSLatitudeRef="South" -GPSLongitudeRef="East" "$IMAGE"
If you don't have reference values for GPSLatitudeRef & GPSLongitudeRef, you can omit those tags and add -P:
exiftool -XMP:GPSLatitude="$LAT" -XMP:GPSLongitude="$LONG" -P "$IMAGE"
The -P option preserves the file's modification date/time while the tags are written.
If you wish to add more or fewer tags, please refer to GPS Tags.
Feel free to add in more details.
I am a real newbie in Linux (Fedora 20) and I am trying to learn the basics.
I have the following command
echo "`stat -c "The file "%n" was modified on ""%y" *Des*`"
This command returns this output:
The file Desktop was modified on 2014-11-01 18:23:29.410148517 +0000
I want to format it as this:
The file Desktop was modified on 2014-11-01 at 18:23
How can I do this?
You can't really do that with stat (unless you have a smart version of stat I'm not aware of).
With date
Very likely, your date is smart enough and handles the -r switch.
date -r Desktop +"The file Desktop was modified on %F at %R"
Because of your glob, you'll need a loop to handle all files that match *Des* (in Bash):
shopt -s nullglob
for file in *Des*; do
date -r "$file" +"The file ${file//%/%%} was modified on %F at %R"
done
With find
Very likely your find has a rich -printf option:
find . -maxdepth 1 -name '*Des*' -printf 'The file %f was modified on %TY-%Tm-%Td at %TH:%TM\n'
I want to use stat
(because your date doesn't handle the -r switch, you don't want to use find, or just because you like using as many tools as possible to impress your little sister). Well, in that case, the safest thing to do is:
date -d "@$(stat -c '%Y' Desktop)" +"The file Desktop was modified on %F at %R"
and with your glob requirement (in Bash):
shopt -s nullglob
for file in *Des*; do
date -d "@$(stat -c '%Y' -- "$file")" +"The file ${file//%/%%} was modified on %F at %R"
done
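The @epoch form given to date -d can be sanity-checked without stat at all, since epoch 0 is a fixed point (GNU date assumed; -u pins the timezone for a stable result):

```shell
# Format a known Unix timestamp with the same template as above.
date -u -d "@0" +"The file X was modified on %F at %R"
# -> The file X was modified on 1970-01-01 at 00:00
```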
stat -c "The file "%n" was modified on ""%y" *Des* | awk 'BEGIN{OFS=" "}{for(i=1;i<=7;++i)printf("%s ",$i)}{print "at " substr($8,0,6)}'
I have used awk here to modify your code. Fields 1 through 7 are printed as-is in a for loop; field 8 needs modifying, so I used substr to extract its first five characters (hh:mm).
Wouldn't it be great if there were a command for GNU/Linux that would do the following:
Open -Recursive *.png -Not-Case-Sensitive if exported-to-jpg#100%quality=less bytes than the original png then write jpg and delete the png
It would also be able to do the inverse of that command:
if png=less bytes than jpg then delete jpg
Looking for the One True Command is not going to help: if it existed, it would only be useful for you and the (presumably) small set of people who had exactly your needs in the future.
The UNIX Way is to link together several commands to do what you want. For example:
"open-recursive": feed files into the hopper using "find", eg find /path -type f -name '*.png' -print and then send the list out through a pipe.
"not case-sensitive": use find's -iname test instead (e.g. find . -iname '*.png'), or get find to dump out all the files and then use grep to look for what you want, e.g. find . -print | grep -i '\.png$'
"if-exported-to-jpg": this is slightly tricky because I believe that the only way to check if the conversion saves bytes is to actually convert it and see. You can use the convert tool from the ImageMagick package to do this. ImageMagick has been standard in the big name distros for years so should be easy to find.
"if less bytes than": straightforward to do in the shell or your favorite scripting language - Perl, python etc.
The net is that you build up what you want from these smaller pieces and you should be able to do what you want now and have something that you can modify in the future or share with others for their unique needs. That is the UNIX Way. Ommmm :)
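The "if less bytes than" piece really is plain shell once both files exist; a sketch with two throwaway files (the names are illustrative):

```shell
# Keep whichever of the two files is smaller in bytes.
printf 'aaaa'     > candidate.jpg.tmp
printf 'aaaaaaaa' > candidate.png.tmp
if [ "$(stat -c %s candidate.jpg.tmp)" -lt "$(stat -c %s candidate.png.tmp)" ]; then
    echo "jpg is smaller: would delete the png"
fi
rm -f candidate.jpg.tmp candidate.png.tmp
```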
Some time ago, I wrote a script to convert my photos. The script reduces the dimensions of all JPG files in the current folder whose width or height is greater than MAX (default = 1024), keeping the aspect ratio, and puts them in a newly created folder. I hope this helps you.
#!/bin/bash
if [ ! -d reduced ]
then
mkdir reduced
fi
if [ $# -lt 1 ]
then
MAX=1024
else
MAX=$1
fi
for obj in *.jpg
do
echo "------> File: $obj"
tam=$(expr `identify -format "%b" "$obj" | tr -d "B"` / 1024)
width=$(identify -format "%w" "$obj")
height=$(identify -format "%h" "$obj")
echo -e "\tDimensions: $width x $height px"
echo -e "\tFile size: $tam kB"
if [ "$width" -gt "$height" ] && [ "$width" -gt "$MAX" ]
then
convert "$obj" -resize "$MAX" "reduced/$obj"
tam=$(expr $(identify -format "%b" "reduced/$obj" | tr -d "B") / 1024)
width=$(identify -format "%w" "reduced/$obj")
height=$(identify -format "%h" "reduced/$obj")
echo -e "\tNew dimensions: $width x $height px"
echo -e "\tNew file size: $tam kB"
echo -e "\tOk!"
elif [ "$height" -gt "$MAX" ]
then
convert "$obj" -resize "x$MAX" "reduced/$obj"
tam=$(expr $(identify -format "%b" "reduced/$obj" | tr -d "B") / 1024)
width=$(identify -format "%w" "reduced/$obj")
height=$(identify -format "%h" "reduced/$obj")
echo -e "\tNew dimensions: $width x $height px"
echo -e "\tNew file size: $tam kB"
echo -e "\tOk!"
else
cp "$obj" reduced/
echo -e "\tDo not modify!"
fi
done
Err, in answer to your question - "No, it probably wouldn't".
Firstly, PNG files can support transparency and JPEGs cannot, so if this was scripted to your specification, you could lose countless hours of work that went into creating transparent masks for thousands of images.
Secondly, PNG files do not support EXIF/IPTC data, so you would also lose all your Copyright, camera and lens settings, GPS data, dates, and oodles of other metadata.
Thirdly, your PNG file may contain 16 bits per channel, whereas JPEG can only store 8 bits per channel so you could potentially lose an awful lot of fine colour gradations by moving from PNG to JPEG.
Fourthly, you could potentially lose compatibility with older Web browsers which had spotty support for PNGs.
I have a bunch of files which contain an ASCII header with a time stamp WITHIN the file, followed by a large chunk of binary data. I would like to list the files sorted by this time stamp, at the command line (bash, etc.).
The file headers look similar to the following:
encoding: raw
endian: big
dimension: 4
sizes: 128 128 1 4
spacings: 1.0 1.0 1.0 NaN
position: -3164,-13678
date_time: 06.02.12.18:59
user_name: Operator1
sample_name:
dwell_time: 4.000
count_time: 65.536
duration: 202.000
raster: 79912
pixel_width: 624.3125
pixel_height: 624.3125
....binary data....
I would like to sort based on the "date_time" time stamp, which uses the format dd.mm.yy.hh:mm
The sort --key option looks promising but all my attempts have failed. Any help is much appreciated. Thanks.
Assuming these files are images, you can use a tool like exiftool to rename them based on their creation dates, then sort them by name.
If you can't rename them, just dump the names with the creation date to STDOUT and sort that, e.g.:
exiftool -p '$dateTimeOriginal $filename' -q -f DIRECTORY/WHERE/IMAGES/ARE | sort -n
If you only want the filenames in the output, append a | cut -f 2 -d " " to the end.
If it's a file format that is not recognized by exiftool, this might or might not work:
for f in YOURFILES* ; do
filedate=$(grep --binary-file=text -i -o 'date_time: ...........:..' "$f" | head -1)
echo "$filedate $f"
done | sort -n
Note: this won't work when there are spaces in the filenames (and I'm leaving that to you to resolve). And if you want to output only the sorted filenames, append | awk '{print $NF}' after the sort -n.
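One more caveat on the sort -n: with the dd.mm.yy.hh:mm format, numeric sorting compares the day first, so the result is not chronological. Rearranging the leading fields into yy.mm.dd before sorting fixes that; a sketch on two bare sample lines (adjust the sed anchor if your lines carry a date_time: prefix):

```shell
# Swap dd.mm.yy -> yy.mm.dd at the start of each line, then sort.
printf '%s\n' '06.02.12.18:59 b.img' '05.03.11.01:00 a.img' |
sed -E 's/^(..)\.(..)\.(..)\./\3.\2.\1./' |
sort
# -> 11.03.05.01:00 a.img   (2011 correctly sorts before 2012)
# -> 12.02.06.18:59 b.img
```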