I've written a script that retrieves specific fields of EXIF data from thousands of images in a directory (including subdirectories) and saves the info to a CSV file:
import os
from PIL import Image
from PIL.ExifTags import TAGS
import csv
from os.path import join
####SET THESE!###
imgpath = 'C:/x/y' #Path to folder of images
csvname = 'EXIF_data.csv' #Name of saved csv
###
def get_exif(fn):
    ret = {}
    i = Image.open(fn)
    info = i._getexif()
    for tag, value in info.items():
        decoded = TAGS.get(tag, tag)
        ret[decoded] = value
    return ret
exif_list = []
path_list = []
filename_list = []
DTO_list = []
MN_list = []
for root, dirs, files in os.walk(imgpath, topdown=True):
    for name in files:
        if name.endswith('.JPG'):
            pat = join(root, name).replace(os.sep, "/")
            exif = get_exif(pat)
            path_list.append(pat)
            filename_list.append(name)
            DTO_list.append(exif['DateTimeOriginal'])
            MN_list.append(exif['MakerNote'])
zipped = zip(path_list, filename_list, DTO_list, MN_list)
with open(csvname, "w", newline='') as f:
    writer = csv.writer(f)
    writer.writerow(('Paths','Filenames','DateAndTime','MakerNotes'))
    for row in zipped:
        writer.writerow(row)
However, it is quite slow. I've attempted to optimise the script for performance and readability by using list and dictionary comprehensions.
import os
from os import walk #Necessary for recursive mode
from PIL import Image #Opens images and retrieves exif
from PIL.ExifTags import TAGS #Convert exif tags from digits to names
import csv #Write to csv
from os.path import join #Join directory and filename for path
####SET THESE!###
imgpath = 'C:/Users/au309263/Documents/imagesorting_testphotos/Finse/FINSE01' #Path to folder of images. The script searches subdirectories as well
csvname = 'PLC_Speedtest2.csv' #Name of saved csv
###
def get_exif(fn): #Defines a function that opens an image, retrieves the exif data, converts the exif tags from digits to names and puts the data into a dictionary
    i = Image.open(fn)
    info = i._getexif()
    ret = {TAGS.get(tag, tag): value for tag, value in info.items()}
    return ret
Paths = [join(root, f).replace(os.sep, "/") for root, dirs, files in walk(imgpath, topdown=True) for f in files if f.endswith(('.JPG', '.jpg'))] #Creates list of paths for images. Note: endswith takes a tuple of suffixes; '.JPG' or '.jpg' would evaluate to just '.JPG'
Filenames = [f for root, dirs, files in walk(imgpath, topdown=True) for f in files if f.endswith(('.JPG', '.jpg'))] #Creates list of filenames for images
ExifData = list(map(get_exif, Paths)) #Runs the get_exif function on each of the images specified in the Paths list. List converts the map-object to a list.
MakerNotes = [i['MakerNote'] for i in ExifData] #Creates list of MakerNotes from exif data for images
DateAndTime = [i['DateTimeOriginal'] for i in ExifData] #Creates list of Date and Time from exif data for images
zipped = zip(Paths, Filenames, DateAndTime, MakerNotes) #Combines the four lists to be written into a csv.
with open(csvname, "w", newline='') as f: #Writes a csv-file with the exif data
    writer = csv.writer(f)
    writer.writerow(('Paths','Filenames','DateAndTime','MakerNotes'))
    for row in zipped:
        writer.writerow(row)
However, this has not changed the performance.
I've timed the specific regions of the code and found that specifically opening each image and getting the exif data from each image in the get_exif function is what takes time.
To make the script faster, I'm wondering if:
1) it is possible to optimise the performance of the function, 2) it is possible to retrieve EXIF data without opening the image, and 3) list(map(fn, x)) is the fastest way of applying the function.
If I read the docs correctly, PIL.Image.open() does not only extract the EXIF data from the file but also reads and decodes the entire image, which is probably the bottleneck here. The first thing I would do is switch to a library or routine that only works on the EXIF data and does not care about the image content. ExifRead or piexif might be worth a try.
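For example, here is a minimal sketch using the exifread package (pip install exifread); the exact tag names and the sample path below are assumptions, so check them against your own files:
import exifread

def get_exif_fast(fn):
    # reads only the metadata segments of the file; the pixel data is never decoded
    with open(fn, 'rb') as f:
        return exifread.process_file(f)

# hypothetical path; exifread prefixes tag names with their IFD
tags = get_exif_fast('C:/x/y/example.JPG')
print(tags.get('EXIF DateTimeOriginal'), tags.get('EXIF MakerNote'))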
I would like to sort 10,000 image files into specific folders by group.
The information for all image files is in a metadata file (c:/metadata.csv).
The metadata file (metadata.csv) looks like this:
image_ID        Group
ISIC_0034267 nv
ISIC_0034266 nv
ISIC_0034265 nv
ISIC_0034264 nv
ISIC_0034263 mel
ISIC_0034262 mel
ISIC_0034261 nv
ISIC_0034260 nv
ISIC_0034259 bkl
ISIC_0034258 nv
ISIC_0034257 nv
ISIC_0034256 mel
ISIC_0034255 bcc
ISIC_0034254 nv
ISIC_0034253 mel
ISIC_0034252 bkl
... and so on
And all image files are named after their "image_ID" (ISIC_XXXXXXX.jpeg).
What I want is to sort these image files (ISIC_XXXXXXX.jpeg) according to the variable "Group" (nv, mel, bkl, ...). In the metadata file (HAM10000_metadata.csv), there are seven different values of "Group" (akiec, bcc, bkl, mel, df, vasc, nv).
Therefore, I want to put these 10,000 image files into 7 different folders by their value of "Group", according to the metadata file which contains the matched value of "Group" for every image file.
How can I do this task with Python?
(All files are located at c:/ and I would like to make seven new folders named after the values of "Group".)
I don't know how to make a script.
I started a script like this, but I can't figure out how to finish it.
import pandas as pd
import shutil
import os
from shutil import move
meta_ham = pd.read_csv('/metadata.csv')
keyword = "meta_ham[image_id]"
from_folder = r"c:/"
to_folder = r"c:/???"
for i in os.listdir(from_folder):
    if keyword in i:
        move(os.path.join(from_folder, i), os.path.join(to_folder, i))
Simply make the folders as necessary:
import os
import pandas as pd
from shutil import move

# base folders / template destinations
from_folder = "c:/"
to_folder_base = "c:/images"
# read in CSV file with pandas
meta_ham = pd.read_csv('c:/metadata.csv')
# iterate through each row of the csv
for index, row in meta_ham.iterrows():
    # get image name and corresponding group
    img_name = row['image_ID'] + ".jpeg"
    keyword = row['Group']
    # make a folder for this group, if it doesn't already exist.
    # as long as exist_ok is True, makedirs will do nothing if it already exists
    to_folder = os.path.join(to_folder_base, keyword)
    os.makedirs(to_folder, exist_ok=True)
    # move the image from its original location to this folder
    old_img_path = os.path.join(from_folder, img_name)
    new_img_path = os.path.join(to_folder, img_name)
    move(old_img_path, new_img_path)
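For 10,000 rows either loop is quick, but as a side note, pandas' itertuples() usually iterates faster than iterrows(); a functionally equivalent loop would be:
# same loop using itertuples(); attribute access works because the
# column names (image_ID, Group) are valid Python identifiers
for row in meta_ham.itertuples():
    to_folder = os.path.join(to_folder_base, row.Group)
    os.makedirs(to_folder, exist_ok=True)
    img_name = row.image_ID + ".jpeg"
    move(os.path.join(from_folder, img_name), os.path.join(to_folder, img_name))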
My intention is to convert the strings in a PDF into an Excel/CSV file as follows:
PDF file: (Source File)
#_________________________________________________________________________
appliance
n. 1. See server appliance. 2. See information appliance. 3. A device with a single or limited ......
appliance server
n. 1. An inexpensive computing .....2. See server appliance.
application
n. A program designed ......
#________________________________________________________________________
Excel File : (Target File)
#________________________________________________________________________
appliance , n. , 1. See server appliance ,
appliance server , n. , 1. An inexpensive co ,
application , n. , A program designed ...... ,
_#_______________________________________________________________________
I have converted the PDF into text and am trying to split it with "," and then convert the text file into a CSV file. But I am stuck after converting the PDF to a text file.
import os
from os import path
import PyPDF2
from time import strftime

def check_path(prompt):
    ''' (str) -> str
    Verifies that the provided absolute path exists.
    '''
    abs_path = input(prompt)
    while not path.exists(abs_path):
        print("\nThe specified path does not exist.\n")
        abs_path = input(prompt)
    return abs_path

print("\n")
folder = check_path("Provide absolute path for the folder: ")

pdf_list = []
for root, dirs, files in os.walk(folder):
    for filename in files:
        if filename.endswith('.pdf'):
            pdf_list.append(os.path.join(root, filename))

for pdf_path in pdf_list:
    head, tail = os.path.split(pdf_path)
    name = os.path.join(head, tail.replace(".pdf", ".txt"))
    content = ""
    # Load PDF into PyPDF2
    pdf = PyPDF2.PdfFileReader(open(pdf_path, "rb"))
    # Iterate pages: extract text from each page and add it to content
    for page in range(pdf.getNumPages()):
        content += pdf.getPage(page).extractText() + "\n"
    print(strftime("%H:%M:%S"), " pdf -> txt ")
    with open(name, 'w', encoding="UTF-8") as f:
        f.write(content)
It may be worth converting the PDF to CSV first, then manipulating the CSV to the layout you would like afterwards.
This API can be used with Python to convert one or multiple PDFs to CSV: https://pdftables.com/pdf-to-excel-api.
To convert a single PDF:
import pdftables_api
c = pdftables_api.Client('my-api-key')
c.csv('input.pdf', 'output.csv') # use c.csv for CSV output; c.xlsx produces Excel
or to convert multiple PDFs:
import pdftables_api
import os
c = pdftables_api.Client('MY-API-KEY')
file_path = "C:\\Users\\MyName\\Documents\\PDFTablesCode\\"
for file in os.listdir(file_path):
if file.endswith(".pdf"):
c.xlsx(os.path.join(file_path,file), file+'.csv')
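Alternatively, if you stay with your own PyPDF2 text extraction, the splitting step you are stuck on might look roughly like the sketch below. It assumes each entry in the extracted text is a term line followed by a single definition line starting with the part of speech, and glossary.txt/glossary.csv are placeholder names:
import csv

# read the extracted text, dropping blank lines
with open('glossary.txt', encoding='utf-8') as f:
    lines = [line.strip() for line in f if line.strip()]

rows = []
# pair every term line with the definition line that follows it
for term, definition in zip(lines[::2], lines[1::2]):
    # split "n. 1. See server appliance. ..." into part of speech and body
    pos, _, body = definition.partition(' ')
    rows.append([term, pos, body])

with open('glossary.csv', 'w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows(rows)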
How do I apply a function to a list of file paths I have built, and write an output csv in the same path?
read file in a subfolder -> perform a function -> write file in the subfolder -> go to next subfolder
#opened xml by filename
with open(r'XML_opsReport 100001.xml', encoding = "utf8") as fd:
    Odict_parsedFromFilePath = xmltodict.parse(fd.read())
#func called in func below
def activity_to_df_one_day (list_activity_this_day):
    ib_list = [pd.DataFrame(list_activity_this_day[i], columns=list_activity_this_day[i].keys()).drop("#uom") for i in range(len(list_activity_this_day))]
    return pd.concat(ib_list)
#Processes parsed xml and writes csv
def activity_to_df_all_days (Odict_parsedFromFilePath, subdir): #writes csv from parsed xml after some processing
    nodes_reports = Odict_parsedFromFilePath['opsReports']['opsReport']
    list_activity = []
    for i in range(len(nodes_reports)):
        try:
            df = activity_to_df_one_day(nodes_reports[i]['activity'])
            list_activity.append(df)
        except KeyError:
            continue
    opsReport = pd.concat(list_activity)
    opsReport['dTimStart'] = pd.to_datetime(opsReport['dTimStart'], infer_datetime_format=True)
    opsReport.sort_values('dTimStart', axis=0, ascending=True, inplace=True, kind='quicksort', na_position='last')
    opsReport.to_csv("subdir\opsReport.csv") #write to the subdir
def scanfolder(): #fetches list of file-paths with desired starting name.
    list_files = []
    for path, dirs, files in os.walk(r'C:\..\xml_objects'): #directory containing several subfolders
        for f in files:
            if f.startswith('XML_opsReport'):
                list_files.append(os.path.join(path, f))
    return list_files
filepaths = scanfolder() #list of file-paths
Every function works well and the XML processing is good, so I am not sharing the XML structure. There are 100+ paths in filepaths, each in a different subdirectory. I want to be able to apply the above flow in the future as well, where I can get file paths and perform the desired actions. It's important to write the CSV file to its subdirectory.
To get the directory that a file is in, you can use:
import os
for root, dirs, files in os.walk(some_dir):
    for f in files:
        print(root)
        output_file = os.path.join(root, "output_file.csv")
        print(output_file)
Is that what you're looking for?
Output:
somedir
somedir\output_file.csv
See also Python 3 - travel directory tree with limited recursion depth and Find current directory and file's directory.
Was able to solve with os.path.join.
exceptions_path_list = []
for i in filepaths:
    try:
        with open(i, encoding = "utf8") as fd:
            doc = xmltodict.parse(fd.read())
            activity_to_df_all_days(doc, i)
    except ValueError:
        exceptions_path_list.append(os.path.dirname(i))
        continue

def activity_to_df_all_days (Odict_parsedFromFilePath, filepath):
    ...
    ...
    ...
    opsReport.to_csv(os.path.join(os.path.dirname(filepath), "opsReport.csv"))
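For reference, pathlib can build the same destination path; this is just an equivalent spelling of the line above, not a change to the approach:
from pathlib import Path

# equivalent to os.path.join(os.path.dirname(filepath), "opsReport.csv")
opsReport.to_csv(Path(filepath).parent / "opsReport.csv")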
I need to convert a folder with around 4,000 .txt files into a single .csv with two columns:
(1) Column 1: 'File Name' (as specified in the original folder);
(2) Column 2: 'Content' (which should contain all text present in the corresponding .txt file).
Here you can see some of the files I am working with.
The most similar question to mine here is this one (Combine a folder of text files into a CSV with each content in a cell) but I could not implement any of the solutions presented there.
The last one I tried was the Python code proposed in the aforementioned question by Nathaniel Verhaaren but I got the exact same error as the question's author (even after implementing some suggestions):
import os
import csv
dirpath = 'path_of_directory'
output = 'output_file.csv'
with open(output, 'w') as outfile:
    csvout = csv.writer(outfile)
    csvout.writerow(['FileName', 'Content'])
    files = os.listdir(dirpath)
    for filename in files:
        with open(dirpath + '/' + filename) as afile:
            csvout.writerow([filename, afile.read()])
Other questions which seemed similar to mine (for example, Python: Parsing Multiple .txt Files into a Single .csv File?, Merging multiple .txt files into a csv, and Converting 1000 text files into a single csv file) do not solve this exact problem I presented (and I could not adapt the solutions presented to my case).
I had a similar requirement and so I wrote the following class
import os
import pathlib
import glob
import csv
from collections import defaultdict
class FileCsvExport:
    """Generate a CSV file containing the name and contents of all files found"""

    def __init__(self, directory: str, output: str, header = None, file_mask = None, walk_sub_dirs = True, remove_file_extension = True):
        self.directory = directory
        self.output = output
        self.header = header
        self.pattern = '**/*' if walk_sub_dirs else '*'
        if isinstance(file_mask, str):
            self.pattern = self.pattern + file_mask
        self.remove_file_extension = remove_file_extension
        self.rows = 0

    def export(self) -> bool:
        """Return True if the CSV was created"""
        return self.__make(self.__generate_dict())

    def __generate_dict(self) -> defaultdict:
        """Finds all files recursively based on the specified parameters and returns a defaultdict"""
        csv_data = defaultdict(list)
        for file_path in glob.glob(os.path.join(self.directory, self.pattern), recursive = True):
            path = pathlib.Path(file_path)
            if not path.is_file():
                continue
            content = self.__get_content(path)
            name = path.stem if self.remove_file_extension else path.name
            csv_data[name].append(content)
        return csv_data

    @staticmethod
    def __get_content(file_path: str) -> str:
        with open(file_path) as file_object:
            return file_object.read()

    def __make(self, csv_data: defaultdict) -> bool:
        """
        Takes a defaultdict of {k, [v]} where k is the file name and v is a list of file contents.
        Writes out these values to a CSV and returns True when complete.
        """
        with open(self.output, 'w', newline = '') as csv_file:
            writer = csv.writer(csv_file, quoting = csv.QUOTE_ALL)
            if isinstance(self.header, list):
                writer.writerow(self.header)
            for key, values in csv_data.items():
                for duplicate in values:
                    writer.writerow([key, duplicate])
                    self.rows = self.rows + 1
        return True
Which can be used like so
...
myFiles = r'path/to/files/'
outputFile = r'path/to/output.csv'
exporter = FileCsvExport(directory = myFiles, output = outputFile, header = ['File Name', 'Content'], file_mask = '.txt')
if exporter.export():
print(f"Export complete. Total rows: {exporter.rows}.")
In my example directory, this returns
Export complete. Total rows: 6.
Note: rows does not count the header if present
This generated the following CSV file:
"File Name","Content"
"Test1","This is from Test1"
"Test2","This is from Test2"
"Test3","This is from Test3"
"Test4","This is from Test4"
"Test5","This is from Test5"
"Test5","This is in a sub-directory"
Optional parameters:
header: Takes a list of strings that will be written as the first line in the CSV. Default None.
file_mask: Takes a string that can be used to specify the file type; for example, .txt will cause it to only match .txt files. Default None.
walk_sub_dirs: If set to False, it will not search in sub-directories. Default True.
remove_file_extension: If set to False, it will cause the file name to be written with the file extension included; for example, File.txt instead of just File. Default True.
How do I start creating my own file type in Python? I have a design in mind, but how do I pack my data into a file with a specific format?
For example, I would like my file format to be a mix of an archive (like other formats such as zip, apk, jar, etc.; they are basically all archives) with some room for packed files, plus a section of the file containing settings and serialized data that will not be accessed by an archive-manager application.
My requirement is to do all this with the default modules of CPython, without external modules.
I know that this can be long to explain and do, but I can't see how to start this in Python 3.x with CPython.
Try this:
from zipfile import ZipFile
import json
data = json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
with ZipFile('foo.filetype', 'w') as myzip:
    myzip.writestr('digest.json', data)
The file is now a zip archive with a JSON file inside (which is easy to read back in many languages) for your data. You can add files to the archive with myzip.write or myzip.writestr. You can read the data back with:
with ZipFile('foo.filetype', 'r') as myzip:
    json_data_read = myzip.read('digest.json')
    newdata = json.loads(json_data_read)
Edit: you can append arbitrary data to the file with:
f = open('foo.filetype', 'a')
f.write(data)
f.close()
This works in WinRAR, but Python can then no longer process the zip file.
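If the archive needs to stay readable by zipfile, a safer variant (a sketch, not part of the original answer) is to store the extra payload as one more member instead of appending raw bytes after the archive:
# append another member to the existing archive; zipfile can still read it afterwards
with ZipFile('foo.filetype', 'a') as myzip:
    myzip.writestr('extra.bin', b'arbitrary bytes')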
Use this:
import base64
import gzip
import ast
def save(data):
    # use !r (repr) so that plain strings survive the round-trip through literal_eval
    data = "[{!r}]".format(data).encode()
    data = base64.b64encode(data)
    return gzip.compress(data)

def load(data):
    data = gzip.decompress(data)
    data = base64.b64decode(data)
    return ast.literal_eval(data.decode())[0]
How to use this with a file:
open(filename, "wb").write(save(data)) # save data
data = load(open(filename, "rb").read()) # load data
This might look like it could be opened with an archive program, but it cannot, because the content is base64-encoded and would have to be decoded first to access it.
Also, you can store many types of variables in it!
example:
open(filename, "wb").write(save({"foo": "bar"})) # dict
open(filename, "wb").write(save("foo bar")) # string
open(filename, "wb").write(save(b"foo bar")) # bytes
# there's more you can store!
This may not be appropriate for your question, but I think it may help you.
I faced a similar problem... and ended up creating a zip file and then renaming the zip extension to my custom file format... But it can still be opened with WinRAR.
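A minimal sketch of that rename approach, assuming a ./payload directory and a made-up .myext extension:
import shutil

# pack the contents of ./payload into archive.zip, then give it the custom extension
shutil.make_archive('archive', 'zip', 'payload')
shutil.move('archive.zip', 'archive.myext')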