How to rename a file with a unique file name in Python - python-3.x

I'm trying to save a .pdf file with a unique name grabbed from an sqlite database using Python code.
For example, I have a file named "boringfile.pdf" but need to rename it with a unique employee name, such as "max_steel.pdf".
I've looked at os.rename, os.path.join, uuid, pdfrw, and reportlab for a solution.
Can anyone help me?
EDIT:
For further clarification, the database was created with sqlite3 under Python 3.6. An example database would look similar to this:
employee_id first_name last_name
123456 Frank Sinatra
738323 Johnny Cash
842028 Bon Jovi
So let's say I just created Bon Jovi's employee in-processing paperwork by extracting data from the database and inserting it into my report. The report is currently saved as "boringfile.pdf", but I need it to read "bon_jovi.pdf" when the completed file is saved.

Not sure why you are using SQLite to create a CSV file and then parse the CSV file. That seems somewhat superfluous. You could do it directly like this (no error checking, just the concept):
import os
import sqlite3

# Connect to database
db = sqlite3.connect('sample.db')

# Extract person's full name
sql = "SELECT * FROM profiles WHERE last_name=? OR first_name=? OR employee_id=?"
c = db.cursor()
c.execute(sql, ('Jovi', 'Bon', '842028'))

# Rename PDF (row is employee_id, first_name, last_name)
for row in c:
    print('{0} : {1}, {2}'.format(row[0], row[1], row[2]))
    PDFname = row[1] + '_' + row[2] + '.pdf'
    os.rename('boringfile.pdf', PDFname)
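If exactly one employee is expected, fetchone() avoids renaming inside a loop. A sketch against an in-memory copy of the example table (the real database and column names may differ):

```python
import sqlite3

# In-memory stand-in for the example profiles table from the question.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE profiles (employee_id TEXT, first_name TEXT, last_name TEXT)")
db.execute("INSERT INTO profiles VALUES ('842028', 'Bon', 'Jovi')")

# fetchone() returns one matching row, or None when nothing matches.
c = db.cursor()
c.execute("SELECT first_name, last_name FROM profiles WHERE employee_id=?", ('842028',))
row = c.fetchone()

if row is not None:
    pdf_name = '{}_{}.pdf'.format(row[0], row[1]).lower()
    print(pdf_name)  # bon_jovi.pdf
    # os.rename('boringfile.pdf', pdf_name)  # rename once the row is confirmed
```

Lowercasing matches the "bon_jovi.pdf" naming in the question; drop `.lower()` if the original casing should be kept.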

I found a solution after consulting various websites and calling a few people. After creating the sqlite3 database I query the information I want and write it out as a one-line .csv. I then use csv.reader to read the data back and os.rename to apply the unique name, very similar to Mark's suggestion above.
def dataexport(self, *args, **kwargs):
    sql = "SELECT * FROM profiles WHERE last_name=? OR first_name=? OR employee_id=?"
    # This part grabs the user input from a GUI I made to make form filling simple
    data = c.execute(sql, (self.last_se.get(), self.first_se.get(),
                           self.employeeid_se.get(),))
    with open('output.csv', 'w') as f:
        writer = csv.writer(f)
        writer.writerows(data)

def renamefile(self, *args, **kwargs):
    mycsv = csv.reader(open('output.csv'))
    for row in mycsv:
        employee_id = row[0]
        first_name = row[1]
        last_name = row[2]
        os.rename('boringfile.pdf', first_name + '_' + last_name + '.pdf')
After running the first function, output.csv should contain a single line:
842028, Bon, Jovi

Related

Python 'for loop' will not export all records

I have a Python program that executes an Oracle stored procedure. The SP creates a temp table and then the Python program queries that table and writes the data to an XML file with formatting.
Forgive the noob question, but for some reason the for loop that I'm using to export to the XML file does not export all records. If I limit the query that creates the XML to 15 rows, it works and creates the file. For any value above 15, the program completes, but the file isn't created.
However, this isn't always consistent. If I do multiple runs for 15, 16, or 17 rows, I get a file. But if I try 20, no file is created. No errors, just no file.
This was the initial code. The 'sql' runs against an Oracle private temp table and formats the XML:
cursor.execute(sql)
rows = cursor.fetchall()
with open(filename, 'a') as f:
    f.write('<ROWSET>')
    for row in rows:
        f.write(" ".join(row))
    f.write('</ROWSET>')
cursor.close()
Then I changed it to this, but again, no file is created:
cursor.execute(sql)
with open(filename, 'a') as f:
    f.write('<ROWSET>')
    while True:
        rows = cursor.fetchmany(15)
        for row in rows:
            f.write(" ".join(row))
    f.write('</ROWSET>')
cursor.close()
I've run the 'free' command and reviewed it with my DBA, and it doesn't appear to be a memory issue. The typical size of the output table is about 600 rows. The table itself has 36 columns.
The indentation may not look right the way I've pasted it here, but the program does work. I just need a way to export all rows. Any insight would be greatly appreciated.
I'm on a Linux box using Python 3.8.5.
Here is the query (minus proprietary information) that is executed against the temp table in the cursor.execute(sql):
SELECT XMLELEMENT("ROW",
XMLFOREST(
carrier_cd,
prscrbr_full_name,
prscrbr_first_name,
prscrbr_last_name,
d_phys_mstr_id,
prscrbr_id,
prscrbr_addr_line_1,
prscrbr_addr_line_2,
prscrbr_city,
prscrbr_state_cd,
prscrbr_zip,
specialty_cd_1,
specialty,
unique_patient_reviewed,
patient_count_db_oral,
patient_count_cv_aa,
patient_count_cv_lipo,
PDC_DIABETES,
PDC_HTN,
PDC_STATINS,
Rating_Diabetes,
Rating_HTN,
Rating_Statins,
PDC_DIABETES,
PDC_HTN,
PDC_STATINS,
M_PC_DB_ORAL,
M_PC_CV_AA,
M_PC_CV_LIPO,
M_PDC_DIABETES,
M_PDC_HTN,
M_PDC_STATINS
),
XMLAGG
(
XMLFOREST(
case when carrier_hq_cd is not null
then XMLConcat(
XMLELEMENT("PATIENT_ID", patient_id),
XMLELEMENT("PATIENT_NAME", patient_name),
XMLELEMENT("DOB", dob),
XMLELEMENT("PHONE_NO", phone_no),
XMLELEMENT("MEMBER_PDC_DIABETES", MEMBER_PDC_DIABETES),
XMLELEMENT("MEMBER_PDC_HTN", MEMBER_PDC_HTN),
XMLELEMENT("MEMBER_PDC_STATINS", MEMBER_PDC_STATINS)
)
end "PATIENT_INFO"
)
ORDER BY patient_id
)
)XMLOUT
FROM ORA$PTT_QCARD_TEMP
GROUP BY
carrier_cd,
prscrbr_full_name,
prscrbr_first_name,
prscrbr_last_name,
d_phys_mstr_id,
prscrbr_id,
prscrbr_addr_line_1,
prscrbr_addr_line_2,
prscrbr_city,
prscrbr_state_cd,
prscrbr_zip,
specialty_cd_1,
specialty,
unique_patient_reviewed,
patient_count_db_oral,
patient_count_cv_aa,
patient_count_cv_lipo,
PDC_Diabetes,
PDC_HTN,
PDC_Statins,
Rating_Diabetes,
Rating_HTN,
Rating_Statins,
M_PC_DB_ORAL,
M_PC_CV_AA,
M_PC_CV_LIPO,
M_PDC_DIABETES,
M_PDC_HTN,
M_PDC_STATINS
If I could, I'd give @Axe319 credit, as his idea that it was a database problem was correct. For some reason Python didn't like that long XML query, so I incorporated it into the stored procedure. Then the Python looked like this:
# SQL query for XML data.
sql_out = """select * from DATA_OUT"""
cursor.execute(sql_out)
columns = [i[0] for i in cursor.description]
allRows = cursor.fetchall()

# Open the file for writing and write the first row.
with open(filename, 'w') as xmlFile:
    xmlFile.write('<ROWSET>')
    # Loop through the allRows data set and write it to the file.
    for rows in allRows:
        columnNumber = 0
        for column in columns:
            data = rows[columnNumber]
            if data is None:
                data = ''
            xmlFile.write('%s' % (data))
            columnNumber += 1
    # Write the final row; the with block closes the file.
    xmlFile.write('</ROWSET>')
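Separate from the stored-procedure fix, the fetchmany attempt in the question could never finish: a `while True` loop needs to break when fetchmany returns an empty list. A sketch of the usual DB-API pattern, with sqlite3 standing in for the Oracle cursor (the fetchmany semantics are the same):

```python
import sqlite3

# sqlite3 stands in for the Oracle connection; the DB-API pattern is identical.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (v TEXT)')
conn.executemany('INSERT INTO t VALUES (?)', [(str(n),) for n in range(40)])

cur = conn.cursor()
cur.execute('SELECT v FROM t')
fetched = []
while True:
    rows = cur.fetchmany(15)
    if not rows:   # fetchmany returns an empty list once the result set is exhausted
        break      # without this break, `while True` never terminates
    fetched.extend(row[0] for row in rows)
print(len(fetched))  # 40
```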

Python program, inserting txt file to sqlite3 database

I am currently working on a program in Python that has to take data from a text file and insert it into the appropriate place in SQLite. I have created my database and its columns; now I am stuck on how to read the text data and load it into the sqlite database.
Here are a couple lines from text file.
Kernel version: Windows 10 Enterprise, Multiprocessor Free
Product type: Professional
Product version: 6.3
Service pack: 0
Here is what I have so far,
import sqlite3

conn = sqlite3.connect('systeminfo.db')
c = conn.cursor()

def create_table():
    c.execute("""CREATE TABLE IF NOT EXISTS system_information (
        Machine_Name text,
        Kernel_version text,
        Product_type text,
        product_version text,
        Registered_organization text,
        registered_owner text,
        system_root text,
        processors text,
        physical_memory text
        )""")

create_table()
This creates my database and table just how I want them. Now I am stuck on taking, for example, the kernel version from the text file and putting "Windows 10 Enterprise" into the database under the Kernel_version column.
UPDATE:
After using @zedfoxus's tips I was able to successfully get data. Here is what I have; now, how can I make the next lines more efficient? I am using elif and getting errors:
def insert_data(psinfo):
    with open(psinfo) as f:
        file_data = f.readlines()
    for item in file_data:
        if 'Kernel version' in item:
            info = item.strip().split(':')
            val = info[1].strip().split(',')
        elif 'Product type' in item:
            info = item.strip().split(':')
            val = info[1].strip().split(',')
    c.execute(
        'INSERT INTO system_information (Kernel_version, Product_type) values(?,?)',
        (val[1].strip(),)
    )
    conn.commit()
Let's say you have a file called kernel.txt that contains
Kernel version: Windows 10 Enterprise, Multiprocessor Free
Product type: Professional
Product version: 6.3
Service pack: 0
Your python code would just have to read that text file and insert data into SQLite like so:
import sqlite3

conn = sqlite3.connect('systeminfo.db')
c = conn.cursor()

def create_table():
    # same thing you had...just removing it for brevity
    pass

def insert_data(filename):
    # read all the lines of the file
    with open(filename) as f:
        file_data = f.readlines()

    # if Kernel version exists in the line, split the line by :
    # take the 2nd item from the split and split it again by ,
    # take the first item and pass it to the insert query
    # don't forget to commit changes
    for item in file_data:
        if 'Kernel version' in item:
            info = item.strip().split(':')
            val = info[1].strip().split(',')
            c.execute(
                'insert into system_information (Kernel_version) values(?)',
                (val[0].strip(),)
            )
            conn.commit()

create_table()
insert_data('kernel.txt')
You will have to change this code if you have multiple files containing such information, or if you have a single file containing multiple blocks of similar information. This code will get you started, though.
Update
I have separated the data parsing into its own function that I can call multiple times. Note how I have created 3 variables to store additional information like product type and version. The insert execution is happening outside of the loop. We are, basically, collecting all information we need and then inserting in one shot.
import sqlite3

conn = sqlite3.connect('systeminfo.db')
c = conn.cursor()

def create_table():
    # same thing you had...just removing it for brevity
    pass

def get_value(item):
    # split the line by :, take the 2nd item, split it again by ,
    # and return the first item with whitespace stripped
    info = item.strip().split(':')
    val = info[1].strip().split(',')
    return val[0].strip()

def insert_data(filename):
    # read all the lines of the file
    with open(filename) as f:
        file_data = f.readlines()

    # collect the values of interest from each line, then insert once at the end
    kernel_version = ''
    product_type = ''
    product_version = ''
    for item in file_data:
        if 'Kernel version' in item:
            kernel_version = get_value(item)
        elif 'Product type' in item:
            product_type = get_value(item)
        elif 'Product version' in item:
            product_version = get_value(item)

    c.execute(
        '''insert into system_information
           (Kernel_version, Product_type, Product_version)
           values(?, ?, ?)''',
        (kernel_version, product_type, product_version,)
    )
    conn.commit()

create_table()
insert_data('kernel.txt')
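If more labels are added later, a mapping from text-file label to column name avoids growing the elif chain further. A sketch with the sample lines inlined and an in-memory database (the column names are assumed to match the earlier CREATE TABLE):

```python
import sqlite3

# Hypothetical label -> column mapping; extend this dict instead of adding elif branches.
FIELDS = {
    'Kernel version': 'Kernel_version',
    'Product type': 'Product_type',
    'Product version': 'Product_version',
}

# Sample lines standing in for the contents of kernel.txt.
lines = [
    'Kernel version: Windows 10 Enterprise, Multiprocessor Free',
    'Product type: Professional',
    'Product version: 6.3',
]

values = {}
for line in lines:
    label, _, rest = line.partition(':')
    if label.strip() in FIELDS:
        # keep only the part before the first comma, as in the answer above
        values[FIELDS[label.strip()]] = rest.strip().split(',')[0].strip()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE system_information '
             '(Kernel_version text, Product_type text, Product_version text)')
cols = ', '.join(values)
marks = ', '.join('?' for _ in values)
conn.execute('INSERT INTO system_information ({}) VALUES ({})'.format(cols, marks),
             list(values.values()))
row = conn.execute('SELECT Kernel_version, Product_type FROM system_information').fetchone()
print(row)
```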

Read data from csv into list of class objects - Python

I'm having trouble figuring this out. Basically I have a .csv file that has 7 employees with their first and last names, employee ID, dept #, and job title. My goal is for def readFile(employees) to accept an empty List (called employees), open the file for reading, and load all the employees from the file into a List of employee objects (employees). I already have my class built as:
class Employee:
    def __init__(self, fname, lname, eid, dept, title):
        self.__firstName = fname
        self.__lastName = lname
        self.__employeeID = int(eid)
        self.__department = int(dept)
        self.__title = title
I have a couple other class methods, but basically I don't quite understand how to properly load the file into a list of objects.
I was able to figure this out. I opened the file, read a line, stripped the \n, and split the data. I used a while loop to keep reading lines, as long as the line wasn't empty, and appended each to my empty list. I also had to split the first indexed item, since the first and last name were together in the same string and I needed them separate.
def readFile(employees):
    with open("employees.csv", "r") as f:
        line = f.readline().strip().split(",")
        while line != ['']:
            line = line[0].split(" ") + line[1:]
            employees.append(Employee(line[0], line[1], line[2], line[3], line[4]))
            line = f.readline().strip().split(",")
It most likely could be written better and more pythonic but it does what I need it to do.
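A csv.reader version reads a little more naturally. This is a sketch under the same layout assumption (the first field holds "First Last"), with io.StringIO standing in for the real employees.csv and the Employee attributes simplified for the example:

```python
import csv
import io

# Simplified stand-in for the Employee class from the question.
class Employee:
    def __init__(self, fname, lname, eid, dept, title):
        self.first, self.last = fname, lname
        self.eid, self.dept, self.title = int(eid), int(dept), title

# io.StringIO stands in for open("employees.csv"); the assumed layout is
# "First Last,eid,dept,title" per line, matching the question.
sample = io.StringIO("Bon Jovi,842028,4,Singer\nJohnny Cash,738323,2,Singer\n")

employees = []
for line in csv.reader(sample):
    if not line:
        continue
    first, last = line[0].split(" ", 1)  # split the combined name field
    employees.append(Employee(first, last, line[1], line[2], line[3]))
print(len(employees))  # 2
```

csv.reader also handles quoted fields containing commas, which the manual split(",") approach would break on.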
Why not use pandas? You could read the file into an employee DataFrame, use its index to select each employee, and use the column names to select a specific employee attribute.

Using an input string to find a tuple in a text file

This is my code so far. I have tried to get input from the user to search for a three-part tuple in my source file, but I cannot figure out how to let the user find the record through input and show them the tuple. If the name variable is hard-coded to 'Fred' it is found with ease, but the code needs to use the user's input. All help is much appreciated.
def details():
    countries = []
    file = open('file.txt', mode='r', encoding='utf-8')
    file.readline()
    for line in file:
        parts = line.strip().split(',')
        country = Country(parts[0], parts[1], parts[2])
        countries.append(country)
    file.close()
    with open('file.txt', encoding='utf-8') as countries:
        for country in countries:
            name = str(input('enter:'))
            if str(name) in country:
                print(country)
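One way to structure this is to parse the file once, ask for the input once, and then search the parsed list. A sketch with the input hard-coded so it runs unattended; Country is assumed to be a simple three-field record, and the sample data stands in for file.txt:

```python
import io
from collections import namedtuple

# Country is assumed to be a simple 3-field record, as in the question's code.
Country = namedtuple('Country', 'name capital population')

# io.StringIO stands in for open('file.txt'); the first line is a header.
sample = io.StringIO("name,capital,population\nFred,Fredtown,10\nJane,Janeville,20\n")
sample.readline()  # skip the header line

countries = [Country(*line.strip().split(',')) for line in sample]

# In the real program this would be: name = input('enter: ')
name = 'Fred'
matches = [c for c in countries if c.name == name]
print(matches)
```

Searching the parsed list avoids re-opening the file for every lookup, which the original code did with the second `with open(...)` block.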

How to extract information from a text file that is located on a web page in python

I am a total beginner and I'm trying to do the following. I need to open a text file from a web page which contains a small list like that below.
name lastname M 0909
name lastname C 0909
name lastname F 0909
name lastname M 0909
name lastname M 0909
What I need to do is count how many capital M letters there are and how many different capital letters there are (here there are 3: M, F, and C) and print it out. Then I need to create a new text file, transfer only the names into it, and save it to my hard drive. So far I have only figured out how to open the list from the web page.
import urllib.request

url = 'http://mypage.com/python/textfile.txt'
with urllib.request.urlopen(url) as myfile:
    for i in myfile:
        i = i.decode("ISO-8859-1")
        print(i, end=" ")
But that is all I know. I tried using count(), but it counts only one line at a time: it tells me how many capital M letters are in a single line (1) but does not add them up for the whole text (3). Any help would be appreciated, thank you.
I don't know exactly what you are doing, but try this:
import urllib.request

url = 'http://mypage.com/python/textfile.txt'
with urllib.request.urlopen(url) as myfile:
    number_of_M = 0
    set_of_big_letters = set()
    for i in myfile:
        i = i.decode("ISO-8859-1")
        name, lastname, big_letter, _ = i.split(' ')  # if they are separated by spaces
        set_of_big_letters.add(big_letter)
        if big_letter == 'M':
            number_of_M += 1
    print(number_of_M)
    print(len(set_of_big_letters))
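The second half of the question, writing only the names to a new file, can be sketched like this. The sample lines stand in for the decoded response from the URL, and io.StringIO stands in for a real output file:

```python
import io

# Sample lines standing in for the decoded lines fetched from the URL.
lines = [
    "name1 lastname1 M 0909",
    "name2 lastname2 C 0909",
    "name3 lastname3 F 0909",
    "name4 lastname4 M 0909",
    "name5 lastname5 M 0909",
]

out = io.StringIO()  # would be open('names.txt', 'w') in the real program
number_of_M = 0
big_letters = set()
for line in lines:
    name, lastname, big_letter, _ = line.split()
    big_letters.add(big_letter)
    if big_letter == 'M':
        number_of_M += 1
    out.write(name + ' ' + lastname + '\n')  # transfer only the names

print(number_of_M, len(big_letters))  # 3 3
```

Using str.split() with no argument also sidesteps the trailing-newline problem that split(' ') has on the last field.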