I am using the tabula-py module in Python, trying to output text from a PDF with this code:
pdf_read = tabula.read_pdf(
    input_path="Test File.pdf",
    pages=start_page_number,
    guess=False,
    area=(81.735, 18.55, 391.285, 273.61),
    relative_area=False,
    format="TSV",
    output_path="testing_area.tsv"
)
When I go to run my code, it says "The output file is empty."
Any idea why this could be?
Edit: If I remove everything except the input_path and pages, my data is read into pdf_read correctly; it just does not get written to an external file.
Something is wrong with this option...hmm...
Edit #2: I figured out why the area part was not working and now it is, but I still can't get this to output a file for some reason.
Edit #3: I tried looking at this: "How to convert PDF to CSV with tabula-py?"
But I keep getting an error message: "build_options() got an unexpected keyword argument 'spreadsheet'".
Edit #4: I'm using the latest version of tabula-py, which doesn't have the spreadsheet option.
Still can't output a file with data though.
I never figured out why the above wasn't working. The output in pdf_read is a list, so I converted the list into a DataFrame and then wrote the DataFrame out using to_csv. Code is below:
import pandas as pd

# read_pdf returned a list, so wrap it in a DataFrame first
df = pd.DataFrame(pdf_read, columns=["column_a"])

# to_csv writes the file itself and returns None, so there is nothing to assign
df.to_csv(
    "alternative_attempt_1.txt",
    header=True,
    index=True,
    sep='\t',
    mode='w',
    encoding="cp1252"
)
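If the goal is to have tabula-py write the file directly, its convert_into() helper does exactly that; here is a minimal sketch using the same values as the question (start_page_number and the area tuple come from the original code):

import tabula

# convert_into() writes the extracted table straight to disk instead of
# returning data the way read_pdf() does
tabula.convert_into(
    "Test File.pdf",
    "testing_area.tsv",
    output_format="tsv",
    pages=start_page_number,
    guess=False,
    area=(81.735, 18.55, 391.285, 273.61),
)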
The following code tries to edit part of the text in a PDF file:
from PyPDF2 import PdfFileReader, PdfFileWriter
from PyPDF2.generic import DecodedStreamObject, EncodedStreamObject

in_file = "input.pdf"
pdf = PdfFileReader(in_file)

# Just the first page is to be edited
page = pdf.pages[0]
contents = page["/Contents"]

# contents[1] is an IndirectObject of PyPDF2, so the EncodedStreamObject can be obtained with get_object()
ogg = contents[1].get_object()

# obtaining the byte data
enc_data = ogg.get_data()

# decoding to a string so it can be edited
dec_data = enc_data.decode('utf-8')
new_dec_data = dec_data.replace("old text string", "new text string")

# back to bytes, but with the new text replaced
new_enc_data = new_dec_data.encode('utf-8')

# HERE is the problem!
# Looking in the library source I couldn't resolve this final step; setData() doesn't work as it should.
ogg.decodedSelf.setData(new_enc_data)
#print(ogg)

writer = PdfFileWriter()
writer.addPage(page)
with open("output.pdf", 'wb') as out_file:
    writer.write(out_file)
Of course, output.pdf comes out identical to the original input PDF; the replacement never takes effect.
Linking the relevant object here: https://fossies.org/dox/openslides-2.3-portable/classPyPDF2_1_1generic_1_1EncodedStreamObject.html
Has anyone else experienced the same problem? Maybe I'm not understanding the actual issue.
Resolved it myself.
EncodedStreamObject's setData() doesn't work, but nothing prevents you from editing its private attribute _data, so you can set it directly:
ogg._data = new_enc_data
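For reference, the same edit in the modern pypdf API, which renamed PdfFileReader/PdfFileWriter to PdfReader/PdfWriter. This is a hedged sketch that keeps the _data workaround from above, since whether set_data() now handles this case is not something the thread confirms:

from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
page = reader.pages[0]

# same structure as the PyPDF2 version: the page's second content
# stream holds the text to replace
contents = page["/Contents"]
ogg = contents[1].get_object()

data = ogg.get_data().decode('utf-8')
# write the private buffer directly, as in the workaround above
ogg._data = data.replace("old text string", "new text string").encode('utf-8')

writer = PdfWriter()
writer.add_page(page)
with open("output.pdf", 'wb') as out_file:
    writer.write(out_file)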
I want to check a YouTube video's views and keep track of them over time. I wrote a script that works great:
import requests
import re
import pandas as pd
from datetime import datetime
import time

def check_views(link):
    todays_date = datetime.now().strftime('%d-%m')
    now_time = datetime.now().strftime('%H:%M')
    # get the site
    r = requests.get(link)
    text = r.text
    tag = re.compile(r'\d+ views')
    views = re.findall(tag, text)[0]
    # get the digit number of views; it's returned in a list, so I need to get that item out
    cleaned_views = re.findall(r'\d+', views)[0]
    print(cleaned_views)
    # append to the df
    df.loc[len(df)] = [todays_date, now_time, int(cleaned_views)]
    #df = df.append([todays_date, now_time, int(cleaned_views)], axis=0)
    df.to_csv('views.csv')
    return df

df = pd.DataFrame(columns=['Date', 'Time', 'Views'])

while True:
    df = check_views('https://www.youtube.com/watch?v=gPHgRp70H8o&t=3s')
    time.sleep(1800)
But now I want to use this function for multiple links. I want a different CSV file for each link. So I made a dictionary:
link_dict = {'link1': 'https://www.youtube.com/watch?v=gPHgRp70H8o&t=3s',
             'link2': 'https://www.youtube.com/watch?v=ZPrAKuOBWzw'}
# this makes it easy for each csv file to be named for the corresponding link
The loop then becomes:
for key, value in link_dict.items():
    df = check_views(value)
That seems to work for passing the dict's value (the link) into the function. Inside the function, I just made sure to load the correct CSV file at the beginning:
# existing csv files
df = pd.read_csv(k + '.csv')
But then I get an error when I go to append a new row to the df ("cannot set a row with mismatched columns"). I don't get that, since it works just fine in the code written above. This is the part giving the error:
df.loc[len(df)] = [todays_date, now_time, int(cleaned_views)]
What am I missing here? This dictionary method seems super messy (I only have two links I want to check, but rather than just duplicating the function I wanted to experiment). Any tips? Thanks!
Figured it out! The problem was that I was saving the df as a CSV and then reading that CSV back later. Because I didn't use index=False with df.to_csv(), the saved file had an extra index column. When I was first testing with the dictionary, the script was still reusing the in-memory df to add rows, so even though it was saving to CSV, the mismatch only appeared once I started reading the file back.
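Putting it together, a minimal sketch of the fixed multi-link loop (reusing the scraping code from the question; the per-link CSV names come from the dictionary keys):

import re
import time
from datetime import datetime
import pandas as pd
import requests

def check_views(link, name):
    todays_date = datetime.now().strftime('%d-%m')
    now_time = datetime.now().strftime('%H:%M')
    # load this link's history, or start fresh on the first run
    try:
        df = pd.read_csv(name + '.csv')
    except FileNotFoundError:
        df = pd.DataFrame(columns=['Date', 'Time', 'Views'])
    r = requests.get(link)
    views = re.findall(r'\d+ views', r.text)[0]
    cleaned_views = re.findall(r'\d+', views)[0]
    df.loc[len(df)] = [todays_date, now_time, int(cleaned_views)]
    # index=False keeps the round-trip from growing an extra column
    df.to_csv(name + '.csv', index=False)

link_dict = {'link1': 'https://www.youtube.com/watch?v=gPHgRp70H8o&t=3s',
             'link2': 'https://www.youtube.com/watch?v=ZPrAKuOBWzw'}

while True:
    for key, value in link_dict.items():
        check_views(value, key)
    time.sleep(1800)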
I am trying to convert .pdf data to a spreadsheet. Based on some research, the recommendation was to transform it into CSV first in order to avoid errors.
So I wrote the code below, which is giving me:
"TypeError: cannot concatenate object of type ''; only Series and DataFrame objs are valid"
Error appears at 'pd.concat' command.
import tabula
import pandas as pd
import glob

path = r'C:\Users\REC.AC'
all_files = glob.glob(path + "/*.pdf")
print(all_files)
df = pd.concat(tabula.read_pdf(f1) for f1 in all_files)
df.to_csv("output.csv", index=False)
Since this might be a common issue, I am posting the solution I found.
"""
df = []
for f1 in all_files:
df = pd.concat(tabula.read_pdf(f1))
"""
I believe breaking the iteration into two parts (collecting each file's tables first, then concatenating them once) generates the DataFrame as needed, and therefore it works.
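For the same reason, the original one-liner also works if the per-file lists are flattened; a hedged equivalent of the loop above:

# read_pdf yields a list per file, so iterate over both levels
df = pd.concat(t for f1 in all_files for t in tabula.read_pdf(f1))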
I have created a CSV file, and it is currently empty. My code checks whether the CSV file contains data: if it doesn't, it adds data to it; if it does, it does nothing. This is what I tried so far:
import pandas as pd

df = pd.read_csv("file.csv")
if df.empty:
    ...  # code for adding in data
else:
    pass  # do nothing
But when implemented, I got the error:
pandas.errors.EmptyDataError: No columns to parse from file
Is there a better way to check if the CSV file is empty or not?
import pandas as pd

try:
    # file.csv is an empty csv file
    df = pd.read_csv('file.csv')
except pd.errors.EmptyDataError:
    ...  # code to add data
else:
    pass  # file already has data: do nothing
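An alternative that avoids the exception entirely is to check the file size before reading; a minimal sketch, assuming a zero-byte file is the only "empty" case that matters (the placeholder data is hypothetical):

import os
import pandas as pd

if os.path.exists('file.csv') and os.path.getsize('file.csv') > 0:
    df = pd.read_csv('file.csv')  # file has data: nothing to add
else:
    # file is missing or zero bytes: create the data and write it out
    df = pd.DataFrame({'col': [1, 2, 3]})  # hypothetical placeholder data
    df.to_csv('file.csv', index=False)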
I am reading data from a perfectly valid xlsx file and processing it using Pandas in Python 3.5. At the end I am writing the final dataframe to an Excel file using:
writer = pd.ExcelWriter(os.path.join(DATA_DIR, 'Data.xlsx'),
                        engine='xlsxwriter', options={'strings_to_urls': False})
manual_labelling_data.to_excel(writer, 'Sheet_A', index=False)
writer.save()
While trying to open Data.xlsx, I get the error: "We found a problem with some content in 'Data.xlsx'..." On proceeding, the file loads into Excel with the info: "Removed Records: Formula from /xl/worksheets/sheet1.xml part".
I cannot find out what the problem is.
Thanks a lot to @jmcnamara for the help in the comments. The issue was that some strings in the data were wrongly being interpreted as formulas. The corrected code is:
options = {}
options['strings_to_formulas'] = False
options['strings_to_urls'] = False

writer = pd.ExcelWriter(os.path.join(DATA_DIR, 'Data.xlsx'),
                        engine='xlsxwriter', options=options)
manual_labelling_data.to_excel(writer, 'Sheet_A', index=False)
writer.save()
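As a side note, recent pandas versions (2.x) removed both the options argument and writer.save(); a hedged sketch of the same fix there, passing the xlsxwriter options through engine_kwargs instead (DATA_DIR and manual_labelling_data are from the question):

import os
import pandas as pd

# engine_kwargs is forwarded to xlsxwriter's Workbook constructor
with pd.ExcelWriter(os.path.join(DATA_DIR, 'Data.xlsx'),
                    engine='xlsxwriter',
                    engine_kwargs={'options': {'strings_to_formulas': False,
                                               'strings_to_urls': False}}) as writer:
    manual_labelling_data.to_excel(writer, 'Sheet_A', index=False)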