I need to extract sets of pages from a PDF that contains several sets. The sets are distinguished by shipments. The PDF contains the following information:
1 - Set of 3 shipments
Page: 1/continued
Page: 2/continued
Page: 3/last
2 - Set of 2 shipments
Page: 1/continued
Page: 2/last
3 - Set of 1 shipment
Page: 1/1
This is to speed up my work, since I currently have to separate these sets manually.
from PyPDF2 import PdfFileWriter, PdfFileReader
import re

output = PdfFileWriter()
input1 = PdfFileReader(open("pdf_teste.PDF", "rb"))
totalPages = input1.getNumPages()
print("total pages to process:" + str(totalPages))
for i in range(totalPages):
    p = i
    print("processing page %s" % str(i))
    output.addPage(input1.getPage(p))
    p = input1.getPage(p).extractText()  # extract the page text to search for the identifier
    pr = re.search("Diretor", p)  # search for the identifier; to be replaced with a list
    # if there's a match, write out the pages collected so far
    if pr:
        outputStream = open("test" + str(i) + ".pdf", "wb")
        output.write(outputStream)
        outputStream.close()
        print('match on page %s' % str(i))
        print('\n')
This code almost does what I want. It splits off the first set correctly, but the second output file repeats the first set along with the second, and so on. I want one PDF per set.
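A minimal sketch of one likely fix, assuming "Diretor" marks the last page of each set: create a fresh PdfFileWriter after each set is written, because otherwise the same writer keeps accumulating every page it has ever been given, so each new file contains all previous sets too.

from PyPDF2 import PdfFileWriter, PdfFileReader
import re

input1 = PdfFileReader(open("pdf_teste.PDF", "rb"))
output = PdfFileWriter()
set_number = 0
for i in range(input1.getNumPages()):
    page = input1.getPage(i)
    output.addPage(page)
    if re.search("Diretor", page.extractText()):  # identifier marks the last page of a set
        with open("test" + str(set_number) + ".pdf", "wb") as outputStream:
            output.write(outputStream)
        set_number += 1
        output = PdfFileWriter()  # reset the writer so the next set starts empty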
I am working with data extracted from multiple PDFs that were merged into one PDF.
The data consists of clinical measurements taken from a sample at different time points. Some time points have certain measurement values while others are missing.
So far, I've been able to merge the PDFs and extract the text and specific data from it, but now I want to put it all into a corresponding Excel table.
Below is my current code:
import PyPDF2
from PyPDF2 import PdfFileMerger
from glob import glob

# merge all pdf files in current directory
def pdf_merge():
    merger = PdfFileMerger()
    allpdfs = [a for a in glob("*.pdf")]
    [merger.append(pdf) for pdf in allpdfs]
    with open("Merged_pdfs1.pdf", "wb") as new_file:
        merger.write(new_file)

if __name__ == "__main__":
    pdf_merge()

# scan pdf
text = ""
with open("Merged_pdfs1.pdf", "rb") as pdf_file, open("sample.txt", "w") as text_file:
    read_pdf = PyPDF2.PdfFileReader(pdf_file)
    number_of_pages = read_pdf.getNumPages()
    for page_number in range(0, number_of_pages):
        page = read_pdf.getPage(page_number)
        text += page.extractText()
    text_file.write(text)

# turn text into a list, separated by newlines
def Convert(text):
    li = list(text.split("\n"))
    return li

li = Convert(text)
filelines = []
for line in li:
    filelines.append(line)
print(filelines)

# extract data from text and put into dictionary
full_data = []
test_data = {"Sample": [], "Timepoint": [], "Phosphat (mmol/l)": [], "Bilirubin, total (µmol/l)": [],
             "Bilirubin, direkt (µmol/l)": [], "Protein (g/l)": [], "Albumin (g/l)": [],
             "AST (U/l)": [], "ALT (U/l)": [], "ALP (U/l)": [], "GGT (U/l)": [], "IL-6 (ng/l)": []}

for line2 in filelines:
    # For each data item, extract it from the line and strip whitespace
    if line2.startswith("Phosphat"):
        test_data["Phosphat (mmol/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("Bilirubin,total"):
        test_data["Bilirubin, total (µmol/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("Bilirubin,direkt"):
        test_data["Bilirubin, direkt (µmol/l)"].append(line2.split(" ")[-4].strip())
    if line2.startswith("Protein "):
        test_data["Protein (g/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("Albumin"):
        test_data["Albumin (g/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("AST"):
        test_data["AST (U/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("ALT"):
        test_data["ALT (U/l)"].append(line2.split(" ")[-4].strip())
    if line2.startswith("Alk."):
        test_data["ALP (U/l)"].append(line2.split(" ")[-2].strip())
    if line2.startswith("GGT"):
        test_data["GGT (U/l)"].append(line2.split(" ")[-4].strip())
    if line2.startswith("Interleukin-6"):
        test_data["IL-6 (ng/l)"].append(line2.split(" ")[-4].strip())
    for sampnum in range(100):
        num = str(sampnum)
        sampletype = "T" and "H"
        if line2.startswith(sampletype + num):
            sample = sampletype + num
            test_data["Sample"] = sample
    for time in range(0, 360):
        timepoint = str(time) + "h"
        word_list = list(line2.split(" "))
        for word in word_list:
            if word == timepoint:
                test_data["Timepoint"].append(word)

full_data.append(test_data)

import pandas as pd
df = pd.DataFrame(full_data)
df.to_excel("IKC4.xlsx", sheet_name="IKC", index=False)
print(df)
The issue is that I don't know how to move the individual items in each list to their own cells in Excel with the proper timepoint, since list positions don't necessarily correspond to the right timepoint. For example, timepoints 1 and 3 can have protein measurements while timepoint 2 is missing this info, so timepoint 3's measurement sits at position 2 in the list and will likely end up in the wrong row of the Excel table.
I figured maybe I need to make an alternative dictionary for the timepoints and attach the corresponding measurements to the proper timepoint. I'm starting to get confused about how to do all this, though, so I'm asking for help!
Thanks in advance :)
I tried adding an "else" branch after every "if" to append a "-" when a measurement wasn't present for that timepoint, but I got far too many dashes, since the loop iterates over every line of the entire PDF.
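One possible approach, sketched below with assumed field names and line shapes (not the asker's real data): build one record per sample/timepoint block, with every measurement defaulting to "-", so a missing measurement simply stays "-" in that row instead of shifting later values up. Rows are created by the timepoint marker lines, not by the measurement lines, so row alignment no longer depends on every measurement being present.

import pandas as pd

# hypothetical extracted lines, in the shape described above
filelines = [
    "H1 0h",
    "Phosphat 1.2 mmol/l",
    "Protein 65 g/l",
    "H1 24h",
    "Protein 63 g/l",
]

fields = ["Phosphat (mmol/l)", "Protein (g/l)"]  # subset for illustration

records = []
current = None
for line in filelines:
    # assumed marker for a new sample/timepoint block, e.g. "H1 24h"
    if line.startswith(("H", "T")) and line.endswith("h"):
        if current is not None:
            records.append(current)
        current = {f: "-" for f in fields}  # every measurement defaults to "-"
        current["Sample"], current["Timepoint"] = line.split()
    elif current is not None:
        for f in fields:
            if line.startswith(f.split(" ")[0]):  # match on the measurement name
                current[f] = line.split(" ")[-2].strip()
if current is not None:
    records.append(current)

df = pd.DataFrame(records)  # one row per timepoint; missing values stay "-"
print(df)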
I am using PyPDF4 to create an offline-readable version of the journal "Nature".
I use the PyPDF4 PdfFileReader to read the individual article PDFs and PdfFileWriter to create a single, merged output.
The problem that I am trying to solve is that the page numbers of some issues do not start at 1, for example, issue 7805 starts with page 563.
How do I specify the desired /PageLabels in the document catalog?
for pdf_file in pdf_files:
    input_pdf = PdfFileReader(open(pdf_file, 'rb'))
    page_indices = file_page_dictionary[pdf_file]
    for page_index in page_indices:
        page = input_pdf.getPage(page_index)
        # Specify actual page number here:
        # page.setPageNumber(actual_page_numbers[page_index])
        output.addPage(page)

with open(pdf_output_name, 'wb') as f:
    output.write(f)
After exploring the PDF standard and a bit of hacking, I found that the following function will add a single PageLabels entry that creates page labels starting from offset (i.e. the first page will be labelled offset, the second page offset+1, etc.).
# output_pdf is an instance of PdfFileWriter().
# offset is the desired page offset.
def add_pagelabels(output_pdf, offset):
    number_type = PDF.DictionaryObject()
    number_type.update({PDF.NameObject("/S"): PDF.NameObject("/D")})
    number_type.update({PDF.NameObject("/St"): PDF.NumberObject(offset)})

    nums_array = PDF.ArrayObject()
    nums_array.append(PDF.NumberObject(0))  # physical page index
    nums_array.append(number_type)

    page_numbers = PDF.DictionaryObject()
    page_numbers.update({PDF.NameObject("/Nums"): nums_array})

    page_labels = PDF.DictionaryObject()
    page_labels.update({PDF.NameObject("/PageLabels"): page_numbers})

    root_obj = output_pdf._root_object
    root_obj.update(page_labels)
Additional page label entries can be created (e.g. with different offsets or different numbering styles).
Note that the first PDF page has an index of 0.
# Use PyPDF4 to manipulate pages
from PyPDF4 import PdfFileWriter, PdfFileReader
# To manipulate the PDF dictionary
import PyPDF4.pdf as PDF

def pdf_pagelabels_roman():
    number_type = PDF.DictionaryObject()
    number_type.update({PDF.NameObject("/S"): PDF.NameObject("/r")})
    return number_type

def pdf_pagelabels_decimal():
    number_type = PDF.DictionaryObject()
    number_type.update({PDF.NameObject("/S"): PDF.NameObject("/D")})
    return number_type

def pdf_pagelabels_decimal_with_offset(offset):
    number_type = pdf_pagelabels_decimal()
    number_type.update({PDF.NameObject("/St"): PDF.NumberObject(offset)})
    return number_type

...

nums_array = PDF.ArrayObject()
# Each entry consists of a page index followed by a page label...
nums_array.append(PDF.NumberObject(0))    # Page 0:
nums_array.append(pdf_pagelabels_roman()) # Roman numerals
# Each entry consists of a page index followed by a page label...
nums_array.append(PDF.NumberObject(1))    # Pages 1 -- 9:
nums_array.append(pdf_pagelabels_decimal_with_offset(first_offset))  # Decimal numbers, with offset
# Each entry consists of a page index followed by a page label...
nums_array.append(PDF.NumberObject(10))   # Pages 10 onward:
nums_array.append(pdf_pagelabels_decimal_with_offset(second_offset))

page_numbers = PDF.DictionaryObject()
page_numbers.update({PDF.NameObject("/Nums"): nums_array})

page_labels = PDF.DictionaryObject()
page_labels.update({PDF.NameObject("/PageLabels"): page_numbers})

root_obj = output._root_object
root_obj.update(page_labels)
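And a usage sketch for the example in the question (issue 7805, whose first page should be labelled 563), using the add_pagelabels helper from above; the output filename is hypothetical:

output = PdfFileWriter()
# ... output.addPage(...) calls as in the question ...
add_pagelabels(output, 563)  # first page will be labelled 563
with open("nature_7805.pdf", "wb") as f:
    output.write(f)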
Noob, trying to build a word counter, to count the words displayed on a website. I found some code (counting words inside a webpage), modified it, tried it on Google, and found that it was way off. Other code I tried displayed all of the various HTML tags, which was likewise not helpful. If visible page content reads: "Hello there world," I'm looking for a count of 3. For now, I'm not concerned with words that are in image files (pictures). My modified code is as follows:
import requests
from bs4 import BeautifulSoup
from collections import Counter
from string import punctuation
# Page you want to count words from
page = "https://google.com"
# Get the page
r = requests.get(page)
soup = BeautifulSoup(r.content, "html.parser")  # specify a parser to avoid a warning
# We get the words within paragraphs
text_p = (''.join(s.findAll(text=True)) for s in soup.findAll('p'))
# creates a dictionary of words and frequency from paragraphs
content_paras = Counter((x.rstrip(punctuation).lower() for y in text_p for x in y.split()))
sum_of_paras = sum(content_paras.values())
# We get the words within divs
text_div = (''.join(s.findAll(text=True))for s in soup.findAll('div'))
content_div = Counter((x.rstrip(punctuation).lower() for y in text_div for x in y.split()))
sum_of_divs = sum(content_div.values())
words_on_page = sum_of_paras + sum_of_divs
print(words_on_page)
As always, simple answers I can follow are appreciated over complex/elegant ones I cannot, b/c Noob.
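For what it's worth, one likely reason the count is way off: <div> elements contain the <p> elements (and other divs), so summing both tallies counts the same words more than once. A minimal sketch of an alternative, assuming the same requests/BeautifulSoup setup: strip the tags whose text is never rendered, then count the whitespace-separated tokens of the remaining page text.

import requests
from bs4 import BeautifulSoup

r = requests.get("https://google.com")
soup = BeautifulSoup(r.content, "html.parser")

# drop elements whose text is never displayed
for tag in soup(["script", "style", "noscript"]):
    tag.decompose()

# a separator keeps adjacent elements' words from running together
words = soup.get_text(separator=" ").split()
print(len(words))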
I was trying to complete Udacity's Lesson 11, on vectorisation of text, yesterday. I went through the code and it all appeared to work fine - I take some emails, open them up, remove some signature words and return the stemmed words of each email into a list.
Here's loop 1:
for name, from_person in [("sara", from_sara), ("chris", from_chris)]:
    for path in from_person:
        ### only look at first 200 emails when developing
        ### once everything is working, remove this line to run over full dataset
        # temp_counter += 1
        if temp_counter < 200:
            path = os.path.join('/xxx', path[:-1])
            email = open(path, "r")

            ### use parseOutText to extract the text from the opened email
            email_stemmed = parseOutText(email)

            ### use str.replace() to remove any instances of the words
            ### ["sara", "shackleton", "chris", "germani"]
            email_stemmed.replace("sara", "")
            email_stemmed.replace("shackleton", "")
            email_stemmed.replace("chris", "")
            email_stemmed.replace("germani", "")

            ### append the text to word_data
            word_data.append(email_stemmed.replace('\n', ' ').strip())

            ### append a 0 to from_data if email is from Sara, and 1 if email is from Chris
            if from_person == "sara":
                from_data.append(0)
            elif from_person == "chris":
                from_data.append(1)

            email.close()
Here's loop 2:
for name, from_person in [("sara", from_sara), ("chris", from_chris)]:
    for path in from_person:
        ### only look at first 200 emails when developing
        ### once everything is working, remove this line to run over full dataset
        # temp_counter += 1
        if temp_counter < 200:
            path = os.path.join('/xxx', path[:-1])
            email = open(path, "r")

            ### use parseOutText to extract the text from the opened email
            stemmed_email = parseOutText(email)

            ### use str.replace() to remove any instances of the words
            ### ["sara", "shackleton", "chris", "germani"]
            signature_words = ["sara", "shackleton", "chris", "germani"]
            for each_word in signature_words:
                stemmed_email = stemmed_email.replace(each_word, '')  # careful here, don't use another variable; I did and broke my head to solve it

            ### append the text to word_data
            word_data.append(stemmed_email)

            ### append a 0 to from_data if email is from Sara, and 1 if email is from Chris
            if name == "sara":
                from_data.append(0)
            else:  # it's chris
                from_data.append(1)

            email.close()
The next part of the code works as intended:
print("emails processed")
from_sara.close()
from_chris.close()
pickle.dump( word_data, open("/xxx/your_word_data.pkl", "wb") )
pickle.dump( from_data, open("xxx/your_email_authors.pkl", "wb") )
print("Answer to Lesson 11 quiz 19: ")
print(word_data[152])
### in Part 4, do TfIdf vectorization here
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import stop_words
print("SKLearn has this many Stop Words: ")
print(len(stop_words.ENGLISH_STOP_WORDS))
vectorizer = TfidfVectorizer(stop_words="english", lowercase=True)
vectorizer.fit_transform(word_data)
feature_names = vectorizer.get_feature_names()
print('Number of different words: ')
print(len(feature_names))
But when I calculate the total number of words with loop 1, I get the wrong result. When I do it with loop 2, I get the correct result.
I've been looking at this code for far too long and I can't spot the difference - what did I do wrong in loop 1?
For the record, the wrong answer I kept getting was 38825. The correct answer should be 38757.
Many thanks for your help, kind stranger!
These lines don't do anything:
email_stemmed.replace("sara","")
email_stemmed.replace("shackleton","")
email_stemmed.replace("chris","")
email_stemmed.replace("germani","")
replace returns a new string and doesn't modify email_stemmed. Instead, you should assign the return value back to email_stemmed:
email_stemmed = email_stemmed.replace("sara", "")
So on and so forth.
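A quick self-contained demonstration of the difference:

s = "hello sara"
s.replace("sara", "")      # returns "hello " but the result is thrown away
print(s)                   # -> hello sara (unchanged)
s = s.replace("sara", "")  # assign the result back
print(s)                   # -> hello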
Loop two does actually set the return value in the for loop:
for each_word in signature_words:
    stemmed_email = stemmed_email.replace(each_word, '')
The two snippets are not equivalent: at the end of the first, email_stemmed is entirely unchanged because replace is used incorrectly and its return value is discarded, while at the end of the second, stemmed_email has actually been stripped of each word.
Hello Community Members,
I want to extract all the text from an e-book with a .pdf file extension. I came to know that Python has a package, PyPDF2, to do the necessary work. I have tried it and am able to extract text, but the result has inappropriate spacing between the extracted words, and sometimes 2-3 words come out merged together.
Further, I want to extract the text from page 3 onward, as the initial pages are the cover page and preface. I also don't want to include the last 5 pages, as they contain the glossary and index.
Does there exist any other way to read a .pdf binary file with NO ENCRYPTION?
The code I have tried so far is as follows.
import PyPDF2

def Read():
    pdfFileObj = open('book1.pdf', 'rb')
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)

    # discerning the number of pages will allow us to parse through all the pages
    num_pages = pdfReader.numPages
    count = 0
    global text
    text = []

    while count < num_pages:
        pageObj = pdfReader.getPage(count)
        count += 1
        text += pageObj.extractText().split()
    print(text)

Read()
This is a possible solution:
import PyPDF2

def Read(startPage, endPage):
    global text
    text = []
    cleanText = ""

    pdfFileObj = open('myTest2.pdf', 'rb')
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)

    while startPage <= endPage:
        pageObj = pdfReader.getPage(startPage)
        text += pageObj.extractText()
        startPage += 1
    pdfFileObj.close()

    for myWord in text:
        if myWord != '\n':
            cleanText += myWord
    text = cleanText.split()
    print(text)

Read(0, 0)
Read() parameters --> Read(first page to read, last page to read)
Note: page indices start from 0, not 1 (as in an array), so Read(0, 0) reads only the first page.
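For the case described in the question (skip the cover and preface, drop the glossary and index), a usage sketch, assuming Read() above is pointed at the asker's book1.pdf:

pdfReader = PyPDF2.PdfFileReader(open('book1.pdf', 'rb'))
last_wanted = pdfReader.numPages - 6  # drop the final 5 pages (zero-based index)
Read(2, last_wanted)                  # page 3 of the book is index 2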