I'm trying to convert this delimited PDF to Excel (or some other delimited format). Using Adobe Acrobat 9, I attempt to save it (and copy it) as Excel, but it gives the error message "BAD PDF; error in processing fonts. [348]".
I'm open to any solution that will create a delimited file, ranging from using Adobe Acrobat, to programming, to using other apps. The only limitation is that I don't have a budget to buy anything (such as Able2Extract).
Here is how I was able to export my images and fonts without buying any extra software to do the conversion: go to Advanced, PDF Optimizer, select all of the options you want in the LEFT COLUMN, and where it says MAKE COMPATIBLE WITH, select Acrobat 8.0 and later, then OK... you are on your road to success.
Note: not really an answer, but some suggestions.
Sounds to me like Crystal Reports is not following the PDF spec closely enough.
I'd make sure CR is fully updated/patched and try generating another file, making sure that "tagging" is enabled - tagging defines the layout structure. I don't have a copy of CR handy, but you may have to define a Distiller template to use, so that when you print to PDF you can select that job option.
You can also tell it's a bad PDF by using Preflight in Acrobat: it says there is no tag structure (you can add one manually by drawing boxes around each item...), that there is no language set, and that it is somehow compatible with Acrobat 1.3, which isn't supported anymore and should be 4 at the lowest.
Once you have a "good" PDF, you can export to XML/Word and import that into Excel. Also, with Acrobat 8+ you can highlight using the select tool, right-click and choose Open As SpreadSheet. You might be able to get away with just highlighting the whole document -- though I'd hope the XML approach would be best.
Able2Extract does some OCRing and tricky fuzzy logic, not only to define tags/layout so the file is exportable, but also to avoid any font, encoding, etc. issues - at least to my knowledge.
In the rare case that you can't get a new file, exporting to plain text/accessible seems to generate a nice flat text file. You could write a VBScript to parse it (adding your delimiter) and import that into Excel.
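If VBScript isn't your thing, here's a minimal sketch of the same idea in Python, assuming the plain-text export separates columns by runs of two or more spaces (the file names and that separator rule are assumptions about your file):

    import csv
    import re

    # Assumption: columns in the plain-text export are separated by runs of
    # two or more spaces; single spaces belong inside a field.
    with open("report.txt", encoding="utf-8") as src, \
            open("report.csv", "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for line in src:
            line = line.strip()
            if line:
                writer.writerow(re.split(r"\s{2,}", line))

Excel will open the resulting .csv directly.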
I have a selection of PDFs that I want to text mine. I use Tika to parse the text out of each PDF and save it to a .txt with UTF-8 encoding (I'm using Windows).
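Roughly like this (a minimal sketch using the tika-python package; the file names are just placeholders):

    from tika import parser  # pip install tika (runs a local Tika server, needs Java)

    parsed = parser.from_file("scan.pdf")
    text = parsed["content"] or ""
    with open("scan.txt", "w", encoding="utf-8") as out:
        out.write(text)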
Most of the PDFs were OCR'd before I got them, but when I view the extracted text I see "pnÁnn¿¡c" where the PDF shows "Phádraig".
Is it possible for me to verify the text layer of the PDF (forgive me if that's the incorrect term), ideally without needing the full version of Acrobat?
It sounds like you are dealing with scanned books with "hidden OCR", i.e. the PDF shows an image of the original document, behind which there is a layer of OCRed text.
That allows you to use the search function and to copy-paste text out of the document.
When you highlight the text, the hidden characters become visible (though this behaviour may depend on the viewer you use).
To be sure, you can copy-paste the highlighted text to a text editor.
This will allow you to tell if you are actually dealing with OCR quality this terrible, or if your extraction process caused mojibake.
Since OCR quality heavily depends on language resources (dictionaries, language model), I wouldn't be surprised if the output was actually that bad for a low-resource language like Gaelic (Old Irish?).
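If you want to look at the embedded text layer without Acrobat at all, a quick sketch like this (assuming pdfminer.six; the file name is a placeholder) prints exactly what is stored in the PDF, so you can compare it against your Tika output:

    from pdfminer.high_level import extract_text  # pip install pdfminer.six

    # Dump the embedded (OCR) text layer exactly as stored in the PDF.
    text = extract_text("scan.pdf")
    print(repr(text[:500]))  # repr() shows odd characters instead of hiding them

If pdfminer shows the same "pnÁnn¿¡c" garbage, the problem is in the PDF's text layer; if it shows "Phádraig", the problem is somewhere in your extraction or encoding step.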
I've noticed that when I use OCR to transform a scanned PDF document into text, in this case with Adobe Acrobat Pro, I'm getting very different outputs depending on how I extract the data.
In the first photo you can see a piece of a PDF that has been OCR'd into fairly good quality text. If I select it in Adobe and copy it to, say, a Word or txt doc, it pastes over perfectly fine.
However, if I export it using Adobe to Rich Text Format, use Python's PDFMiner, or use Python's Apache Tika, then I get what you see in the second photo, which is completely jumbled. The extraction results are very consistent between the approaches - basically all 3 jumble it in exactly the same way.
Would any of you have any idea as to why an OCR'd PDF can be copied just fine to a text editor but extracts in such a bizarre way?
Thank you!
Regards,
Mano
So what ended up working for me was running the initial parsing with Apache Tika and then, for the few files that didn't work, passing them through PyPDF2. My theory is that PyPDF2 uses a different parsing mechanism that, unlike Tika, doesn't rely on the root of the PDF, and that root is what seems to have become corrupted in a few of these OCR'd docs.
Not sure of the initial cause but that was my solution.
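A rough sketch of that fallback, assuming the tika-python and PyPDF2 packages (here "didn't work" is simplified to "Tika returned nothing", which is cruder than what I actually checked):

    from tika import parser       # pip install tika
    from PyPDF2 import PdfReader  # pip install PyPDF2

    def extract(path):
        # Try Tika first; fall back to PyPDF2 when it returns nothing useful.
        content = (parser.from_file(path).get("content") or "").strip()
        if content:
            return content
        reader = PdfReader(path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)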
I don't have much knowledge about VBA.
But I have a problem which I think can be solved with VBA.
I have a 400-page PDF file. I have an Excel file with page numbers and some text. Now I want this text to be copy-pasted into the PDF (using Add Text under drawing markup in the PDF tools).
I can do it manually, but it will take 3 to 4 days, so can anybody help me and make my work easier? I want to do this in Excel VBA.
I have Excel 2013 and Acrobat XI Pro.
It depends.
If the PDF has forms in it, you are of course able to fill them in programmatically (a short sketch follows at the end of this answer).
If your document does not contain forms you are not going to be able to solve this problem in a trivial manner.
Why, I hear you ask?
PDF documents, despite their reputation, are more like containers of instructions than a WYSIWYG format:
- instructions are bundled in groups called "objects"
- objects can be compressed (DEFLATE) into streams
- objects are indexed so they can be re-used (this is called the xref)
- the index uses byte offsets to keep track of which object is where in the document
Now what would happen if you wanted to add a single character somewhere in the document?
- you would need to decode the streams to figure out where you're actually placing content
- once you've found the right stream and inserted your character, you have also screwed up the xref table
- nothing will work anymore
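For the form-filling case mentioned at the top of this answer, here is a minimal sketch with pypdf (the field name and file names are made up; it won't help with the free-text markup in the original question):

    from pypdf import PdfReader, PdfWriter  # pip install pypdf

    reader = PdfReader("form.pdf")
    writer = PdfWriter()
    writer.append(reader)

    # "Remarks" is a made-up field name; list the real ones with reader.get_fields()
    writer.update_page_form_field_values(writer.pages[0], {"Remarks": "Checked"})

    with open("form_filled.pdf", "wb") as out:
        writer.write(out)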
Here's something that's really irked me over the years. I've never used any software that, when importing data from a column-aligned text file, can figure out the column breaks in a correct manner.
Excel 2K3 and a lot of other Microsoft components that seem to share a common codebase (like the import options for SQL2K) attempt to figure out the column breaks for you. Unfortunately, they only look at the first n rows, and are often completely wrong.
OpenOffice.org 3.1 has an import dialog almost exactly like Excel 2K3's, but it doesn't even attempt to guess the column breaks for you. And the latest version of Numbers doesn't appear to handle column-aligned imports at all.
Obviously column-aligned data is undesirable for a number of reasons, but a lot of older software (particularly in-house software various companies have floating around) exports data in this format so I do need to handle it every so often. Surely, somewhere, SOME software imports it well without me coding an import utility myself or manually specifying where twelve zillion columns start and stop?
OSX, Windows, whatever. I'm open to suggestions. The ultimate goal is to get it into a SQL Server table, but simply getting it into an Excel/XML/tab-delimited/etc. file in the meantime would be fine, because it's easy enough to get into SQL Server from there.
I tend to normalize such data with awk -- perhaps generating a csv file -- before trying to import it into Excel.
See the awk user's manual.
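If awk isn't handy, a minimal Python sketch of the same normalization, assuming you already know the column offsets (the widths and file names below are made up):

    import csv

    # Made-up layout: three fixed-width columns at offsets 0-10, 10-35, 35-43.
    FIELDS = [(0, 10), (10, 35), (35, 43)]

    with open("export.txt") as src, open("export.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for line in src:
            writer.writerow([line[start:end].strip() for start, end in FIELDS])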
I don't think there is a silver bullet for your request. I think the best you can hope for is to define your input format once and be able to reuse that format when you receive a file with the same format again.
As one poster mentioned, you could use awk or, if .NET is more your thing, you could use FileHelpers. It's an open-source .NET library that does a good job reading and writing both fixed-length and delimited files. The downside is that you would be creating a .NET application to do the work (either inserting directly into a DB or perhaps creating an output file). On the plus side, once created, you could reuse the mapping classes if you get the same file format again.
Well, obviously no software can be entirely correct in guessing the layout of a fixed-column file, since there is no separator (though variable-width columns with higher maximum lengths will often produce enough space at the end to start guessing). For example, the following could be anywhere from 1 to 9 columns (I have personally had to figure out some super-packed fixed-column layouts like this, only much longer):
135464876
647873159
345467575
If SQL Server is the ultimate destination, have you looked into the SQL Server import wizard?
Right-click your database in Management Studio and select Tasks->Import Data. Proceed through and select "Flat File" as your data source. In the format dropdown, change from Delimited to Fixed Width. On the left you can now use the Columns screen to draw the column separators. There is also an advanced and a preview screen.
Try out this demo (I was on the development team):
Personator 4
Install, run the program, go to Tools | ASCII Conversion | Import from ASCII.
The import will be to DBF/FoxPro, but you can then export that file into one of the formats you mentioned.
The start/stop guesser uses a few statistical formulas to try to get the boundaries correct; you get to verify and/or correct with a graphical editor after analysis.
If you save your file as a text file and attempt to open it in Microsoft Excel 2007, selecting "Fixed Width", Excel will "guess" where the breaks occur (based on whitespace), but you can change where the column field breaks actually fall. The wizard shows vertical lines that can be moved left or right by X characters, so if Excel guesses incorrectly, just move the vertical lines on STEP 2 of the wizard to correct its guesses. You can see which character number each field break occurs at before importing.
I'm interested in using Office 2007 to convert between the pre-2007 binary formats (.doc, .xls, .ppt) and the new Office Open XML formats (.docx, .xlsx, .pptx).
How would I do this? I'd like to write a simple command line app that takes in two filenames (input and output) and perhaps the source and/or destination types, and performs the conversion.
Microsoft has a page which gives several examples of writing scripts to "drive" MS Word. One such example shows how to convert from a Word document to HTML. By changing the last parameter to any of the values listed here, you can get the output in different formats.
The easiest way would be to use Automation through the Microsoft.Office.Interop libraries. You can create an instance of a Word application, for example. There are methods attached to the Application object that will allow you to open and close documents, plus do pretty much anything else you can accomplish in VBA by recording a macro.
You could also just write the VBA code in your Office application to do roughly the same thing. Both approaches are equally valid, depending on your comfort in programming in C#, VB.NET or VBA.
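If a standalone script is handier than a .NET project, the same Automation objects are also reachable from Python through pywin32. A minimal sketch (assumes Word is installed and pywin32 is available; FileFormat 12 is wdFormatXMLDocument, i.e. .docx):

    import os
    import sys
    import win32com.client  # pip install pywin32

    WD_FORMAT_XML_DOCUMENT = 12  # wdFormatXMLDocument -> .docx

    def convert(src, dst):
        word = win32com.client.Dispatch("Word.Application")
        word.Visible = False
        try:
            doc = word.Documents.Open(os.path.abspath(src))  # Word wants absolute paths
            doc.SaveAs(os.path.abspath(dst), FileFormat=WD_FORMAT_XML_DOCUMENT)
            doc.Close()
        finally:
            word.Quit()

    if __name__ == "__main__":
        convert(sys.argv[1], sys.argv[2])  # e.g. python convert.py report.doc report.docx

The same pattern works for Excel and PowerPoint via their own Application objects and SaveAs format constants.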