Just looking for guidance or even a general outline on approach here.
I am using Azure Search to OCR a batch of PDFs. I have turned on hit highlighting, and I am successfully getting results back that I loop through and display in my view for the end user. I was looking to expand that functionality to show the PDF images with the highlighting on the images themselves, like in the JFK Azure example. I am not proficient in React and seem to be getting lost there.
I am assuming I need to save off the OCR images to a data store for reference, using the normalized_images that are created? I do have the PDFs locally that I can load, but I assume the OCR images may be different. I have turned on GeneratedNormalizedImagesPerPage and turned on caching, which creates files in my storage account.
Then I assume I need to pull the associated image, display it, use the highlight results, and pull a corresponding bounding box where the phrase was detected? The problem with that approach is that I do not see any association between the highlight hit and the location (bounding box) of the hit, nor the associated image file the hit was on.
Probably way off on approach here but any guidance is appreciated.
Edit 1
I did notice the items on this page in the JFK example: https://github.com/microsoft/AzureSearch_JFK_Files/tree/master/JfkWebApiSkills/JfkWebApiSkills
Would replicating the ImageStore (so those images are stored in my storage account) and the HocrGenerator (which appears to handle points in a document) in the skillset for my index be the right approach?
There are a few steps here:
1. You need to save the layoutText from the OCR skill somewhere the UI can access it. The JFK Files demo converts it to HOCR (to display in the UI) and saves it as a field in the index so that it is retrieved with the search results. HOCR isn't necessary, and you may find it more efficient to store the layout in blobs using a knowledge store object projection.
2. Save the extracted images into blob storage using a file projection into the knowledge store. Keep in mind that the images may be resized in the process, and the coordinates will match the resized image saved to the store. If you want to map the coordinates to the original image, see this.
3. At search time, map the highlight to the metadata. You will find this code in the Node.js frontend, although it may be simpler to follow in the original demo by following the code here. Essentially you just find the first occurrence of the highlighted word in the metadata, display the associated image, and calculate the bounding region of the word (a minimal sketch follows this list).
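To make the last step concrete, here is a minimal Python sketch of mapping a highlighted term back to an image region. The `layouts` list and its shape are assumptions modeled loosely on the OCR skill's layoutText output; this is not the exact JFK Files code.

```python
# Sketch only: find the first occurrence of a highlighted word in stored OCR
# layout metadata and compute a rectangle to draw over the matching image.
# Assumes one layout dict per normalized image, shaped roughly like layoutText:
# {"words": [{"text": "...", "boundingBox": [{"x": 0, "y": 0}, ...]}, ...]}
def find_highlight_region(layouts, highlighted_word):
    target = highlighted_word.lower().strip(".,;:")
    for page_index, layout in enumerate(layouts):
        for word in layout.get("words", []):
            if word["text"].lower().strip(".,;:") == target:
                xs = [pt["x"] for pt in word["boundingBox"]]
                ys = [pt["y"] for pt in word["boundingBox"]]
                # (left, top, right, bottom) in the coordinates of the stored
                # (possibly resized) normalized image.
                return page_index, (min(xs), min(ys), max(xs), max(ys))
    return None
```

The returned page index tells you which stored image to display, and the rectangle is the overlay to draw on it.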
Related
I need to extract table information from a billing copy using AWS Textract. It gives me almost perfect results every time, but for some PDF documents it does not give me the table results of the second page.
Code examples used: AWS Official Documentation.
(JPEG images of the first and second pages were attached.)
So, AWS gives me the first 20 entries as CSV output. But for the second page of the image, the CSV result is:
And most importantly, I found the same result with a similar type of PDF that has 21 entries, where one entry is on the second page of the PDF. I have already used PyPDF2 to merge the PDF pages into one page (roughly as in the sketch below), but that doesn't solve my problem. Are there any OpenCV tools I need to use?
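For reference, a minimal sketch of the page-merge step, assuming PyPDF2 2.x (or pypdf) and a hypothetical input file name; it only stacks the two pages onto one page and is not itself a fix for the Textract issue.

```python
# Sketch only: merge two PDF pages into one tall page with PyPDF2.
# "billing.pdf" is a hypothetical file name.
from PyPDF2 import PdfReader, PdfWriter, Transformation

reader = PdfReader("billing.pdf")
page1, page2 = reader.pages[0], reader.pages[1]

width = float(max(page1.mediabox.width, page2.mediabox.width))
height = float(page1.mediabox.height + page2.mediabox.height)

writer = PdfWriter()
merged = writer.add_blank_page(width=width, height=height)

# Put page 2 at the bottom and page 1 above it on the new page.
merged.merge_transformed_page(page2, Transformation().translate(tx=0, ty=0))
merged.merge_transformed_page(
    page1, Transformation().translate(tx=0, ty=float(page2.mediabox.height))
)

with open("merged.pdf", "wb") as f:
    writer.write(f)
```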
Please suggest any possible solutions for these types of issues.
I am not sure if I am going to be able to describe this right, but I'll give it a go.
We are working on implementing Azure Search. At the core level, we have searchable PDF documents whose text we want added to the index so that all of them are searchable.
The initial thought was to just submit each document to the index via the add document REST API. The thinking was that this would be the simplest and quickest path to getting the text of that document into the index. We also considered using an indexer, keeping all the searchable PDF docs in a blob store, and having the indexer crawl those every 10-15 minutes.
We also looked into (based on a recommendation) submitting a standalone JSON file with the text from the PDF in it, either via the same add document API or by placing that file in a blob store. Within the JSON document we would need document identifiers that give the index the location of the PDF, so that when the text is found via search we can make it clickable and, as a result, open the PDF.
It seems to me the best option is pushing in the JSON file with the document add API, indexing that, and, when it comes back as part of a search, using the doc ID to link back to the PDF and open it.
For those of you who have used Azure Search: how did you implement it?
If you're totally sure that only PDFs will live in this particular index, then the first approach is faster to implement, since the native indexer can be used to extract the content of the PDF documents as well as push it to the index.
Both approaches will work, but for the second one, you would need to extract the PDF text yourself using an external tool.
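For the push option, here is a minimal sketch with the azure-search-documents Python SDK; the index name and the id/content/pdfUrl field names are illustrative assumptions, and the same call can be made directly against the Add Documents REST endpoint.

```python
# Sketch only: push one document (text extracted from a PDF by your own tool)
# into an Azure Search index, keeping a field that links back to the PDF.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="documents",                       # hypothetical index name
    credential=AzureKeyCredential("<admin-key>"),
)

doc = {
    "id": "doc-001",                              # key field
    "content": "full text extracted from the PDF ...",
    "pdfUrl": "https://<storage-account>.blob.core.windows.net/docs/doc-001.pdf",
}

result = client.upload_documents(documents=[doc])
print(result[0].succeeded)
```

At query time, the pdfUrl field comes back with the hit, so the result can be rendered as a link that opens the PDF.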
Project Environment
Our current development environment is Windows 10 with Node.js 10.16.0 and the Express web framework. The actual deployment environment is a Linux Ubuntu server, and the rest is the same.
What technology do you want to implement?
What I want to implement is pre-filling a PDF with the information a user entered when they signed up for membership. For example, I want their name, age, address, phone number, etc. to be put automatically into the PDF's input text boxes, so that the user only needs to fill in the remaining information. (The PDF is on some of the webpages.)
Once all the information is entered, the PDF is saved and the document is sent to another vendor, which is the end of the process.
Current Problems
We spent about four days looking into PDFs, and we tried to create PDFs by implementing the outline, structure, and code just as shown on this site: https://web.archive.org/web/20141010035745/http://gnupdf.org/Introduction_to_PDF
However, most PDFs are not that simple: their streams are compressed with /FlateDecode. So I also looked at "Data extraction from /Filter /FlateDecode PDF stream in PHP" and tried to decompress them using QPDF.
I decompressed one for now. I thought it would then be easy to find the difference by comparing a PDF with "Kim" entered as the first name against the PDF without it.
However, there are too many differences even though only three characters were added... and the PDF structure itself is too difficult and complex to work with directly.
Note: https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf (the official PDF specification, in English)
Is there a way to solve the problem now?
It sounds like you want to create a PDF from scratch, and possibly extract data from it, and you are finding this a more difficult prospect than you first imagined.
Check out my answer here on why PDF creation and reading is non-trivial and why you should reach for a tool to help you do this:
https://stackoverflow.com/a/53357682/1669243
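Since the immediate goal is pre-filling text fields, a forms-aware library does this in a few lines rather than diffing decompressed streams by hand. Below is a hedged Python sketch using pypdf (the question's stack is Node.js, where comparable libraries exist); the file name and field names are assumptions that depend on the actual PDF.

```python
# Sketch only: fill AcroForm text fields in an existing PDF with pypdf.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("membership_form.pdf")   # hypothetical file name
writer = PdfWriter()
writer.append(reader)                       # copy the pages (and the form) over

# Field names are placeholders; they must match the names defined in the PDF form.
writer.update_page_form_field_values(
    writer.pages[0],
    {"name": "Kim", "age": "...", "address": "...", "phone": "..."},
)

with open("prefilled.pdf", "wb") as f:
    writer.write(f)
```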
I'm trying to extract some entries from a PDF, but the bad formatting is making it inconvenient to simply parse through like a normal document. There isn't any consistent positioning for the text, so each entry is a unique scramble with no consistent pattern I can find. I only want the entry name and the info on the right, not the field name or description.
I've tried experimenting with headers and layout info using the PyPDF2 module, but there doesn't seem to be any metadata for the PDF besides basic author info.
My idea was to use the Google Cloud Vision API to transcribe the text, but that brings up issues of positioning.
Does anyone know of a better methodology for this, or, if not, simply how to handle the positioning with the Cloud Vision API?
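For the positioning part specifically, Cloud Vision's document text detection returns a word-level layout with bounding boxes, so the transcription does not have to lose position information. A hedged Python sketch (the image file name is a placeholder):

```python
# Sketch only: OCR a page image with Google Cloud Vision and print each word
# with the top-left corner of its bounding box.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("page.png", "rb") as f:              # hypothetical page image
    image = vision.Image(content=f.read())

response = client.document_text_detection(image=image)

for page in response.full_text_annotation.pages:
    for block in page.blocks:
        for paragraph in block.paragraphs:
            for word in paragraph.words:
                text = "".join(symbol.text for symbol in word.symbols)
                xs = [v.x for v in word.bounding_box.vertices]
                ys = [v.y for v in word.bounding_box.vertices]
                print(text, min(xs), min(ys))
```

Words whose y-coordinates fall in the same band can then be grouped into rows, which is usually enough to separate an entry name from the value to its right.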
I am trying to implement a document management system using SharePoint. One major issue is that colleagues cannot find documents in the current setup (a local file server). They have asked for a system that scans uploaded documents, automatically looks for keywords in them, and then populates a "Meta" column.
I have had some success with OCR on image files, but so far no success getting keywords out of Office documents (doc, xls, etc.).
Is there a way to set up a flow to do this task for me?
Any help is much appreciated.
I tried "Get file metadata" and Azure "Text analysis", but it seems to take the raw data of the files (XML, I assume) and returns that the document is too large to analyse.
There is something vague about this requirement - how is a keyword defined in a document?
Therefore, the first obvious solution would be to assign keywords to each file upon uploading it. You may create a process for this with Flow: tasks, reminders, and so on.
Automating this with OCR means you need an OCR service that works with MS Flow, and there you have only one choice: ElasticOCR. Then, in your flow:
- feed the document content to the ElasticOCR action
- keep in mind that OCR is not 100% accurate
- analyze the generated text content according to your keyword definition (see the sketch after this list)
- finally, write the meta back to the library in the corresponding columns.
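For illustration only, the keyword check in the third step amounts to something like the following Python sketch; in a real flow this logic would live in whatever action or script step you call, and the keyword list is an assumption.

```python
# Sketch only: decide which keywords from a predefined list occur in OCR text.
import re

KEYWORDS = ["invoice", "contract", "purchase order"]   # hypothetical keyword list

def find_keywords(ocr_text: str) -> list[str]:
    text = ocr_text.lower()
    # Word-boundary match so "order" does not match "reorder", for example.
    return [kw for kw in KEYWORDS if re.search(rf"\b{re.escape(kw)}\b", text)]

# The returned list is what gets written back to the "Meta" column.
```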
Having worked on a similar requirement, we asked uploaders to publish their documents with a short abstract (a column from the content type). The assumption is that the abstract contains the keywords and is stored in a multi-line column, making it searchable site-wide.