Azure Form Recognizer Table Not Being Properly Extracted

I am using https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/curl-train-extract to build a model using Train without labels.
The problem I am running into is that when I run a file through the model (a file that was used to train it), it is not picking up the table: there is no "tables" node in the output.
From what I have seen, the service should be able to include this as part of the JSON, but instead it breaks the table down into very granular OCR key/value pairs, such as:
{
  "key": {
    "text": "__Tokens__34",
    "boundingBox": null,
    "elements": null
  },
  "value": {
    "text": "2 X 3/4",
    "boundingBox": [
      3.1181,
      3.7292,
      3.5278,
      3.7292,
      3.5278,
      3.8583,
      3.1181,
      3.8583
    ],
    "elements": null
  },
  "confidence": 1.0
}
Am I missing a flag or something?
Thank you in advance.

It seems the table is not detected automatically with Train without labels. Can you please share an image of the table (with any PII removed)? You can also try Train with labels, or the Layout API, to see if either recognizes the table automatically.
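For reference, a minimal sketch of calling the v2.1 Layout API from Python (endpoint, key, and file name are placeholders; the response shape follows the REST quickstart):

import time
import requests

# Placeholders - substitute your own resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

# Submit the document to the Layout API (an async operation).
with open("sample.pdf", "rb") as f:
    resp = requests.post(
        f"{ENDPOINT}/formrecognizer/v2.1/layout/analyze",
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/pdf"},
        data=f,
    )
resp.raise_for_status()
result_url = resp.headers["Operation-Location"]

# Poll until the analysis completes.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

# Tables, if detected, appear under analyzeResult.pageResults[*].tables.
for page in result.get("analyzeResult", {}).get("pageResults", []):
    for table in page.get("tables", []):
        print(f"Table with {table['rows']} rows x {table['columns']} columns")

If Layout finds the table here, the issue is specific to the unlabeled custom model rather than to the document itself.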

I had the same problem, but I noticed it started working once I enabled Full Text.

Related

Fine Tuning an OpenAI GPT-3 model on a collection of documents

According to the documentation at https://beta.openai.com/docs/guides/fine-tuning, the training data to fine-tune an OpenAI GPT-3 model should be structured as follows:
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
I have a collection of documents from an internal knowledge base that have been preprocessed into a JSONL file in a format like this:
{ "id": 0, "name": "Article Name", "description": "Article Description", "created_at": "timestamp", "updated_at": "timestamp", "answer": { "body_txt": "An internal knowledge base article with body text", }, "author": { "name": "First Last"}, "keywords": [], "url": "A URL to internal knowledge base"}
{ "id": 1, "name": "Article Name", "description": "Article Description", "created_at": "timestamp", "updated_at": "timestamp", "answer": { "body_txt": "An internal knowledge base article with body text", }, "author": { "name": "First Last"}, "keywords": [], "url": "A URL to internal knowledge base"}
{ "id": 2, "name": "Article Name", "description": "Article Description", "created_at": "timestamp", "updated_at": "timestamp", "answer": { "body_txt": "An internal knowledge base article with body text", }, "author": { "name": "First Last"}, "keywords": [], "url": "A URL to internal knowledge base"}
The documentation then suggests that a model could be fine-tuned on these articles using the command openai api fine_tunes.create -t <TRAIN_FILE_ID_OR_PATH> -m <BASE_MODEL>.
Running this results in:
Error: Expected file to have JSONL format with prompt/completion keys. Missing prompt key on line 1. (HTTP status code: 400)
That isn't unexpected, given the documented file structure noted above. Indeed, if I run openai tools fine_tunes.prepare_data -f training-data.jsonl then I am told:
Your file contains 490 prompt-completion pairs
ERROR in necessary_column validator: prompt column/key is missing. Please make sure you name your columns/keys appropriately, then retry
Is this the right approach to fine-tuning a GPT-3 model on collections of documents, such that questions could later be asked about their content? What would one put in the prompt and completion fields in this case, since I am not starting from a set of possible questions and ideal answers?
Have I fundamentally misunderstood the mechanism used to fine-tune a GPT-3 model? It does make sense to me that GPT-3 would need to be trained on possible questions and answers. However, given that the base models are already trained and this process is more about providing additional datasets which aren't in the public domain so that questions can be asked about them, I would have thought what I want to achieve is possible. As a working example, I can indeed go to https://chat.openai.com/ and ask a question about these documents as follows:
Given the following document:
[Paste the text content of one of the documents]
Can you tell me XXX
And indeed it often gets the answer right. What I'm now trying to do is fine-tune the model on ~500 of these documents, so that one doesn't have to paste a whole document each time a question is asked, and so that the model might even be able to consider content across all ~500 documents rather than just the single one the user provided.
Fine-tuning is a process of adapting a pre-trained machine learning model to a particular task; it is not a way to give the model an internal knowledge base. Instead of fine-tuning the model, you can create a database of embeddings for chunks of data from the knowledge base. When a query is received, the database is searched semantically for the chunk(s) of data most similar to the query, and that information is then fed to GPT-3 as context to answer from. With this approach you can also easily update the knowledge by adding new chunks of data to the database.
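A minimal sketch of that approach with the openai Python package (pre-1.0 style) and numpy; the model names and chunk contents are placeholders, not requirements:

import numpy as np
import openai

openai.api_key = "<your-api-key>"

def embed(text):
    # Returns one embedding vector for the input text.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Build the "database": one embedding per knowledge-base chunk.
chunks = [
    "An internal knowledge base article with body text ...",
    "Another article ...",
]
chunk_vectors = np.array([embed(c) for c in chunks])

def top_chunks(query, k=3):
    q = embed(query)
    # Cosine similarity between the query and every chunk.
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]

# Feed the most relevant chunks to a completion call as context.
question = "How do I reset my password?"
context = "\n\n".join(top_chunks(question))
answer = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:",
    max_tokens=200,
)
print(answer["choices"][0]["text"].strip())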

How can I include the "source" field from the Microsoft Azure Custom Question Answer API?

I am leveraging the latest iteration of Azure's Custom question answering module in Language Studio from an external app that I've created, and I cannot figure out how to receive the actual source when a question is answered. I don't know if that's because it simply isn't possible right now or what, but in the API docs the sample answer includes the source field; no matter what I've tried, I can't get it to show up.
Page where API doc is found - https://learn.microsoft.com/en-us/rest/api/cognitiveservices/questionanswering/question-answering/get-answers#knowledgebaseanswer
Quick example snippet of how I've adapted the API:
{
  "question": "<question>",
  "top": 3,
  "userId": "<user>",
  "confidenceScoreThreshold": 0.2,
  "rankerType": "Default",
  "filters": {
    "metadataFilter": {
      "metadata": []
    }
  },
  "answerSpanRequest": {
    "enable": true,
    "confidenceScoreThreshold": 0.2,
    "topAnswersWithSpan": 1
  },
  "includeUnstructuredSources": true
}
I understand the metadata bit is empty; I may add something later, but as of now I'm not using the metadata aspect of the Language Studio sources themselves.
At any rate, the bottom line is that I don't see an option to return a source, and I don't get it back in the response body, yet I see it in the sample response in the API doc. So what gives, am I missing something?
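For context, a minimal sketch of posting such a body from Python (endpoint, project name, deployment, and key are placeholders; per the get-answers reference, each item in answers should carry a source property):

import requests

# Placeholders for the Language resource values.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
URL = (
    f"{ENDPOINT}/language/:query-knowledgebases"
    "?projectName=<project>&deploymentName=production&api-version=2021-10-01"
)

body = {
    "question": "<question>",
    "top": 3,
    "confidenceScoreThreshold": 0.2,
    "includeUnstructuredSources": True,
}

resp = requests.post(URL, headers={"Ocp-Apim-Subscription-Key": KEY}, json=body)
resp.raise_for_status()

# The get-answers reference documents a "source" property on each answer
# object, alongside "answer" and "confidenceScore".
for ans in resp.json().get("answers", []):
    print(ans.get("confidenceScore"), ans.get("source"), ans.get("answer", "")[:80])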

Unable to search for more than 20 chars in Azure Search

We are currently running into an issue while expanding our Azure Search features.
When we have the following string indexed in Azure Search:
AEDE190EACWWG4VGLDE02000UHKPT
and we search for that complete string, we are not able to find it.
However, when we use only the first 20 characters, we are able to find it.
So with the string below, we are able to find it:
AEDE190EACWWG4VGLDE
However, adding just one more character makes it disappear again. And this is not only within our implementation; the same happens in Azure itself when entering the string in the query box.
The field is set up as:
Retrievable
Filterable
Searchable
Does anyone know how to solve this issue?
I have tested your scenario and it works fine; I cannot reproduce the problem you describe. You don't specify which analyzer you use, so I'm going to assume the standard analyzer.
Here is how I tested.
I created a new index with two fields, Id and Ordcode.
I uploaded two records via Postman:
"value": [
{
"#search.action": "mergeOrUpload",
"Id": "1",
"Ordcode" : "AEDE190EACWWG4VGLDE02000UHKPT"
},
{
"#search.action": "mergeOrUpload",
"Id": "2",
"Ordcode": "ABC123"
}]
I searched for the string AEDE190EACWWG4VGLDE02000UHKPT using searchMode=all and queryType=full. The response was as expected:
{
  "@odata.context": "https://<search-service>.search.windows.net/indexes('dg-test-65143696')/$metadata#docs(*)",
  "@odata.count": 1,
  "value": [
    {
      "@search.score": 0.2876821,
      "Id": "1",
      "Ordcode": "AEDE190EACWWG4VGLDE02000UHKPT"
    }
  ]
}
I also tried to reproduce via Search Explorer in the Azure portal, even with simple query mode and searchMode=any (the defaults):
search=AEDE190EACWWG4VGLDE02000UHKPT&$count=true&$select=Id,Ordcode
There is a limit on the tokens produced (depending on the analyzer you use), but it's not 20 unless you have defined a shorter max token length; the standard analyzer's default is 255 characters.
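If you want to verify how your string is actually tokenized, the index's Analyze Text API is handy; a minimal sketch (service URL, key, and API version are placeholders):

import requests

SERVICE = "https://<search-service>.search.windows.net"
API_KEY = "<admin-key>"

# Ask the index which tokens a given analyzer produces for this value.
resp = requests.post(
    f"{SERVICE}/indexes/dg-test-65143696/analyze?api-version=2020-06-30",
    headers={"api-key": API_KEY},
    json={"text": "AEDE190EACWWG4VGLDE02000UHKPT", "analyzer": "standard.lucene"},
)
resp.raise_for_status()

# With the standard analyzer the whole string should come back as one token;
# if it is split or truncated, the analyzer configuration is the culprit.
for token in resp.json()["tokens"]:
    print(token["token"], token["startOffset"], token["endOffset"])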

Azure Spell not detecting spelling mistakes

I've written a quick proof-of-concept console app to test the functionality of the Azure Spell Check Cognitive Services product; however, it often fails to detect obvious spelling mistakes.
Having experimented with recommendations from other SO answers, I've had limited success.
Even using the demo located at https://azure.microsoft.com/en-us/services/cognitive-services/spell-check/ produces no results.
For example, consider the following piece of text: "Currently growing my compny which is a UK based Online compny with clients across the world. Working since 2001 to help indivduals."
This produces no results. I've looked at regional settings, Proof vs. Spell mode, and character counts, to no avail.
Has anyone had any success with this service, or, even better, does the above text snippet produce results for you?
Spell mode is working for me with your sample. The JSON result is:
{
  "_type": "SpellCheck",
  "flaggedTokens": [
    {
      "offset": 21,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.9264452620075305
        }
      ]
    },
    {
      "offset": 55,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.8740149238635179
        }
      ]
    },
    {
      "offset": 120,
      "token": "indivduals",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "individuals",
          "score": 0.753968656686115
        }
      ]
    }
  ]
}
OK, so after a fair amount of trial and error I've had some success, which has solved some issues and created others. I've not been able to get a reliable result from Spell mode, but I have with Proof; however, after adding a fairly short piece of text it would again report no results. Inspecting the API call shows the text is encoded in the POST; removing both "%0D" and "%0A" (the line-feed characters) allows me to proof long texts successfully. That would be fine, except that, being UK based, lots of correct spellings are now flagged as incorrect, because Proof mode is only available in the US. So I've still been unable to get a functioning Spell result (it works only for very short pieces of text). I understand the documentation states up to 130 chars for GET but 10,000 chars for POST, and my typical example POSTs are around 1,000 chars. Possibly a ticket with MS, unless anyone has any ideas?
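For anyone else hitting this, a minimal sketch of stripping the line-feed characters before POSTing to the v7 spellcheck endpoint (key, market, and text are placeholders):

import requests

KEY = "<your-key>"
URL = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck"

long_text = """Currently growing my compny which is a UK based
Online compny with clients across the world."""

# Remove CR/LF (the "%0D"/"%0A" pairs once URL-encoded) before sending.
flattened = long_text.replace("\r", " ").replace("\n", " ")

resp = requests.post(
    URL,
    params={"mode": "proof", "mkt": "en-US"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/x-www-form-urlencoded",
    },
    data={"text": flattened},
)
resp.raise_for_status()

for token in resp.json().get("flaggedTokens", []):
    print(token["token"], "->", [s["suggestion"] for s in token["suggestions"]])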

LUIS - understand any person name

We are building a product on LUIS / the Microsoft Bot Framework, and one of the doubts we have concerns person-name understanding. The product is meant to be used by anyone who signs up on our website, which means any company that signs up can have any number of employees, with any names.
What we have understood is that the entity is not able to recognize all names. We have created a phrase list, but as far as we know there is a limit to phrase lists (10K, or even if it's 100K), and there is no limit to the names in the world. The other option we are considering is to train the entity with utterances; however, if we have hundreds of customers with thousands of users each, utterances will not be a good idea either.
I do not see any other way of handling this situation. Am I missing something here? Has anyone faced a similar problem, and how did you handle it?
The worst case would be to create a separate LUIS instance for each customer, but that's really a big task to take on only because we can't handle names.
As you might already know, a person's name could literally be anything: e.g. an animal, car, month, or color. So there isn't any definitive way to identify something as a name. The closest you can come is via part-of-speech text analysis, either taking a guess or comparing against an existing list. LUIS or any other NLP tool is unlikely to help with this. Here's one approach that might work out better: try something like Microsoft's Text Analytics cognitive service, with a POST to the Key Phrases endpoint, like this:
https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases
and the body:
{
  "documents": [
    {
      "language": "en-us",
      "id": "myid",
      "text": "Please book a flight for John Smith at 2:30pm on Wednesday."
    }
  ]
}
That returns:
{
  "languageDetection": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "detectedLanguages": [
          {
            "name": "English",
            "iso6391Name": "en",
            "score": 1.0
          }
        ]
      }
    ],
    "errors": []
  },
  "keyPhrases": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "keyPhrases": [
          "John Smith",
          "flight"
        ]
      }
    ],
    "errors": []
  },
  "sentiment": {
    "documents": [
      {
        "id": "e4263091-2d54-4ab7-b660-d2b393c4a889",
        "score": 0.5
      }
    ],
    "errors": []
  }
}
Notice that you get "John Smith" and "flight" back as key phrases. "flight" is definitely not a name, but "John Smith" might be, giving you a better idea of what the name is. Additionally, if you have a database of customer names, you can compare the value to a customer name, either exact or soundex, to increase your confidence in the name.
Sometimes the services don't give you a 100% answer and you have to be creative with workarounds. Please see the Text Analytics API docs for more info.
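As an illustration of the soundex comparison, here is a minimal sketch (standard American Soundex; the customer list is made up):

def soundex(name: str) -> str:
    """American Soundex: first letter plus three digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    result = name[0].upper()
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result += code
        if ch not in "hw":  # h and w do not reset the previous code
            prev = code
    return (result + "000")[:4]

# Hypothetical customer list to compare extracted key phrases against.
customers = ["John Smith", "Jane Doe"]
candidate = "Jon Smyth"  # e.g. a key phrase returned by Text Analytics

match = any(
    len(candidate.split()) == len(c.split())
    and all(soundex(a) == soundex(b) for a, b in zip(candidate.split(), c.split()))
    for c in customers
)
print(match)  # True: "Jon Smyth" sounds like "John Smith"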
I have asked this question to a few Microsoft folks in my region, and it seems there is currently no way for LUIS to identify arbitrary names.
It's disappointing that, being an NLP service, it cannot handle such things :(
I found wit.ai best so far at identifying names, and IBM Watson is also good up to a point. Let's see how they evolve, but for now I have switched to https://wit.ai
