I have been using the 2022-06-30-preview version of the API to run OCR on docx and PowerPoint documents.
Now that the API has stabilized and moved to 2022-08-31, I have updated my code to use the stable version (just a version bump of the SDK client), but the same documents are now rejected with an InvalidContent error: "The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats."
Has support for Office documents been dropped, or is there a setting to add? From the changelog I don't see any mention that support was dropped between the last preview version and the stable one.
I'm using the Node.js SDK. I have checked that the same docx document, using the exact same code, is accepted with the @azure/ai-form-recognizer@4.0.0-beta.5 SDK client, but not with the latest stable @azure/ai-form-recognizer@4.0.0 version. The code I'm using is almost exactly the example code from the quickstart; only the URLs change.
Well, according to this MS doc, support for Microsoft Office files has been dropped from all SDKs.
So you have two options: Form Recognizer still supports Microsoft Office files through the REST API, so you can either make the HTTP calls yourself (a rough sketch follows below), or you can convert the files to PDF and then use the regular SDK for further processing.
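If you go the REST route, here is a rough sketch of the calls involved; the api-version, model id, and polling interval below are assumptions, so check the REST documentation for the exact values that accept Office files:
// Rough sketch only: api-version and model id are assumptions, adjust as needed.
const axios = require("axios");
const fs = require("fs");

const endpoint = ""; // e.g. https://<resource-name>.cognitiveservices.azure.com
const key = "";

async function analyzeOfficeDocument(filePath) {
  const analyzeUrl =
    `${endpoint}/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2022-06-30-preview`;

  // Submit the document as raw bytes
  const submit = await axios.post(analyzeUrl, fs.readFileSync(filePath), {
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/octet-stream",
    },
  });

  // Poll the operation returned in the Operation-Location header until it finishes
  const operationUrl = submit.headers["operation-location"];
  let result;
  do {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    result = await axios.get(operationUrl, {
      headers: { "Ocp-Apim-Subscription-Key": key },
    });
  } while (result.data.status === "running" || result.data.status === "notStarted");

  return result.data.analyzeResult;
}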
The conversion is done using the docx-pdf npm package. Here I have a hjh.docx which I convert to pdfuploader.pdf and then process.
const fs = require("fs");
const util = require("util");
const docxConverter = require("docx-pdf");
const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");

const key = "";
const endpoint = "";

async function main() {
  // Conversion logic: wait for the docx -> pdf conversion to finish
  // before handing the converted file to Form Recognizer.
  const convert = util.promisify(docxConverter);
  const conversionResult = await convert("./hjh.docx", "./pdfuploader.pdf");
  console.log("result" + conversionResult);

  // Form Recognizer logic
  const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
  const readStream = fs.createReadStream("<Path>"); // path to the converted PDF, e.g. "./pdfuploader.pdf"
  const poller = await client.beginAnalyzeDocument("prebuilt-document", readStream, {
    onProgress: ({ status }) => {
      console.log(`status: ${status}`);
    },
  });
  const result = await poller.pollUntilDone();
  console.log(result);
}

main().catch((error) => {
  console.error("An error occurred:", error);
  process.exit(1);
});
@azure/ai-form-recognizer@4.0.0 output:
@azure/ai-form-recognizer@4.0.0-beta.5 output:
I'm working on building a snippet manager app. Through the interface you can create new snippets and edit them using a code editor, but what I'm stuck on is how to send the snippet code to my server with a POST request so it can create a new file for that snippet.
For example:
const getUser = async (name) => {
  let response = await fetch(`https://api.github.com/users/${name}`);
  let data = await response.json();
  return data;
}
One solution I can think of is to parse the code into a JSON equivalent containing all the tokens, but for that I'd have to add a parser for every language and select one based on the language the user picked. I'm trying to figure out a way to avoid adding all those parsers, unless there isn't any other solution.
Another solution I can think of is to generate the file on the frontend and send that file in the POST request.
My current stack is Node + React.
The second solution is working for me right now. I've written the code below for it:
app.post("/create", isFileAttached, function(req, res) {
const { file } = req.files;
const saveLocation = `${saveTo}/${file.mimetype.split("/")[1]}`;
const savePath = `${saveLocation}/${file.name}`;
if (!fs.existsSync(saveLocation)) {
fs.mkdirSync(saveLocation, { recursive: true });
}
fs.writeFile(savePath, file.data.toString(), err => {
if (err) throw err;
res.status(200).send({ message: "The file has been saved!" });
});
});
With this solution I no longer have to add any parsers, since whatever is written in the files is no longer a concern.
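For completeness, here is a hedged sketch of a client side that would pair with that route; the uploadSnippet helper and its arguments are hypothetical, but the "file" field name matches what the server reads from req.files:
// Hypothetical client-side helper; adjust the route and field name to your setup.
async function uploadSnippet(snippetText, fileName, mimeType) {
  // Wrap the editor contents in a File so the server receives a regular upload
  const file = new File([snippetText], fileName, { type: mimeType });
  const formData = new FormData();
  formData.append("file", file); // express-fileupload exposes this as req.files.file
  const response = await fetch("/create", { method: "POST", body: formData });
  return response.json();
}

// Example usage with a snippet like the one in the question:
// uploadSnippet(editorContents, "getUser.js", "application/javascript");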
Context
I am working on a Proof of Concept for an accounting bot. Part of the solution is processing receipts: the user takes a picture of a receipt, the bot asks some questions about it and stores it in the accounting solution.
Approach
I am using the BotFramework Node.js sample 15.handling attachments, which loads the attachment into an arraybuffer and stores it on the local filesystem, ready to be picked up and sent to the accounting software's API.
// Assumes the usual requires at the top of the file: path, axios and fs.
async function handleReceipts(attachments) {
  const attachment = attachments[0];
  const url = attachment.contentUrl;
  const localFileName = path.join(__dirname, attachment.name);
  try {
    const response = await axios.get(url, { responseType: 'arraybuffer' });
    // Some channels return JSON containing a serialized Buffer instead of raw bytes
    if (response.headers['content-type'] === 'application/json') {
      response.data = JSON.parse(response.data, (key, value) => {
        return value && value.type === 'Buffer' ? Buffer.from(value.data) : value;
      });
    }
    fs.writeFile(localFileName, response.data, (fsError) => {
      if (fsError) {
        throw fsError;
      }
    });
  } catch (error) {
    console.error(error);
    return undefined;
  }
  return (`success`);
}
Running locally it all works like a charm (also thanks to mdrichardson - MSFT). Deployed to Azure, I get:
There was an error sending this message to your bot: HTTP status code InternalServerError
I narrowed the problem down to the second part of the code, the part that writes to the local filesystem (fs.writeFile). Small files and big files result in the same error on Azure: fs.writeFile seems unable to find the file.
What is happening according to the streaming logs:
The attachment uploaded by the user is saved on Azure:
{ contentType: 'image/png',
  contentUrl: 'https://webchat.botframework.com/attachments//0000004/0/25753007.png?t=<a very long string>',
  name: 'fromClient::25753007.png' }
localFileName (the destination of the attachment) resolves to:
localFileName: D:\home\site\wwwroot\dialogs\fromClient::25753007.png
Axios loads the attachment into an arraybuffer. Its response:
response.headers.content-type: image/png
This is interesting because locally it is 'application/octet-stream'
fs throws an error:
fsError: Error: ENOENT: no such file or directory, open 'D:\home\site\wwwroot\dialogs\fromClient::25753007.png'
Some assistance really appreciated.
Removing the fromClient:: prefix from attachment.name solved it. As @Sandeep mentioned in the comments, the special characters were probably the issue. I'm not sure what the prefix's purpose is; I will mention it in the BotFramework samples GitHub repository.
[Update] The team will fix this. It was caused by the Direct Line service.
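For reference, a minimal sketch of stripping the prefix inside handleReceipts before building the path; the regular expressions here are assumptions, so adapt them to the characters you actually need to remove:
// Hypothetical sanitization step: drop the fromClient:: prefix and any
// characters that are not valid in Windows file names before joining the path.
const safeName = attachment.name
  .replace(/^fromClient::/, '')
  .replace(/[<>:"/\\|?*]/g, '_');
const localFileName = path.join(__dirname, safeName);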
I've got a Dialogflow agent for which I'm using the Inline Editor (powered by Cloud Functions for Firebase). When I try to get external API data using request-promise-native, I keep getting "Ignoring exception from a finished function" in my Firebase console.
// `request` here is the request-promise-native module mentioned above
function video(agent) {
  agent.add(`You are now being handled by the productivity intent`);
  const url = "https://reqres.in/api/users?page=2";
  return request.get(url)
    .then(jsonBody => {
      var body = JSON.parse(jsonBody);
      agent.add(body.data[0].first_name);
      return Promise.resolve(agent);
    });
}
Your code looks correct. The exception in this case is likely because you're not using a paid plan, so network access outside Google is blocked. You can probably see the exact exception by adding a catch block:
function video(agent) {
  agent.add(`You are now being handled by the productivity intent`);
  const url = "https://reqres.in/api/users?page=2";
  return request.get(url)
    .then(jsonBody => {
      var body = JSON.parse(jsonBody);
      agent.add(body.data[0].first_name);
      return Promise.resolve(agent);
    })
    .catch(err => {
      console.error('Problem making network call', err);
      agent.add('Unable to get result');
      return Promise.resolve(agent);
    });
}
(If you do this, you may want to update your question with the exact error from the logs.)
Inline Editor uses Firebase. If you do not have a paid account with Firebase, you will not be able to access external APIs.
I'm using watson-developer-cloud with Node.js and trying to delete more than one intent with the following:
let IntentName = req.body.intentName;
var params = {
  workspace_id: workspaceId,
  intent: // delete more than one intent here
};
conversation.deleteIntent(params, function(err, response) {
  if (err) {
    console.error(err);
  } else {
    console.log(JSON.stringify(response, null, 2));
  }
});
How can I delete more than one?
One option is to download the whole workspace and work on the JSON object directly. Then, when you're done, send the whole updated block back to your workspace in one go.
This means fewer calls, lowering the chances of a rate limit kicking in.
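A hedged sketch of that approach; the getWorkspace / updateWorkspace method names and the export flag are assumptions based on the Workspace API, so verify them against your version of watson-developer-cloud:
// Sketch only: method names and parameters are assumptions, check your SDK version.
const intentsToDelete = ['intent_one', 'intent_two']; // hypothetical intent names

conversation.getWorkspace({ workspace_id: workspaceId, export: true }, function(err, workspace) {
  if (err) {
    return console.error(err);
  }
  // Drop the unwanted intents from the exported workspace JSON
  const remainingIntents = workspace.intents.filter(function(intent) {
    return intentsToDelete.indexOf(intent.intent) === -1;
  });
  // Send the updated intent list back to the workspace in one call
  conversation.updateWorkspace({ workspace_id: workspaceId, intents: remainingIntents }, function(err, response) {
    if (err) {
      console.error(err);
    } else {
      console.log(JSON.stringify(response, null, 2));
    }
  });
});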
The SDK is based on the API for Watson Assistant. The API supports deletion of one intent per call. So you would need to loop over all the intents you want to delete and remove them one by one.
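A minimal sketch of that loop, reusing the conversation client and workspaceId from the question; the intent names are hypothetical:
// Delete several intents one by one; intentNames is a hypothetical list.
const intentNames = ['intent_one', 'intent_two'];

intentNames.forEach(function(intentName) {
  conversation.deleteIntent({ workspace_id: workspaceId, intent: intentName }, function(err, response) {
    if (err) {
      console.error(err);
    } else {
      console.log(JSON.stringify(response, null, 2));
    }
  });
});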
I am trying to pass an image which the user has uploaded to the Microsoft Cognitive Services Face API. The image is available on the server in the uploads folder.
Microsoft expects the image to be 'application/octet-stream' and passed as binary data.
I am currently unable to find a way to pass the image to the API in a form it will accept, and I keep receiving "decoding error, image format unsupported". As far as I'm aware the image must be uploaded in blob or file format, but being new to Node.js I'm really unsure how to achieve this.
So far I have the following and have looked at a few options, but none have worked; the other options I tried returned similar errors such as 'file too small or large', yet when I manually test the same image via Postman it works fine.
image.mv('./uploads/' + req.files.image.name, function(err) {
  if (err)
    return res.status(500).send(err);
});

var encodedImage = new Buffer(req.files.image.data, 'binary').toString('hex');
let addAPersonFace = cognitive.addAPersonFace(personGroupId, personId, encodedImage);
addAPersonFace.then(function(data) {
  res.render('pages/persons/face', { data: data, personGroupId: req.params.persongroupid, personId: req.params.personid });
})
The package you appear to be using, cognitive-services, does not seem to support file uploads. You might want to raise an issue on its GitHub page.
Alternative NPM packages do exist, though, if that's an option. With project-oxford, you would do something like the following:
var oxford = require('project-oxford'),
    client = new oxford.Client(YOUR_FACE_API_KEY),
    uuid = require('uuid');

var personGroupId = uuid.v4();
var personGroupName = 'my-person-group-name';
var personName = 'my-person-name';
var facePath = './images/face.jpg';

// Skip the person-group creation if you already have one
console.log(JSON.stringify({ personGroupId: personGroupId }));
client.face.personGroup.create(personGroupId, personGroupName, '')
  .then(function(createPersonGroupResponse) {
    // Skip the person creation if you already have one
    client.face.person.create(personGroupId, personName)
      .then(function(createPersonResponse) {
        console.log(JSON.stringify(createPersonResponse));
        personId = createPersonResponse.personId;
        // Associate an image with the person
        client.face.person.addFace(personGroupId, personId, { path: facePath })
          .then(function(addFaceResponse) {
            console.log(JSON.stringify(addFaceResponse));
          });
      });
  });
Please update to version 0.2.0; this should work now.