Azure Logic App - Twitter Connector Issues

I have a Logic App with a Twitter connector and a Dropbox connector. The latter has a repeater that loops over the Twitter body and, in each iteration, uploads a text file named after the Tweet_ID. The Dropbox connector frequently returns conflict errors: the Twitter connector seems to keep returning tweets that have already been processed, which results in duplicate file names.
When I look at the output of the Dropbox connector, this is the body it returns:
"body": {
"status": 409,
"source": "api-content.dropbox.com",
"message": "conflict_file"
}

You have probably seen this page, https://azure.microsoft.com/sv-se/documentation/articles/app-service-logic-use-logic-app-features/, where they show how to do this.
Have you checked that you don't supply the same Tweet_ID several times? The Logic App JSON format is a bit tricky right now, with not much documentation.
/dag

You are right. The Twitter connector doesn't "remember" the tweets that are returned from a search; it will return the same tweets again. (Just to be clear, we are discussing the Twitter connector action Search Tweets.)
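If you need the flow to keep running despite the duplicates, one workaround is to deduplicate outside the connector. Below is a minimal sketch of the idea in TypeScript (not Logic Apps JSON), using the current Dropbox v2 upload endpoint for illustration; the token placeholder and the uploadTweet helper are assumptions, and the 409 is treated as "already processed" rather than as a failure.

// Minimal dedup sketch: remember processed tweet IDs and treat
// Dropbox's 409 conflict as "this tweet was already handled".
interface Tweet {
  id: string;   // Tweet_ID, used as the file name
  text: string;
}

const processed = new Set<string>(); // IDs uploaded during this run

async function uploadTweet(tweet: Tweet): Promise<void> {
  if (processed.has(tweet.id)) return; // skip duplicates from the search

  // Upload via the Dropbox v2 content endpoint (placeholder token).
  const res = await fetch("https://content.dropboxapi.com/2/files/upload", {
    method: "POST",
    headers: {
      Authorization: "Bearer <DROPBOX_TOKEN>",
      "Dropbox-API-Arg": JSON.stringify({ path: `/tweets/${tweet.id}.txt` }),
      "Content-Type": "application/octet-stream",
    },
    body: tweet.text,
  });

  if (res.status === 409) {
    // conflict_file: the file already exists from an earlier run,
    // so record the ID and move on instead of failing the flow.
    processed.add(tweet.id);
    return;
  }
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  processed.add(tweet.id);
}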


Is there a way to intercept the query in nlp.js WebChat API server?

Just getting started with nlp.js, and I'd like to be able to test out some ideas with their Express API server package.
As far as I can tell, there's no way to "intervene" in the QnA bot exchange, for instance if I'd like to format the output to contain the user's name, a time, or whatever.
Say my corpus was a tsv file with:
some question \t welcome, #name
And I wanted to swap out that #name tag? Right now, I just get that string exactly as is.
In the conf.json:
"api-server": {
"port": 3000,
"serveBot": true
}
Maybe there's pipeline logic to do that? I can't seem to find much reference material on the available pipeline events or on how to intercede in the WebChat flow.
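One possible workaround, in case the bundled api-server exposes no hook: skip the packaged server and wrap node-nlp's NlpManager in your own Express route, post-processing the answer string before returning it. Below is a minimal sketch under those assumptions; getUserName is a hypothetical helper, and model.nlp is assumed to be a previously trained and saved model.

// Sketch: serve the bot from our own Express route so we can rewrite
// placeholder tags like #name before the answer reaches the client.
import express from "express";
import { NlpManager } from "node-nlp";

const manager = new NlpManager({ languages: ["en"] });
manager.load("./model.nlp"); // assumed: corpus already trained and saved

// Hypothetical lookup; replace with however you resolve the user.
function getUserName(userId: string): string {
  return "Alice";
}

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const { userId, message } = req.body;
  const result = await manager.process("en", message);
  // Post-process the raw corpus answer, swapping out the #name tag.
  const answer = (result.answer ?? "").replace("#name", getUserName(userId));
  res.json({ answer });
});

app.listen(3000);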

How to get file comments using the SharePoint REST API

I'm using the following API call to successfully get file data:
https://acme.sharepoint.com/sites/my-site/_api/Web/Lists(guid'xxx')/files('yyy')
This is a docx file on which I've posted comments using the web console.
How can I fetch these comments using the REST API? I tried appending /comments to the URL, but I'm getting the following 404 error:
{
  "error": {
    "code": "-1, Microsoft.SharePoint.Client.ResourceNotFoundException",
    "message": {
      "lang": "en-US",
      "value": "Cannot find resource for the request Comments."
    }
  }
}
The Comments() endpoint currently exists only under the Items() endpoint and not under the Files() endpoint.
Basically, you can access the Comments() functionality only under the below endpoint:
GET https://{site_url}/_api/web/lists/GetByTitle({list_title})/items({item_id})/Comments
You can easily test the above in a Power Automate scenario with a "Send an HTTP request to SharePoint" action.
When I attempt to target the file directly in the document library, the request fails; when I instead target it via the list item ID it got in the document library, the comments are returned, and I am also able to target a specific comment that I left.
Please also take note of the following:
The Comments() endpoint is not available for Microsoft file types, meaning .docx, .xlsx and similar files. It is only available for non-Microsoft file types like PDFs, TXTs and so on. I am not sure why this rule is in effect, but my best guess would be that there is already a commenting functionality provided within a Word document, for example.
You can find a bit more info about the above here.
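For reference, a minimal sketch of calling that endpoint from code; the site URL, list title, item ID and access token below are placeholders, not values from the question.

// Sketch: fetch list-item comments via the SharePoint REST API.
// All identifiers below are placeholders for your own site/list/item.
const site = "https://acme.sharepoint.com/sites/my-site";
const listTitle = "Documents";
const itemId = 42;

async function getComments(accessToken: string) {
  const url = `${site}/_api/web/lists/GetByTitle('${listTitle}')/items(${itemId})/Comments`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "application/json;odata=nometadata",
    },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.value; // array of comment objects (author, text, createdDate, ...)
}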

Azure search for users, groups, or service principals by name or email address via REST API or Python module (MS Graph)?

In Azure, when adding a user, group, or service principal to a role, you can search by name and email address in the same search box, the way the portal's role-assignment picker does. When I look at the MS Graph APIs, there are separate APIs for users, groups, and service principals (documentation links below), and it looks like the search options cannot be mixed (just display name or just email).
Does anyone know how to achieve a search like this using an Azure REST API? I'm curious what calls Azure is actually making and whether they are part of the published REST API. Or, if one search combines multiple API calls, that would be confusing: since the results are paginated, it would be hard to figure out what to display from which call.
I'm building an app to add permissions, and I'm trying to recreate a search feature like this.
The only way I can think of to achieve this now would be an option to search for either 'groups', 'users', or 'service principals', plus another option to search by 'email' or by 'displayName' (but not both in the same search). This seems clunkier but technically OK... I'd rather do it the way the Azure portal does, though.
https://learn.microsoft.com/en-us/graph/api/user-list?view=graph-rest-1.0&tabs=http
https://learn.microsoft.com/en-us/graph/api/group-list?view=graph-rest-1.0&tabs=http
https://learn.microsoft.com/en-us/graph/api/serviceprincipal-list?view=graph-rest-1.0&tabs=http
The Microsoft Graph API provides batching functionality that lets you combine multiple requests and send them as a single request for processing. In your case, three requests will be processed on the server side (the Graph API side), but your application sends a single request and gets back a single response.
Your request would be something like:
{
  "requests": [
    {
      "id": "1",
      "method": "GET",
      "url": "/users?$filter=<your-filter-criteria>"
    },
    {
      "id": "2",
      "method": "GET",
      "url": "/groups?$filter=<your-filter-criteria>"
    },
    {
      "id": "3",
      "method": "GET",
      "url": "/servicePrincipals?$filter=<your-filter-criteria>"
    }
  ]
}
You can learn more about the batching capability in Microsoft Graph API here: https://learn.microsoft.com/en-us/graph/json-batching.
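To make that concrete, here is a minimal sketch of posting the batch from TypeScript and merging the three result sets; the filter criteria and the token acquisition are placeholders, not part of the original answer.

// Sketch: one JSON batch request fanned out to /users, /groups and
// /servicePrincipals, with the three result sets merged client-side.
async function searchDirectory(accessToken: string, term: string) {
  const filter = `startswith(displayName,'${term}')`; // placeholder criteria
  const body = {
    requests: [
      { id: "1", method: "GET", url: `/users?$filter=${encodeURIComponent(filter)}` },
      { id: "2", method: "GET", url: `/groups?$filter=${encodeURIComponent(filter)}` },
      { id: "3", method: "GET", url: `/servicePrincipals?$filter=${encodeURIComponent(filter)}` },
    ],
  };

  const res = await fetch("https://graph.microsoft.com/v1.0/$batch", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Batch request failed: ${res.status}`);

  const { responses } = await res.json();
  // Flatten the three result pages into one list for display.
  return responses.flatMap((r: any) => r.body?.value ?? []);
}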
So the batching answer was not exactly what I wanted, but it is something really cool that I'll likely end up using in the future!
Batching could get ugly with paging, though. Say I want 30 total results, so I set the page size to 10 for each of the three requests: one returns 10 and has a nextLink, the next returns 0, and the last returns 5. I now have 15 results to display, but paging the remaining request and sorting the results after following the nextLink could surface items that should have appeared earlier. It might just be weird, and I don't have time to think it through.
I just ended up doing a dropdown for users, groups, and service principals, and you have to search each separately. Not as cool as how MS does it internally, but it's consistent, predictable, and works.

Can Azure batch transcription results be directed to a non-public URL?

I would like to use the Microsoft Azure Cognitive Services speech-to-text service. It offers a REST API, which I have successfully used. I can point it to an Azure Blob Storage container using a SAS URI, and the files in the container are transcribed.
My problem is that when I try to retrieve the transcription results from the API, they are published to a public URL. Since voice data can be sensitive, I would like to keep the results stored privately. Is there any way to do this?
It does not seem to be an option in the API schema, although you can set a destinationContainerUrl. I have tried to set the destinationContainerUrl, but the result does not appear in the container.
I have only used the API reference, which is why I am not posting any code.
You've found the correct option. Using destinationContainerUrl will write the results into this container. Make sure you provide a container SAS which allows listing and writing.
When the job succeeds, the results should be there. Please check the status of your job; maybe it was set to failed.
Documentation about transcriptions:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription
If the job succeeds and the files are not in this container, please let us know the $.self link of the job and the creation time, to help us gather the logs.
OK. So the solution was super simple; I just got the POST request JSON wrong. destinationContainerUrl needs to be under properties, as shown below:
{"contentUrls": ["LINK-TO-BLOB-SAS-URI-TOKEN"],
"properties": {
"diarizationEnabled": false,
"wordLevelTimestampsEnabled": false,
"punctuationMode": "DictatedAndAutomatic",
"profanityFilterMode": "Masked",
"destinationContainerUrl": "LINK-TO-BLOB-SAS-URI-TOKEN"
},
"locale": "en-US",
"displayName": "Transcription from blob to blob"
}
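For completeness, a minimal sketch of submitting that body from code, assuming the v3.0 batch transcription endpoint with key-based auth via the Ocp-Apim-Subscription-Key header; the region and the SAS placeholders are assumptions carried over from above.

// Sketch: submit a batch transcription job whose results land in a
// private container via destinationContainerUrl.
const region = "westeurope"; // placeholder: your Speech resource region

async function submitTranscription(subscriptionKey: string) {
  const body = {
    contentUrls: ["LINK-TO-BLOB-SAS-URI-TOKEN"],
    properties: {
      diarizationEnabled: false,
      wordLevelTimestampsEnabled: false,
      punctuationMode: "DictatedAndAutomatic",
      profanityFilterMode: "Masked",
      destinationContainerUrl: "LINK-TO-BLOB-SAS-URI-TOKEN",
    },
    locale: "en-US",
    displayName: "Transcription from blob to blob",
  };

  const res = await fetch(
    `https://${region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": subscriptionKey,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    }
  );
  if (res.status !== 201) throw new Error(`Create failed: ${res.status}`);
  return res.headers.get("Location"); // poll this URL for job status
}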

Receive zip file from Box View API

I'm using the Box View API to convert a PDF file to HTML, specifically the /documents/{id}/content.{extension} endpoint.
The response to this GET call is a .zip file, but I don't know how to retrieve it and make it downloadable.
Also note that I'm using Node.js.
Thanks for your help.
You can set your own webhook URL that Box will call when your document status changes (one POST to your webhook for "document.viewable" and one for "document.done", plus one for "document.error" if a transformation error occurred).
Just listen for the "document.done" status and download the assets then. The format posted to the webhook you have set looks like:
[{
  "type": "document.done",
  "data": {
    "id": "4cca28f1159c4f368193d5014fabc16e"
  },
  "triggered_at": "2014-01-30T20:33:04.798Z"
}]
Beware of the docs and check the format programmatically: their API docs are often not quite correct, and, at the time of writing, they post multiple webhooks at once (a bug I've reported).
For more info, see the Box View API docs.
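A minimal sketch of that flow in Node.js/TypeScript, assuming the view-api.box.com/1 base URL with Token authorization and the events array shown above as the webhook body; the download directory is a placeholder.

// Sketch: webhook that waits for "document.done" and then downloads
// the converted assets as a zip from the View API content endpoint.
import express from "express";
import { createWriteStream } from "fs";
import { Readable } from "stream";
import { pipeline } from "stream/promises";

const API_TOKEN = process.env.BOX_VIEW_API_TOKEN ?? "";
const app = express();
app.use(express.json());

app.post("/box-webhook", async (req, res) => {
  res.sendStatus(200); // acknowledge immediately

  // Box posts an array of events; handle each "document.done".
  for (const event of req.body) {
    if (event.type !== "document.done") continue;

    const url = `https://view-api.box.com/1/documents/${event.data.id}/content.zip`;
    const download = await fetch(url, {
      headers: { Authorization: `Token ${API_TOKEN}` },
    });
    if (!download.ok || !download.body) continue;

    // Stream the zip to disk; serve or unzip it from there as needed.
    await pipeline(
      Readable.fromWeb(download.body as any),
      createWriteStream(`./downloads/${event.data.id}.zip`)
    );
  }
});

app.listen(8080);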
