I would like to upload a small image and share it with a few people using the MS Graph API.
The question here is: is it possible to do this using MS Graph, or is it not possible at all? If it is possible through any means, can someone please suggest an optimal solution for this scenario?
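If "small" means under 4 MB, one way this could work is a simple upload to OneDrive followed by the driveItem invite action to share with specific people. Below is a minimal sketch assuming delegated permissions and an already-acquired access token; the file path, content type, and recipient handling are placeholders, not a definitive implementation:

```typescript
import { readFile } from "node:fs/promises";

const GRAPH = "https://graph.microsoft.com/v1.0";
const accessToken = process.env.GRAPH_ACCESS_TOKEN!; // acquired elsewhere, e.g. via MSAL

async function uploadAndShare(localPath: string, recipients: string[]) {
  // 1) Simple upload (< 4 MB) into the signed-in user's OneDrive.
  const bytes = await readFile(localPath);
  const uploadRes = await fetch(
    `${GRAPH}/me/drive/root:/shared-images/photo.png:/content`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "image/png",
      },
      body: bytes,
    }
  );
  const item = await uploadRes.json(); // driveItem including an "id"

  // 2) Share the uploaded driveItem with specific people (read-only).
  const inviteRes = await fetch(`${GRAPH}/me/drive/items/${item.id}/invite`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      recipients: recipients.map((email) => ({ email })),
      requireSignIn: true,
      sendInvitation: true,
      roles: ["read"],
      message: "Sharing a small image with you",
    }),
  });
  return inviteRes.json(); // permission objects for the invited recipients
}
```

For larger files you would switch the first call to an upload session instead of the single PUT.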
I am trying to automate importing a refunds report into Google Analytics 4. I can't find good documentation on importing this data using the Analytics Management API. I came across https://www.npmjs.com/package/@google-analytics/data, which seems good for pulling reports from GA, but I couldn't find a way to do data import.
I am writing a Node.js script and was hoping someone has encountered this scenario before and could share how they accomplished it. Any help or a pointer in the right direction will be really appreciated.
The alternative to the UA Analytics Management API is the Google Analytics Admin API for GA4.
To my knowledge it doesn't support data import at this time. The API is still under development; it may come in the future, but there is no way to know.
I would suggest looking at the Measurement Protocol for GA4; you may be able to use that.
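As an illustration, sending refund data through the GA4 Measurement Protocol from Node.js could look roughly like this. This is a sketch, not an official import mechanism: the measurement ID, API secret, and the way you obtain the client_id are assumptions you would fill in from your own property:

```typescript
// Minimal sketch: send a "refund" event to GA4 via the Measurement Protocol.
const MEASUREMENT_ID = "G-XXXXXXXXXX";          // your GA4 measurement ID
const API_SECRET = process.env.GA4_API_SECRET!; // created in the GA4 admin UI

async function sendRefund(clientId: string, transactionId: string, value: number) {
  const res = await fetch(
    `https://www.google-analytics.com/mp/collect?measurement_id=${MEASUREMENT_ID}&api_secret=${API_SECRET}`,
    {
      method: "POST",
      body: JSON.stringify({
        client_id: clientId, // must match a client_id your property has seen
        events: [
          {
            name: "refund",
            params: {
              currency: "USD",
              transaction_id: transactionId,
              value,
            },
          },
        ],
      }),
    }
  );
  // The collect endpoint returns 2xx even for malformed payloads; use
  // /debug/mp/collect while testing to get validation messages back.
  return res.status;
}
```

Note that Measurement Protocol events attach to existing client IDs and transactions rather than acting as a bulk data import, so it only helps if you can map refunds back to the original purchases.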
I am using the Azure Face API to tell apart two different persons' faces.
It was easy to use thanks to the good documentation on the Microsoft Azure API website.
But the confidence rate differs between my API call and the demo on the website: https://azure.microsoft.com/en-us/services/cognitive-services/face/#demo
My code is simple.
First I get the face IDs of the two uploaded images using the face detection API.
Then I send the two face IDs to the face verify API and get back a confidence rate that represents the similarity of the two faces.
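For reference, a minimal sketch of those two calls (the endpoint, key, and image URLs below are placeholders; the real app uploads the image bytes rather than a URL):

```typescript
const FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const FACE_KEY = process.env.FACE_API_KEY!;

// Detect a face in an image and return its faceId.
async function detectFaceId(imageUrl: string): Promise<string> {
  const res = await fetch(`${FACE_ENDPOINT}/face/v1.0/detect?returnFaceId=true`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": FACE_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  const faces = await res.json();
  return faces[0].faceId; // assumes exactly one face per image
}

// Verify whether two face IDs belong to the same person.
async function verify(imageUrl1: string, imageUrl2: string) {
  const [faceId1, faceId2] = await Promise.all([
    detectFaceId(imageUrl1),
    detectFaceId(imageUrl2),
  ]);
  const res = await fetch(`${FACE_ENDPOINT}/face/v1.0/verify`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": FACE_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ faceId1, faceId2 }),
  });
  return res.json(); // { isIdentical: boolean, confidence: number }
}
```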
I always get a lower confidence rate from my API call than from the demo on the Azure website, about 20% less.
For example, I get 0.65123 from the API call while the demo gives a higher number like 0.85121.
This is the Azure Face API specification to verify two faces:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/face/face/verifyfacetoface
I have no clue why this happens. I don't resize or crop the images on upload.
I use exactly the same images for this test.
Is it possible for MS Azure to manipulate the values for their own interests?
I wonder if anyone has the same issue? If yes, please share your experience with me.
Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model.
'detection_02': Detection model released in May 2019 with improved accuracy compared to detection_01. When you use the Face - Detect API, you can assign the model version with the detectionModel parameter. The available values are:
detection_01
detection_02
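A short sketch of what that looks like in the detect call (placeholder endpoint and key; the faceIds you then pass to /verify come from whichever detection model you chose):

```typescript
const FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const FACE_KEY = process.env.FACE_API_KEY!;

// Request detection_02 explicitly instead of the default detection_01.
async function detectWithModel(imageUrl: string) {
  const res = await fetch(
    `${FACE_ENDPOINT}/face/v1.0/detect?returnFaceId=true&detectionModel=detection_02`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": FACE_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  return res.json(); // array of detected faces, each with a faceId
}
```

If the demo uses a newer model than your code, that alone can explain a sizeable difference in the verify confidence.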
Recently Microsoft published the Microsoft Search API (beta) which provides the possibility to index external systems by creating an MS Graph search custom connector.
I created such a connector, which has been successful so far. I also pushed a few items to the index, and in the MS admin center I created a result type and a vertical. Now I'm able to find the external items in question in the SharePoint Online modern search center, in a dedicated tab belonging to the search vertical created before. So far so good.
But now I wonder:
How can I ensure that the external data is continuously pushed to the MS Search index? (How can this be implemented? Is there a tutorial or a sample project? What is the underlying architecture?)
Is there a concept of Full / Incremental / Continuous Crawls for a Search Custom Connector at all? If so, how can I "hook" into a crawl in order to update changed data to the index?
Or do I have to implement it all on my own? And if so, what would be a suitable approach?
Thank you for trying out the connector APIs. I am glad to hear that you are able to get items into the index and see the results.
Regarding your questions, the logic for determining when to push items, and your crawl strategy is something that you need to implement on your own. There is no one best strategy per se, and it will depend on your data source and the type of access you have to that data. For example, do you get notifications every time the data changes? If not, how do you determine what data has changed? If none of that is possible, you might need to do a periodic full recrawl, but you will need to consider the size of your data set for ingestion.
We will look into ways to reduce the amount of code you have to write in the future, but right now, this is something you have to implement on your own.
-James
I recently implemented incremental crawling for Graph connectors using Azure functions. I created a timer triggered function that fetches the items updated in the data source since the time of the last function run and then updates the search index with the updated items.
I also wrote a blog post around this approach, considering a SharePoint list as the data source. The entire source code can be found at https://github.com/aakashbhardwaj619/function-search-connector-crawler. Hope it is useful.
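A rough sketch of that timer-triggered approach is below (Azure Functions v4 programming model). The connection ID, token acquisition, change query against the data source, and state persistence are all stand-ins for your own plumbing; the full implementation is in the repository linked above:

```typescript
import { app, InvocationContext, Timer } from "@azure/functions";

const GRAPH = "https://graph.microsoft.com/v1.0";
const connectionId = "myexternalconnection"; // your Graph connector connection

app.timer("incrementalCrawl", {
  schedule: "0 */15 * * * *", // every 15 minutes
  handler: async (timer: Timer, context: InvocationContext) => {
    const token = await getGraphToken();               // e.g. client-credentials flow via MSAL
    const since = await loadLastCrawlTime();           // persisted between runs
    const changed = await getItemsChangedSince(since); // query your data source for deltas

    for (const item of changed) {
      // Upsert each changed item into the connection's index.
      await fetch(`${GRAPH}/external/connections/${connectionId}/items/${item.id}`, {
        method: "PUT",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          acl: [{ type: "everyone", value: "everyone", accessType: "grant" }],
          properties: { title: item.title, url: item.url }, // must match your registered schema
          content: { type: "text", value: item.body },
        }),
      });
    }

    await saveLastCrawlTime(new Date());
    context.log(`Pushed ${changed.length} changed items`);
  },
});

// --- hypothetical stand-ins for your own plumbing ---
interface SourceItem { id: string; title: string; url: string; body: string; }
async function getGraphToken(): Promise<string> { return process.env.GRAPH_TOKEN!; }
async function loadLastCrawlTime(): Promise<Date> { return new Date(Date.now() - 15 * 60 * 1000); }
async function saveLastCrawlTime(_t: Date): Promise<void> { /* persist to blob/table storage */ }
async function getItemsChangedSince(_since: Date): Promise<SourceItem[]> { return []; }
```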
Context
I have a mobile app that gives our users the possibility to capture the name plate of our products automatically. For this I use the Azure Cognitive Services OCR service.
I am a bit worried that customers might capture pictures of insufficient quality or of the wrong area of the product (where no name plate is). To analyse whether this is the case, it would be handy to have a copy of the captured pictures so we can learn what went well or what went wrong.
Question
Is it possible not only to process an uploaded picture but also to store it in Azure Storage so that I can analyse it at a later point in time?
What I've tried so far
I configured the Diagnostic settings so that logs and metrics are stored in Azure Storage. As the name suggests, this covers only logs and metrics, not the actual image, so it does not solve my issue.
Remarks
I know that I can implement this manually in the app, but I think it would be better if I have to upload the picture only once.
I'm aware that there are data protection considerations that must be made.
No, you can't add automatic logging based only on the OCR operation; you have to implement it yourself.
But to avoid uploading the picture twice, as you said, you could put your logic on the server side: send the image to your own API, and in that API forward it to OCR while storing it in parallel.
But based on your question, I guess you might not have any server-side component in your app?
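A rough sketch of that server-side idea, assuming an Express API, a Blob Storage container, and the synchronous v3.2 OCR endpoint (all names and env variables here are placeholders, not a definitive design):

```typescript
import express from "express";
import { BlobServiceClient } from "@azure/storage-blob";

const app = express();

// Storage container where captured pictures are archived for later analysis.
const containerClient = BlobServiceClient
  .fromConnectionString(process.env.STORAGE_CONNECTION_STRING!)
  .getContainerClient("captured-nameplates");

const VISION_ENDPOINT = process.env.VISION_ENDPOINT!; // e.g. https://<resource>.cognitiveservices.azure.com
const VISION_KEY = process.env.VISION_KEY!;

app.post("/nameplate", express.raw({ type: "image/*", limit: "10mb" }), async (req, res) => {
  const image: Buffer = req.body;

  // Run OCR and archiving in parallel, so the mobile app uploads only once.
  const [ocrResponse] = await Promise.all([
    fetch(`${VISION_ENDPOINT}/vision/v3.2/ocr?detectOrientation=true`, {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": VISION_KEY,
        "Content-Type": "application/octet-stream",
      },
      body: image,
    }),
    containerClient.getBlockBlobClient(`capture-${Date.now()}.jpg`).uploadData(image),
  ]);

  res.json(await ocrResponse.json()); // return the OCR result to the app
});

app.listen(3000);
```

The mobile app then posts the picture once to /nameplate instead of calling the Cognitive Services endpoint directly.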
Google API was not able to detect anything in this image