Does Azure API Management support streaming, e.g. sending 10 GB payloads? It would be a REST API returning a JSON array.
The payload is streamed by default unless you buffer it explicitly, e.g. by using a policy expression (context.Request.Body, context.Response.Body), a transformation policy that manipulates the body (such as json-to-xml), or a content validation policy such as validate-content.
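For illustration, a hedged sketch of an outbound policy that would defeat streaming: any expression that reads the body, such as the made-up logging variable below, forces the gateway to buffer the entire response before forwarding it.

```xml
<outbound>
    <base />
    <!-- Hypothetical example: touching context.Response.Body in any
         expression (here, just to record its length) makes APIM buffer
         the whole response. Remove such expressions to keep streaming. -->
    <set-variable name="bodyLength"
        value="@(context.Response.Body.As<string>(preserveContent: true).Length)" />
</outbound>
```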
I have large files, and it takes too much time when I pass them as base64 or even bytes. I am looking for an option where I can give a blob URL directly to the DocuSign envelope. If there is any option, please help.
Yes, you can upload documents in binary format and from cloud providers.
Cloud providers
DocuSign can load directly from
Google Drive
Dropbox
Box
OneDrive (Personal and Business)
Documentation. To use this feature as a developer, first set it up using the DocuSign web app. Then use DocuSign's API logging to see how the API calls reference the files on the cloud servers.
Also see the docs for the Document.remoteUrl attribute.
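As a rough sketch of what the remoteUrl attribute looks like in practice, here is a hypothetical envelope definition built as a plain dict (the subject, document name, and URL are placeholders, and the exact set of required fields may differ for your account setup):

```python
def remote_url_envelope(doc_url: str, doc_name: str) -> dict:
    """Build an envelope definition whose document is fetched by DocuSign
    from a connected cloud provider via the remoteUrl attribute."""
    return {
        "emailSubject": "Please sign this document",  # placeholder subject
        "status": "sent",
        "documents": [{
            "documentId": "1",
            "name": doc_name,
            "fileExtension": "pdf",
            "remoteUrl": doc_url,  # DocuSign pulls the file from here
        }],
    }
```

You would then POST this definition to the usual Envelopes:create endpoint; use API logging, as suggested above, to confirm the exact shape your account expects.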
Use multi-part mime format with the regular Envelopes:create call
Multi-part mime enables you to include one or more documents in binary format. Docs. Example using Node.js. Examples for other languages are also available as part of the QuickStart code examples.
Use Chunked Uploads
The Chunked Uploads API resource enables you to upload a document in binary format, in multiple parts if needed, then use the resulting reference URL when creating an envelope.
The API Request Builder uses chunked uploads. So you can monitor the API via API logging while using the API Request Builder to see what it is doing.
In any case, remember that there's a maximum request size of 34 MB for any API call to DocuSign. There are other limits too. Discuss with your DocuSign contacts if you have any issues with limits.
How do I send a JSON payload to a Zabbix Trapper Item without using Zabbix Sender?
I see the documentation on how to format the sender request and I see the documentation related to the header, but I haven't found how to use the header with a JSON payload.
My goal is to send Azure Activity Logs to a Zabbix Trapper Item using Azure Alerts.
I don't want to set up a Script Item or equivalent that would pull the information from the Azure API, as I would then have to worry about hitting the query limit for the Azure Management APIs and being throttled by the Azure platform.
How is the header incorporated into the request payload?
You need to use an implementation of the same protocol, for example:
in go: https://github.com/adubkov/go-zabbix
in python: https://github.com/adubkov/py-zabbix
and put that in a cloud function to send the payload to Zabbix.
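If you'd rather not pull in a library, the framing those implementations use is small: the literal bytes ZBXD, a flags byte (0x01), a little-endian length, then the JSON body. A minimal hand-rolled sketch (the server, host, and item key below are placeholders):

```python
import json
import socket
import struct

def zabbix_packet(payload: dict) -> bytes:
    """Frame a JSON payload with the Zabbix protocol header:
    b'ZBXD' + flags byte 0x01 + 8-byte little-endian body length."""
    body = json.dumps(payload).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

def send_trapper_value(server: str, port: int, host: str, key: str, value: str) -> dict:
    """Send one value to a trapper item and return the server's parsed reply."""
    request = {
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": value}],
    }
    with socket.create_connection((server, port)) as sock:
        sock.sendall(zabbix_packet(request))
        reply = sock.recv(4096)
    # The reply is framed the same way; skip its 13-byte header.
    return json.loads(reply[13:])
```

A cloud function receiving the Azure Alert webhook could call send_trapper_value with the alert's JSON serialized as the item value.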
I am trying to use Azure Logic Apps to read data from one of the Salesforce objects, which has a huge number of records. The Salesforce connector fetches the records and returns the pagination link as #odata.nextLink. But when I try to use a JSON parser to read the value, it comes back as null.
I tried to access the nextLink in a browser, but it requires authentication. What authentication do we pass here?
I would like to use an until action to iterate as long as I get a next link. So how do I check the condition for the until loop?
nextLink doesn't look like a core Salesforce thing; it might be the OData connector preprocessing the results for you. You'd have to consult the documentation for the connector (if any).
Salesforce's REST API will return a field named nextRecordsUrl if there's a next page of results; you'd call that in a loop until the field disappears. You'd call it like any other REST API resource available after login, by passing Authorization: Bearer <sessionId, also known as accessToken here>. Again - probably the connector abstracts this away from you. I don't think you can send headers like that in a browser; you'd need curl, Postman, SoapUI, or a similar HTTP client.
If you don't get a better answer and documentation is scarce, consider using the raw REST API. Or Azure Data Factory has an almost-decent Salesforce connector?
https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query.htm
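The nextRecordsUrl loop described above can be sketched roughly like this (the API version, instance URL, and token are placeholders; the page-fetcher is injected so the pagination logic stays testable):

```python
import json
import urllib.request

def fetch_all_records(get_page, first_path: str) -> list:
    """Page through Salesforce query results.
    `get_page(path)` must return the decoded JSON for one page."""
    page = get_page(first_path)
    records = list(page["records"])
    while "nextRecordsUrl" in page:  # the field disappears on the last page
        page = get_page(page["nextRecordsUrl"])
        records.extend(page["records"])
    return records

def salesforce_get(instance_url: str, access_token: str):
    """Build a page-fetcher that adds the Authorization: Bearer header."""
    def get_page(path: str) -> dict:
        req = urllib.request.Request(
            instance_url + path,
            headers={"Authorization": f"Bearer {access_token}"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    return get_page
```

For example, fetch_all_records(salesforce_get(url, token), "/services/data/v52.0/query?q=...") would keep requesting pages until no nextRecordsUrl remains.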
I can see there are two versions of REST API endpoints for Speech to Text in the Microsoft documentation links.
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription and https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-speech-to-text
One endpoint is [https://.api.cognitive.microsoft.com/sts/v1.0/issueToken] referring to version 1.0 and another one is [api/speechtotext/v2.0/transcriptions] referring to version 2.0. How can I create a speech-to-text service in Azure Portal for the latter one?
Whenever I create a service in different regions, it always creates for speech to text v1.0.
Any tips?
PS: I've Visual Studio Enterprise account with monthly allowance and I am creating a subscription (s0) (paid) service rather than free (trial) (f0) service.
Thanks,
Ozgur
Any official Microsoft Speech resource created in the Azure Portal is valid for Microsoft Speech 2.0.
I understand that the v1.0 in the token URL is surprising, but this token API is not part of the Speech API.
So go to Azure Portal, create a Speech resource, and you're done.
If you want to be sure, go to your created resource and copy your key. That's what you will use for authorization, in a header called Ocp-Apim-Subscription-Key, as explained here.
Demo:
Get your key on your created resource
Go to https://[REGION].cris.ai/swagger/ui/index (REGION being the region where you created your speech resource)
Click on Authorize: you will see both forms of Authorization
Paste your key in the 1st one (subscription_Key), validate
Close this window
Test one of the endpoints, for example the one listing the speech endpoints, by going to the GET operation on /api/speechtotext/v2.0/endpoints
Click 'Try it out' and you will get a 200 OK reply!
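The same call the swagger UI makes can be sketched outside the browser as well; a minimal example (the region and key are placeholders you substitute from your own resource):

```python
import json
import urllib.request

REGION = "westus"            # region where you created your Speech resource
KEY = "<your-resource-key>"  # placeholder: copy it from the Azure Portal

def speech_headers(key: str) -> dict:
    # The resource key goes in the Ocp-Apim-Subscription-Key header.
    return {"Ocp-Apim-Subscription-Key": key}

def list_speech_endpoints(region: str, key: str) -> list:
    """GET the v2.0 endpoints list, mirroring the swagger 'Try it out' call."""
    req = urllib.request.Request(
        f"https://{region}.cris.ai/api/speechtotext/v2.0/endpoints",
        headers=speech_headers(key))
    with urllib.request.urlopen(req) as resp:  # expect 200 OK with a valid key
        return json.load(resp)
```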
I understand your confusion, because the Microsoft documentation on this is ambiguous. Per my research, let me clarify it as below: two types of Speech-to-Text service exist, v1 and v2.
v1 can be found under the Cognitive Services structure when you create it:
Based on statements in the Speech-to-text REST API document:
Before using the speech-to-text REST API, understand:
Requests that use the REST API and transmit audio directly can only contain up to 60 seconds of audio.
The speech-to-text REST API only returns final results. Partial results are not provided.
If sending longer audio is a requirement for your application, consider using the Speech SDK or a file-based REST API, like batch transcription.
So v1 has some limitations on file formats and audio size. If you have further requirements, please look at the v2 API - Batch Transcription hosted by Zoom Media. You can figure it out if you read this document from ZM. You can create that Speech API in the Azure Marketplace:
That's the creation page for it:
Also, you can view the API document at the foot of the above page; it's the V2 API document.
Final tip:
v1's endpoint looks like: https://eastus.api.cognitive.microsoft.com/sts/v1.0/issuetoken
v2's endpoint looks like:
In my application I'm trying to get the metrics of Azure Storage, i.e. metrics for each of its services (blob, queue, file, table). But before trying to get the metrics, I want to check the availability of the storage. Can I do that? Please suggest.
I want to check whether the storage is up or down before collecting metrics via the SDK.
Based on your description, I suggest you send a request to management.azure.com to get your storage account's current status using the REST API.
The request is as below:
Get Method:
https://management.azure.com/subscriptions/{yoursubscriptionsID}/resourceGroups/{resourceGroupname}/providers/Microsoft.Storage/storageAccounts/{Youraccount name}?api-version=2017-06-01
Request header:
authorization: Bearer {accesstoken}
For more details about how to get the access token (register your client application with Azure AD, acquire an access token), you can refer to this article.
Result:
Notice: it will return a JSON message which contains all of your storage account's current state.
This JSON message contains the statusOfPrimary property; this is the status.
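Putting the request above together in code might look like this (the subscription ID, resource group, account name, and token are placeholders you supply yourself):

```python
import json
import urllib.request

API_VERSION = "2017-06-01"

def management_url(subscription_id: str, resource_group: str, account: str) -> str:
    """Compose the ARM GET URL for a storage account resource."""
    return ("https://management.azure.com"
            f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Storage/storageAccounts/{account}"
            f"?api-version={API_VERSION}")

def storage_account_status(subscription_id: str, resource_group: str,
                           account: str, access_token: str) -> str:
    """Fetch the account resource and return properties.statusOfPrimary."""
    req = urllib.request.Request(
        management_url(subscription_id, resource_group, account),
        headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["properties"]["statusOfPrimary"]  # e.g. "available" when up
```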
One way of doing it is to invoke the metadata REST APIs for each of those storage services (blob, queue, table and file). You can go through the links below to see the REST API specifications that return the metadata.
Blob - https://learn.microsoft.com/en-us/rest/api/storageservices/setting-and-retrieving-properties-and-metadata-for-blob-resources
Queue - https://learn.microsoft.com/en-us/rest/api/storageservices/get-queue-metadata
Table - https://learn.microsoft.com/en-us/rest/api/storageservices/get-table-service-properties
File - https://learn.microsoft.com/en-us/rest/api/storageservices/get-directory-metadata
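As one concrete illustration, the Get Queue Metadata call can double as a reachability probe. To avoid the shared-key signing dance, this sketch assumes you have pre-generated a SAS token (a hypothetical simplification; the account, queue, and token are placeholders):

```python
import urllib.request

def queue_metadata_url(account: str, queue: str, sas_token: str) -> str:
    """Compose the Get Queue Metadata URL with a SAS query string appended."""
    return (f"https://{account}.queue.core.windows.net/{queue}"
            f"?comp=metadata&{sas_token}")

def queue_is_reachable(account: str, queue: str, sas_token: str) -> bool:
    """Return True if the metadata call succeeds, False on any I/O error."""
    try:
        with urllib.request.urlopen(
                queue_metadata_url(account, queue, sas_token)) as resp:
            return resp.status == 200
    except OSError:
        return False
```

The blob, table, and file endpoints linked above can be probed the same way with their respective URLs.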