I have the following setup for a Logic App that deletes entries from an Azure Storage table. It works fine, but there is a problem if the storage table contains more than 1K entities. In that case only the oldest 1K entities are deleted and the rest remain in the table ...
I found that this is caused by the 1K batch limit and that a "continuation token" is provided in this case.
The question is: how can I include this continuation token in my workflow?
Thank you very much for your help.
So ... I don't have enough reputation points to post an image, so I'll try to describe it:
Get Entities ([Table])
->
For each ([Get entities result List of Entities])
->
Delete Entity
It only returns 1000 records because Pagination is off by default. So go to the action's Settings, turn Pagination on, and set the Threshold to a large enough number. I tested with 2000 and it returned all records.
Even though this official doc doesn't mention Azure Table, it does have this limit. For more information about Pagination, refer to this doc: Get more data, items, or records by using pagination in Azure Logic Apps.
Based on my test, we cannot get the continuationToken header with the Azure Table Storage action. This feature might not be implemented for the Table Storage action.
A workaround could be to use a Loops action and repeatedly check for remaining entities.
The continuationToken is included in some actions, for example the Azure Cosmos DB action, and you can utilize it with those. Here is a tutorial for how to use it.
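For comparison, this is what the same delete-everything flow looks like outside Logic Apps, where the SDK follows the continuation tokens for you. A minimal sketch, assuming the azure-data-tables Python package; the connection string and table name are placeholders:

```python
from azure.data.tables import TableClient

# Placeholder connection string and table name -- adjust to your account.
table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="MyTable"
)

# list_entities() returns a pager that transparently follows the
# continuation tokens behind the 1K-per-page server limit.
for entity in table.list_entities():
    table.delete_entity(
        partition_key=entity["PartitionKey"], row_key=entity["RowKey"]
    )
```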
I have a requirement where I have to send Kusto query results to different audiences at a regular interval.
My current approach is to set up an Azure Function which runs the query and shares the results with a mail service, which distributes them to the wider audience.
I was wondering if I can leverage Azure alert rules for this task. I know we can set up custom log queries for Azure Data Explorer, but can they be run so that query results from one of the database's tables (in ADX) can be distributed?
You can create Kusto queries and then use Azure alerts to send out results based on the query. Another way is to use Logic Apps, which can also run Kusto queries and then send the results to whatever you need. That is probably better in this case, since it's not really an alert: if I understand you correctly, you just want to run a query and distribute the result.
Just choose the one that suits you best and try it out. If you end up with specific issues, come back and ask specific questions and we will get you going.
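If you go the code route instead (for example inside the Azure Function mentioned above), running the query itself is straightforward. A minimal sketch, assuming the azure-kusto-data Python package; the cluster URI, database, and query are placeholders, and the mailing part is left out:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder cluster URI and database -- replace with your ADX values.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"
)
client = KustoClient(kcsb)

# Run the query and collect rows; hand the rows to your mail service next.
response = client.execute("MyDatabase", "MyTable | summarize count() by Category")
rows = [row.to_dict() for row in response.primary_results[0]]
print(rows)
```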
I'm experimenting with Logic Apps and I have the following question:
There are log entries stored in a "LogEntries" table in Azure Table Storage.
Is it possible to use a Logic App to return the 10 most recent "LogEntries" records based on the Timestamp of the record?
I found that you can turn on pagination, so I can limit the number of records it returns, but I'm not sure how to sort the records.
If so, can someone tell me which steps to use?
Thanks a lot in advance.
Azure Logic Apps may not support sorting operations; you can refer to this article.
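If code is an option, the usual pattern is to pull the entities and sort on the client, since Table Storage itself only orders by PartitionKey/RowKey. A minimal sketch, assuming the azure-data-tables Python package and a placeholder connection string:

```python
from azure.data.tables import TableClient

# Placeholder connection string -- replace with your storage account's.
table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="LogEntries"
)

# Table Storage has no server-side ORDER BY, so sort client-side on the
# built-in Timestamp (exposed via entity metadata) and keep the 10 newest.
entities = list(table.list_entities())
latest = sorted(
    entities, key=lambda e: e.metadata["timestamp"], reverse=True
)[:10]
for entity in latest:
    print(entity["RowKey"], entity.metadata["timestamp"])
```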
I need to capture all inserts/updates/deletes in Azure Table Storage for compliance purposes. How is this accomplished? I'm looking for code samples and/or documentation. I know there is Change Feed support for blobs (https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-change-feed?tabs=azure-portal), which is still in preview. Anything similar for tables?
Table Storage does not provide a change feed or anything similar. If you need that, you could switch to "Premium Tables", which is basically the Table API on Cosmos DB and does provide things like a change feed. Of course, this comes at a higher price point.
https://learn.microsoft.com/en-us/azure/cosmos-db/table-introduction
If you're desperate, you can try Azure Storage analytics logging. Important caveat:
Requests are logged on a best-effort basis. This means that most requests will result in a log record, but the completeness and timeliness of Storage Analytics logs are not guaranteed.
As such, it doesn't solve your compliance problem, but it might help someone else.
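The analytics logs land in a hidden $logs blob container in the same storage account, so inspecting them is just a blob-listing exercise. A minimal sketch, assuming the azure-storage-blob Python package and a placeholder connection string:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string -- replace with your storage account's.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# Storage Analytics writes its logs to the hidden $logs container,
# organized by service and time, e.g. table/2024/01/31/2300/000000.log.
logs = service.get_container_client("$logs")
for blob in logs.list_blobs(name_starts_with="table/"):
    print(blob.name)
```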
As we can see in the documentation,
Azure search-limits-quotas-capacity
The Basic tier is supposed to allow 100 simple fields, but we tried 900+ fields with it and the index was successfully created with 900+ fields in Azure. So, can someone confirm the maximum limit for fields at the Basic tier?
Also, we can see that the JSON request to push documents to Azure has a limit of 32000 documents in a single request, but we can't find the maximum size limit for it. If we go with 1000 fields per document in the push request, does Azure have any limitation in terms of JSON request size?
Here is my Azure plan:
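As an illustration of staying under whatever per-request cap applies, documents are usually pushed in fixed-size batches rather than in one giant request. A minimal sketch, assuming the azure-search-documents Python package; the endpoint, key, index name, and batch size are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint, key, and index name -- replace with your values.
client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="my-index",
    credential=AzureKeyCredential("<admin-key>"),
)

def push_in_batches(documents, batch_size=500):
    """Upload documents in small batches so no single JSON request grows
    too large, regardless of how many fields each document carries."""
    for i in range(0, len(documents), batch_size):
        batch = documents[i : i + batch_size]
        results = client.upload_documents(documents=batch)
        print(f"batch {i // batch_size}: {sum(r.succeeded for r in results)} succeeded")
```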
Basically I have a storage account with a container that contains blobs of unhandled errors. My task is to somehow generate a metric that shows how many blobs were uploaded to that container every hour. I tried using the Azure built-in metrics, but it seems like those might limit me to the entire storage account and not just one container. I did some research on Power BI and thought that might be a good place to start, but again I came up empty.
If anyone has a good starting place for me, that would be incredible. I'm assuming that this will end up being something that requires some SQL queries, or perhaps something I can do programmatically in Visual Studio. Apologies if this was posted in the wrong place, but it seemed like the best fit in my opinion.
Thanks!
You should take a look at Azure Event Grid with Blob Storage Integration. In short, whenever a blob is created, an event will be raised by Azure Event Grid. You can consume this event and post the event data to an HTTP endpoint (or call an Azure Function) which can save this information about this event in some persistent storage (Azure Tables for example). You can then create reports by querying this data.
For more information about this, you may find this link helpful: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-event-overview.
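For illustration, here is roughly what the consuming side could look like: an Event Grid-triggered Azure Function that records each BlobCreated event into a table you can later aggregate by hour. A minimal sketch, assuming the classic (function.json-based) Python Azure Functions model plus the azure-data-tables package, with placeholder names throughout:

```python
import azure.functions as func
from azure.data.tables import TableClient

# Placeholder connection string and table name -- replace with your values.
TABLE = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="BlobUploads"
)

def main(event: func.EventGridEvent):
    """Runs once per Event Grid event; stores one row per created blob,
    partitioned by hour so hourly counts become a cheap partition query."""
    if event.event_type != "Microsoft.Storage.BlobCreated":
        return
    hour = event.event_time.strftime("%Y-%m-%d-%H")
    TABLE.create_entity({
        "PartitionKey": hour,               # one partition per hour
        "RowKey": event.id,                 # event id is unique
        "Url": event.get_json().get("url"), # blob URL from the event payload
    })
```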