Azure Search max simple fields allowed in the Basic subscription level

As we can see in the documentation,
Azure search-limits-quotas-capacity
The Basic tier allows 100 simple fields per index, but we tried 900+ fields with it, and the index was created successfully in Azure with 900+ fields. So, can someone confirm the maximum number of fields for the Basic subscription level?
Also, we can see that a JSON request to push documents to Azure has a limit of 32000 documents in a single request, but we can't find a maximum size limit for it. If we go with 1000 fields per document in a push request, does Azure have any limitation on the JSON request size?
Here is my Azure plan:
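(For illustration only, a minimal Python sketch of how one might batch documents into push requests and guard the payload size; the service name, index name, api-version and the count/size thresholds below are assumptions to verify against the current service limits, not confirmed values.)

```python
import json
import requests

SEARCH_SERVICE = "my-service"          # hypothetical service name
INDEX_NAME = "my-index"                # hypothetical index name
API_KEY = "<admin-api-key>"
API_VERSION = "2017-11-11"             # assumed api-version; use the one your service supports

ENDPOINT = (f"https://{SEARCH_SERVICE}.search.windows.net/"
            f"indexes/{INDEX_NAME}/docs/index?api-version={API_VERSION}")

MAX_DOCS_PER_BATCH = 1000              # assumed per-batch document cap
MAX_PAYLOAD_BYTES = 16 * 1024 * 1024   # assumed maximum request payload size

def push_documents(documents):
    """Upload documents in batches, keeping each request under the assumed limits."""
    for start in range(0, len(documents), MAX_DOCS_PER_BATCH):
        batch = documents[start:start + MAX_DOCS_PER_BATCH]
        body = {"value": [{**doc, "@search.action": "upload"} for doc in batch]}
        payload = json.dumps(body).encode("utf-8")
        if len(payload) > MAX_PAYLOAD_BYTES:
            raise ValueError("Batch too large; use smaller batches for wide documents")
        resp = requests.post(
            ENDPOINT, data=payload,
            headers={"Content-Type": "application/json", "api-key": API_KEY})
        resp.raise_for_status()
```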

Related

How to identify per user cost and manage in azure

I am developing a website that uses Azure B2C and Azure Storage (Blobs, Tables, Queues and File Share), among others. I want to restrict each user's transactions... say file uploads/downloads, to some number of gigabytes, and then give them a message that their quota is over for the month.
Is it possible to keep track of individual B2C customers in Azure as the website owner? What is the best approach available to handle this?
Thanks in Advance,
Murthy
Actually, Azure Storage doesn't have any feature to restrict a customer's consumption.
The only approach that might meet your need is to use a script, in whatever language Azure supports.
In brief, the script's logic could be:
Create a table with the customers' information.
Set a limit for every user. Write a function that automatically tracks usage and remaining quota, and store the usage value and the remaining-quota value in the table. I use Last to represent the remaining quota in the table.
When the upload API is called, compare the file size with the customer's remaining quota. If the Last quota is at least 10 KB more than the file to be uploaded, allow the upload; otherwise, deny the request.
If the upload succeeds, get the file size when the customer uploads/downloads a file from storage, and store it in the table.
The table could look like this (just an example; adjust it to your needs):
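A minimal sketch of that logic in Python, using the azure-data-tables SDK; the table name, the Used/Last column names, the connection string and the 10 KB margin are illustrative assumptions, not a definitive implementation:

```python
from azure.data.tables import TableClient

# Quota table keyed by customer id; "Used" and "Last" (remaining bytes) are assumed columns.
table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="CustomerQuota")

SAFETY_MARGIN = 10 * 1024  # the "10k" headroom suggested above

def can_upload(customer_id: str, file_size: int) -> bool:
    """Check the customer's remaining quota before accepting an upload."""
    entity = table.get_entity(partition_key="quota", row_key=customer_id)
    return entity["Last"] >= file_size + SAFETY_MARGIN

def record_upload(customer_id: str, file_size: int) -> None:
    """After a successful upload, update the usage and remaining quota."""
    entity = table.get_entity(partition_key="quota", row_key=customer_id)
    entity["Used"] = entity["Used"] + file_size
    entity["Last"] = entity["Last"] - file_size
    table.update_entity(entity)  # merge-updates the changed fields
```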

Azure Logic App configuration with continuation token

I have the following setup of a Logic App for deleting entries from an Azure Storage Table. It works fine, but there is a problem if the storage table has more than 1K entities. In that case only the oldest 1K entities were deleted and the rest remained in the table.
I found that this is caused by the 1K batch limit and that a "continuation token" is provided in this case.
The question is: how can I include this continuation token in my workflow?
Thank you very much for your help.
So... I don't have enough reputation points to post an image, so I'll try to describe it:
Get Entities ([Table]) -> For each ([Get entities result List of Entities]) -> Delete Entity
It only returns 1000 records because Pagination is off by default. So go to the action's Settings, turn Pagination on and set the Threshold to a large enough number. I tested with 2000 and it returned all the records.
Even though this official doc doesn't mention Azure Table, the action does have this limit. For more information about Pagination, refer to this doc: Get more data, items, or records by using pagination in Azure Logic Apps.
Based on my test, we cannot get the continuationToken header with the Azure Table Storage action. This function might not be implemented for the Table Storage action.
A workaround could be to use the Loops action and repeatedly check for remaining entities.
The continuationToken is included in some actions, for example the Azure Cosmos DB action. You can utilize it with those actions. Here is a tutorial on how to use it.
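Outside of Logic Apps, the same 1K page size and continuation-token behaviour can be seen with the azure-data-tables Python SDK, which follows the token for you when you iterate page by page. This is only an illustrative sketch; the table name and connection string are assumptions:

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="MyTable")

# Entities come back in pages (up to 1000 per page); iterating by_page() keeps
# requesting the next page with the continuation token until none is returned.
for page in table.list_entities(results_per_page=1000).by_page():
    for entity in page:
        table.delete_entity(partition_key=entity["PartitionKey"],
                            row_key=entity["RowKey"])
```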

Sync mechanism to Azure Search - how reliable is Azure Search insertion?

How reliable is the insertion mechanism to Azure Search?
Say, for an average call to upload to Azure Search: are there any SLAs on this? What is the average insertion time for one document, and the average failure rate for one document?
I'm trying to send data from my database to Azure Search, and I was wondering whether it is more reliable to send data directly to Azure Search, or to do a dual write, for example to a highly available queue like Kafka, and read from there.
From SLA for Azure Search:
We guarantee at least 99.9% availability for index query requests when an Azure Search Service Instance is configured with two or more replicas, and index update requests when an Azure Search Service Instance is configured with three or more replicas. No SLA is provided for the Free tier.
Your client code needs to follow best practices: batch indexing requests, retry on transient failures with an exponential back-off policy, and scale the service appropriately based on the size of the documents and the indexing load.
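As an illustration of those practices, here is a minimal Python sketch using the azure-search-documents SDK; the endpoint, index name, batch size and retry counts are assumptions, not recommended values:

```python
import time
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.search.documents import SearchClient

client = SearchClient(endpoint="https://<service>.search.windows.net",
                      index_name="my-index",
                      credential=AzureKeyCredential("<admin-api-key>"))

def index_with_retry(documents, batch_size=500, max_attempts=5):
    """Send documents in batches, retrying transient failures with exponential back-off."""
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        for attempt in range(max_attempts):
            try:
                client.upload_documents(documents=batch)
                break
            except HttpResponseError:
                if attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
```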
Whether or not to use an intermediate buffer depends not so much on the SLA, but on how spiky your indexing load will be and how decoupled you want your search indexing component to be.
You may also find Capacity planning for Azure Search useful.

API for metrics per Azure Site instance

In the Azure Portal you can view instance-specific metrics per site if you go to a resource, select Metrics per instance (Apps), select the Site Metrics tab, and then click an individual instance (starting with RD00... in the screenshot below):
I'd like to get this data (per instance, including the instance name RD00...) using some REST API call. I've looked at Azure's Resource Manager and their Metrics API, but couldn't find a way to get this data.
Is this possible, and, if so, how/where can I get this data?
I've looked at Azure's Resource Manager and their Metrics API, but couldn't find a way to get this data.
Based on the supported metrics in Azure Monitor for websites, the Azure Metrics API only supports total and average type metrics for Azure Web Apps. We can't get per-instance metrics from the Azure Metrics API.
If you turn on Web server logging in the Azure portal, you will get the detailed request data in the /LogFiles/http/RawLogs/ folder via FTP. You could download the logs and generate the metrics from them.
The following is a record from the raw logs. The ARRAffinity property specifies which instance handled the user's request.
2017-04-27 08:51:32 AMOR-WEBAPP-TESTMSBUILD GET /home/index X-ARR-LOG-ID=bbdf4e53-3b96-4884-829c-cf82554abcc7 80 - 167.220.255.28 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/51.0.2704.79+Safari/537.36+Edge/14.14393 ARRAffinity=8f8ac6c076f7a9e2132f2eea1ff0fc61836fde1fef8c5525da0e81359003c9e8;+_ga=GA1.3.411824075.1493282866;+_gat=1 - amor-webapp-testmsbuild.azurewebsites.net 200 0 0 2607 1145 10095
ARRAffinity=8f8ac6c076f7a9e2132f2eea1ff0fc61836fde1fef8c5525da0e81359003c9e8
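A rough Python sketch of that workaround: download the raw logs and count requests per instance by extracting the ARRAffinity cookie value, as in the log line above. The regular expression and field handling are assumptions based on that sample, not an official API:

```python
import re
from collections import Counter

def requests_per_instance(log_lines):
    """Count requests per ARRAffinity value (i.e. per instance) in W3C raw logs."""
    counts = Counter()
    for line in log_lines:
        if line.startswith("#"):          # skip W3C header lines
            continue
        match = re.search(r"ARRAffinity=([0-9a-f]+)", line)
        if match:
            counts[match.group(1)] += 1   # one request handled by this instance
    return counts

# Example usage with a log file downloaded from /LogFiles/http/RawLogs/:
# with open("raw.log", encoding="utf-8") as f:
#     print(requests_per_instance(f))
```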

Azure DocumentDB Database/Collection limits

What is the limit on the number of collections a single DocumentDB database can have? I keep landing on this link for general DocumentDB limits, but nothing there goes into that detail:
https://learn.microsoft.com/en-us/azure/documentdb/documentdb-limits
I may need up to 200 collections running in one DocumentDB database at a given time. This is to partition customer data by collection. If this is not possible, then I'll have to partition across multiple databases, but I can't find the information I need to figure out the proper partitioning strategy!
Also, am I charged for empty databases, or not until the first collection is created?
There is no limit to the number of collections (no practical limit, anyway), which is why it's not listed on the limits page. To provision, for example, 200 collections or more, you have to contact billing support.
Empty databases are not charged in DocumentDB.
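For completeness, a sketch of the collection-per-customer layout described in the question, using the current azure-cosmos Python SDK (where DocumentDB collections are called containers); the account URL, key, names and throughput value are assumptions:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<account-key>")

# The database itself is not billed until collections are created in it.
database = client.create_database_if_not_exists(id="customers")

def container_for_customer(customer_id: str):
    """Create (or reuse) one collection per customer."""
    return database.create_container_if_not_exists(
        id=f"customer-{customer_id}",
        partition_key=PartitionKey(path="/id"),
        offer_throughput=400)  # assumed minimum provisioned throughput
```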
