Sync mechanism to Azure Search - How reliable is Azure Search insertion?

How reliable is the insertion mechanism to Azure Search?
Say, an average call to upload a document to Azure Search: are there any SLAs on this? What is the average insertion time for one document, and the average failure rate per document?
I'm trying to send data from my database to Azure Search, and I was wondering whether it is more reliable to send data directly to Azure Search, or to do a dual write, for example to a highly available queue like Kafka, and read from there.

From SLA for Azure Search:
We guarantee at least 99.9% availability for index query requests when an Azure Search Service Instance is configured with two or more replicas, and index update requests when an Azure Search Service Instance is configured with three or more replicas. No SLA is provided for the Free tier.
Your client code needs to follow the best practices: batch indexing requests, retry on transient failures with an exponential back-off policy, and scale the service appropriately based on the size of the documents and the indexing load.
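For illustration, here is a minimal sketch of those two practices (batching plus exponential back-off) using the Python SDK, azure-search-documents. The endpoint, key, and index name are placeholders, and the batch size is just a starting point, not a recommendation:

    import time
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import HttpResponseError
    from azure.search.documents import SearchClient

    # Placeholder endpoint, key, and index name -- substitute your own.
    client = SearchClient(
        endpoint="https://<service>.search.windows.net",
        index_name="<index>",
        credential=AzureKeyCredential("<admin-key>"),
    )

    def upload_in_batches(docs, batch_size=1000, max_retries=5):
        """Send documents in batches, retrying transient failures
        (e.g. 503) with exponential back-off."""
        for start in range(0, len(docs), batch_size):
            batch = docs[start:start + batch_size]
            for attempt in range(max_retries):
                try:
                    client.upload_documents(documents=batch)
                    break
                except HttpResponseError:
                    # Transient failure: wait 1s, 2s, 4s, ... then retry.
                    time.sleep(2 ** attempt)
            else:
                raise RuntimeError("batch still failing after retries")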
Whether or not to use an intermediate buffer depends not so much on the SLA as on how spiky your indexing load will be, and how decoupled you want your search indexing component to be.
You may also find Capacity planning for Azure Search useful.

Related

Querying Azure Diagnostic table storages

We are storing our Windows/Linux VM metrics and logs in an Azure diagnostics storage account for long-term retention. We keep this data in Log Analytics as well, but being cost conscious we keep only the minimal essential set, and only for 1 month.
However, there seems to be no way to efficiently query the Table storage data when we need it - e.g. checking historical CPU usage for a particular machine over a specific period in the past, or checking the logs captured during that period. The partition key and row key are highly convoluted, with only some very basic help available for the WAD tables schema and none for the LinuxsyslogVer2v0 table schema.
Is anyone else using the diagnostic logs table storage for any querying/reporting? If so, how do you query it for a specific host and time period? I can query on non-partition/row key properties, but besides being time consuming it will eventually cost a lot, since that amounts to a table scan. I'd really appreciate any advice.
You should consider using Azure Data Explorer (ADX) for your long-term storage solution. It allows for KQL queries on your long-term data and is the preferred method for keeping log/security data past the default retention for services like Log Analytics and Sentinel.
The pricing page for ADX can be a bit confusing and there is a website to help you estimate costs here: https://dataexplorer.azure.com/AzureDataExplorerCostEstimator.html
By default, logs ingested into Azure Sentinel are stored in Azure Monitor Log Analytics. This article explains how to reduce retention costs in Azure Sentinel by sending them to Azure Data Explorer for long-term retention.
Storing logs in Azure Data Explorer reduces costs while retaining your ability to query your data, and is especially useful as your data grows. For example, while security data may lose value over time, you may be required to retain logs for regulatory requirements or to run periodic investigations on older data.
https://learn.microsoft.com/en-us/azure/sentinel/store-logs-in-azure-data-explorer?tabs=adx-event-hub
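For illustration, once the data is in ADX you can run KQL from Python with the azure-kusto-data package. The cluster URL, database, and the table/column names below are made-up placeholders; match them to however you ingest your logs:

    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

    # Placeholder cluster/database; authenticates with your "az login" session.
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
        "https://<your-cluster>.kusto.windows.net")
    client = KustoClient(kcsb)

    # Hypothetical table and column names -- adjust to your ingestion schema.
    query = """
    Perf
    | where Computer == 'myhost'
    | where TimeGenerated between (datetime(2023-01-01) .. datetime(2023-01-31))
    | summarize avg(CounterValue) by bin(TimeGenerated, 1h)
    """
    response = client.execute("<your-database>", query)
    for row in response.primary_results[0]:
        print(row["TimeGenerated"], row["avg_CounterValue"])

This is exactly the "specific host over a specific period" query that is painful against Table storage but trivial once the logs live in ADX.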

Is it possible to see when my Azure Resources are idling?

I want to see when my resources are idling (e.g. certain resources might only be used during business hours and not used for any other background process). I'd like to do that preferably through an API call.
It all depends on the type of resource and what you want to do. You could use the Azure Monitor API or the Azure Data Explorer API with Kusto to query specific metrics for your different services. Depending on the type of data, this may require you to have additional diagnostics enabled.
Here are some examples based on types of services.
Azure App Service - You could query for CPU, Memory, HTTP Requests, etc. This would give you an idea of activity. These same metrics tie into the auto-scaling.
Azure VMs - CPU, Memory, Disk IO, etc. You could determine your baseline then you would know when it is idle or not.
Azure Storage - Transactions, Ingress, Egress, Requests, etc. You could use that to determine if there is activity in your storage account.
As you can see, it all depends on how you want to define idling (see the sketch below). If the goal is to reduce costs, that will be difficult with many of these services. You could scale your App Services up and down with some scripts, or scale in/out based on metrics; the same can be done with your Azure VMs, or by stopping and starting them. Storage cannot be scaled in that way, but you are only charged for capacity and egress, so cost is already dictated by activity.
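As a concrete illustration of the Azure Monitor approach, here is a sketch using the Python azure-monitor-query package against a VM. The resource ID is a placeholder, and the 5% CPU threshold is an arbitrary assumption - "idle" is whatever you define it to be:

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import MetricsQueryClient

    client = MetricsQueryClient(DefaultAzureCredential())

    # Placeholder resource ID -- substitute the resource you want to inspect.
    resource_id = ("/subscriptions/<sub>/resourceGroups/<rg>"
                   "/providers/Microsoft.Compute/virtualMachines/<vm>")

    # Pull average CPU per hour for the last week and flag "idle" hours,
    # using an arbitrary 5% threshold.
    response = client.query_resource(
        resource_id,
        metric_names=["Percentage CPU"],
        timespan=timedelta(days=7),
        granularity=timedelta(hours=1),
        aggregations=["Average"],
    )
    for metric in response.metrics:
        for ts in metric.timeseries:
            for point in ts.data:
                if point.average is not None and point.average < 5:
                    print(f"idle at {point.timestamp}: {point.average:.1f}% CPU")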
Hope this helps.
No, this is not possible out of the box. How do you define "idling"? How would Azure know whether your service is doing anything or not? Besides, most PaaS resources cannot be stopped, so what would be the use of that?
You can use Azure Advisor to get cost optimization advice, or Azure Monitor directly to gather performance data and then analyze it, but it's not going to be trivial.

Azure Search performance while uploading content

Lately I'm facing some performance issues while querying my Azure Search index. While trying to figure out what is happening, I came across the following article:
Azure Search performance and optimization considerations
It says:
Uploading of content to Azure Search will impact the overall performance and latency of the Azure Search service. If you expect to send data while users are performing searches, it is important to take this workload into account in your tests.
I want to clarify something. Suppose I have two indexes on my search service account, say index-a and index-b.
If I upload content to index-a, will it impact the overall performance and latency of index-b?
If both indexes are within the same service, then yes, one index will have its performance affected by the other one. How much it's affected will depend on the service tier and the amount of information you are indexing.

What is a throttled search query in Azure Search?

I'm using Azure Search for my app, and lately I'm facing performance issues.
While investigating the problem, I came across the following article:
https://learn.microsoft.com/en-us/azure/search/search-performance-optimization#scaling-azure-search-for-high-query-rates-and-throttled-requests
It says:
Scaling Azure Search for high query rates and throttled requests
When you are receiving too many throttled requests or exceed your target latency rates from an increased query load, you can look to decrease latency rates in one of two ways:
Increase Replicas: A replica is like a copy of your data, allowing Azure Search to load balance requests against the multiple copies. All load balancing and replication of data across replicas is managed by Azure Search, and you can alter the number of replicas allocated for your service at any time. You can allocate up to 12 replicas in a Standard search service and 3 replicas in a Basic search service. Replicas can be adjusted either from the Azure portal or PowerShell.
Increase Search Tier: Azure Search comes in a number of tiers, and each of these tiers offers different levels of performance. In some cases, you may have so many queries that the tier you are on cannot provide sufficiently low latency rates, even when replicas are maxed out. In this case, you may want to consider leveraging one of the higher search tiers, such as the Azure Search S3 tier, which is well suited for scenarios with large numbers of documents and extremely high query workloads.
Now I can't figure out what "throttled requests" means.
Google didn't help!
Azure Search starts throttling requests when the error rate (requests failing with 207 or 503 status codes) exceeds a certain threshold. The best strategy is to use an exponential retry policy on 207 and 503 responses to control the load and avoid throttling altogether.
Throttled requests have the throttle-reason response header that contains information about why the request was throttled. It appears we haven't documented that; we'll work on fixing that.
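To illustrate what that retry strategy might look like with the Python SDK: on a 207 response the batch partially succeeded, so you re-send only the failed documents, backing off each time. The endpoint, key, and the "id" key field are placeholders/assumptions:

    import time
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint="https://<service>.search.windows.net",
        index_name="<index>",
        credential=AzureKeyCredential("<admin-key>"),
    )

    def upload_handling_throttling(docs, max_retries=5):
        """Re-send only the documents that failed (e.g. with a 503
        per-document status), with exponential back-off between attempts."""
        pending = docs
        for attempt in range(max_retries):
            results = client.upload_documents(documents=pending)
            failed_keys = {r.key for r in results if not r.succeeded}
            if not failed_keys:
                return
            # "id" as the key field is an assumption -- use your index's key.
            pending = [d for d in pending if d["id"] in failed_keys]
            time.sleep(2 ** attempt)  # back-off eases the throttling
        raise RuntimeError(f"{len(pending)} documents still failing after retries")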

Azure Search scalability

We are developing a mobile app that should scale to thousands of users, and we are using Azure Search as our main storage. According to the Azure pricing model, the query limits are set to 15 queries per second per unit for the standard plan. With these limits, a system that should scale to thousands of concurrent users would hit them pretty quickly.
In our situation, is Azure Search not the right option when scaling to thousands of concurrent users?
Would DocumentDB be a better option?
Thanks!
Interesting that you're using Azure Search as your primary storage, as it's not built to be a database engine. The storage is specifically for search content (the typical pattern is to use Azure Search in conjunction with a database engine, such as SQL Database or DocumentDB), using search results to point back to the "system of record" content in your database.
The scale for Search is specifically for full-text-search queries your users will generate. And Azure Search scales per unit, with each unit offering 15 searches / second. So, you can scale far beyond 15/sec if you buy more search units.
However: Don't confuse this with database engine queries. You asked about DocumentDB, so using that as an example: You can query far beyond 15/second with that database engine, and that scales independently. Same goes for any VM-based database solution, SQL Database, etc - they all can scale.
This really comes down to whether you need full-text-search at high volume. If so, great - just scale Azure Search to the number of units you need, to handle your request traffic. If you can do more database-specific searches, without driving your request through Azure Search, then you don't need to scale out as much, and can take advantage of the native database query capabilities.
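As a rough back-of-the-envelope, treating the 15 queries/second per unit figure as a ballpark (see the caveat below), and with a hypothetical peak load:

    import math

    # Ballpark sizing only: units = replicas x partitions, and the
    # ~15 queries/second per unit figure is an estimate, not a guarantee.
    per_unit_qps = 15
    target_qps = 300  # hypothetical peak from thousands of concurrent users

    units_needed = math.ceil(target_qps / per_unit_qps)
    print(units_needed)  # 20 units, e.g. 10 replicas x 2 partitions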
One thing to add to David's excellent answer - if your scenario is primarily search driven and you don't need to store data for purposes other than search and are OK with eventual consistency, then using Azure Search as the primary store may be fine.
Also, 15 requests per second query throughput of Azure Search is just a ballpark - it's neither a hard limit nor a promise. Depending on your data and query complexity, the actual throughput can be significantly (many times) higher or lower.
