Lately I've been facing some performance issues when querying my Azure Search service index, and I'm trying to figure out what's happening. I came across the following article:
Azure Search performance and optimization considerations
It says:
Uploading of content to Azure Search will impact the overall performance and latency of the Azure Search service. If you expect to send data while users are performing searches, it is important to take this workload into account in your tests.
I want to clarify something. If, for example, I have two indexes on my search service account, let's say index-a and index-b.
If I upload content to index-a, will it impact the overall performance and latency of index-b?
If both indexes are within the same service, then yes: one index's performance will be affected by the other. How much it is affected will depend on the service tier and the amount of data you are indexing.
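To make that concrete, here is a minimal sketch using the azure-search-documents Python SDK, indexing into index-a while querying index-b on the same service. The endpoint, API key, index names, and document shape are placeholders, and the sketch assumes index-a has a key field named "id".

```python
# Minimal sketch: both clients target the same search service, so they share
# its replicas and partitions; indexing load on one index competes with
# query load on the other. All names and keys below are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://<your-service>.search.windows.net"
credential = AzureKeyCredential("<admin-key>")

writer = SearchClient(endpoint, "index-a", credential)
reader = SearchClient(endpoint, "index-b", credential)

# Indexing into index-a consumes service resources...
writer.upload_documents(documents=[{"id": "1", "title": "hello world"}])

# ...and those same resources serve queries against index-b.
for doc in reader.search(search_text="hello"):
    print(doc)
```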
Related
Please check this tweet chain. I am working on a PoC using Azure Cognitive Search and comparing it with AWS. AWS seems to use MongoDB Atlas to store the indices, and its search function is basically Mongo's default search capability, which is built on Apache Lucene. I am trying to find out how the inverted indices are stored behind the scenes in Azure Cognitive Search. It also uses Apache Lucene, which serves as the search engine that searches the index.
Disclaimer
This answer should be considered accurate only as of July 2020, because implementation details do change. This information isn't material to which service is "better" for any particular purpose; just interesting for the sake of curiosity.
Also, do not take my answer to be any kind of API contract or promise of future functionality or performance. We encapsulate the storage details so that you don't have to worry about them, and also so that we have the freedom to change them if needed.
Answer
Azure Cognitive Search uses Apache Lucene under the hood, which manages the inverted indexes. As of the time of this writing, those indexes are stored on Azure virtual machine disks, which are backed by page blobs. The exact SKU of disks used depends on pricing tier and other factors; I won't get into the details here (because they do change). Those disks are attached to Azure virtual machines, which for pricing tiers other than Free map to the "search units" you pay for.
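For readers who haven't met the term, an inverted index is simply a map from each term to the documents that contain it. The toy Python sketch below illustrates the concept only; it says nothing about Lucene's actual on-disk format or about how Azure stores it.

```python
# Toy inverted index: term -> set of document ids that contain the term.
# Conceptual only; real engines like Lucene add compression, positions,
# scoring statistics, segment files, and much more.
from collections import defaultdict

docs = {
    1: "azure search uses lucene",
    2: "lucene builds inverted indexes",
}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

# Which documents contain the term "lucene"?
print(sorted(inverted["lucene"]))  # [1, 2]
```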
What is the maintenance required from an organization when deploying an Azure SQL Database in the long term?
My current organization is hoping to do as little database management as possible, and has looked for products that fully manage our databases without much intervention needed from our end. Some products being considered include Snowflake (for their automated partitioning of tables) and Domo (for their data warehousing, connectors, and BI tool offerings).
I'm leaning towards using Azure SQL DB for multiple reasons (products offered, transparent pricing, integration ease, available documentation, SSO, etc.), but want to first understand the skills needed and ease in maintaining it in the long run.
Will we have to manually rebuild indexes and partition out tables as we scale up? Or is Azure intelligent enough that it'll do most of the heavy lifting of performance optimization itself?
Does Azure or other vendors provide services to optimize a DB?
Sorry for the vague prompts, but any additional considerations in choosing DB vendors would be great. Thanks!
To answer your questions, you first need to know what Azure SQL Database is and what its capabilities are.
I'm leaning towards using Azure SQL DB for multiple reasons (products offered, transparent pricing, integration ease, available documentation, SSO, etc.), but want to first understand the skills needed and ease in maintaining it in the long run.
The document What is Azure SQL Database service covers almost everything you want to know. SQL Database is a general-purpose relational database managed service in Microsoft Azure that supports structures such as relational data, JSON, spatial, and XML. SQL Database delivers dynamically scalable performance within two different purchasing models: a vCore-based purchasing model and a DTU-based purchasing model. SQL Database also provides options such as columnstore indexes for extreme analytic analysis and reporting, and in-memory OLTP for extreme transactional processing. Microsoft handles all patching and updating of the SQL code base seamlessly and abstracts away all management of the underlying infrastructure.
Will we have to manually rebuild indexes and partition out tables as we scale up? Or is Azure intelligent enough that it'll do most of the heavy lifting of performance optimization itself?
No, you won't. Scalability is one of the most important characteristics of PaaS that enables you to dynamically add more resources to your service when needed. Azure SQL Database enables you to easily change the resources (CPU power, memory, IO throughput, and storage) allocated to your databases.
You can mitigate performance issues due to increased usage of your application that cannot be fixed using indexing or query rewrite methods. Adding more resources enables you to quickly react when your database hits the current resource limits and needs more power to handle the incoming workload. Azure SQL Database also enables you to scale-down the resources when they are not needed to lower the cost.
For more details, please reference: Scale Up/Down.
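For illustration, the same scale-up/scale-down can be driven from code by changing the database's service objective with T-SQL. The Python/pyodbc sketch below is only a hedged example; the server, database, credentials, and the target tier 'S3' are all placeholders.

```python
# Hedged sketch: change an Azure SQL database's service objective (tier) with
# T-SQL. Server, database name, credentials, and the 'S3' target are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<your-server>.database.windows.net,1433;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)

# ALTER DATABASE cannot run inside a user transaction, hence autocommit=True.
conn = pyodbc.connect(conn_str, autocommit=True)
conn.cursor().execute("ALTER DATABASE [my-db] MODIFY (SERVICE_OBJECTIVE = 'S3');")
conn.close()
```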
Does Azure or other vendors provide services to optimize a DB?
As Woblli said, Azure SQL Database provides monitoring and tuning for you.
As a complement, you can also use Azure SQL Database automatic tuning to help you optimize the database automatically.
Hope this helps.
Azure SQL DB offers the services you're asking about.
You can enable automatic tuning, which will create and drop indexes based on observed performance gains, and force the last known good query plans, again based on performance. It will roll back a change if that specific change has decreased the overall database performance.
It will not partition or shard your database for you, however.
Official documentation:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning
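For reference, the same automatic tuning options can also be enabled with T-SQL. The Python/pyodbc sketch below is a hedged illustration; the connection details are placeholders.

```python
# Hedged sketch: turn on automatic tuning options for the current database.
# The same switches are available in the Azure portal; connection details are
# placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<your-server>.database.windows.net,1433;"
    "DATABASE=<your-db>;UID=<user>;PWD=<password>"
)

conn = pyodbc.connect(conn_str, autocommit=True)
conn.cursor().execute(
    "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING "
    "(FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);"
)
conn.close()
```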
How reliable is the insertion mechanism to Azure Search?
Say, for an average call to upload a document to Azure Search: are there any SLAs on this? What is the average insertion time for one document, and the average failure rate for one document?
I'm trying to send data from my database to Azure Search, and I was wondering whether it is more reliable to send data directly to Azure Search, or to do a dual write, for example to a highly available queue like Kafka, and read from there.
From SLA for Azure Search:
We guarantee at least 99.9% availability for index query requests when an Azure Search Service Instance is configured with two or more replicas, and index update requests when an Azure Search Service Instance is configured with three or more replicas. No SLA is provided for the Free tier.
Your client code needs to follow the best practices: batch indexing requests, retry on transient failures with an exponential back-off policy, and scale the service appropriately based on the size of the documents and the indexing load.
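As a rough illustration of those best practices, here is a hedged Python sketch (azure-search-documents SDK) that uploads a batch and retries with exponential back-off. The endpoint, key, index name, and the assumption that the index key field is called "id" are all placeholders.

```python
# Hedged sketch: batched indexing with retry and exponential back-off.
# Endpoint, key, index name, and the "id" key field are assumptions.
import time
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.search.documents import SearchClient

client = SearchClient(
    "https://<your-service>.search.windows.net",
    "my-index",
    AzureKeyCredential("<admin-key>"),
)

def upload_batch(docs, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        try:
            results = client.upload_documents(documents=docs)
            # Keep only the documents the service did not accept.
            failed_keys = {r.key for r in results if not r.succeeded}
            docs = [d for d in docs if d["id"] in failed_keys]
            if not docs:
                return
        except HttpResponseError:
            pass  # treat as transient and retry the whole batch
        time.sleep(delay)
        delay *= 2  # exponential back-off
    raise RuntimeError("batch still failing after retries")

upload_batch([{"id": str(i), "title": f"doc {i}"} for i in range(100)])
```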
Whether or not to use an intermediate buffer depends not so much on the SLA as on how spiky your indexing load will be and how decoupled you want your search indexing component to be.
You may also find Capacity planning for Azure Search useful.
We are developing a mobile app that should scale to thousands of users, and we are using Azure Search as our main storage. According to the Azure pricing model, the query limits are set to 15 queries per second per unit for the standard plan. With these limits, and with a system that should scale to thousands of concurrent users, we would hit the limits pretty quickly.
In our situation is Azure Search not the right option when scaling for thousands of concurrent users?
Would DocumentDB be a better option?
Thanks!
Interesting that you're using Azure Search as your primary storage, as it's not built to be a database engine. The storage is specifically for search content (the typical pattern is to use Azure Search in conjunction with a database engine, such as SQL Database or DocumentDB), using results to point back to the "system of record" content in your database.
The scale for Search is specifically for full-text-search queries your users will generate. And Azure Search scales per unit, with each unit offering 15 searches / second. So, you can scale far beyond 15/sec if you buy more search units.
However: Don't confuse this with database engine queries. You asked about DocumentDB, so using that as an example: You can query far beyond 15/second with that database engine, and that scales independently. Same goes for any VM-based database solution, SQL Database, etc - they all can scale.
This really comes down to whether you need full-text-search at high volume. If so, great - just scale Azure Search to the number of units you need, to handle your request traffic. If you can do more database-specific searches, without driving your request through Azure Search, then you don't need to scale out as much, and can take advantage of the native database query capabilities.
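To illustrate that pattern, here is a hedged Python sketch that runs the full-text query against Azure Search and then reads the full "system of record" documents from Cosmos DB (the successor to DocumentDB). The account names, keys, index/container names, and the choice of partition key are all assumptions for the example.

```python
# Hedged sketch of "search points back to the system of record":
# 1) a full-text query in Azure Search returns lightweight hits (ids),
# 2) the full documents are read from the database (Cosmos DB here).
# All names, keys, and the partition-key choice are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.cosmos import CosmosClient

search = SearchClient(
    "https://<your-service>.search.windows.net",
    "products-index",
    AzureKeyCredential("<query-key>"),
)
cosmos = CosmosClient("https://<your-account>.documents.azure.com", "<cosmos-key>")
container = cosmos.get_database_client("shop").get_container_client("products")

hits = search.search(search_text="running shoes", select=["id"], top=10)
for hit in hits:
    # Assumes the item's id doubles as its partition key.
    item = container.read_item(item=hit["id"], partition_key=hit["id"])
    print(item["name"], item["price"])
```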
One thing to add to David's excellent answer - if your scenario is primarily search driven and you don't need to store data for purposes other than search and are OK with eventual consistency, then using Azure Search as the primary store may be fine.
Also, 15 requests per second query throughput of Azure Search is just a ballpark - it's neither a hard limit nor a promise. Depending on your data and query complexity, the actual throughput can be significantly (many times) higher or lower.
I have an ASP.NET MVC 2 Azure application that I am trying to switch from being single tenant to multi-tenant. I have been reviewing many blogs and posts and questions here on Stack Overflow, but am still trying to wrap my head around the specifics of what's right for this particular app.
Currently the application stores some information in a SQL Azure database, as well as some other info in an Azure Storage Account. I'm considering writing the tenant provisioning code to simply create a new database for a new tenant, along with a new azure storage account. This brings me to the following question:
How will I go about testing this approach locally? As far as I can tell, the local Azure Storage Emulator only has 1 storage account. I'm not sure if I'm able to create others locally. How will I be able to test this locally? Or will it be possible?
There are many aspects to consider with multitenancy, one of which is data architecture. You also have billing, performance, security and so forth.
Regarding data architecture, let's first explore SQL storage. You have the following options available to you: add a CustomerID (or other identifier) column that your code will use to filter records; use different schema containers for different customers (each customer has its own copy of all the database objects, owned by a dedicated schema in a database); linear sharding (in which each customer has its own database); and Federation (a feature of SQL Azure that offers progressive sharding based on performance and scalability needs). All these options are valid, but they have different implications for performance, scalability, security, maintenance (such as backups), cost, and of course database design. I couldn't tell you which one to choose based on the information you provided; some models are easier to implement than others if you already have a code base.

Generally speaking, a linear shard is the simplest model and provides strong customer isolation, but it is perhaps the most expensive of all. A schema-based separation is not too hard, but it requires a good handle on security requirements and can introduce cross-customer performance issues because this approach is not shared-nothing (for customers on the same database). Finally, Federation requires the use of a customer identifier and has a few limitations; however, this technology gives you more control over performance distribution and long-term scalability (because, like a linear shard, Federation uses a shared-nothing architecture).
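To make the simplest of those options concrete, here is a toy Python/pyodbc sketch of the shared-schema approach, where every table carries a CustomerID column and every query filters on it. The table and column names are hypothetical.

```python
# Toy sketch of the shared-schema option: one database, tenants separated by a
# CustomerID column. Table and column names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<your-server>.database.windows.net,1433;"
    "DATABASE=<your-db>;UID=<user>;PWD=<password>"
)

def orders_for_tenant(customer_id):
    # Always parameterize the tenant filter; never build it by string concatenation.
    cursor = conn.cursor()
    cursor.execute(
        "SELECT OrderId, Total FROM dbo.Orders WHERE CustomerID = ?", customer_id
    )
    return cursor.fetchall()
```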
Regarding storage accounts, using different storage accounts per customer is definitely the way to go. The primary issue you will face if you don't use separate storage accounts is performance limitations, such as the maximum number of transactions per second that can be executed using a single storage account. As you point out, however, testing locally may be a problem; consider this: the local emulator does not offer 100% parity with an Azure Storage Account (some functions are not supported in the emulator). So I would only use the local emulator for initial development and troubleshooting. Any serious testing, including multitenant testing, should be done using real storage accounts. This is the only way you can fully test an application.
You should consider not creating separate databases, but instead creating different object namespaces within a single SQL database. Each tenant can have their own set of tables.
Depending on how you are using storage, you can create separate storage containers or message queues per client.
Given these constraints you should be able to test locally with the storage emulator and local SQL instance.
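As a small illustration of per-client separation in storage, the hedged Python sketch below provisions one blob container per tenant. The connection string is a placeholder (for local testing you would point it at the storage emulator or Azurite), the naming scheme is just an example, and queues could be provisioned per tenant in the same way.

```python
# Hedged sketch: one blob container per tenant. The connection string is a
# placeholder; for local testing it could be the storage emulator / Azurite
# development connection string. The naming scheme is illustrative only.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

def provision_tenant(tenant_id: str):
    # e.g. tenant "42" gets the container "tenant-42"
    service.create_container(f"tenant-{tenant_id}")

provision_tenant("42")
```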
Please let me know if you need further explanation.