I've read through this excellent feedback on Azure Search. However, I have to be a bit more explicit in questioning one of the answers to question #1 from that list...
...When you index data, it is not available for querying immediately.
...Currently there is no mechanism to control concurrent updates to the same document in an index.
Eventual consistency is fine - I perform a few updates and eventually I will see my updates on read/query.
However, no guarantee on the ordering of updates is really problematic. Perhaps I'm misunderstanding. Let's assume this basic scenario:
1) update index entry E.fieldX w/ foo at time 12:00:01
2) update index entry E.fieldX w/ bar at time 12:00:02
From what I gather, it's entirely possible that E.fieldX will contain "foo" after all updates have been processed?
If that is true, it seems to severely limit the applicability of this product.
Currently, Azure Search does not provide document-level optimistic concurrency, primarily because the overwhelming majority of scenarios don't require it. Please vote for the External Version UserVoice suggestion to help us prioritize this ask.
One way to manage data ingress concurrency today is to use Azure Search indexers. Indexers guarantee that they will process only the current version of a source document at each point in time, removing the potential for races.
Ordering is unknown if you issue multiple concurrent requests, since you cannot predict in which order they'll reach the server.
If you issue indexing batches in sequence (that is, start the second batch only after you've received an acknowledgement from the service for the first batch), you shouldn't see reordering.
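As an illustration, here is a minimal Python sketch of that sequencing against the indexing REST endpoint; the service name, index name, key field ("id"), and api-version are placeholders, not taken from the question:

import requests

ENDPOINT = "https://my-service.search.windows.net/indexes/my-index/docs/index"
PARAMS = {"api-version": "2016-09-01"}
HEADERS = {"api-key": "<admin-key>", "Content-Type": "application/json"}

def index_batch(actions):
    # Block until the service acknowledges the batch before returning.
    resp = requests.post(ENDPOINT, params=PARAMS, headers=HEADERS,
                         json={"value": actions})
    resp.raise_for_status()
    return resp.json()

# 12:00:01 -- first update; returns only once the service has accepted it.
index_batch([{"@search.action": "mergeOrUpload", "id": "E", "fieldX": "foo"}])
# 12:00:02 -- issued strictly after the first ACK, so "bar" wins.
index_batch([{"@search.action": "mergeOrUpload", "id": "E", "fieldX": "bar"}])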
Related
My previous question: Errors saving data to Google Datastore
We're running into issues writing to Datastore. Based on the previous question, we think the issue is that we're indexing a "SeenTime" attribute with YYYY-MM-DDTHH:MM:SSZ (e.g. 2021-04-29T17:42:58Z) and this is creating a hotspot (see: https://cloud.google.com/datastore/docs/best-practices#indexes).
We need to index this because we're querying the data by date and need the time for each observation in the end application. Is there a way around this issue where we can still query by date?
This answer is a bit late but:
On your previous question, before even getting to the query, the main issue seems to be "running into issues writing" (DEADLINE_EXCEEDED/UNAVAILABLE) on "some saves" -- so it's not completely clear whether it's due to data hot-spotting or to "ingesting more data in shorter bursts", which causes contention (see "Designing for scale").
A single entity in Datastore mode should not be updated too rapidly. If you are using Datastore mode, design your application so that it will not need to update an entity more than once per second. If you update an entity too rapidly, then your Datastore mode writes will have higher latency, timeouts, and other types of error. This is known as contention.
You would need to add a prefix to the key to index monotonically increasing timestamps (as mentioned in the best-practices doc); there's a sketch of the pattern below. You can then test your queries using the GQL interface in the console. However, since you most likely want "all events", that may not be workable for you, and you'd still end up with hot-spotting and read latency.
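For concreteness, here is a hedged sketch of that shard-prefix idea with the google-cloud-datastore client; the kind name, shard_seen property, and SHARD_COUNT are all made up for illustration:

import random
from google.cloud import datastore

SHARD_COUNT = 8
client = datastore.Client()

def save_observation(seen_time, payload):
    # Index a shard-prefixed copy of the timestamp so writes land in N
    # index ranges instead of one monotonically increasing (hot) range.
    shard = random.randrange(SHARD_COUNT)
    entity = datastore.Entity(
        key=client.key("Observation"),
        exclude_from_indexes=("seen_time",),  # keep the raw value unindexed
    )
    entity.update({
        "seen_time": seen_time,
        "shard_seen": f"{shard}|{seen_time.isoformat()}",
        **payload,
    })
    client.put(entity)

def query_day(day_start, day_end):
    # The cost of sharding: "all events for a date" becomes one range
    # query per shard, merged client-side.
    results = []
    for shard in range(SHARD_COUNT):
        q = client.query(kind="Observation")
        q.add_filter("shard_seen", ">=", f"{shard}|{day_start.isoformat()}")
        q.add_filter("shard_seen", "<", f"{shard}|{day_end.isoformat()}")
        results.extend(q.fetch())
    return results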
The impression is that the latency might be unavoidable. If so, you would need to decide whether it's acceptable, depending on the frequency of your query, the number of elements returned, and the amount of latency (performance impact).
Consider switching to Firestore Native Mode. It has a different architecture under the hood and is the next version of Datastore. While Firestore is not perfect, it can be more forgiving about hot-spotting and contention, so it's possible that you'll have fewer issues than in Datastore.
Even if you have designed your document schema with care and handcrafted the minimal necessary indexes for a good balance of read vs. change scenarios, it may not always be intuitive which index is actually doing the work for a heavy-RU query, or whether the choices are what you expected. Or maybe there's a typo in a critical property name in the indexing policy, causing a silent fallback to some unsuitable index required by some other query...
I know that I can use the following tools to debug index usage in DocumentDB:
RequestCharge per query, but it does not say what those RUs are spent on.
time/count metrics via the x-ms-documentdb-populatequerymetrics header, which is useful and hints that "some" index was used, but not which one(s) actually were.
The problem is the above toolset still forces blind experiments and working on unverifiable assumptions, causing query/index optimization to be a time-consuming process.
In SQL Server you could simply fetch the execution plan and verify index design and usage correctness. Is there an analogous tool for DocumentDB?
An illustrative pseudo-example of a query when it is not obvious which index(es) DocDB would pick:
select s.poorlySelectiveIndexed
from c
join s in c.sub
where c.anotherPoorlySelectiveIndexed = #aCommonValue
and s.Indexed1 in ('a', 'b', 'c')
and ARRAY_CONTAINS(s.Indexed2, #searchValue)
and ARRAY_CONTAINS(s.Indexed3, 'literalValue')
and (s.SuperSelective = '23456' OR c.AnotherSuperSelective = '76543')
order by s.RangeIndexed4
It seems the DocumentDB team is positioning the already mentioned x-ms-documentdb-populatequerymetrics header and its corresponding response as such a tool.
As mentioned in this response from "Azure Cosmos DB Team" in Azure feedback site from August 27, 2017:
We’re pleased to announce the availability of query execution statistics: https://learn.microsoft.com/en-us/azure/cosmos-db/documentdb-sql-query-metrics#query-execution-metrics
Using these metrics, you can infer the execution plan and tune the query and index for best performance tradeoffs.
Currently it does not seem to officially expose detailed information about which indexes were used, but let's hope that changes in a future version.
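For reference, here is a sketch of pulling those query-execution metrics from Python, assuming the v4 azure-cosmos package; the account URL, key, and database/container names are placeholders:

from azure.cosmos import CosmosClient

client = CosmosClient("https://my-account.documents.azure.com", credential="<key>")
container = client.get_database_client("my-db").get_container_client("my-container")

items = list(container.query_items(
    query="SELECT c.id FROM c WHERE c.anotherPoorlySelectiveIndexed = @v",
    parameters=[{"name": "@v", "value": "aCommonValue"}],
    enable_cross_partition_query=True,
    populate_query_metrics=True,  # same data as the REST header above
))

# The metrics come back on the response headers: totalExecutionTimeInMs,
# indexLookupTimeInMs, retrievedDocumentCount, etc. -- enough to infer the
# plan's shape, though still not the names of the indexes used.
print(container.client_connection.last_response_headers.get(
    "x-ms-documentdb-query-metrics"))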
I'm trying to figure out an appropriate use case for Cassandra's counter functionality. I thought of a situation and I was wondering if this would be feasible. I'm not quite sure because I'm still experimenting with Cassandra, so any advice would be appreciated.
Let's say you had a small video service: you record a log of views in Cassandra, recording which video was played, which user played it, country, referrer, etc. You obviously want to show a count of how many times each video was played. Would incrementing a counter every time you insert a play event be a good solution to this, or would there be a better alternative? Counting all the events on every read would take a pretty big performance hit, and even if you cached the results, the cache would be invalidated pretty quickly on a busy site.
Any advice would be appreciated!
Counters can be used for whatever you need to count within an application -- both "frontend" data and "backend" data. I personally use them to store user-behaviour information (for backend analysis) and frontend ratings (each operation a user performs on my platform gives the user some points). There is no real limitation on the use case -- the limits come from a few technical constraints, the biggest that come to mind:
a counter column family can contain only counter columns (except the PK, obviously)
counters can't be reset: to set a counter to 0 you need to read it and compute the delta before writing (with no guarantee that someone else hasn't updated it in the meantime)
no TTL and no indexing/deletion
As far as your video service goes, it all depends on how you choose to model the data -- if you find a valid model that hits only a few partitions on each write/read and gives you a good key distribution, I don't see any real problem with the implementation.
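As a minimal sketch of the play-counter idea with the DataStax Python driver (keyspace and table names are illustrative):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("video_service")

# Counter tables may contain only counter columns outside the primary key.
session.execute("""
    CREATE TABLE IF NOT EXISTS video_views (
        video_id uuid PRIMARY KEY,
        views counter
    )
""")

def record_play(video_id):
    # One cheap increment per play event; the count is a single-partition
    # read, so no counting-on-read and no cache to invalidate.
    session.execute(
        "UPDATE video_views SET views = views + 1 WHERE video_id = %s",
        (video_id,))

def get_views(video_id):
    row = session.execute(
        "SELECT views FROM video_views WHERE video_id = %s",
        (video_id,)).one()
    return row.views if row else 0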
BTW: you tagged Cassandra 2.0, but if you have to use counters you should think about 2.1 for the reasons described here
I execute a batch update which modifies a few rows within a few column families. In case of a TimedOutException some data could be modified, but possibly not the whole set...
In order to implement a compensating transaction, I would need to know which data (rows) was modified - is there a way to find this out? Does the exception contain this information?
Thanks,
Maciej
Creating a system that can scale out means accepting some trade-offs - one of these is facilitating "idempotent" operations in your application.
This means that you would either:
assume that the data was written somewhere and that the node will eventually become consistent, or
fire the entire contents of the write again, perhaps after sleeping a given amount of time, or at a less restrictive consistency level
A good description of this approach can be found in section 6 of Pat Helland's "Building on Quicksand" paper: http://arxiv.org/pdf/0909.1788
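A hedged sketch of the second option with the DataStax Python driver -- on timeout, replay the whole (idempotent) batch, dropping to a weaker consistency level on retries; table and column names are made up:

import time
from cassandra import ConsistencyLevel, WriteTimeout
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement

session = Cluster(["127.0.0.1"]).connect("my_keyspace")

def apply_batch(rows, retries=3):
    for attempt in range(retries):
        # First try at QUORUM, then fall back to a less restrictive level.
        cl = ConsistencyLevel.QUORUM if attempt == 0 else ConsistencyLevel.ONE
        batch = BatchStatement(consistency_level=cl)
        for row_id, value in rows:
            # Plain INSERT/UPDATE writes are idempotent, so replaying the
            # entire batch converges to the same end state.
            batch.add(SimpleStatement(
                "UPDATE t1 SET value = %s WHERE id = %s"), (value, row_id))
        try:
            session.execute(batch)
            return
        except WriteTimeout:
            # Some rows may already be written -- fine, we replay them all.
            time.sleep(2 ** attempt)
    raise RuntimeError("batch still timing out after retries")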
I have an application that works as follows: Linux machines generate 28 different types of letter to customers. The letters must be sent in .docx (Microsoft Word format). A secretary maintains MS Word templates, which are automatically used as necessary. Changing from using MS Word is not an option.
To coordinate all this, document jobs are placed into a database table, and a Python program running on each of the Windows machines polls the database frequently, locking out jobs and running them as necessary.
We use a central database table for the job information to coordinate the different states ("new", "processing", "finished", "printed"), as well as to give accurate status information.
Anyway, I don't like the clients polling the database frequently, seeing as they aren't doing any work most of the time. Clients poll every 5 seconds.
To avoid polling, I kind of want a broadcast "there's some work to do" or "check your database for some work to do" message sent to all the client machines.
I think some kind of publish/subscribe message queue would be up to the job, but I don't want any massive extra complexity.
Is there a zero or near zero config/maintenance piece of software that would achieve this? What are the options?
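For reference, each client's polling loop is essentially the following simplified sketch (MySQL is assumed here purely for concreteness; table and column names are illustrative):

import time
import mysql.connector

conn = mysql.connector.connect(host="db", user="worker",
                               password="...", database="letters")

while True:
    cur = conn.cursor()
    # Atomically lock out one 'new' job for this worker.
    cur.execute("UPDATE jobs SET state = 'processing', worker = %s "
                "WHERE state = 'new' LIMIT 1", ("win-01",))
    conn.commit()
    if cur.rowcount:
        pass  # fetch the claimed job, render the .docx, mark it 'finished'
    cur.close()
    time.sleep(5)  # <- the 5-second poll we'd like to avoid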
Is there any objective evidence that any significant load is being put on the server? If it works, I'd make sure there's really a problem to solve here.
It must be nice to have everything running so smoothly that you're looking at things that might only possibly be improved!
Is there a zero or near zero config/maintenance piece of software that would achieve this? What are the options?
Possibly, but what you would save in configuration and implementation time would likely hurt performance more than your polling service ever could. SQL Server isn't really built to push (not easily, anyway). There are things that you could use to push data out (replication service, log shipping - icky stuff), but they would be more complex and require more resources than your simple polling service. Some options would be:
some kind of trigger which runs your executable using command-line calls (xp_cmdshell)
using a COM object which SQL Server could open and run
using a SQL Agent job to run a VBScript (which would again be considered "polling")
These options are a bit ridiculous considering what you have already done is much simpler.
If you are worried about the polling service using too many cycles or something - you can always throttle it back - polling every minute, every 10 minutes, or even just once a day might be more appropriate - this would be a business decision, so go ask someone in the business how fast it needs to be.
Simple polling services are fairly common because they are, well... simple. They are also low-overhead, reasonably stable, and error-tolerant. The downside is that they can hammer the database into dust if not carefully controlled.
A message queue might work well, as they're usually setup to be able to block for a while without wasting resources. But with MySQL, I don't think that's an option.
If you just want to reduce load on the DB, you could create a table with a single row: the latest job ID. Then clients just need to compare that to their last ID to see if they need to run a full poll against the real table. This way the overhead should be greatly reduced, if it's an issue now.
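A rough sketch of that single-row check (all names made up): a cheap watermark read every few seconds, with the heavier jobs query run only when the ID has actually moved:

import time
import mysql.connector

conn = mysql.connector.connect(host="db", user="worker",
                               password="...", database="letters")

def run_full_poll():
    """Placeholder for the existing, heavier query against the jobs table."""

last_seen = None
while True:
    cur = conn.cursor()
    cur.execute("SELECT latest_job_id FROM job_watermark")
    (latest,) = cur.fetchone()
    cur.close()
    if latest != last_seen:  # something changed since we last looked
        last_seen = latest
        run_full_poll()
    time.sleep(5)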
Unlike Postgres and SQL Server (or object stores like CouchDB), MySQL does not emit database events. However, there are some coding patterns you can use to simulate this.
If you have one or more tables that you wish to monitor, you can create triggers on these tables that add a row to a "changes" table recording a queue of events to process. Your triggers filter the subset of data changes that you care about and insert a record into the changes table for each event you wish to process. Because this pattern queues and persists events, it works well even when the workers that process these events have outages.
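A minimal sketch of that trigger-plus-changes-table pattern (all table, column, and connection names here are made up; the storage-engine trade-offs for the changes table are discussed just below):

import mysql.connector

conn = mysql.connector.connect(host="db", user="app",
                               password="...", database="letters")
cur = conn.cursor()

# The queue table the trigger appends to; workers consume (and delete) rows.
cur.execute("""
    CREATE TABLE IF NOT EXISTS job_changes (
        id BIGINT AUTO_INCREMENT PRIMARY KEY,
        job_id INT NOT NULL,
        change_type VARCHAR(16) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# Record only the subset of changes we care about: jobs entering 'new'.
cur.execute("""
    CREATE TRIGGER jobs_after_insert AFTER INSERT ON jobs
    FOR EACH ROW
        INSERT INTO job_changes (job_id, change_type)
        VALUES (NEW.id, 'new')
""")
conn.commit()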
You might think that MyISAM is the best choice for the changes table since it mostly receives writes (or even MEMORY if you don't need to persist the events across database server outages). However, keep in mind that both MEMORY and MyISAM have only full-table locks, so your trigger on an InnoDB table might hit a bottleneck when performing an insert into a MEMORY or MyISAM table. You may also require InnoDB for the changes table if you're using an ON DELETE CASCADE with another InnoDB table (both tables must use the same engine).
You might also use SHOW TABLE STATUS to check the last update time of your changes table to see whether there's anything to process. This won't work for InnoDB tables.
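That check would look something like this (again with the hypothetical job_changes table, which would need to be MyISAM for Update_time to stay current):

import mysql.connector

conn = mysql.connector.connect(host="db", user="app",
                               password="...", database="letters")
cur = conn.cursor(dictionary=True)
cur.execute("SHOW TABLE STATUS LIKE 'job_changes'")
status = cur.fetchone()
print("changes table last updated:", status["Update_time"])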
These articles describe in more depth some alternative ways to implement queues in MySQL and even avoid polling!
How to notify event listeners in MySQL
How to implement a queue in SQL
5 subtle ways you're using MySQL as a queue, and why it'll bite you