Difference between Search and Incremental Exports? - zendesk-api

Incremental Exports:
I query at a certain time for all records which have changed since a specific time. I would personally use the max updated_at timestamp in my database.
https://developer.zendesk.com/rest_api/docs/support/incremental_export
Search:
I can query a certain endpoint/table for all tickets which have been updated since the max updated_at timestamp in my database.
https://developer.zendesk.com/rest_api/docs/support/search
path='/api/v2/search.json', query='query=type:ticket updated>=2019-06-10T00:00:00Z'
It seems like both of these methods achieve the same goal, but I want to be certain that I choose the right one and that there are no caveats or issues I will run into later.
I assume that if I keep track of the max updated timestamp that I have already retrieved, then I can always pull new/changed records with updated_at >= that timestamp (and only have minor duplication to handle from records with the exact same timestamp). Any suggestions?

The main difference between Search and Incremental Exports is the query itself.
You are treating start_time in Incremental Exports and a search query such as country=US&toll_free=true as the same thing, but they are actually NOT.
A search query checks for specific values - you are saying "this is the value I want". start_time is not a query in that sense: you are only setting a starting point in time and have no concern about what values come back; every record changed after that point is returned.
I hope that makes the point clear and resolves your doubt.
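To make the difference concrete, here is a minimal Go sketch of both calls, based on the documented /api/v2/incremental/tickets.json and /api/v2/search.json endpoints; the subdomain, credentials and helper names are placeholders, not anything from the question.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

const (
	subdomain = "example"           // placeholder Zendesk subdomain
	email     = "agent@example.com" // placeholder account email
	apiToken  = "API_TOKEN"         // placeholder API token
)

// get performs an authenticated GET against the Zendesk API and returns the raw JSON body.
func get(path string, query url.Values) ([]byte, error) {
	u := fmt.Sprintf("https://%s.zendesk.com%s?%s", subdomain, path, query.Encode())
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	// Zendesk token auth: the username is "{email}/token", the password is the API token.
	req.SetBasicAuth(email+"/token", apiToken)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	// Incremental export: start_time is only a starting point (Unix seconds);
	// everything changed at or after it is returned, regardless of field values.
	inc, err := get("/api/v2/incremental/tickets.json", url.Values{"start_time": {"1560124800"}})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("incremental export bytes:", len(inc))

	// Search: the query filters on the specific values you ask for.
	q := url.Values{"query": {"type:ticket updated>=2019-06-10T00:00:00Z"}}
	srch, err := get("/api/v2/search.json", q)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("search bytes:", len(srch))
}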

Related

Best way to Fetch N rows in ScyllaDB

I have a use case in which I use ScyllaDB to limit users' actions in the past 24h. Let's say the user is only allowed to make an order 3 times in the last 24h. I am using ScyllaDB's TTL and counting the number of records in the table to achieve this. I am also using https://github.com/spaolacci/murmur3 to get the hash for the partition key.
However, I would like to know the most efficient way to query the table. There are a few queries whose behavior I'd like to understand better and compare (please correct me if any of my statements are wrong):
using count()
count() will perform a full-scan query, meaning that it may read more records from the table than necessary.
SELECT COUNT(1) FROM orders WHERE hash_id=? AND user_id=?;
using limit
limit will only cap the number of records returned to the client, meaning it will still read all records that match its predicates but return only the limited subset.
SELECT user_id FROM orders WHERE hash_id=? AND user_id=? LIMIT ?;
using paging
I'm a bit new to this, but if I read the docs correctly it should only query until it has received the first N records, without having to scan the whole table. So if I set the page size to the number of records I want to fetch and only query the first page, would it work correctly? And will it give a consistent result?
docs: https://java-driver.docs.scylladb.com/stable/manual/core/paging/index.html
my query is still using limit, but utilizing the driver to achieve this with https://github.com/gocql/gocql
iter := conn.Query(
    "SELECT user_id FROM orders WHERE hash_id=? AND user_id=? LIMIT ?",
    hashID,
    userID,
    3,
).PageSize(3).PageState(nil).Iter()
Please let me know if my analysis is correct and which method would be best to choose.
Your client should always use paging - otherwise you risk adding pressure to the query coordinator, which may introduce latency and memory fragmentation. If you use the Scylla Monitoring stack (and you should if you don't!), refer to the CQL Optimization dashboard and - more specifically - to the Paged Queries panel.
Now, to your question. It seems to me that your example is a bit minimal for what you actually want to achieve and, even if it is not, we have to consider such a set-up at scale. For example: there may be one tenant allowed to place 3 orders within a day, but another tenant allowed to place 1 million orders within a week.
If the above assumption is correct - and with the options you have given - you are better off using LIMIT with paging. The reason is that there are some particular problems with the description you've given:
First, you want to retrieve N records within a particular time-frame, but your queries don't specify such a time-frame.
Second, either COUNT or LIMIT will initiate a partition scan, and it is not clear how a hash_id + user_id combination alone can determine the number of records within a time-frame.
Of course, I may be wrong, but I'd like to suggest some different approaches which may or may not be applicable to your use case.
Consider making a timestamp component part of the clustering key. This will allow you to avoid full partition scans, with queries such as:
SELECT something FROM orders WHERE hash_id=? AND user_id=? AND ts >= ? AND ts < ?;
If the above is not applicable, then perhaps a Counter Table would suit your needs. You could simply increment a counter after an order is placed and afterwards query the counter table, as in:
SELECT count FROM counter_table WHERE hash_id=? AND user_id=? AND date=?;
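As a rough illustration of the counter-table idea with gocql (the schema, the day column and the function names below are assumptions, not something from the question):
package orders

import (
	"time"

	"github.com/gocql/gocql"
)

// Assumed counter table, one row per (hash_id, user_id) per day:
//
//   CREATE TABLE counter_table (
//       hash_id bigint,
//       user_id text,
//       day     date,
//       count   counter,
//       PRIMARY KEY ((hash_id, user_id, day))
//   );

// RecordOrder bumps the per-day counter after an order has been placed.
func RecordOrder(session *gocql.Session, hashID int64, userID string, day time.Time) error {
	return session.Query(
		`UPDATE counter_table SET count = count + 1 WHERE hash_id=? AND user_id=? AND day=?`,
		hashID, userID, day,
	).Exec()
}

// CanOrder reads the counter and compares it against the daily limit before
// accepting a new order.
func CanOrder(session *gocql.Session, hashID int64, userID string, day time.Time, limit int64) (bool, error) {
	var count int64
	err := session.Query(
		`SELECT count FROM counter_table WHERE hash_id=? AND user_id=? AND day=?`,
		hashID, userID, day,
	).Scan(&count)
	if err == gocql.ErrNotFound {
		return true, nil // no orders recorded yet for this day
	}
	if err != nil {
		return false, err
	}
	return count < limit, nil
}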
I hope that helps!
I have a few points I want to add to what Felipe wrote already:
First, you don't need to hash the partition key yourself. You can use anything you want for the partition key, even consecutive numbers, the partition key doesn't need to be random-looking. Scylla will internally hash the partition key on its own to improve the load balancing. You don't need to know or care which hashing algorithm ScyllaDB uses, but interestingly, it's a variant of murmur3 too (which is not identical to the one you used - it's a modified algorithm originally picked by the Cassandra developers).
Second, you should know - and decide whether you care - that the limit you are trying to enforce is not a hard limit when faced with concurrent operations: imagine that the given partition already has two records, and now two concurrent record-addition requests come in. Both can check that there are just two records, decide it's fine to add the third, and when both add their record you end up with four records. You'll need to decide whether it is fine for you that a lucky user can get 4 requests in a day, or whether it's a disaster. Note that theoretically you can get even more than 4: if the user manages to send N requests at exactly the same time, they may be able to get 2+N records into the database (but in the usual case they won't manage to get many superfluous records). If you want 3 to be a hard limit, you'll probably need to change your solution - perhaps to one based on LWT rather than TTL (a rough sketch of that idea follows at the end of this answer).
Third, I want to note that there is no important performance difference between COUNT and LIMIT when you know a priori that there will only be up to 3 (or perhaps, as explained above, 4 or some other similarly small number) results. If you assume that the SELECT only yields three or fewer results, and can never yield a thousand, then it doesn't really matter whether you retrieve them or count them - do whichever is more convenient for you. In any case, I think that paging is not a good solution for your need. For such short results you can just use the default page size and you'll never reach it anyway; moreover, paging hints to the server that you will likely continue reading on the next page - and it caches the buffers it needs to do that - while in this case you know you'll never continue after the first three results. So in short, don't use any special paging setup here - just use the default page size (which is 1MB) and it will never be reached anyway.
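To make the LWT suggestion above concrete, here is a rough sketch with gocql. It assumes a hypothetical order_limits table with a plain int column (counter columns cannot be used in LWT conditions); all names are illustrative, and a production version would add retries.
package orders

import (
	"time"

	"github.com/gocql/gocql"
)

// Hypothetical table for the hard-limit variant:
//
//   CREATE TABLE order_limits (
//       user_id     text,
//       day         date,
//       order_count int,
//       PRIMARY KEY ((user_id, day))
//   );

// TryReserveOrder atomically claims one order slot for the user on the given
// day. It returns true only if the count was raised without exceeding the
// limit, so concurrent requests cannot push the user past the limit.
func TryReserveOrder(session *gocql.Session, userID string, day time.Time, limit int) (bool, error) {
	// First order of the day: create the row only if it does not exist yet.
	applied, err := session.Query(
		`INSERT INTO order_limits (user_id, day, order_count) VALUES (?, ?, 1) IF NOT EXISTS`,
		userID, day,
	).MapScanCAS(map[string]interface{}{})
	if err != nil {
		return false, err
	}
	if applied {
		return true, nil
	}

	// Row already exists: read the current count and attempt a compare-and-set.
	var current int
	if err := session.Query(
		`SELECT order_count FROM order_limits WHERE user_id=? AND day=?`,
		userID, day,
	).Scan(&current); err != nil {
		return false, err
	}
	if current >= limit {
		return false, nil
	}
	applied, err = session.Query(
		`UPDATE order_limits SET order_count=? WHERE user_id=? AND day=? IF order_count=?`,
		current+1, userID, day, current,
	).ScanCAS(&current)
	return applied, err // a false result means another request won the race; retry if desired
}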

Querying on column which is not a part of a PK or a secondary index

Please help me resolve a confusion. My Cassandra book claims that attempts to query based on a column that is not part of the PK should fail (there is no secondary index for this column either). However, when I try to do it I see this warning:
Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING
Once I append ALLOW FILTERING to my query, there is no more error. I understand the performance implications; however, this clearly contradicts what is written in the book. Was this feature added later, or did the book's authors simply miss it?
I think it is great that you have a textbook to guide you through important NoSQL concepts, but don't rely on it alone, as Cassandra is open source and is constantly updated by the community. Online resources such as the official Apache documentation are a much better option for up-to-date information and tutorials on new and existing features.
Although ALLOW FILTERING does exist, it is still recommended to use a different table construction (e.g. changing the column to a key) or create an INDEX to keep querying fast.
AFAIK, Cassandra has had ALLOW FILTERING since version 1.
Also, to explain ALLOW FILTERING, as per the DataStax documentation:
Let’s take for example the following table:
CREATE TABLE blogs (blogId int,
time1 int,
time2 int,
author text,
content text,
PRIMARY KEY(blogId, time1, time2));
If you execute the following query:
SELECT * FROM blogs;
Cassandra will return you all the data that the table blogs contains.
If you now want only the data at a specified time1, you will naturally add an equal condition on the column time1:
SELECT * FROM blogs WHERE time1 = 1418306451235;
In response, you will receive the following error message:
Bad Request: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING.
Cassandra knows that it might not be able to execute the query in an efficient way. It is therefore warning you: “Be careful. Executing this query as such might not be a good idea as it can use a lot of your computing resources”.
The only way Cassandra can execute this query is by retrieving all the rows from the table blogs and then by filtering out the ones which do not have the requested value for the time1 column.
If your table contains, for example, 1 million rows and 95% of them have the requested value for the time1 column, the query will still be relatively efficient and you should use ALLOW FILTERING.
On the other hand, if your table contains 1 million rows and only 2 rows contain the requested value for the time1 column, your query is extremely inefficient: Cassandra will load 999,998 rows for nothing. If the query is used often, it is probably better to add an index on the time1 column.
Unfortunately, Cassandra has no way to differentiate between the two cases above, as they depend on the data distribution of the table. Cassandra is therefore warning you and relying on you to make the right choice.
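If the query on time1 is frequent, the index route could look roughly like this with gocql (the index name is arbitrary; the blogs table is the one shown above):
package blogs

import "github.com/gocql/gocql"

// CreateTime1Index adds a secondary index on time1; after that, equality
// queries on time1 no longer require ALLOW FILTERING.
func CreateTime1Index(session *gocql.Session) error {
	return session.Query(`CREATE INDEX IF NOT EXISTS blogs_time1_idx ON blogs (time1)`).Exec()
}

// ContentsAtTime1 reads rows by time1 through the index instead of filtering
// the whole table.
func ContentsAtTime1(session *gocql.Session, time1 int) ([]string, error) {
	var (
		contents []string
		content  string
	)
	iter := session.Query(`SELECT content FROM blogs WHERE time1 = ?`, time1).Iter()
	for iter.Scan(&content) {
		contents = append(contents, content)
	}
	return contents, iter.Close()
}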
Thanks,
Harry

In Cassandra, how to implement a fixed number of rows in one partition?

All
I'm implementing a kind of history table using Cassandra 2.2.
My current schema has a row key of userid and a clustering key of timestamp; each row then holds a user behavior record. I want to keep only the 10 latest rows for a given userid. How can I implement this smartly?
Thanks for any suggestion!
Given a Data model of:
CREATE TABLE history (
    userid text,
    activity_time timeuuid,
    behavior text,
    PRIMARY KEY ((userid), activity_time)
) WITH CLUSTERING ORDER BY (activity_time DESC);
The best I can think of would be to do the following:
Insert all "history" records with some reasonable TTL.
How long of a TTL depends on your particular use case
When querying by a userid, limit your returned result set to 10
SELECT * FROM history WHERE userid='fromanator' LIMIT 10;
However with this approach if a user hasn't had any history within the TTL then you will get no results back. Depending on your use case this may be acceptable.
If you absolutely need to keep at least the last 10 records, then you're going to have a much more complicated data model and application code to achieve this in Cassandra.
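Putting the TTL insert and the LIMIT 10 query together, a minimal gocql sketch might look like this (the 30-day TTL is only an example value; pick whatever fits your use case):
package history

import "github.com/gocql/gocql"

// RecordBehavior inserts a history record that expires on its own via TTL
// (2592000 seconds = 30 days, an example value only).
func RecordBehavior(session *gocql.Session, userid, behavior string) error {
	return session.Query(
		`INSERT INTO history (userid, activity_time, behavior) VALUES (?, now(), ?) USING TTL 2592000`,
		userid, behavior,
	).Exec()
}

// LatestBehaviors returns at most the 10 most recent records for a user,
// relying on the DESC clustering order of activity_time.
func LatestBehaviors(session *gocql.Session, userid string) ([]string, error) {
	var (
		results  []string
		behavior string
	)
	iter := session.Query(
		`SELECT behavior FROM history WHERE userid=? LIMIT 10`,
		userid,
	).Iter()
	for iter.Scan(&behavior) {
		results = append(results, behavior)
	}
	return results, iter.Close()
}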
This may not be the most elegant solution and won't strictly adhere to storing only 10 records at any given time, but you could store the row data as a list (if there is structure to the row data, you'd have to handle this structuring yourself or use user-defined types). If you already have this list available when you write, you'd just truncate it to the latest 10 values before writing; otherwise you could wait until the next read of that list, truncate it to 10 records, then write it back to Cassandra.
If you're not so much concerned with how much data is stored, but rather are only interested in retrieving the last 10 results, then fromanator's solution (with or without a TTL depending on whether you care more about the size of the data or ensuring 10 results) is the best.

Why am I reading many tombstones in Cassandra table although my access pattern should avoid them

I know this is not the best way to use Cassandra, but the type of my data requires reading all data from the last week. However when using Collection-types in CQL3, I ran into certain limitations which prevent me from doing normal date-range queries.
So I have set up Cassandra (currently single node, probably more in the future) with the following table
CREATE TABLE cache (
    tag text,
    id int,
    tags map<text,text>,
    PRIMARY KEY (tag, id)
);
ALTER TABLE cache WITH GC_GRACE_SECONDS = 0;
I am inserting with a TTL of one week to automatically remove the items from the Cache.
I tried to follow the suggestions mentioned in this article to avoid reading many tombstones by selecting by "minimum id", which I persist elsewhere to avoid reading old data:
SELECT * FROM cache WHERE tag = ? AND id >= ?
The id is basically some sort of timestamp which is constantly increasing, i.e. I only insert higher values over time and constantly remove older ids from the table.
But I still get warnings about thresholds being reached
WARN 08:59:06,286 Read 5001 live and 5702 tombstoned cells in cache (see tombstone_warn_threshold)
And if I do not run manual compaction/scrubbing regularly I get exceptions and queries fail.
However, based on my understanding of the articles and documentation, I should be avoiding most if not all tombstones here: I query on equality for the tag, which lets Cassandra look only at that partition, and I use a minimum id, which lets Cassandra start reading only after most of the tombstones. So why are tombstone warnings/exceptions still being reported?
Each map k/v pair is actually a column (name, value and timestamp), so if you are issuing a lot of deletions of map elements (expiry via TTL counts as well), that is the source of this warning: you are still reading full maps, with lots of tombstones in them. Also, a TTL set on a map is applied on a per-element basis.
Second, this is multiplied by the >= predicate in your select query.
If this is the case, you should remodel your data access pattern to use only EQ relations in the SELECT query and bump the id more often. This access pattern would also allow you to get rid of the clustering part of your PRIMARY KEY.
So, if you do not issue lots of deletions on that map, you can try a tag text, time timeuuid, name text, data text model and slice it precisely by time.
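A rough sketch of that remodel with gocql, under the assumption that each former map entry becomes its own row keyed by (tag, time, name); the table name cache_v2 is made up for illustration:
package cache

import (
	"time"

	"github.com/gocql/gocql"
)

// Assumed remodelled schema (illustrative only):
//
//   CREATE TABLE cache_v2 (
//       tag  text,
//       time timeuuid,
//       name text,
//       data text,
//       PRIMARY KEY (tag, time, name)
//   );

// Put writes one row per former map element, expiring after one week.
func Put(session *gocql.Session, tag, name, data string) error {
	return session.Query(
		`INSERT INTO cache_v2 (tag, time, name, data) VALUES (?, now(), ?, ?) USING TTL 604800`,
		tag, name, data,
	).Exec()
}

// ReadSince slices the partition by time, returning only the entries written
// at or after the given cutoff.
func ReadSince(session *gocql.Session, tag string, since time.Time) (map[string]string, error) {
	result := make(map[string]string)
	var name, data string
	iter := session.Query(
		`SELECT name, data FROM cache_v2 WHERE tag=? AND time >= minTimeuuid(?)`,
		tag, since,
	).Iter()
	for iter.Scan(&name, &data) {
		result[name] = data
	}
	return result, iter.Close()
}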

Frequent Updates to Solr Documents - Efficiency/Scalability concerns

I have a Solr index with document fields something like:
id, body_text, date, num_upvotes, num_downvotes
In my application, a document is created with some integer id and some body_text (500 chars max). The date is set to the time of input, and num_upvotes and num_downvotes begin at 0.
My application gives users the ability to upvote and downvote the content mentioned above, and the reason I want to keep track of this in Solr instead of just the DB is that I want to be able to factor the number of upvotes and downvotes into my search.
This is a problem because you can't simply update a Solr document (i.e. increment the number of upvotes); you must replace the entire document, which is probably fairly inefficient considering it would require hitting my DB to grab all the relevant data again.
I realize the solution may require a different layout of data, or possibly multiple indexes (although I don't know if you can query/score across solr cores).
Is anyone able to offer any recommendations on how to tackle this?
A solution that I use for a similar problem is to update that information in the database and do Solr updates/inserts every ten minutes using the documents that were modified since the last update.
Also, every night, when I don't have much traffic, I do an index optimize.
After each import I run some warm-up queries, set up in the Solr config.
My Solr index has around 1.5 million documents; each document has 24 fields and around 2000 characters in total.
I update the index every 10 minutes with around 500 documents (without optimizing the index), and I run around 50 warm-up queries comprising the most common facets, the most used filter queries and free-text searches.
I don't see a negative impact on performance (at least it is not visible): my queries average 0.1 seconds (before doing updates every 10 minutes, the average was 0.09 seconds).
LATER EDIT:
I didn't encounter any problems during these updates. I always take the documents from the database and insert them into Solr with a unique key. If the document already exists in Solr, it is replaced (this is what I mean by update).
It never takes more than 3 minutes to update Solr. I actually take a 10-minute break after each update: I start the index update, wait for it to finish, and then wait another 10 minutes before starting again.
I did not look at the performance overnight, but for me it is not relevant, as I want fresh data during peak user visits.
The Join feature would help you here. Then you could store the up/down votes in a separate document.
The bad news is that you need to wait until Solr 4 unless you're comfortable running with a trunk build.
If you are only going to be updating the up/down votes, then instead of going back to the database, just use the appropriate Solr client for your application to pull the document from the index, set the up/down values as needed, and then reinsert the document back into the index.
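A rough sketch of that fetch-modify-reinsert cycle in Go, using Solr's plain HTTP API rather than a specific client library. It assumes a core reachable at http://localhost:8983/solr/posts and a Solr version whose /update handler accepts JSON (older versions expose /update/json instead); the field names match the question, everything else is illustrative.
package votes

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

const solrCore = "http://localhost:8983/solr/posts" // hypothetical core URL

// IncrementUpvotes pulls the document from the index, bumps num_upvotes,
// and reinserts the whole document (there is no partial update before Solr 4).
func IncrementUpvotes(id string) error {
	// Fetch the current document as JSON.
	resp, err := http.Get(fmt.Sprintf("%s/select?q=id:%s&wt=json", solrCore, url.QueryEscape(id)))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var result struct {
		Response struct {
			Docs []map[string]interface{} `json:"docs"`
		} `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return err
	}
	if len(result.Response.Docs) == 0 {
		return fmt.Errorf("document %s not found", id)
	}

	// Modify the vote count in memory.
	doc := result.Response.Docs[0]
	delete(doc, "_version_") // drop internal fields if present
	doc["num_upvotes"] = toInt(doc["num_upvotes"]) + 1

	// Reinsert (replace) the document via the JSON update handler.
	body, _ := json.Marshal([]map[string]interface{}{doc})
	upd, err := http.Post(solrCore+"/update?commit=true", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer upd.Body.Close()
	if upd.StatusCode != http.StatusOK {
		return fmt.Errorf("solr update failed: %s", upd.Status)
	}
	return nil
}

// toInt copes with Solr returning numbers as float64 in JSON.
func toInt(v interface{}) int {
	if f, ok := v.(float64); ok {
		return int(f)
	}
	return 0
}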
There is no solution to your problem within SOLR. You have a database problem and you are trying to solve it with a search engine.
The best way to deal with this is to keep a redis database that records the document id from SOLR and the up/down vote counts. Then your app can merge the data from both sources before displaying.
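If you go the Redis route, the merge-at-display-time idea could be sketched like this (the go-redis client and the votes:<id> hash layout are assumptions, not part of the answer above):
package votes

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// Votes keeps up/down vote counts in Redis, keyed by the Solr document id,
// so the search index never has to be rewritten on every vote.
type Votes struct {
	rdb *redis.Client
}

func NewVotes(addr string) *Votes {
	return &Votes{rdb: redis.NewClient(&redis.Options{Addr: addr})}
}

// Upvote increments the counter for one document.
func (v *Votes) Upvote(ctx context.Context, docID string) error {
	return v.rdb.HIncrBy(ctx, "votes:"+docID, "up", 1).Err()
}

// Counts returns the up/down totals to merge with the Solr result before display.
func (v *Votes) Counts(ctx context.Context, docID string) (map[string]string, error) {
	return v.rdb.HGetAll(ctx, "votes:"+docID).Result()
}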

Resources