I am trying to implement a moving average for a dataset containing a number of time series. Each column represents one parameter being measured, while each row contains all parameters measured in a given second. So a row would look something like:
timestamp, parameter1, parameter2, ..., parameterN
I found a way to do something like that using window functions, but the following bugs me:
Partitioning Specification: controls which rows will be in the same partition with the given row. Also, the user might want to make sure all rows having the same value for the category column are collected to the same machine before ordering and calculating the frame. If no partitioning specification is given, then all data must be collected to a single machine.
The thing is, I don't have anything to partition by. So can I use this method to calculate moving average without the risk of collecting all the data on a single machine? If not, what is a better way to do it?
Every nontrivial Spark job demands partitioning. There is just no way around it if you want your jobs to finish before the apocalypse. The question is simple: When it comes time to do the inevitable aggregation (in your case, an average), how can you partition your data in such a way as to minimize shuffle by grouping as much related data as possible on the same machine?
My experience with moving averages is with stocks. In that case it's easy; the partition would be on the stock ticker symbol. After all, the calculation of the 50-Day Moving Average for Stock A has nothing to do with that for Stock B, so those data don't need to be on the same machine. The obvious partition makes this simpler than your situation, not to mention that it only requires one data point (probably) per day (the closing price of the stock at the end of trading) while you have one per second.
So I can only say that you need to consider adding a feature to your data set whose sole purpose is to serve as a partition key even if it is irrelevant to what you're measuring. I would be surprised if there isn't one, but if not, then consider a time-based partition on days for example.
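For illustration, here is a minimal pyspark sketch of that idea, assuming the dataframe has a unix `timestamp` column (one row per second) and that a 60-second trailing average is wanted; the derived `day` column is a hypothetical column added purely to give the window something to partition by:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical input: one row per second with a unix timestamp and N parameters.
df = spark.createDataFrame(
    [(1700000000 + i, float(i)) for i in range(120)],
    ["timestamp", "parameter1"],
)

# Derive a day bucket so the window has something to partition by.
df = df.withColumn("day", F.to_date(F.from_unixtime("timestamp")))

# 60-row (i.e. 60-second) trailing moving average within each day.
w = (Window.partitionBy("day")
           .orderBy("timestamp")
           .rowsBetween(-59, 0))

result = df.withColumn("parameter1_ma", F.avg("parameter1").over(w))
result.show(5)
```

The trade-off is that the first 59 rows of each day cannot see the previous day's rows, so the average is truncated at partition boundaries; a coarser bucket (week, month) makes that happen less often.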
Related
I have a pretty straightforward pyspark SQL application (spark 2.4.4, EMR 5.29) that reads a dataframe of the schema topic, year, count:
df.show()
+--------+----+------+
| topic|year| count|
+--------+----+------+
|covid-19|2017|606498|
|covid-19|2016|454678|
|covid-19|2011| 10517|
|covid-19|2008| 6193|
|covid-19|2015|510391|
|covid-19|2013| 29551|
I then need to sort by year and collect the counts into a list so that they are in ascending order by year:
df.orderBy('year').groupBy('topic').agg(collect_list('count').alias('counts'))
The issue is, since I order by year, the number of partitions used for this stage is the number of years in my dataset. I thus get a crazy bottleneck stage where 15 out of 300 executors are utilised, leading to obvious memory spills and disk spills, eventually failing the stage due to no space left on device for the overpopulated partitions.
Even more interesting is that I found a way to circumvent this which intuitively appears to be much less efficient, but actually does work, since no bottlenecks are created:
df.groupBy('topic').pivot('year', values=list(range(START, FINISH))).agg(first('count')) \
  .select('topic', array([col(str(c)) for c in range(START, FINISH)]).alias('counts'))
This leads to my desired output, which is an array of counts sorted by year.
Does anyone have an explanation of why this happens, or an idea of how best to prevent it?
I found this answer and this JIRA where it is basically suggested to 'add noise' to the sort key to avoid these skew-related issues.
I think it is worth mentioning that the pivot method is a better resolution than adding noise, at least whenever sorting by a column that has a small range of values. I would appreciate any info on this and alternate implementations.
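For reference, a hedged sketch of what that 'add noise' (salting) suggestion might look like here; `SALT_BUCKETS` is a made-up tuning constant, and this mirrors the original orderBy-then-groupBy pattern, including its caveats about collect_list ordering:

```python
from pyspark.sql import functions as F

SALT_BUCKETS = 20  # hypothetical; roughly the parallelism you want per year

# Adding a random salt gives the range partitioner many more distinct sort keys,
# so a single year is no longer forced into a single partition.
salted = df.withColumn("salt", (F.rand() * SALT_BUCKETS).cast("int"))

result = (salted
          .orderBy("year", "salt")
          .groupBy("topic")
          .agg(F.collect_list("count").alias("counts")))
```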
Range partitioning is used under the hood by Spark for sorting and ordering.
From the docs it is clear that the number of partitions that will contain the ranges of data to be sorted (subsequently, via a mapPartitions-style approach) is determined by sampling the existing partitions before computing a heuristically optimal number of partitions for these computed ranges.
These ranges, which become partitions, may decrease the total number of partitions, since a range must be contained within a single partition for the order by / sort to work.
This:
df.repartitionByRange(100, 'some_col1', 'some_colN')...
can help, as can ordering by more columns, I suspect. But here that appears not to be applicable, based on your DF.
The question has nothing to do with pyspark, BTW.
Interesting point, but explainable: the reduced number of partitions have to hold more data via collect_list based on year, and there are obviously more topics than years.
I am in the process of learning Cassandra as an alternative to SQL databases for one of the projects I am working on that involves Big Data.
For the purpose of learning, I've been watching the videos offered by DataStax, more specifically DS220 which covers modeling data in Cassandra.
While watching one of the videos in the course series I was introduced to the concept of splitting partitions to manage partition size.
My current understanding is that Cassandra has a maximum logical capacity of 2 billion entries per partition, but a suggested maximum of a couple of hundred MB per partition.
I'm currently dealing with large amounts of real-time financial data that I must store (time series), meaning I can easily fill out GBs worth of data in a day.
The video course talks about introducing an additional partition key in order to split a partition, with the purpose of reducing the size of each partition.
The video pointed to using either a time-based key or an arbitrary "bucket" key that gets incremented once a manageable number of rows has been reached.
With that in mind, this led me to the following problem: given that partition keys are only used as equality criteria (i.e. they point to the partition where records can be found), how do I find all the records that end up spread across multiple partitions without having to specify either the bucket or timestamp key?
For example, I may receive 1M records in a single day, which would likely go over the 100-500 MB partition limit, so I wouldn't be able to partition on a per-date basis. That means my daily data would be broken down into hourly partitions, or alternatively into "bucketed" partitions (for balanced partition sizes), so all my daily data would be spread across multiple partition splits.
Given this scenario, how do I go about querying for all records for a given day? (Additional clustering keys could include a symbol for which I want the results, or I may want all the records for that specific day.)
Any help would be greatly appreciated.
Thank you.
Basically this comes down to choosing the right resolution for your data. I would say the first step for you would be to determine what best fits your data. Let's, for the sake of example, take 1 hour as a good fit; the question then is how to fetch all records for a particular date.
Your application logic will be slightly more complicated, since you are trading simplicity for the ability to store large amounts of data in a distributed fashion. You take the date you need, issue 24 queries in a loop, and glue the data together at the application level. However, when you glue it together the result can be huge (I do not know your presentation or export requirements, so this can pull 1M records into memory).
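As an illustration, a sketch of that loop with the DataStax Python driver; the keyspace, table name, and schema (partition key `(day, hour)`) are assumptions made up for the example:

```python
from cassandra.cluster import Cluster

# Hypothetical schema: PRIMARY KEY ((day, hour), ts) on a table named financial_data.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("market")

select = session.prepare(
    "SELECT ts, symbol, price FROM financial_data WHERE day = ? AND hour = ?"
)

def records_for_day(day):
    """Fetch all 24 hourly partitions for one day and glue them together."""
    rows = []
    for hour in range(24):
        rows.extend(session.execute(select, (day, hour)))
    return rows

data = records_for_day("2015-10-29")
```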
Another idea is to have one table as a simple lookup table, keyed by date, whose values are the partition keys holding financial data for that date. When you read, you first go to the lookup table to get the keys and then to the partitions holding the results. You can also store a counter of values per partition key so you know how much data to expect.
All in all, it is best to figure out some natural bucket in your data set and add it to the date (organization, zip code, etc.), and you can use the trick with the additional lookup table. This approach can be used for the symbol you mentioned: you can have symbols as partition keys, clustering per date, and the partitions holding results for that date as values. Then you query for symbol # on 29-10-2015, see that partitions A, D and Z have results, go to those partitions, fetch the financial data from them and glue it together at the application level.
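A rough sketch of that lookup-table idea, again with made-up table and column names; the index table records which data-table buckets actually hold rows for a given symbol and day:

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("market")

# Hypothetical index table: lists the data-table buckets that hold rows
# for a given (symbol, day) combination.
session.execute("""
    CREATE TABLE IF NOT EXISTS symbol_day_index (
        symbol text,
        day    text,
        bucket text,
        PRIMARY KEY ((symbol, day), bucket)
    )
""")

buckets = session.execute(
    "SELECT bucket FROM symbol_day_index WHERE symbol = %s AND day = %s",
    ("XYZ", "2015-10-29"),
)

rows = []
for b in buckets:
    # Each bucket value is a partition key in the (hypothetical) data table.
    rows.extend(session.execute(
        "SELECT ts, price FROM financial_data_by_bucket WHERE bucket = %s",
        (b.bucket,),
    ))
```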
We are evaluating whether we can migrate from SQL Server to Cassandra for OLAP. Given the internal storage structure, we can have wide rows. We almost always need to access data by date, and we often need to access data within a date range since we have financial data. If we use the date as the partition key to support filtering by date, we end up with fewer rows, each with a huge number of columns.
Will it hamper performance if we have millions of columns for a single row key in the future, as we process millions of transactions every day?
Do we need to change the access pattern to have more rows with fewer columns per row?
We need some performance insight to proceed in either direction.
Using wide rows is typically fine with Cassandra; there are, however, a few things to consider:
Ensure that you don't reach the 2 billion column limit in any case
The whole wide row is stored on the same node: it needs to fit on the disk. Also, if you have some dates that are accessed more frequently than other dates (e.g. today), then you can create hotspots on the node that stores the data for that day.
Very wide rows can affect performance however: Aaron Morton from The Last Pickle has an interesting article about this: http://thelastpickle.com/blog/2011/07/04/Cassandra-Query-Plans.html
It is somewhat old, but I believe that the concepts are still valid.
For a good table design decision one needs to know all typical filter conditions. If you have any other fields you typically filter for as an exact match, you could add them to the partition key as well.
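As a sketch of that suggestion, here is one possible layout where the partition key combines the date with an account id that queries always filter on exactly; all names are hypothetical:

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("finance")

# Hypothetical design: the partition key combines the transaction date with an
# account id that queries always filter on exactly, keeping partitions narrower
# than "every transaction for a whole day".
session.execute("""
    CREATE TABLE IF NOT EXISTS transactions_by_day (
        tx_date  text,
        account  text,
        tx_time  timestamp,
        amount   decimal,
        PRIMARY KEY ((tx_date, account), tx_time)
    )
""")

rows = session.execute(
    "SELECT tx_time, amount FROM transactions_by_day "
    "WHERE tx_date = %s AND account = %s",
    ("2015-10-29", "ACC-42"),
)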
Is there a good way to delete entities that are in the same partition given a row key range? It looks like the only way to do this would be to do a range lookup and then batch the deletes after looking them up. I'll know my range at the time that entities will be deleted so I'd rather skip the lookup.
I want to be able to delete things to keep my partitions from getting too big. As far as I know a single partition cannot be scaled across multiple servers. Each partition is going to represent a type of message that a user sends. There will probably be fewer than 50 types. I need a way to show all the messages of each type that were sent (e.g. show recent messages of type 0, regardless of who sent them). This is why I plan to make the type the partition key. Since the types don't scale with the number of users/messages, though, I don't want to let each partition grow indefinitely.
Unfortunately, you need to know precise Partition Keys and Row Keys in order to issue deletes. You do not need to retrieve entities from storage if you know precise RowKeys, but you do need to have them in order to issue batch delete. There is no magic "Delete from table where partitionkey = 10" command like there is in SQL.
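A sketch of that lookup-then-delete flow with the azure-data-tables Python SDK; the connection string, table name, and key range are placeholders, and note that a single transaction is limited to one partition key and 100 operations:

```python
from azure.data.tables import TableClient

# Hypothetical connection and table; the filter below is the row-key span to purge.
table = TableClient.from_connection_string(conn_str="...", table_name="Messages")

flt = "PartitionKey eq '0' and RowKey ge '2014-01-01' and RowKey lt '2014-02-01'"

# Deletes must name exact keys, so fetch the keys first, then delete in batches of 100.
entities = table.query_entities(flt, select=["PartitionKey", "RowKey"])
batch = []
for entity in entities:
    batch.append(("delete", entity))
    if len(batch) == 100:          # Azure table transactions cap at 100 operations
        table.submit_transaction(batch)
        batch = []
if batch:
    table.submit_transaction(batch)
```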
However, consider breaking your data up into tables that represent archivable time units. For example, in AzureWatch we store all of the metric data in tables that each represent one month of data, e.g. Metrics201401, Metrics201402, etc. Thus, when it comes time to archive, a full table is purged for a particular month.
The obvious downside of this approach is the need to "union" data from multiple tables if your queries span wide time ranges. However, if you keep your time ranges to a minimum, the number of unions will not be as big. Basically, this approach allows you to utilize the table name as another partitioning opportunity.
I am looking at creating a Cassandra timeseries database for storing millions of series of daily data that can potentially have altogether up to 100B data points.
I looked at this article:
http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
This design is very sound. So essentially I can put the daily timestamps as columns and if necessary shard the columns by appending the day to the row.
Two questions I have:
I am looking at storing up to 20,000 timestamped (daily) columns. Is it even necessary to shard rows by, e.g., year with this number of columns? Is there any advantage/disadvantage to sharding rows to reduce the number of columns down to 365 per year?
Another idea I have is, rather than sharding columns by row, to create a column family per year. This way, when accessing data from multiple years, I would have to query multiple column families rather than one and join the results on the client side. Would this approach speed things up or rather slow everything down?
If you are ever going to manage huge quantities of writes there is one problem with your approach.
Writing always to 1 key means that all writes for that key will go to one node. Basically you will use one node per day out of your cluster, so you might as well have one huge instance of Cassandra rather than bother setting up a cluster.
If your write frequency gets really high you might bring down the nodes responsible for that day/key.
My advice is to bucket one day into multiple rows that are used simultaneously. Pure time bucketing could be dangerous, as a sudden surge during one bucket could bring everything down.
You could create your bucket (row key) like this:
[ROW_BASE_NAME] + [DAY] + someHashFunction(timestamp) % 10
[ROW_BASE_NAME] + [DAY] + random.nextInt(10)
[ROW_BASE_NAME] + [DAY] + nextbucket <--- that is if you have a secure way to rotate the bucket yourself
There are many ways to do it. You could also use some element of the column being saved to do that.
But I think it is important to do this in order to leverage the whole Cassandra cluster at all times.
My answer is only valid for write-heavy applications/functionality, since you will have to use a multiget (whole-row reads across multiple keys) to read all the data and reconstitute the whole timeline for that day.
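To make the write/read split concrete, a small sketch using the random-bucket variant (the second option above) with the DataStax Python driver; the keyspace, table, and bucket count are invented for the example:

```python
import random
from cassandra.cluster import Cluster

NUM_BUCKETS = 10          # hypothetical bucket count per day
ROW_BASE = "prices"       # hypothetical series name

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("tsdb")

insert = session.prepare("INSERT INTO series (row_key, ts, value) VALUES (?, ?, ?)")
select = session.prepare("SELECT ts, value FROM series WHERE row_key = ?")

def write_point(day, ts, value):
    # Spread one day's writes across NUM_BUCKETS partitions instead of one hot row.
    row_key = f"{ROW_BASE}:{day}:{random.randrange(NUM_BUCKETS)}"
    session.execute(insert, (row_key, ts, value))

def read_day(day):
    """Multiget-style read: query every bucket for the day and rebuild the timeline."""
    rows = []
    for bucket in range(NUM_BUCKETS):
        rows.extend(session.execute(select, (f"{ROW_BASE}:{day}:{bucket}",)))
    return sorted(rows, key=lambda r: r.ts)
```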