Lately I've been working in a speculative machine learning space that frequently requires tables that are roughly a million rows long to be transposed. This can be done in a few hours in-memory, but I'd like to save the resulting transposition to a delta table. For some reason, write speeds are terrible - even trying to write one row with 1 million columns to a delta table to benchmark, I gave up after an hour. Yet one column with 1 million rows takes no time at all. Yes, columnar formats are going to handle the latter case better, but that much better? Is there something I'm missing here?
I am running this on AWS Athena, which is based on PrestoDB. My original plan was to query data from 3 months in the past and analyze it. However, even a query covering only the past 2 hours takes more than 30 minutes, at which point the query times out. Is there a more efficient way to carry out the query?
SELECT column1, dt, column2
FROM database1
WHERE date_parse(dt, '%Y%m%d%H%i%s') > CAST(now() - interval '1' hour AS timestamp)
The date column dt is stored as a string in the format YYYYmmddhhmmss.
Likely, the problem is that the query applies a function to the column being filtered. This is inefficient, because the database needs to convert the entire column before it is able to filter it. Such a predicate is said to be non-SARGable.
Your primary effort should go into fixing your data model and store dates as dates rather than strings.
That said, the string format that you are using to represent dates still makes it possible to use direct filtering. The idea is to convert the filter value to the target string format (rather than converting the column value to a date):
where dt > date_format(now() - interval '1' hour, '%Y%m%d%H%i%s')
There are a lot of different factors that influence the time it takes Athena to execute a query. The amount of data usually dominates, but other important factors are the data format (there's a huge difference between CSV and Parquet, for example) and the number of files. In contrast to many other databases, the complexity of the query is less often an important factor, and your query is very straightforward and is not the problem. It doesn't help that you apply a function on both sides of the WHERE condition, but it's not a big deal in Athena: the filtering is brute force, and applying a function to each row is cheap compared to the IO in an engine like Athena.
If you provide more information about the number of files, the data format, and so on we can probably help you better, because without that kind of information it could be just about anything. My suspicion is that you have something like a single prefix with tens or hundreds of millions of files – this is the worst possible case for Athena.
When Athena plans a query it lists the table's location on S3. S3's list operation has a page size of 1000, so if there are more files than that Athena will have to list sequentially until it gets the full listing. This cannot be parallelised, and it's also not very fast.
You need to avoid, almost at all cost, having more than 1000 files in the same prefix. If you have more files than that you can add prefixes (directories), because Athena will list S3 as if it were a file system and parallelise the listing of separate prefixes. 1000 files each in table-data/a/, table-data/b/, and table-data/c/ is much better than 3000 files in table-data/.
The reason why I suspect it's lots of small files rather than a lot of data is that if it were a lot of data you would probably have said so – and lots of data is actually something Athena is really good at. Ripping through terabytes of data is no problem, unless it's a billion tiny files.
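If you want to check whether small files are the problem, you can count the objects under the table's location yourself. Here is a minimal sketch using boto3; the bucket and prefix are placeholders for your own table location:

import boto3

# Hypothetical bucket and prefix -- replace with your table's LOCATION.
BUCKET = "my-data-lake"
PREFIX = "table-data/"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

count = 0
total_bytes = 0
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

print(f"{count} objects, {total_bytes / 1e9:.2f} GB total, "
      f"{total_bytes / max(count, 1) / 1e6:.2f} MB per object on average")

A very large object count combined with a small average size is a strong hint that listing, not scanning, is what is slowing Athena down.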
I have a question related to pyspark's partitionBy() function, which I originally posted in a comment on this question. I was asked to post it as a separate question, so here it is:
I understand that df.write.partitionBy(COL) will write all the rows with each value of COL to their own folder, and that each folder will (assuming the rows were previously distributed across all the partitions by some other key) have roughly the same number of files as were previously in the entire table. I find this behavior annoying. If I have a large table with 500 partitions, and I use partitionBy(COL) on some attribute columns, I now have, for example, 100 folders that each contain 500 (now very small) files.
What I would like is the partitionBy(COL) behavior, but with roughly the same file size and number of files as I had originally.
As demonstration, the previous question shares a toy example where you have a table with 10 partitions and do partitionBy(dayOfWeek) and now you have 70 files because there are 10 in each folder. I would want ~10 files, one for each day, and maybe 2 or 3 for days that have more data.
Can this be easily accomplished? Something like df.repartition(COL).write.partitionBy(COL) seems like it might work, but I worry that (in the case of a very large table which is about to be partitioned into many folders) having to first combine it into some small number of partitions before doing the partitionBy(COL) is a bad idea.
Any suggestions are greatly appreciated!
You've got several options. In my code below I'll assume you want to write in parquet, but of course you can change that.
(1) df.repartition(numPartitions, *cols).write.partitionBy(*cols).parquet(writePath)
This will first use hash-based partitioning to ensure that a limited number of values from COL make their way into each partition. Depending on the value you choose for numPartitions, some partitions may be empty while others may be crowded with values, because several values of COL can hash to the same partition while other partitions receive none. Then, when you call partitionBy on the DataFrameWriter, each unique value in each partition will be placed in its own individual file.
Warning: this approach can lead to lopsided partition sizes and lopsided task execution times. This happens when values in your column are associated with many rows (e.g., a city column -- the file for New York City might have lots of rows), whereas other values are less numerous (e.g., values for small towns).
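If you want to see how lopsided the result is before writing, you can count rows per partition after the repartition step. A small sketch, assuming a DataFrame df with a column named COL and a hypothetical numPartitions value:

from pyspark.sql import functions as F

numPartitions = 50  # hypothetical value

repartitioned = df.repartition(numPartitions, "COL")

# spark_partition_id() reveals which hash partition each row landed in.
(repartitioned
    .groupBy(F.spark_partition_id().alias("partition_id"))
    .count()
    .orderBy("count", ascending=False)
    .show())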
(2) df.sort(sortCols).write.parquet(writePath)
This option works great when you want (1) the files you write to be of nearly equal sizes and (2) exact control over the number of files written. This approach first globally sorts your data and then finds splits that break it up into k evenly-sized partitions, where k is specified in the spark config spark.sql.shuffle.partitions. This means that all rows with the same value of your sort key are adjacent to each other, but sometimes they will span a split and end up in different files. Thus, if your use-case requires all rows with the same key to be in the same partition, don't use this approach.
There are two extra bonuses: (1) by sorting your data, its size on disk can often be reduced (e.g., sorting all events by user_id and then by time will lead to lots of repetition in column values, which aids compression), and (2) if you write to a file format that supports it (like Parquet), subsequent readers can read the data optimally by using predicate push-down, because the Parquet writer records the MIN and MAX values of each column in the metadata, allowing the reader to skip rows if the query asks for values outside of that (min, max) range.
Note that sorting in Spark is more expensive than just repartitioning and requires an extra stage. Behind the scenes Spark will first determine the splits in one stage, and then shuffle the data into those splits in another stage.
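A minimal sketch of option (2), assuming you want roughly 10 output files and that df has user_id and event_time columns (the column names and the SparkSession called spark are illustrative):

# k shuffle partitions == roughly k output files (one per non-empty partition).
spark.conf.set("spark.sql.shuffle.partitions", "10")

(df.sort("user_id", "event_time")  # global range sort; costs one extra stage
   .write
   .mode("overwrite")
   .parquet(writePath))

Rows with the same user_id will mostly land in the same file, but as noted above they can straddle a split boundary.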
(3) df.rdd.partitionBy(customPartitioner).toDF().write.parquet(writePath)
If you're using Spark on Scala, then you can write a custom partitioner, which can get over the annoying gotchas of the hash-based partitioner. This is not an option in pySpark, unfortunately. If you really want to write a custom partitioner in pySpark, I've found it is possible, albeit a bit awkward, by using rdd.repartitionAndSortWithinPartitions:
(df.rdd
   .keyBy(sort_key_function)  # convert each Row into a (key, row) pair
   .repartitionAndSortWithinPartitions(numPartitions=N_WRITE_PARTITIONS,
                                       partitionFunc=part_func)
   .values()  # drop the keys again
   .toDF()
   .write.parquet(writePath))
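For completeness, sort_key_function, part_func and N_WRITE_PARTITIONS above are whatever suits your data; hypothetical definitions (keying on an illustrative event_date column) might look like this:

from pyspark.rdd import portable_hash

N_WRITE_PARTITIONS = 10  # hypothetical target number of output partitions

def sort_key_function(row):
    # Key each Row by the column you want to partition and sort on.
    return row["event_date"]

def part_func(key):
    # Map a key to a partition index in [0, N_WRITE_PARTITIONS).
    return portable_hash(key) % N_WRITE_PARTITIONS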
Maybe someone else knows an easier way to use a custom partitioner on a dataframe in pyspark?
df.repartition(COL).write.partitionBy(COL)
will write out one file per partition. This will not work well if one of your partitions contains a lot of data. e.g. if one partition contains 100GB of data, Spark will try to write out a 100GB file and your job will probably blow up.
df.repartition(2, COL).write.partitionBy(COL)
will write out a maximum of two files per partition, as described in this answer. This approach works well for datasets that are not very skewed (because the optimal number of files per partition is roughly the same for all partitions).
This answer explains how to write out more files for the partitions that have a lot of data and fewer files for the small partitions.
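The approach described there can be sketched roughly like this: give each value of COL a number of "salt" buckets proportional to its row count, then repartition on (COL, salt) so that large partitions get many files and small ones get few. The column names, the salt logic, and rowsPerFile below are illustrative assumptions, not the exact code from that answer:

from pyspark.sql import functions as F

rowsPerFile = 1_000_000  # hypothetical target number of rows per output file

# Work out how many files each value of COL deserves.
counts = (df.groupBy("COL")
            .agg(F.count("*").alias("n"))
            .withColumn("n_files",
                        F.greatest(F.lit(1), F.ceil(F.col("n") / rowsPerFile))))

# Give every row a random salt in [0, n_files) for its COL value.
salted = (df.join(counts.select("COL", "n_files"), on="COL")
            .withColumn("salt", (F.rand() * F.col("n_files")).cast("int")))

(salted.repartition("COL", "salt")
       .drop("n_files", "salt")
       .write.partitionBy("COL")
       .parquet(writePath))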
I have a log table in Cassandra, and I want to get the row count of the table.
First I tried select count(*) from log, but it is very, very slow.
Then I wanted to use the counter type, but that's where the problem starts: my table has a TTL, so every row is kept for only an hour, and that makes using the counter type very difficult.
Cassandra isn't efficient for doing table scan operations. It is good at ingesting high volumes of data and then accessing small slices of that data rather than the whole table.
So if you want to count keys without using a counter, you need to break the table into chunks of data that are small enough to be processed quickly. For example if you want to use count(*), you should only use it on a single partition, and keep the partition size below about 100,000 rows.
In your case you might want to partition your data by hour (or something small like 5 minute intervals if you insert a lot of log lines per second).
Be careful with a TTL of an hour if you are inserting a lot of data continuously, since it can create a lot of tombstones. To avoid building up tombstones you should delete each hour partition after the hour has passed; a partition-level delete writes a single tombstone instead of one per expired cell.
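Putting those suggestions together, a sketch with the Python driver might look like this; the keyspace, table, and column names are all hypothetical:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical keyspace

# Hypothetical schema: one partition per hour bucket, rows expiring after an hour.
session.execute("""
    CREATE TABLE IF NOT EXISTS log (
        hour_bucket text,
        event_time  timeuuid,
        message     text,
        PRIMARY KEY (hour_bucket, event_time)
    ) WITH default_time_to_live = 3600
""")

# count(*) restricted to a single, reasonably sized partition stays fast.
row = session.execute(
    "SELECT count(*) AS n FROM log WHERE hour_bucket = %s",
    ["2024-05-01-13"]).one()
print(row.n)

# Once an hour has fully passed, drop the whole partition in one go.
session.execute("DELETE FROM log WHERE hour_bucket = %s", ["2024-05-01-12"])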
We are evaluating whether we can migrate from SQL Server to Cassandra for OLAP. As per the internal storage structure we can have wide rows. We almost always need to access data by date, and we often need to access data within a date range, as we have financial data. If we use the date as the partition key to support filtering by date, we end up with fewer rows, each with a huge number of columns.
Will it hamper performance if we have millions of columns for a single row key in the future, given that we process millions of transactions every day?
Do we need to change the access pattern to have more rows with fewer columns per row?
We need some performance insight to decide which direction to take.
Using wide rows is typically fine with Cassandra; there are, however, a few things to consider:
Ensure that you don't reach the 2 billion column limit in any case
The whole wide row is stored on the same node: it needs to fit on the disk. Also, if some dates are accessed more frequently than other dates (e.g. today), you can create hotspots on the node that stores the data for that day.
Very wide rows can affect performance however: Aaron Morton from The Last Pickle has an interesting article about this: http://thelastpickle.com/blog/2011/07/04/Cassandra-Query-Plans.html
It is somewhat old, but I believe that the concepts are still valid.
For a good table design decision one needs to know all typical filter conditions. If you have any other fields you typically filter for as an exact match, you could add them to the partition key as well.
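For illustration, a sketch of such a table with the Python driver, assuming transactions are always looked up by date plus an account identifier (the keyspace, table, and column names are hypothetical):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("finance")  # hypothetical keyspace

# A composite partition key (txn_date, account_id) splits each day's data
# across many partitions instead of one huge wide row per day.
session.execute("""
    CREATE TABLE IF NOT EXISTS transactions (
        txn_date   text,
        account_id text,
        txn_time   timestamp,
        amount     decimal,
        PRIMARY KEY ((txn_date, account_id), txn_time)
    )
""")

# Queries must then supply both partition key components.
rows = session.execute(
    "SELECT * FROM transactions WHERE txn_date = %s AND account_id = %s",
    ["20240501", "ACC-42"])

This trades one very wide row per day for many smaller partitions, at the cost of having to know the account when you query.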
I am looking at creating a Cassandra timeseries database for storing millions of series of daily data that can potentially have altogether up to 100B data points.
I looked at this article:
http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
This design is very sound. So essentially I can put the daily timestamps as columns and if necessary shard the columns by appending the day to the row.
Two questions I have:
I am looking at storing up to 20,000 timestamped (daily) columns. Is it even necessary to shard rows by, for example, year with this number of columns? Is there any advantage or disadvantage to sharding rows to reduce the number of columns down to 365 per year?
Another idea I have is, rather than sharding columns by row, to create a column family for each year. This way, when accessing data from multiple years, I would have to query multiple column families rather than one, and join the results on the client side. Would this approach speed things up or rather slow everything down?
If you are ever going to manage huge quantities of writes there is one problem with your approach.
Writing always to 1 key means that all writes for that key will go to one node. Basically you will use one node per day out of your cluster, so you might as well have one huge instance of Cassandra rather than bother setting up a cluster.
If your write frequency gets really high you might bring down the nodes responsible for that day/key.
My advice is to bucket each day into multiple rows that are used simultaneously. Bucketing by time alone could be dangerous, as a sudden surge within one bucket could bring everything down.
You could create your bucket (row key) like this:
[ROW_BASE_NAME] + [DAY] + someHashFunction(timestamp) % 10
[ROW_BASE_NAME] + [DAY] + random.nextInt(10)
[ROW_BASE_NAME] + [DAY] + nextbucket <--- that is if you have a secure way to rotate the bucket yourself
There are many ways to do it. You could also use some element of the column being saved to derive the bucket.
But I think it is important to do this in order to leverage the whole Cassandra cluster at all times.
My answer is only valid for write-heavy applications, since to read all the data back and reconstitute the whole timeline for a day you will have to use a multi_get (whole-row reads across multiple keys).
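A rough sketch of the random-bucket variant in Python, assuming a hypothetical series table and the cassandra-driver; the base name, bucket count, and schema are all illustrative:

import random
from datetime import datetime, timezone
from cassandra.cluster import Cluster

NUM_BUCKETS = 10
session = Cluster(["127.0.0.1"]).connect("metrics")  # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS series (
        row_key text,
        ts      timestamp,
        value   double,
        PRIMARY KEY (row_key, ts)
    )
""")

def bucketed_key(base_name, day):
    # [ROW_BASE_NAME] + [DAY] + random bucket, spreading one day over 10 rows.
    return f"{base_name}:{day}:{random.randrange(NUM_BUCKETS)}"

def write_point(base_name, value):
    now = datetime.now(timezone.utc)
    session.execute(
        "INSERT INTO series (row_key, ts, value) VALUES (%s, %s, %s)",
        [bucketed_key(base_name, now.strftime("%Y%m%d")), now, value])

def read_day(base_name, day):
    # Reading back means hitting every bucket and merging (the multi-get).
    rows = []
    for bucket in range(NUM_BUCKETS):
        key = f"{base_name}:{day}:{bucket}"
        rows.extend(session.execute(
            "SELECT ts, value FROM series WHERE row_key = %s", [key]))
    return sorted(rows, key=lambda r: r.ts)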