I have a task to create a metadata table for my time-series Cassandra DB. This metadata table needs to store over 500 PDF files, each 5-10 MB in size.
I have thought of storing them as blobs. Is Cassandra able to do that?
Cassandra isn't a perfect fit for such blobs, and DataStax recommends keeping them smaller than 1 MB for best performance.
But - just try it for yourself and do some testing. Problems arise when partitions become large and get updated, because the coordinator then has a lot of work to do stitching them together.
A simple way to go: store each blob separately as a uuid/value pair in its own table and keep only the uuid alongside your data. When the blob is updated, insert a new one under a new uuid and update your records. With this trick you never have different (and possibly large) versions of the same blob in one partition, and you won't suffer as much from performance problems. I think I read that Walmart did this successfully with images that were partly around 10 MB as well as smaller ones.
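A minimal sketch of that pattern, assuming the DataStax Python driver and made-up names (metadata_ks keyspace, pdf_blobs table):

import uuid
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])              # hypothetical contact point
session = cluster.connect("metadata_ks")      # hypothetical keyspace

# Blobs live in their own table, keyed only by a surrogate uuid.
session.execute("""
    CREATE TABLE IF NOT EXISTS pdf_blobs (
        blob_id uuid PRIMARY KEY,
        content blob
    )""")

def store_pdf(path):
    # Insert the file under a fresh uuid and return the key
    # to embed in the metadata row.
    blob_id = uuid.uuid4()
    with open(path, "rb") as f:
        session.execute(
            "INSERT INTO pdf_blobs (blob_id, content) VALUES (%s, %s)",
            (blob_id, f.read()))
    return blob_id

# On update: call store_pdf() again for the new version, overwrite the
# blob_id column in the metadata row, and delete the old blob lazily.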
Just try it out - if you have Cassandra already.
If not, you might have a look at Ceph or something similar - but that needs its own deployment.
You can serialize the file and store it as a blob. The cost is deserialization when reading the file back; there are many serialization/deserialization libraries that do this efficiently. Another way is to do what @jasim waheed suggested. However, that will result in network I/O, so you can decide where you want to pay the cost.
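For the first option, a small sketch assuming the DataStax Python driver, a hypothetical documents table, and pickle as the serialization library:

import pickle
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("metadata_ks")      # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        doc_id text PRIMARY KEY,
        body blob
    )""")

# Serialize on the way in ...
record = {"prod_id": "p-42", "pages": 17, "tags": ["invoice", "2023"]}
session.execute(
    "INSERT INTO documents (doc_id, body) VALUES (%s, %s)",
    ("p-42", pickle.dumps(record)))

# ... and pay the deserialization cost on the way out.
row = session.execute(
    "SELECT body FROM documents WHERE doc_id = %s", ("p-42",)).one()
restored = pickle.loads(row.body)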
Please help me understand how I can write data to the same place it is being read from, without any issues, using EMR and S3.
So I need to read partitioned data, find the old data, delete it, and write the new data back. I'm thinking about 2 ways here:
Read all the data, apply a filter, and write the data back with SaveMode.Overwrite. I see one major issue here - before writing, Spark deletes the files in S3, so if the EMR cluster goes down for some reason after the deletion but before the write, all data is lost. I can use dynamic partition overwrite, but in that situation I would still lose the data of one partition.
Same as above, but write to a temp directory, then delete the original and move everything from temp to the original location. But since this is S3 storage there is no move operation, so all files will be copied, which can be a bit pricey (I'm going to work with 200 GB of data).
Is there any other way, or am I wrong about how Spark works?
You are not wrong. The process of deleting a record from a table on EMR/Hadoop is painful in the ways you describe and more. It gets messier with failed jobs, small files, partition swapping, slow metadata operations...
There are several formats and file protocols that add transactional capability on top of a table stored in S3. The open Delta Lake format (https://delta.io/) supports transactional deletes, updates, and merge/upsert, and does so very well. You can read & delete (say, for GDPR purposes) like you're describing, and you'll have a transaction log to track what you've done.
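A rough pyspark sketch of that flow; the bucket paths, the date filter, and the builder settings for wiring in delta-spark manually are all assumptions, not something from the question:

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (SparkSession.builder
         .appName("expire-old-data")
         # Needed when delta-spark is added by hand to a plain Spark/EMR job.
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

path = "s3://my-bucket/events"                     # hypothetical table location
# Assumes the data at `path` has already been written in Delta format.
table = DeltaTable.forPath(spark, path)

# Transactional delete: only the affected files are rewritten, and the
# transaction log keeps the previous version until you VACUUM it away.
table.delete("event_date < '2019-01-01'")

# Append the replacement data; readers never see a half-written state.
new_df = spark.read.parquet("s3://my-bucket/staging/new-batch")   # hypothetical input
new_df.write.format("delta").mode("append").save(path)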
On point 2, as long as you have a reasonable # of files, your costs should be modest, with storage charges at ~$23/TB/mo. However, if you end up with too many small files, the API costs of listing and fetching them can add up quickly. Managed Delta (from Databricks) will help speed up many of the operations on your tables through compaction, data caching, data skipping, and Z-ordering.
Disclaimer, I work for Databricks....
I tried to store audio/video files in the database.
Is Cassandra able to do that? If yes, how do we store the media files in Cassandra?
How about storing the metadata and the original audio files in Cassandra?
Yes, Cassandra is definitely able to store files in its database, as "blobs", strings of bytes.
However, it is not ideal for this use case:
First, you are limited in blob size. The hard limit is 2 GB, so large videos are out of the question. But worse, the documentation from DataStax (the commercial company behind Cassandra's development) suggests that even 1 MB (!) is too large - see https://docs.datastax.com/en/cql/3.1/cql/cql_reference/blob_r.html.
One of the reasons why huge blobs are a problem is that Cassandra offers no API for fetching parts of them - you need to read (and write) a blob in one CQL operation, which opens up all sorts of problems. So if you want to store large files in Cassandra, you'll probably want to split them up into many small blobs - not one large blob.
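A minimal sketch of that splitting approach, assuming the DataStax Python driver and a hypothetical file_chunks table; the 512 KB chunk size is an arbitrary choice under the ~1 MB guidance:

import uuid
from cassandra.cluster import Cluster

CHUNK_SIZE = 512 * 1024                       # 512 KB pieces

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("media")            # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS file_chunks (
        file_id uuid,
        chunk_no int,
        data blob,
        PRIMARY KEY (file_id, chunk_no)
    )""")

insert = session.prepare(
    "INSERT INTO file_chunks (file_id, chunk_no, data) VALUES (?, ?, ?)")

def store_file(path):
    # Write the file as a sequence of small blobs under one file_id.
    file_id = uuid.uuid4()
    with open(path, "rb") as f:
        chunk_no = 0
        while True:
            piece = f.read(CHUNK_SIZE)
            if not piece:
                break
            session.execute(insert, (file_id, chunk_no, piece))
            chunk_no += 1
    return file_id

def read_file(file_id):
    # Clustering on chunk_no returns the pieces in order for reassembly.
    rows = session.execute(
        "SELECT data FROM file_chunks WHERE file_id = %s", (file_id,))
    return b"".join(row.data for row in rows)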
The next problem is that some of Cassandra's implementation is inefficient when the database contains files (even if split up into a bunch of smaller blobs). One of the problems is the compaction algorithm, which ends up copying all the data over and over (a logarithmic number of times) on disk; an implementation optimized for storing files would keep the file data and the metadata separately, and only "compact" the metadata. Unfortunately, neither Cassandra nor Scylla implements such a file format yet.
All-in-all, you're probably better off storing your metadata in Cassandra but the actual file content in a different object-store implementation.
I'm a newbie to Cassandra. I'm trying to store multimedia (photo, video, audio) files in Cassandra using blobs. How do I do it? Is there any other alternative to do the same?
Thanks in advance.
It really depends on how large the files are, but large objects and files are more easily stored in MongoDB. With Cassandra you could split the files up into chunks of a smaller size and make a file correspond to a row, with the chunks as column values.
How large are your files? Blobs in Cassandra can in theory be 2 GB large - but in the real world blobs should be less than a few MB for performance reasons.
You can of course chunk your files up into smaller pieces and reassemble them as needed (ideal while streaming data).
But you can, and probably should, go polyglot - C* for the metadata and some object store such as AWS S3 (or RADOS on Ceph for self-hosting) for the bulk data, with just the links stored in C*.
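A sketch of that polyglot setup with S3, assuming boto3 and the DataStax Python driver; the bucket, keyspace, and table names are made up:

import uuid
import boto3
from cassandra.cluster import Cluster

s3 = boto3.client("s3")
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("media")            # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS media_meta (
        media_id uuid PRIMARY KEY,
        filename text,
        content_type text,
        s3_url text
    )""")

def store_media(path, filename, content_type, bucket="my-media-bucket"):
    media_id = uuid.uuid4()
    key = "media/{}".format(media_id)
    # Bulk bytes go to the object store ...
    s3.upload_file(path, bucket, key)
    # ... and only the metadata plus a link is kept in Cassandra.
    session.execute(
        "INSERT INTO media_meta (media_id, filename, content_type, s3_url) "
        "VALUES (%s, %s, %s, %s)",
        (media_id, filename, content_type, "s3://{}/{}".format(bucket, key)))
    return media_id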
I want to store and retrieve values from Cassandra which ranges from 50MB to 100MB.
As per documentation, Cassandra works well when the column value size is less than 10MB. Refer here
My table is as below. Is there a different approach to this ?
CREATE TABLE analysis (
    prod_id text,
    analyzed_time timestamp,
    analysis text,
    PRIMARY KEY (prod_id, analyzed_time)
) WITH CLUSTERING ORDER BY (analyzed_time DESC);
In my own experience, although in theory Cassandra can handle large blobs, in practice it may be really painful. On one of my past projects we stored protobuf blobs in C* ranging from 3 KB to 100 KB, but some of them (~0.001%) were up to 150 MB in size. This caused problems:
Write timeouts. By default C* has a 10s write timeout, which is really not enough for large blobs.
Read timeouts. The same issue with the read timeout, read repair, hinted handoff timeouts and so on. You have to debug all these possible failures and raise all these timeouts. C* has to read the whole heavy row from disk into RAM, which is slow.
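If you do end up tuning, here is a minimal client-side sketch assuming the DataStax Python driver; the matching server-side knobs live in cassandra.yaml:

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")      # hypothetical keyspace

# Client side: the Python driver gives up on a request after 10 s by default;
# large blob reads/writes need more headroom.
session.default_timeout = 60.0                # seconds

# Server side (cassandra.yaml) settings that commonly need raising for heavy rows:
#   read_request_timeout_in_ms, write_request_timeout_in_ms,
#   range_request_timeout_in_ms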
I personally suggest not using C* for large blobs as it's not very effective. There are alternatives:
Distributed filesystems like HDFS. Store a URL to the file in C* and the file contents in HDFS.
DSE (the commercial C* distro) has its own distributed FS called CFS, built on top of C*, which can handle large files well.
Rethink your schema so that rows are much lighter. It really depends on your current task (and there's not enough information in the original question about it), but a possible reworked schema is sketched below.
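As an illustration of the first and third alternatives, a hypothetical reworked schema where the heavy analysis payload lives outside Cassandra and each row only carries a pointer plus small summary fields (the column and keyspace names are made up):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("analytics")        # hypothetical keyspace

session.execute("""
    CREATE TABLE IF NOT EXISTS analysis (
        prod_id text,
        analyzed_time timestamp,
        analysis_size_bytes bigint,
        analysis_location text,      -- e.g. an hdfs:// or s3:// URL to the payload
        PRIMARY KEY (prod_id, analyzed_time)
    ) WITH CLUSTERING ORDER BY (analyzed_time DESC)""")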
Large values can be problematic, as the coordinator needs to buffer each row on the heap before returning it to the client to answer a query. There's no way to stream the analysis column's value.
Internally, Cassandra is also not optimized to handle such a use case very well, and you'll have to tweak a lot of settings to avoid problems such as those described by shutty.
I am building a simple HTTP service, that stores arbitrary binary objects. The service is backed by Cassandra. It is a simplified version of Amazon's S3. The system must withstand a heavy write load and should be highly available on the write and read path.
The stored data is kind of immutable. It can be deleted, but it cannot be updated. Therefore, data inconsistency is not an issue. The datastore must be able to efficiently expire old data.
The service uses Netflix's Astyanax library, which provides a recipe for storing (large) binary objects in Cassandra.
I see two solutions to tackle the problem, each with pros and cons. It is hard for me to estimate which way fits Cassandra better.
Single table with TTL
Astyanax automatically chunks large objects into small pieces and stores them in a single table. A TTL is assigned to each blob to expire it after a certain period of time. A compaction run removes blobs once their TTL has expired.
This solution works and is pretty straightforward to implement. I started with the SizeTieredCompactionStrategy, but I think DateTieredCompactionStrategy might be the better choice when dealing with TTL'd data.
My main concern is: can Cassandra's compaction keep up? Has anyone experience with a similar use case?
Sharding data by time
Another approach would be to shard the data by time. I could create a table for each day and store the chunks in that table. In this case I can drop the complete table to get rid of the expired data.
This solution requires a little more effort in the implementation, but simplifies and probably speeds up the deletion of expired data.
How performant is Cassandra in dropping a table?
The correct option for your scenario is DateTieredCompactionStrategy, with a TTL assigned to each blob.
Refer:
http://www.datastax.com/dev/blog/datetieredcompactionstrategy
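A sketch of what that could look like, issued here through the DataStax Python driver; the keyspace, table, chunk layout, and the 30-day TTL are assumptions:

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("objstore")         # hypothetical keyspace

# Every row carries the same TTL, so whole SSTables age out together and
# DateTieredCompactionStrategy can drop them cheaply at compaction time.
session.execute("""
    CREATE TABLE IF NOT EXISTS object_chunks (
        object_id uuid,
        chunk_no int,
        data blob,
        PRIMARY KEY (object_id, chunk_no)
    ) WITH compaction = {'class': 'DateTieredCompactionStrategy'}
      AND default_time_to_live = 2592000      -- 30 days
""")

# A per-write TTL can also be set explicitly on each chunk instead of
# (or on top of) the table default:
session.execute(
    "INSERT INTO object_chunks (object_id, chunk_no, data) "
    "VALUES (uuid(), 0, textAsBlob('payload')) USING TTL 2592000")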