Best approach to converting and storing JSON objects in Apache Cassandra

I have an API service where users push arbitrary JSON objects; some of these objects are nested and others are flat. I am facing some challenges in converting the incoming JSON objects into something more suitable for storage in Cassandra. Advice on how to handle this is highly appreciated.
Thanks.

As suggested by @omnibear, you should take a look at the linked answer. Basically, what you need to figure out before deciding on a solution is the answer to the question: "how do you process each JSON after storing it?". Some possible scenarios:
if you process it as it is, then you can store it as a blob
if you have situations where you need to modify a predefined subset of the JSON's attributes, then you might want to store those as columns and the rest of the JSON as a blob (a rough sketch of this follows)
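For example, the second scenario might look something like this with the DataStax Python driver (the keyspace, table, and attribute names below are just placeholders, and a blob column would work the same way as the text payload used here):

import json
from cassandra.cluster import Cluster  # DataStax Python driver (pip install cassandra-driver)

session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # placeholder keyspace

# Promote the attributes you need to query or update into real columns,
# and keep the rest of the arbitrary JSON as an opaque payload.
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id uuid PRIMARY KEY,
        user_id text,
        event_type text,
        payload text
    )
""")

incoming = {"user_id": "alan", "event_type": "click", "extra": {"nested": [1, 2, 3]}}
session.execute(
    "INSERT INTO events (id, user_id, event_type, payload) VALUES (uuid(), %s, %s, %s)",
    (incoming["user_id"], incoming["event_type"], json.dumps(incoming)),
)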

Related

How to parse big XML in a Google Cloud Function efficiently?

I have to extract data from XML files that are several hundred MB in size in a Google Cloud Function, and I was wondering if there are any best practices?
Since I am used to Node.js I was looking at some popular libraries like fast-xml-parser, but it seems cumbersome if you only want specific data from a huge XML. I am also not sure if there are any performance issues when the XML is too big. Overall this does not feel like the best solution for parsing and extracting data from huge XMLs.
Then I was wondering if I could use BigQuery for this task, where I simply convert the XML to JSON and throw it into a dataset where I can then use a query to retrieve the data I want.
Another solution could be to use Python for the job, since it is good at parsing and extracting data from XML, so even though I have no experience in Python I was wondering if this path could still be the best solution?
If anything above does not make sense or if one solution is preferable to the other or if anyone can share any insights I would highly appreciate it!
I suggest you check this article, in which they discuss how to load XML data into BigQuery using Python Dataflow. I think that this approach may work in your situation.
Basically what they suggest is:
Parse the XML into a Python dictionary using the xmltodict package.
Specify a schema for the output table in BigQuery.
Use a Beam pipeline to take an XML file and use it to populate a BigQuery table (a rough sketch follows).
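As a rough sketch of that approach (the element names, table, and schema below are made up, and a real job would read the files from GCS and set project/temp_location in the pipeline options):

import apache_beam as beam
import xmltodict
from apache_beam.options.pipeline_options import PipelineOptions

TABLE = "my-project:my_dataset.records"       # placeholder destination table
SCHEMA = "id:STRING,name:STRING,value:FLOAT"  # placeholder schema

def xml_to_rows(xml_string):
    # xmltodict turns the document into nested dicts/lists; "@id" is how it exposes XML attributes.
    doc = xmltodict.parse(xml_string)
    for item in doc["catalog"]["item"]:       # placeholder element names
        yield {"id": item["@id"], "name": item["name"], "value": float(item["value"])}

with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | "Read" >> beam.Create([open("data.xml").read()])  # read from GCS instead in a real pipeline
     | "Parse" >> beam.FlatMap(xml_to_rows)
     | "Write" >> beam.io.WriteToBigQuery(
           TABLE, schema=SCHEMA,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))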

Couchbase retrieving relational docs in nodeJS

I am still debating which way to go and possibly store certain information in its own doc. So, for example, a customer can have addresses; each address would be its own doc, and the customer doc would hold an array of reference keys stored under addresses. The benefit would be that I could update these docs simply based on the key value, versus having to get the customer doc first, find the array index of the address, and then either modify the whole doc or use the sub-document API to replace the content of the array at that index.
Where I am stuck is how to retrieve those referenced sub-docs. Is N1QL the only way to go, or does the KV API offer a way to do this short of retrieving the whole customer doc, looping through the address array, and retrieving all referenced docs that way? I know Ottoman offers something like that, but I am having an issue with the latest SDK (2.6) and Ottoman, as it's not very well maintained. So hopefully someone can share some insight into what the best way is and why.
If you want to rely on key/value, then you'll need to do the multiple lookups as you've described. I'm not very familiar with Ottoman: it might do this for you, but behind the scenes it will still be multiple key/value operations and/or N1QL.
With N1QL, you can perform JOINs, but again, behind the scenes it's going to eventually be pulling documents out by key/value. It just does those extra steps for you. Direct key/value is always going to be the fastest route.
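The question is about the Node SDK, but purely to illustrate the multiple-lookup pattern, here is roughly what it looks like with the Couchbase Python SDK (the bucket and key names are made up; the Node calls are analogous):

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions  # SDK 4.x module layout; older SDKs differ

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
collection = cluster.bucket("app").default_collection()

# 1. Fetch the customer doc, which holds an array of address document keys.
customer = collection.get("customer::1001").content_as[dict]

# 2. Fetch each referenced address doc by key -- the multiple lookups described above.
addresses = [collection.get(key).content_as[dict]
             for key in customer.get("addresses", [])]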
If you are still in the process of deciding whether to split the data amongst multiple documents or "denormalize" the data into a single doc, one thing you should think about is how often you're going to access customer and addresses together and how often you're going to access them separately. If you're reading/writing customer and address together often, consider putting them in one document. Otherwise, consider putting them in multiple documents.
The third option is to store it both places, or rather "cache" the address data in the customer document. This is tricky, because it could get out of sync if you're not careful. So make sure it's worth it before you go down that road.

Efficient way to store a JSON string in a Cassandra column?

Cassandra newbie question. I'm collecting some data from a social networking site using REST calls. So I end up with the data coming back in JSON format.
The JSON is only one of the columns in my table. I'm trying to figure out what the "best practice" is for storing the JSON string.
First I thought of using the map type, but the JSON contains a mix of strings, numerical types, etc. It doesn't seem like I can declare wildcard types for the map key/value. The JSON string can be quite large, probably over 10KB in size. I could potentially store it as a string, but it seems like that would be inefficient. I would assume this is a common task, so I'm sure there are some general guidelines for how to do this.
I know Cassandra has native support for JSON, but from what I understand, that's mostly used when the entire JSON map matches 1-1 with the database schema. That's not the case for me. The schema has a bunch of columns and the JSON string is just a sort of "payload". Is it better to store the JSON string as a blob or as text? BTW, the Cassandra version is 2.1.5.
Any hints appreciated. Thanks in advance.
In the Cassandra storage engine there's really not a big difference between a blob and text, since Cassandra essentially stores text as blobs anyway. And yes, the "native" JSON support you speak of only applies when your data model matches your JSON model, and it's only available in Cassandra 2.2+.
I would store it as a text type, and you shouldn't have to implement anything to compress your JSON data when sending it (or to handle decompression), since Cassandra's binary protocol supports transport compression. Also make sure your table stores the data compressed with the same compression algorithm (I suggest LZ4 since it's the fastest algorithm implemented) to avoid compressing on every read request. Thus, if you configure the table to store data compressed and use transport compression, you don't have to implement either yourself.
You didn't say which client driver you're using, but here's the documentation on how to set up transport compression for the DataStax Java driver.
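The linked docs are for the Java driver; purely as an illustration, the two settings look roughly like this with the DataStax Python driver against Cassandra 2.1 (keyspace and table names are placeholders):

from cassandra.cluster import Cluster  # DataStax Python driver

# Transport compression between the driver and the cluster
# (using "lz4" requires the lz4 Python package to be installed).
cluster = Cluster(["127.0.0.1"], compression="lz4")
session = cluster.connect("my_keyspace")  # placeholder keyspace

# Table-level compression; Cassandra 2.1 uses the 'sstable_compression' option name.
session.execute("""
    CREATE TABLE IF NOT EXISTS posts (
        id uuid PRIMARY KEY,
        fetched_at timestamp,
        raw_json text
    ) WITH compression = {'sstable_compression': 'LZ4Compressor'}
""")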
It depends on how you want to query your JSON. There are 3 possible strategies:
Store as a string
Store as a compressed blob
Store as a blob
Option 1 has the advantage of being human readable when you query your data on the command line with cqlsh, or when you want to debug data directly live. The drawback is the size of this JSON column (10KB).
Option 2 has the advantage of keeping the JSON payload small, because text compresses at a pretty decent ratio. The drawbacks are: a. you need to take care of compression/decompression client side (a minimal sketch follows below) and b. it's not directly human readable.
Option 3 has the drawbacks of option 1 (size) and option 2 (not human readable).
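A minimal sketch of option 2's client-side step in Python, using the standard-library zlib module (an LZ4 codec would be used the same way):

import json
import zlib

doc = {"id": 42, "tags": ["a", "b"], "body": "..."}  # stand-in for the ~10KB JSON payload

compressed = zlib.compress(json.dumps(doc).encode("utf-8"))  # store this in a blob column
restored = json.loads(zlib.decompress(compressed).decode("utf-8"))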

Redis performance, store JSON object as a string

I need to save a User model, something like:
{ "nickname": "alan",
"email": ...,
"password":...,
...} // and a couple of other fields
Today, I use a Set: users
In this Set, I have a member like user:alan
In this member I have the hash above
This is working fine, but I was just wondering if, instead of the above approach, it could make sense to use the following one:
Still use users Set (to easily get the users (members) list)
In this set only use a key / value storage like:
key: alan
value : the stringify version of the above user hash
Retrieving a record would then be easier (I would then just have to parse it with JSON.parse).
I'm very new to Redis and I am not sure which approach would be best. What do you think?
You can use the Redis hash data structure to store your JSON object's fields and values. For example, your "users" set can still be used as a list that stores all users, and each individual JSON object can be stored in a hash like this:
db.hmset("user:id", jsonObj); // pass the object itself so each field becomes a hash field
Now you can get all users by key, or only a specific one (and get/set only the specific fields/values you need). Also, these two questions are probably related to your scenario.
EDIT: (sorry I didn't realize that we talked about this earlier)
Retrieving a record would then be easier (I would then just have to parse it with JSON.parse).
This is true, but with the hash data structure you can get/set only the field/value you need to work with. Retrieving the entire JSON object can hurt performance (depending on how often you do it) if you only want to change part of the object (another downside is that you will need to stringify/parse the object every time).
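To make the trade-off concrete, a small sketch with the Python redis-py client (key and field names are just examples; the Node calls are analogous):

import json
import redis

r = redis.Redis()
user = {"nickname": "alan", "email": "alan@example.com", "password": "..."}

# Option A: hash -- read or update a single field without touching the rest.
r.hset("user:alan", mapping=user)
email = r.hget("user:alan", "email")
r.hset("user:alan", "email", "new@example.com")

# Option B: stringified JSON -- one value, nesting and types preserved,
# but every change means get + parse + modify + re-stringify the whole object.
r.set("user:alan:json", json.dumps(user))
user_back = json.loads(r.get("user:alan:json"))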
One additional merit of JSON over hashes is that it maintains types: with a hash, 123.3 becomes the string "123.3", and depending on the library, Null/None can accidentally be cast to "null".
Both are a bit tedious, as they require writing a transformer for extracting the strings and converting them back to their expected types.
For space/memory consumption, I've started leaning towards storing just the values as a JSON list ["my_type_version", 123.5, null, ...] so I don't pay the overhead of N * sum(len(JSON key names)), which in my case was +60% of Redis's used memory footprint.
Bear in mind: hashes cannot store nested objects; JSON can.
Truthfully, either way works fine. The way you store it is a design decision you will need to make. It depends on how you want to retrieve the user information, etc.
In terms of performance, storing the JSON encoded version of the user object will use less memory and take less time for storage/retrieval. That is, JSON parsing is probably faster than retrieving each field from Redis. And, even if not, it is probably more memory efficient. The difference in performance is probably minimal anyway.

Arrays in Azure Table Storage Entities (Non-Byte)

I'm looking to store arrays in Azure Table entities. At present, the only type of array supported natively is byte-array, limited to 64k length. The size is enough, but I'd like to store arrays of longs, doubles and timestamps in an entity.
I can obviously cast multiple bytes to the requested type myself, but I was wondering if there's any best-practice to achieve that.
To clarify, these are fixed length arrays (e.g. 1000 cells) associated with a single key.
I have written an Azure Table storage client, called Lucifure Stash, which supports arrays, enums, large data, serialization, public and private properties and fields, and more.
You can get it at https://github.com/hocho/LucifureStash
I've been trying to think of a nice way to do this other than the method you've already mentioned, and I'm at a loss. The simplest solution I can come up with is to take the array, binary-serialize it, and store it in a binary array property.
Other options I've come up with but dismissed:
If storing it natively is important, you could keep this information in another child table (I know Azure Tables don't technically have relationships, but that doesn't mean you can't represent this type of thing). The downside of this being that it will be considerably slower than your original.
Take the array, XML serialize it and store it in a string property. This would mean that you could see the contents of your array when using 3rd party data explorer tools and you could run (inefficient) queries that look for an exact match on the contents of the array.
Use Lokad Cloud fat entities to store your data. This essentially takes your whole object, binary-serializes it, and splits the results into 64KB blocks across the properties of the table entity. This does solve problems like the one you're experiencing, but you will only be able to access your data using tools that support this framework.
If you have just a key-value collection to store, then you can also check out Azure BLOBs. They can rather efficiently store arrays of up to 25M time-value points per single blob (with random access within the dataset).
If you choose to store your object in blob storage and need more than one "key" to get it, you can just create an Azure table (or two, or n) where you store the key you want to look up and the reference to the exact blob item.
Why don't you store the values as CSV strings?
You could serialize your array as a JSON string using the .NET JavaScript serializer:
http://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer.aspx
This class has a "MaxJsonLength" property you could use to ensure your arrays didn't exceed 64K when you were serializing them. And you can use the same class to deserialize your stored objects.
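The linked class is .NET-specific; just to illustrate the same idea, here is a sketch with the Python azure-data-tables client (table, key, and property names are made up):

import json
from azure.data.tables import TableClient  # pip install azure-data-tables

client = TableClient.from_connection_string(conn_str="...", table_name="Timeseries")

values = [1.5, 2.25, 3.125]  # e.g. a fixed-length array of ~1000 doubles

entity = {
    "PartitionKey": "sensor-1",
    "RowKey": "2020-01-01",
    "Values": json.dumps(values),  # stored as a string property; keep it under the size limit
}
client.upsert_entity(entity)

stored = client.get_entity(partition_key="sensor-1", row_key="2020-01-01")
restored = json.loads(stored["Values"])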
