SQL or Oracle Table structure in Redis - node.js

I am using Node.js and planning to use Redis to store data [the data will be in SQL or Oracle table format, with many fields like ID, Name, City, Marks, etc.].
I found that Redis stores only keys and values, with three data structures [list, set or sorted set].
Is it possible for me to store something like Table name [key name] : Details
with values like ID: 1, Name: john, Country: Russia,
ID: 2, Name: Rose, Country: US, etc.?
Is there any other data structure apart from list, set and sorted set in Redis?

Yes. See the docs.
http://redis.io/topics/data-types

You also have the Hash data structure.
Database tables are used to store entities. A loose definition of an entity is something that has a unique primary key. In Redis, entities are usually stored using the Hash data structure, where columns in the database become fields in the Hash, and the primary key becomes the key of the hash.
Database tables also store non-entities, such as relationships between entities. For example, a one-to-many relationship is typically implemented with a foreign key. In Redis, such relationships can be modeled with Sets, Lists or Sorted Sets.
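A minimal sketch of this table-to-Redis mapping, using the asker's ID/Name/Country columns. The key names and the `student` entity name are illustrative; in real code each `Map` operation below would be an HSET or SADD call through a Redis client, but a plain `Map` stands in for the keyspace here so the layout is visible.

```javascript
// A plain Map standing in for the Redis keyspace (illustrative sketch).
const redis = new Map();

function saveStudent(row) {
  // Entity rows become Hashes, keyed by "table:primaryKey" (HSET in real Redis).
  redis.set(`student:${row.id}`, { name: row.name, country: row.country });
  // A one-to-many relationship (country -> student IDs) becomes a Set (SADD).
  const key = `students:country:${row.country}`;
  if (!redis.has(key)) redis.set(key, new Set());
  redis.get(key).add(row.id);
}

saveStudent({ id: 1, name: "john", country: "Russia" });
saveStudent({ id: 2, name: "Rose", country: "US" });

console.log(redis.get("student:1").name); // john
```

Fetching one row is then a single HGETALL on `student:1`, and "all students in the US" is a set lookup rather than a table scan.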

Related

Azure Cosmos db Unique Key on collection

I am trying to create a unique key for a whole collection in Cosmos DB,
i.e. not unique per partition key.
I read this article, but it only covers unique keys per partition: https://learn.microsoft.com/en-us/azure/cosmos-db/unique-keys.
I Googled a lot but I can't find any result about a unique key on a collection. Is this even possible? And if it is, is there any documentation about it?
I think the official doc about Cosmos DB unique keys states this clearly.
I am trying to create a unique key for a whole collection in Cosmos DB.
Unique keys must be defined when the container is created, and the unique key is scoped to the partition key.
In the same collection it must be possible to store different objects without a username.
Sparse unique keys are not supported. If values for some unique paths are missing, they are treated as a special null value, which takes part in the uniqueness constraint.
If you do want to make the username field unique across the whole collection, across partitions, and even permit null values, I think you need to check uniqueness yourself before inserting documents into Cosmos DB. I suggest using pre-triggers to do the check.
Hope it helps.
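A hedged sketch of what that check-it-yourself logic looks like. The documents and the `username` field follow the question; the `canInsert` helper is made up for illustration. A Cosmos pre-trigger would run equivalent logic server-side against the collection instead of an in-memory array.

```javascript
// Existing documents in the collection (illustrative data; note one
// document legitimately has no username).
const existing = [
  { id: "1", username: "alice" },
  { id: "2", username: null },
];

// Allow repeated null/missing usernames, but reject a duplicate real value.
function canInsert(docs, candidate) {
  if (candidate.username == null) return true;
  return !docs.some((d) => d.username === candidate.username);
}

console.log(canInsert(existing, { id: "3", username: "alice" })); // false
console.log(canInsert(existing, { id: "4", username: null }));    // true
```

This sidesteps the built-in constraint's two limits from the answer: it is not scoped to a partition, and null values do not participate in uniqueness.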

Migrating MySQL data tables to Cloudant documents

We are planning to move from MySQL to Cloudant NoSQL. I want to understand what the best approach would be.
We have 5 different tables: Product (ProductId primary key), Issues (IssueId primary key, ProductId foreign key), Tags (TagId primary key, ProductId foreign key), Location (LocationId primary key, referenced by the location field in the Product table) and Policy (PolicyId primary key, IssueId foreign key).
We have thought of two approaches for maintaining documents in Cloudant:
Keep a separate document for each row, with a distinct document type per table (one document type per table, e.g. document types "product", "issues", "tag", "location", "policy").
Keep a separate document for each row, with all relations embedded in one document (all documents have type "product" only, maintaining all tags, issues [policies] and locations per product).
Which approach is better?
The answer really depends on the size and rate of growth of your data. In previous SQL->NoSQL migrations, I've used your second approach (I don't know your exact schema, so I'll guess):
{
  _id: "prod1",
  name: "My product",
  tags: [
    "red", "sport", "new"
  ],
  locations: [
    {
      location_id: "55",
      name: "London",
      latitude: 51.3,
      longitude: 0.1
    }
  ],
  issues: [
    {
      issue_id: "466",
      policy_id: "88",
      name: "issue name"
    }
  ]
}
This approach allows you to get almost everything about a product in a single Cloudant API call (GET /products/prod1). Such a call will give you all of your primary product data and what would have been joins in a SQL world - in this case arrays of things or arrays of objects.
You may still want another database of locations or policies because you may want to store extra information about those objects in separate collections, but you can store a sub-set of that data (e.g. the location's name and geo position) in the product document. This does mean duplicating some data from your reference "locations" collection inside each product, but leads to greater efficiency at query time (at the expense of making updates to the data more complicated).
It all depends about how you access the data. For speed and efficiency you want to be able retrieve the data you need to render a page in as few API calls as possible. If you keep everything in its own database, then you need to do the joins yourself, because Cloudant doesn't have joins. This would be inefficient because you would need an extra API call for each "join".
There is another way to manage "joins" in Cloudant, and this may be suitable if your secondary collections are large, e.g. if the number of locations/tags/issues would make the product document size too large.
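The migration step for the single-document approach can be sketched as below: assemble one denormalized product document from the relational rows. The row shapes and field names are assumptions based on the question's schema, not the asker's actual tables.

```javascript
// Relational-style source rows (illustrative data mirroring the question).
const products = [{ id: "prod1", name: "My product" }];
const tags = [
  { productId: "prod1", tag: "red" },
  { productId: "prod1", tag: "sport" },
];
const locations = [
  { productId: "prod1", locationId: "55", name: "London", latitude: 51.3, longitude: 0.1 },
];

// Build one denormalized Cloudant document per product.
function toDocument(p) {
  return {
    _id: p.id,
    name: p.name,
    tags: tags.filter((t) => t.productId === p.id).map((t) => t.tag),
    // Only a sub-set of each location is embedded; the full location
    // record can still live in its own database.
    locations: locations
      .filter((l) => l.productId === p.id)
      .map((l) => ({
        location_id: l.locationId,
        name: l.name,
        latitude: l.latitude,
        longitude: l.longitude,
      })),
  };
}

console.log(JSON.stringify(toDocument(products[0]), null, 2));
```

The filter-and-map steps are exactly the joins that would otherwise have to happen at read time, one extra API call each; doing them once at write time is the trade-off the answer describes.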

DYNAMOOSE-Set unique constraint in model level

How can I set a unique property in DynamoDB using the dynamoose node module, so that it helps in eliminating duplicate entries?
You can create a table whose schema uses the attribute you want to keep unique as the primary key. Or, to separate business logic from your schema design, you can use a content-based key that hashes the unique property using SHA256, and use the hash value as the partition key of your table.

Composite key as the partition key for azure table storage

I have a data model which has properties, say A, B, C, D ... G. This model has a composite key (A, B, C, D). I need to store entities of this data model in Azure Storage.
Should I concatenate (A+B+C+D) and store the result as the value of the partition key (for faster retrieval operations)?
What is the best practice for choosing the partition key/row key in such cases?
Should I concatenate (A+B+C+D) and store the result as the value of the partition key (for faster retrieval operations)?
As this official document mentioned about considering queries:
Knowing the queries that you will be using will allow you to determine which properties are important to consider for the PartitionKey. The properties that are used in the queries are candidates for the PartitionKey.
If the entity has more than two key properties, you could use a composite key of concatenated values.
What is the best practice to choose partition key/row key in such cases?
For better querying performance, you need to consider the properties used in your queries as candidates for the PartitionKey or RowKey. Here is a simple example to give you a better understanding of choosing the PK/RK:
There is a table called Product which has the following properties:
| ID | Name | CategoryID | SubCategoryID | DeliveryType | Price | Status | SalesRegion |
If queries are frequently based on CategoryID and SubCategoryID, we could combine them as CategoryID_SubCategoryID for the PartitionKey to quickly locate the specific partition and retrieve all the products within that category. For the RowKey, we could simply use ID for querying a specific product, or SalesRegion_Price_DeliveryType for filtering products in the order of SalesRegion, Price, DeliveryType.
Additionally, you could follow this tutorial about designing scalable and performant Azure Storage Table.
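A small sketch of composing those key strings for the Product table above. The property names and `_` separator follow the answer's example; note that a numeric property like Price should be zero-padded, because RowKey comparisons are lexicographic.

```javascript
// Build PartitionKey/RowKey for a product entity (illustrative sketch).
function productKeys(p) {
  return {
    partitionKey: `${p.categoryId}_${p.subCategoryId}`,
    // Zero-padded price so lexicographic RowKey order matches numeric order.
    rowKey: `${p.salesRegion}_${p.price}_${p.deliveryType}`,
  };
}

const keys = productKeys({
  categoryId: "C10",
  subCategoryId: "S3",
  salesRegion: "EU",
  price: "0099",
  deliveryType: "Express",
});
console.log(keys.partitionKey); // C10_S3
console.log(keys.rowKey);       // EU_0099_Express
```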
Check this out:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage
Looks like a good starting point.

Collection of embedded objects using Cassandra CQL

I am trying to put my domain model into Cassandra using CQL. Let's say I have a USER_FAVOURITES table. Each favourite has an ID as its PRIMARY KEY. I want to store a list of up to 10 records with multiple fields (field_name, field_location and so on) in order.
Is it a good idea to model the table like this:
CREATE TABLE USER_FAVOURITES (
  fav_id text PRIMARY KEY,
  field_name_list list<text>,
  field_location_list list<text>
);
An object would then be constructed from list items at matching indices, e.g.
Object(field_name_list[3], field_location_list[3]).
I always query favourites together. I may want to add an item at some position: start, end or middle.
Is this good practice? It doesn't look like it, but I am just not sure how else to group objects in this case, especially when I want to keep them in order by, for example, field_location, or by a more complex ordering rule.
I'd suggest the following structure:
CREATE TABLE USER_FAVOURITES (
  fav_id text PRIMARY KEY,
  favs map<int, blob>
);
This would allow you to access any item via its index. The value part of the map is a blob, as one can easily serialize the whole object into binary and deserialize it later.
My personal suggestion is not to lean too heavily on Cassandra collections, as they can bloat as the data grows. That said, the scenario above is entirely possible and there is no harm in doing it this way.
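The serialize/deserialize round trip implied by that schema can be sketched as below. JSON is used as the binary encoding for illustration; any serialization format works, and a plain `Map` stands in for the `map<int, blob>` column.

```javascript
// Serialize a favourite object to a binary blob and back (sketch).
function toBlob(obj) {
  return Buffer.from(JSON.stringify(obj));
}
function fromBlob(buf) {
  return JSON.parse(buf.toString());
}

const favs = new Map(); // stands in for the CQL map<int, blob> column
favs.set(3, toBlob({ field_name: "cafe", field_location: "London" }));

const item = fromBlob(favs.get(3));
console.log(item.field_location); // London
```

Because the map key is the index, inserting at the start or middle no longer requires shifting two parallel lists, which was the awkward part of the list<text> design in the question.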
