How do I make a column in a table the primary key with Persistent?

Here's my model
Test
    testID Int
    product Text
    firmware Text
    startDate Day
    estimatedFinishDate Day
    status Text
    UniqueStartDate startDate
    UniqueEstimatedFinishDate estimatedFinishDate
    UniqueTestID testID
I would like testID to be the primary key. How do I do that?

Like so, I believe:
Test id=testID
    product Text
    etc.
with persistent >= 0.6.1. (commit)
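Newer persistent releases (the 2.x series) also let you mark a field as a natural primary key with the Primary keyword inside the entity block. A minimal sketch, assuming persistent 2.x and its Template Haskell quasi-quoter (the module boilerplate and the exact extension list are illustrative and vary by version):
{-# LANGUAGE DataKinds, DerivingStrategies, FlexibleInstances, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses, QuasiQuotes,
             StandaloneDeriving, TemplateHaskell, TypeFamilies,
             UndecidableInstances #-}
module Model where

import Data.Text (Text)
import Data.Time (Day)
import Database.Persist.TH

-- testID becomes the primary key instead of an auto-generated id column.
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Test
    testID Int
    product Text
    firmware Text
    startDate Day
    estimatedFinishDate Day
    status Text
    Primary testID
|]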

Related

Update column value in Cassandra table if value exists

I have a Cassandra table as below
CREATE TABLE inventory(
    prodid varchar,
    loc varchar,
    qty float,
    PRIMARY KEY (prodid)
);
Requirement:
For the provided primary key, if no record exists in the table we need to insert, which is straightforward. But when a record already exists for that primary key, we need to update the qty column by adding the new value received to the value already in the table.
As per my understanding, I need to query the table first for the provided primary key, get the value of the qty column, add the new value received from the request to it, and then execute the update query with a lightweight transaction.
Ex: the table has, say, qty 10 for prodid = 1, and if I receive a new qty of 2 from the user (which is a delta), then I need to update qty to 12 for prodid = 1.
Is that logic correct, or is there a better way to design the table or handle the use case? Will this approach introduce latency issues under load, since we need to do a select query first and, if the data exists, update the column value with the new value? Please help.
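For reference, the read-then-write flow described above would look roughly like this in CQL; the numbers follow the example (existing qty 10, delta 2), and the IF clause is what makes the write a lightweight transaction:
-- Read the current quantity first (say it returns 10):
SELECT qty FROM inventory WHERE prodid = '1';
-- Write back old value + delta; the condition guards against a concurrent writer:
UPDATE inventory SET qty = 12 WHERE prodid = '1' IF qty = 10;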
You can change the qty column to static. This way you do not have to UPDATE the table, only INSERT. Because read-before-write updates are resource intensive, Cassandra treats an UPDATE statement as an INSERT statement (an upsert). So, your table definition should be:
CREATE TABLE inventory(
    prodid varchar,
    loc varchar,
    qty float static,
    PRIMARY KEY (prodid)
);
So you can use your business logic to calculate the new value of the qty column and run an INSERT statement, which in turn updates that column.
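A sketch of that write against the table above, with illustrative values (prodid '1', a hypothetical loc 'store-1', and a computed qty of 12):
-- Upserts the row for prodid '1' with the newly calculated quantity.
INSERT INTO inventory (prodid, loc, qty) VALUES ('1', 'store-1', 12);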
The other way is to use a counter column:
CREATE TABLE inventory(
    prodid varchar,
    loc varchar,
    qty counter,
    PRIMARY KEY (prodid, loc)
);
With this design you can just use an update query like the one below:
update inventory set qty = qty + <calculated Quantity> where prodid = '1' and loc = <loc>;
Notice that, in the second table design, all other columns have to be part of the primary key. In your case, that is easy and convenient.
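For the numbers in the question (existing qty 10, delta 2), the counter flow looks like this; the loc value is a hypothetical placeholder:
-- The first update creates the counter at 10:
update inventory set qty = qty + 10 where prodid = '1' and loc = 'store-1';
-- Applying the delta of 2:
update inventory set qty = qty + 2 where prodid = '1' and loc = 'store-1';
-- Reads back 12:
select qty from inventory where prodid = '1' and loc = 'store-1';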

SQLite: Autoincrementing a field in a table

I have a table PatientRegistration with columns
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    NAME VARCHAR,
    PHONE TEXT,
    AGE BLOB,
    TURNNUMBER INT,
    REG_DATE DATETIME
I have been having trouble automatically assigning TURNNUMBER: it should be set to 0 (zero) if there is no previous value, otherwise increased by +1 with respect to the date. The following works, but only in SQL Server (T-SQL):
DECLARE @turn TINYINT
SET @turn = ISNULL((SELECT TOP 1 pa.TURNNUMBER FROM PatientRegistration pa WHERE DAY(pa.Reg_Date) = @day ORDER BY pa.ID DESC), 0) + 1
How can I achieve this using SQLite, please?
Edit: this should happen each time a new patient is added.
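A minimal sketch of one way to do this directly in SQLite, computing TURNNUMBER with a COALESCE subquery inside the INSERT (the patient values are illustrative, and the per-day reset is an assumption based on the DAY(...) filter above):
-- First patient of the day gets 0, every later patient gets the day's current maximum + 1.
INSERT INTO PatientRegistration (NAME, PHONE, AGE, TURNNUMBER, REG_DATE)
VALUES (
    'Jane Doe',        -- illustrative values
    '555-0100',
    34,
    COALESCE((SELECT MAX(TURNNUMBER)
              FROM PatientRegistration
              WHERE date(REG_DATE) = date('now')), -1) + 1,
    datetime('now')
);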

How to model word search in Cassandra

My model stores word searches selected from checkboxes, and it must support updating the word search and its status, plus a fake (soft) delete. My old model sets the pk to a uuid (the id of the word search) and puts a secondary index on status (enable, disable, deleted).
But I don't want to set an index on the status column (I think it is very bad to index a frequently updated column), and I don't want to change databases.
Is there a better way to model this?
Sorry for my English grammar.
You should not create an index on a very-low-cardinality column like status.
Avoid very low cardinality index e.g. index where the number of distinct values is very low. A good example is an index on the gender of an user. On each node, the whole user population will be distributed on only 2 different partitions for the index: MALE & FEMALE. If the number of users per node is very dense (e.g. millions) we’ll have very wide partitions for MALE & FEMALE index, which is bad
Source : https://www.datastax.com/dev/blog/cassandra-native-secondary-index-deep-dive
The best ways to handle this type of case:
Create a separate table for each type of status,
or use status together with a known parameter (year, month, etc.) as the partition key.
Example of the 2nd option:
CREATE TABLE save_search (
    year int,
    status int,
    uuid uuid,
    category text,
    word_search text,
    PRIMARY KEY ((year, status), uuid)
);
Here you can see that I have made a composite partition key with year and status because of the low-cardinality issue. If you expect a huge amount of data within a single status, you should also add month to the composite partition key.
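For example, fetching every word search in one status for a given year hits a single partition (the year value here is illustrative):
SELECT * FROM save_search WHERE year = 2018 AND status = 0;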
If your dataset is small you can just remove the year field.
CREATE TABLE save_search (
    status int,
    uuid uuid,
    category text,
    word_search text,
    PRIMARY KEY (status, uuid)
);
Or, if you are using Cassandra version 3.x or above, you can use a materialized view:
CREATE MATERIALIZED VIEW search_by_status AS
SELECT *
FROM your_main_table
WHERE uuid IS NOT NULL AND status IS NOT NULL
PRIMARY KEY (status, uuid);
You can then query by status like:
SELECT * FROM search_by_status WHERE status = 0;
Cassandra will sync all the deletes, updates, and inserts you make on your main table to the materialized view.

Cassandra: how to model a table with an ordered column that will change?

I am trying to model a table of content which has a timestamp, ordered by the timestamp. However, I want that timestamp to change if a user decides to edit the content (so that the content reappears at the top of the list).
I know that you can't change a primary key column so I'm at a loss of how something like this would be structured. Below is a sample table.
CREATE TABLE content(
    id uuid,
    category text,
    last_update_time timestamp,
    PRIMARY KEY ((category, id), last_update_time)
) WITH CLUSTERING ORDER BY (last_update_time);
How should I model this table if I want the data to be ordered by a column that can change?
2 solutions:
1) If you don't care about keeping update history
CREATE TABLE content(
    id uuid,
    category text,
    last_update_time timestamp,
    PRIMARY KEY ((category, id))
);
// Retrieve last update
SELECT * FROM content WHERE category = 'xxx' AND id = yyy;
2) If you want to keep a history of updates
CREATE TABLE content(
    id uuid,
    category text,
    last_update_time timestamp,
    PRIMARY KEY ((category, id), last_update_time)
) WITH CLUSTERING ORDER BY (last_update_time DESC);
// Retrieve last update
SELECT * FROM content WHERE category = 'xxx' AND id = yyy LIMIT 1;
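With the second design, editing a piece of content is simply another write with a fresher timestamp, which then becomes the row returned by the LIMIT 1 query. A sketch using the built-in now()/toTimestamp functions (category and id placeholders as above):
// Each edit adds a new row; the DESC clustering order keeps the latest one first.
INSERT INTO content (category, id, last_update_time) VALUES ('xxx', yyy, toTimestamp(now()));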

Cassandra CQL: Filter the rows between a range of values

The structure of my column family is something like
CREATE TABLE product (
    id UUID PRIMARY KEY,
    product_name text,
    product_code text,
    status text,        // in stock, out of stock
    mfg_date timestamp,
    exp_date timestamp
);
A secondary index is created on the status, mfg_date, product_code, and exp_date fields.
I want to select the list of products whose status is 'IS' (In Stock) and whose manufacturing date is between timestamp xxxx and xxxx.
So I tried the following query.
SELECT * FROM product where status='IS' and mfg_date>= xxxxxxxxx and mfg_date<= xxxxxxxxxx LIMIT 50 ALLOW FILTERING;
It throws an error like: No indexed columns present in by-columns clause with "equals" operator.
Is there anything I need to change in the structure? Please help me out. Thanks in Advance.
Cassandra does not support >= here, so you have to change the value and use only > (greater than) and < (less than) when executing the query.
You should have at least one "equals" operator on one of the indexed or primary key columns in your WHERE clause, e.g. mfg_date = xxxxx.
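Following the first suggestion, the adjusted query would look like this (the timestamp placeholders are kept from the question):
SELECT * FROM product WHERE status = 'IS' AND mfg_date > xxxxxxxxx AND mfg_date < xxxxxxxxxx LIMIT 50 ALLOW FILTERING;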
