I have a table PatientRegistration with columns:
ID INTEGER PRIMARY KEY AUTOINCREMENT,
NAME VARCHAR,
PHONE TEXT,
AGE BLOB,
TURNNUMBER INT,
REG_DATE DATETIME
I have been having trouble trying to automatically assign TURNNUMBER: it should start from 0 (zero) if there is no previous value, otherwise increase by 1 with respect to the date. The following is correct, but only in SQL Server:
declare @turn tinyint;
set @turn = isnull((select top 1 pa.TURNNUMBER from PatientRegistration pa where day(pa.Reg_Date) = @day order by pa.ID desc), 0) + 1;
How can I achieve this using SQLite, please?
Edit: this should happen each time a new patient is added.
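A minimal sketch of one way to do this in SQLite, using an AFTER INSERT trigger (the trigger name and the assumption that REG_DATE is stored in an ISO-8601 format are mine):

-- after each insert, give the new row the next turn number for its day
CREATE TRIGGER IF NOT EXISTS set_turn_number
AFTER INSERT ON PatientRegistration
BEGIN
    UPDATE PatientRegistration
    SET TURNNUMBER = (
        -- highest existing turn for the same calendar day, or 0 if none
        SELECT COALESCE(MAX(pa.TURNNUMBER), 0) + 1
        FROM PatientRegistration pa
        WHERE date(pa.REG_DATE) = date(NEW.REG_DATE)
          AND pa.ID <> NEW.ID
    )
    WHERE ID = NEW.ID;
END;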
I have a Cassandra table as below
CREATE TABLE inventory(
prodid varchar,
loc varchar,
qty float,
PRIMARY KEY (prodid)
);
Requirement:
For the provided primary key, if no record exists in the table, we need to insert, which is straightforward. But when a record exists for that primary key, we need to update the qty column by adding the new value received to the existing value in the table.
As per my understanding, I need to query the table first for the provided primary key, get the value of the qty column, add the new value received from the request, and execute the update query with a lightweight transaction.
Ex: the table has, say, qty 10 for prodid=1, and I receive a new qty of 2 from the user (which is a delta); then I need to update qty to 12 for prodid=1.
Is that logic correct, or is there a better way to design the table or handle the use case? Will this approach introduce latency issues under load, since we need to do a select query first and, if the data exists, update the column with the new value? Please help.
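A minimal sketch of the read-then-conditional-update flow described above (the prodid value and quantities are hypothetical):

-- read the current quantity
SELECT qty FROM inventory WHERE prodid = '1';
-- conditional update (lightweight transaction): applies only if qty is still 10
UPDATE inventory SET qty = 12 WHERE prodid = '1' IF qty = 10;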
You can change the qty column to static. This way you do not have to update the table, but insert. Cassandra treats an UPDATE statement like an insert anyway (both are upserts). So, your table definition should be -
CREATE TABLE inventory(
prodid varchar,
loc varchar,
qty float static,
-- a static column requires at least one clustering column, hence loc in the key
PRIMARY KEY (prodid, loc)
);
So you can use your business logic to calculate the new value of the qty column and use an INSERT statement, which in turn updates the same column.
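For illustration, the upsert could then look like this (values are hypothetical):

-- inserting for an existing prodid simply overwrites the static qty
INSERT INTO inventory (prodid, loc, qty) VALUES ('1', 'warehouse-a', 12);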
The other way is to use a counter column -
CREATE TABLE inventory(
prodid varchar,
loc varchar,
qty counter,
PRIMARY KEY (prodid, loc)
);
With this design you can just use an update query like the one below -
update inventory set qty = qty + <calculated quantity> where prodid = '1' and loc = '<location>';
Notice that, in the second table design, all other columns have to be part of the primary key. In your case, that is easy and convenient.
I have a Cassandra table:
CREATE TABLE test (
network_id int,
date date,
score float,
id uuid,
user_id int,
user_name text,
PRIMARY KEY ((network_id, date), score, id))
WITH CLUSTERING ORDER BY (score DESC, id ASC);
The query I need to satisfy is:
"Give me all users who belong to a specific network for a specific day, sorted by score."
The problem is when a user changes his name (today): when I execute the query for some day in the past, my report will show the old version of the name.
Changing the user_name column to STATIC doesn't work, because my table has to be partitioned by day.
Any ideas how to solve this?
Thank You.
Since you have denormalized user_name for faster access, whenever the user_name is updated you have to update every copy of that user_name.
You need to maintain another table:
CREATE TABLE network_by_user_id (
user_id int,
network_id int,
date date,
score float,
id uuid,
PRIMARY KEY (user_id, network_id, date, score, id)
);
So now, whenever a user updates their name, you have to select all the records of that user from the network_by_user_id table and, for each record, update user_name in the base table:
update test set user_name = 'New Name' where network_id = ? and date = ? and score = ? and id = ?
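The per-user lookup that drives this fan-out might look like this (the user id is hypothetical):

-- find every row that carries a copy of this user's name
SELECT network_id, date, score, id FROM network_by_user_id WHERE user_id = 42;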
If the number of records for a user grows quickly over time, then the cost of updating user_name will also grow quickly over time.
Another approach is to normalize the base table like below:
CREATE TABLE test (
network_id int,
date date,
score float,
id uuid,
user_id int,
PRIMARY KEY ((network_id, date), score, id)
);
CREATE TABLE users (
user_id int,
user_name text,
PRIMARY KEY (user_id)
);
For each user_id found in the base table, you can query the users table with executeAsync to get the user_name.
Learn More about executeAsync
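The per-id lookup itself is a simple single-partition read (the id is hypothetical):

SELECT user_name FROM users WHERE user_id = 42;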
You can use the SELECT command if you want to get any data from your table.
I am trying to "upsert" data into my table with CQLSSTableWriter. Everything works fine, except that my static columns are not being set correctly; they end up null every time. My static column is defined as brand TEXT static.
After failing with the CQLSSTableWriter, I went into the cqlsh and tried to update the static column manually:
update keyspace.data set brand='Nestle' where id = 'whatever' and date = '2015-10-07';
and with a batch as well (even though it should not matter):
begin batch
update keyspace.data set brand='Nestle' where id = 'whatever' and date = '2015-10-07';
apply batch;
My "brand" column still shows null when I retrieve some of my data (select * from keyspace.data LIMIT 100;)
My entire schema:
CREATE TABLE keyspace.data (
id text,
date text,
ts timestamp,
id_two text,
brand text static,
latitude double,
longitude double,
signals_double map<text, double>,
signals_string map<text, text>,
name text static,
PRIMARY KEY ((id, date), ts, id_two)
) WITH CLUSTERING ORDER BY (ts ASC, id_two ASC);
The reason I chose UPDATE instead of INSERT is that I have collections that I do not want to overwrite, but rather add more elements to. Using INSERT would overwrite the previously stored elements of my collections.
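For reference, this is the kind of collection-append update meant here (the map key/value and the clustering values are hypothetical):

-- '+' appends to the map instead of replacing it
UPDATE keyspace.data
SET signals_string = signals_string + {'engine': 'on'}
WHERE id = 'whatever' AND date = '2015-10-07'
AND ts = '2015-10-07 12:00:00' AND id_two = 'abc';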
Why can I not set a static column with an Update query?
I attempted to create a table with counter as one of the column types in Cassandra, but I am getting the following error:
ConfigurationException: ErrorMessage code=2300 [Query invalid because
of configuration issue] message="Cannot add a counter column
(transaction_count) in a non counter column family"
My table schema is as follows:
CREATE TABLE MARKET_DATA_TRANSACTION_COUNT (
TRADE_DATE TIMESTAMP,
SECURITY_EXCHANGE TEXT,
PRODUCT_CODE TEXT,
SYMBOL TEXT,
SPREAD_TYPE TEXT,
USER_DEFINED TEXT,
PRODUCT_GUID TEXT,
CHANNEL_ID INT,
SECURITY_TYPE TEXT,
INSTRUMENT_GUID TEXT,
SECURITY_ID INT,
TRANSACTION_COUNT COUNTER,
PRIMARY KEY (TRADE_DATE));
That's a limitation of the current counter implementation. You can't mix counters and regular columns in the same table. So you need a separate table for counters.
They are thinking of removing this limitation in Cassandra 3.x. See this Jira ticket.
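A minimal sketch of such a separate counter table (the table name is an assumption):

-- counters live in their own table, keyed the same way
CREATE TABLE market_data_transaction_counts (
    trade_date timestamp,
    transaction_count counter,
    PRIMARY KEY (trade_date)
);
-- counters are only ever modified through UPDATE
UPDATE market_data_transaction_counts
SET transaction_count = transaction_count + 1
WHERE trade_date = '2017-01-01';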
This is not exactly an answer to the question, but it might help some people with a similar error.
If you can make the other columns part of the PRIMARY KEY, then it's possible.
E.g.: CREATE TABLE rate_data (ts varchar, type varchar, rate counter, PRIMARY KEY (ts, type));
I have a column family with a composite key like this:
CREATE TABLE sometable(
keya varchar,
keyb varchar,
keyc varchar,
keyd varchar,
value int,
date timestamp,
PRIMARY KEY (keya,keyb,keyc,keyd,date)
);
What I need to do is to
SELECT * FROM sometable
WHERE
keya = 'abc' AND
keyb = 'def' AND
date < '2014-01-01'
And that is giving me this error
Bad Request: PRIMARY KEY part date cannot be restricted (preceding part keyd is either not restricted or by a non-EQ relation)
What's the best way to solve this? Do I need to alter my column family?
I also need to query that table with all of keya, keyb, keyc, and date.
You cannot do that in Cassandra. Moreover, such a range slice is costly too: you are trying to slice on a clustering column that has lower priority according to your schema.
I also need to query that table with all of keya, keyb, keyc, and date.
If you want to solve this problem, consider the following schema. What I would suggest is to keep the keys in a separate table:
create table keys_by_id (  -- placeholder name; the original snippet omitted one
    id timeuuid,
    keyType text,
    primary key (id, keyType)
);
Use the timeuuid to store the values and do a range scan based on that.
create table values_by_key (  -- placeholder name; the original snippet omitted one
    prevTableId timeuuid,
    value int,
    date timestamp,
    primary key (prevTableId, date)
);
I guess that, this way, your table is normalized for better scalability in your use case, and it may save a lot of disk space if the keys are repetitive too.
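With that layout, the date-bounded query becomes a valid range restriction on a clustering column. A sketch, using the hypothetical table name above and a made-up timeuuid:

-- date is the only clustering column, so a non-EQ restriction on it is allowed
SELECT * FROM values_by_key
WHERE prevTableId = 50554d6e-29bb-11e5-b345-feff819cdc9f
AND date < '2014-01-01';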