I have a table ATM_Plan
CREATE TABLE ATM_PLAN
(
BRANCH VARCHAR2(4) PRIMARY KEY,
SAMITY_CODE VARCHAR2(4),
SAMITY_NAME VARCHAR2(30),
INT_CLS_MONTH DATE,
TTL_MEMBER NUMBER,
TTL_LONEE NUMBER,
CM_TRG_DT DATE,
ACT_CM_DT DATE,
USER_CODE VARCHAR2(5));
Sample records:
insert into ATM_PLAN (BRANCH, SAMITY_CODE) VALUES ('001', '20');
insert into ATM_PLAN (BRANCH, SAMITY_CODE) VALUES ('002', '20');
I have also developed a form for data entry into this table. Multiple users will insert records into it, but I want to restrict the entry: a given user should only be able to enter records for a specific branch.
For example, a user for branch 001 must not be able to insert or update records for branch 002.
I have 100 branches.
Is it possible?
thanks in advance
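One possible way to enforce this at the database level (a sketch only; the USER_BRANCH mapping table and the trigger below are assumptions, not part of the original design) is to keep a mapping of users to the branch they may maintain and check it in a trigger. Oracle's Virtual Private Database (row-level security policies) would be another option for the same requirement.

-- Hypothetical mapping of application users to the branch they may maintain
CREATE TABLE USER_BRANCH
(
USER_CODE VARCHAR2(5),
BRANCH VARCHAR2(4),
CONSTRAINT PK_USER_BRANCH PRIMARY KEY (USER_CODE, BRANCH)
);

-- Reject inserts/updates on ATM_PLAN rows whose branch is not mapped to the user
CREATE OR REPLACE TRIGGER TRG_ATM_PLAN_BRANCH
BEFORE INSERT OR UPDATE ON ATM_PLAN
FOR EACH ROW
DECLARE
  v_cnt NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO v_cnt
    FROM USER_BRANCH
   WHERE USER_CODE = :NEW.USER_CODE  -- assumes the form fills USER_CODE with the current user
     AND BRANCH = :NEW.BRANCH;
  IF v_cnt = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'This user may not enter records for this branch');
  END IF;
END;
/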
I have a Cassandra table as below
CREATE TABLE inventory(
prodid varchar,
loc varchar,
qty float,
PRIMARY KEY (prodid)
) ;
Requirement:
For the provided primary key, if no record exists in the table, we need to insert one, which is straightforward. But when a record already exists for that primary key, we need to update the qty column by adding the new value received to the existing value in the table.
As per my understanding, I need to query the table first for the provided primary key, get the value of the qty column, add the new value received from the request, and execute the update with a lightweight transaction.
Ex: the table has qty 10 for prodid = 1, and if I receive a new qty of 2 from the user (which is a delta), then I need to update qty to 12 for prodid = 1.
Is that logic correct, or is there a better way to design the table or handle this use case? Will this approach introduce latency issues under load, since we need to do a select first and, if the data exists, update the column with the new value? Please help.
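For reference, the read-then-conditional-update logic described above would look roughly like this (a sketch using the values from the example):

-- read the current quantity
SELECT qty FROM inventory WHERE prodid = '1';

-- the application computes 10 + 2 = 12, then writes it back with a lightweight
-- transaction so a concurrent writer cannot invalidate the value just read
UPDATE inventory SET qty = 12 WHERE prodid = '1' IF qty = 10;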
You can make the qty column static. This way you do not have to read before writing; you simply insert the new value. In Cassandra an UPDATE goes through the same write path as an INSERT (both are upserts), so there is no extra cost. Note that a static column requires the table to have a clustering column, so your table definition should be:
CREATE TABLE inventory(
prodid varchar,
loc varchar,       -- clustering column; the static qty is shared across the whole prodid partition
qty float static,
PRIMARY KEY (prodid, loc)
);
So you can use your business logic to calculate the new value of the qty column and use an INSERT statement, which in turn updates the same column.
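For example (values taken from the scenario above), once the application has computed the new total of 12, the write is a plain insert on the partition key and the static column:

INSERT INTO inventory (prodid, qty) VALUES ('1', 12);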
The other way is to use a counter column:
CREATE TABLE inventory(
prodid varchar,
loc varchar,
qty counter,
PRIMARY KEY (prodid, loc ) ) ;
With this design you can just use an update query like the one below:
UPDATE inventory SET qty = qty + <calculated Quantity> WHERE prodid = '1' AND loc = <location>;
Notice that in the second table design all other columns have to be part of the primary key (in a counter table, only counter columns may sit outside the primary key). In your case, that is easy and convenient.
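Also note that with a counter you send only the delta from the request (2 in the example), not a precomputed total, and you can read the running value back with a plain select (the 'NY' location below is just illustrative):

UPDATE inventory SET qty = qty + 2 WHERE prodid = '1' AND loc = 'NY';
SELECT qty FROM inventory WHERE prodid = '1';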
I am aware of the fact that updating individual fields of a frozen UDT column is not possible and the entire record needs to be updated. Does that imply that an update on a frozen UDT column is not possible at all? And if a field of a frozen UDT column does need to change, does one have to insert a new record and delete the older one?
You are correct that you cannot update individual fields of a frozen UDT column, but you can update the whole column value. You do not need to delete the previous record; it's fine to update the column directly. Let me illustrate with an example I created on Astra.
Here is a user-defined type that stores a user's address:
CREATE TYPE address (
number int,
street text,
city text,
zip int
)
and here is the definition for the table of users:
CREATE TABLE users (
name text PRIMARY KEY,
address frozen<address>
)
In this table, there is one user with their address stored as:
cqlsh> SELECT * FROM users ;
name | address
-------+----------------------------------------------------------------
alice | {number: 100, street: 'Main Rd', city: 'Melbourne', zip: 3000}
Let's say that the street number is incorrect. If we try to update just the street number field with:
cqlsh> UPDATE users SET address = {number: 456} WHERE name = 'alice';
We'll end up with an address that only has the street number and nothing else:
cqlsh> SELECT * FROM users ;
name | address
-------+----------------------------------------------------
alice | {number: 456, street: null, city: null, zip: null}
This is because the whole value (not just the street number field) got overwritten by the update. The correct way to update the street number is to explicitly set a value for all the fields of the address with:
cqlsh> UPDATE users SET address = {number: 456, street: 'Main Rd', city: 'Melbourne', zip: 3000} WHERE name = 'alice';
so we end up with:
cqlsh> SELECT * FROM users ;
name | address
-------+----------------------------------------------------------------
alice | {number: 456, street: 'Main Rd', city: 'Melbourne', zip: 3000}
Cheers!
You can update a column that is a frozen UDT, but you'll need to supply values for all the fields inside that UDT. So you can just do a normal update of that column only:
UPDATE table SET udt_col = new_value WHERE pk = ....
without needing to delete anything first.
Basically, a frozen value is just a blob obtained by serializing the UDT or collection; it is stored as one cell inside the row and has a single timestamp. That's different from a non-frozen value, where the different pieces of the UDT/collection can be stored in different places and have different timestamps.
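To make the distinction concrete, using the users/address example from the previous answer: only whole-value writes are accepted on a frozen UDT column, while per-field assignment is reserved for non-frozen UDTs.

-- allowed: replace the whole frozen value
UPDATE users SET address = {number: 456, street: 'Main Rd', city: 'Melbourne', zip: 3000} WHERE name = 'alice';

-- not allowed on a frozen<address> column: per-field assignment is rejected
-- UPDATE users SET address.number = 456 WHERE name = 'alice';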
I am trying to model a table of content which has a timestamp, ordered by that timestamp. However, I want the timestamp to change if a user decides to edit the content (so that the content reappears at the top of the list).
I know that you can't change a primary key column, so I'm at a loss as to how something like this should be structured. Below is a sample table.
CREATE TABLE content(
id uuid,
category text,
last_update_time timestamp,
PRIMARY KEY ((category, id), last_update_time)
) WITH CLUSTERING ORDER BY (last_update_time DESC);
How should I model this table if I want the data to be ordered by a column that can change?
Two solutions:
1) If you don't care about keeping update history:
CREATE TABLE content(
id uuid,
category text,
last_update_time timestamp,
PRIMARY KEY ((category, id))
);
// Retrieve last update
SELECT * FROM content WHERE category = 'xxx' AND id = yyy;
2) If you want to keep a history of updates:
CREATE TABLE content(
id uuid,
category text,
last_update_time timestamp,
PRIMARY KEY ((category, id), last_update_time)
) WITH CLUSTERING ORDER BY (last_update_time DESC);
// Retrieve last update
SELECT * FROM content WHERE category = 'xxx' AND id = yyy LIMIT 1;
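In the second design, "changing" the timestamp simply means writing a new row with a new last_update_time (the old rows remain as history), for example:

INSERT INTO content (category, id, last_update_time) VALUES ('xxx', yyy, toTimestamp(now()));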
I am writing a messaging chat system, similar to FB messaging. I have not found a way to effectively store the conversation list (each row a different partner user, with the most recently sent message on top). If I list conversations from this table:
CREATE TABLE "conversation_list" (
"user_id" int,
"partner_user_id" int,
"last_message_time" time,
"last_message_text" text,
PRIMARY KEY ("user_id", "partner_user_id")
)
I can select the conversations for any user_id from this table. When a new message is sent, we can simply update the row:
UPDATE conversation_list SET last_message_time = '...', last_message_text='...' WHERE user_id = '...' AND partner_user_id = '...'
But of course it is sorted by the clustering key. My question: how do I create a list of conversations that is sorted by last_message_time, but where partner_user_id stays unique for a given user_id?
If last_message_time is the clustering key and we delete the row and insert a new one (to keep partner_user_id unique), I will end up with many tombstones in the table.
Thank you.
A slight change to your original model should do what you want:
CREATE TABLE conversation_list (
user_id int,
partner_user_id int,
last_message_time timestamp,
last_message_text text,
PRIMARY KEY ((user_id, partner_user_id), last_message_time)
) WITH CLUSTERING ORDER BY (last_message_time DESC);
I combined "user_id" and "partner_user_id" into one partition key. "last_message_time" can be the single clustering column and provide sorting. I reversed the default sort order with the CLUSTERING ORDER BY to make the timestamps descending. Now you should be able to just insert any time there is a message from a user to a partner id.
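For example, recording a new message could look like this (the ids and text are illustrative):

INSERT INTO conversation_list (user_id, partner_user_id, last_message_time, last_message_text)
VALUES (1, 2, toTimestamp(now()), 'hello');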
The select will now give you the ability to look up the last message sent, like this:
SELECT last_message_time, last_message_text
FROM conversation_list
WHERE user_id= ? AND partner_user_id = ?
LIMIT 1
I have this Excel sheet and I want to migrate it to Access (and in the near future to some other DB manager), and I don't know exactly how to normalize it. I know this might be very opinion-based. Currently they use this table for inventory.
This is the original table (sheet):
"TableName: Parts", Fields:"Id_Part", "No_Part", "No_Mold", "No_Lot", "Rev", "Description", "Area", "No_Job", No_Batch,"OrderDate","RecivedDate"
Explanation of the problem:
OK, the idea is to create a DB that stores all the part numbers the "x" company has. These part numbers have the following fields:
1.- Id_Part: the unique number for each part.
2.- No_Part: the part number the company uses for their products.
3.- No_Mold: each part number uses a molding item; some part numbers use the same molding item.
4.- No_Lot: the lot number is used to keep track of the part numbers in case the client has an issue with the final product (it's like a tracking number).
5.- Rev: for revision control, for example: A, B or C.
6.- Description: describes the part number.
7.- Area: name of the department in which the part number is used (like a type of part number).
8.- No_Batch: similar to the lot number, but it's an internal number for the company.
9.- Order Date: date on which we ordered a part number from a provider.
10.- Received Date: date when we receive that part number from the provider.
This is how I tried to normalize it:
Table1 Name: Parts
Fields: Id_Part, No_Part, Id_Mold, Id_Lot, Id_Rev, Id_Description, Id_Area, Id_Job, Id_Batch, Date_Order, Date_Recived
Table2 Name: Areas
Fields: Id_Area, Name
Table3 Name: Molds
Fields: Id_Mold, No_Mold, Id_Part
Table4 Name: Jobs
Fields: Id_Job, No_Job
Table5 Name: Batchs
Fields: Id_Batch, No_Batch
Table6 Name: Descriptions
Fields: Id_Description, Description, Id_Part
Table7 Name: Rev
Fields: Id_Rev, Rev, Id_Part
Any help is useful.
It seems like the PartRevision is the main table here rather than the part. You don't order a Honda Accord, you order a 2013 Honda Accord.
You purchase a PartRevision and it goes into a batch and a lot. You sell a part revision and it pulls from a batch and a lot. Here's how I'd set it up.
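For illustration, one possible set of tables along those lines (a sketch only; the table and column names below are assumptions, not the exact design the answer refers to):

-- Sketch only: names and types are assumptions
CREATE TABLE Part (
    Id_Part INTEGER PRIMARY KEY,
    No_Part VARCHAR(30),
    Id_Area INTEGER,            -- FK to an Areas lookup table
    Id_Mold INTEGER             -- FK to a Molds table; several parts can share a mold
);

CREATE TABLE PartRevision (
    Id_PartRevision INTEGER PRIMARY KEY,
    Id_Part INTEGER,            -- FK to Part
    Rev VARCHAR(2),             -- A, B, C ...
    Description VARCHAR(100)
);

CREATE TABLE PurchaseOrder (
    Id_Order INTEGER PRIMARY KEY,
    Id_PartRevision INTEGER,    -- FK to PartRevision: you order a specific revision
    No_Lot VARCHAR(20),         -- client-facing tracking number
    No_Batch VARCHAR(20),       -- internal batch number
    OrderDate DATE,
    ReceivedDate DATE
);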