I have a situation similar to the one described in the question Nested data modelling in Cassandra?
I have a project entity for which my app needs to display project-specific information, including the agencies and vendors participating in the project. The project entity could be described as follows:
{
  "id": "7162fe80-1e44-11e4-8c21-0800200c9a66",
  "name": "Test Project",
  "synopsis": "Lorem Text goes here",
  "agencies": [{
    "id": "c3e28810-1e44-11e4-8c21-0800200c9a66",
    "name": "Test Agency"
  }],
  "vendors": [{
    "id": "1c0ba760-1e45-11e4-8c21-0800200c9a66",
    "name": "Test Vendor"
  }]
}
However, sometimes the project might not have any vendors or agencies (or might have one of the entities, but not the other):
{
  "id": "7162fe80-1e44-11e4-8c21-0800200c9a66",
  "name": "Test Project",
  "synopsis": "Lorem Text goes here",
  "agencies": [],
  "vendors": []
}
What would be a good way of modeling this data?
I have tried the following schemas but all seem to have issues:
Schema 1:
CREATE TABLE projects (
  id uuid,
  name text,
  synopsis text,
  vendor_id uuid,
  vendor_name text,
  agency_id uuid,
  agency_name text,
  PRIMARY KEY (id, vendor_id, agency_id)
);
But with this approach, I can't have projects with no vendors or agencies: vendor_id and agency_id are part of the primary key, so they cannot be null.
Schema 2:
CREATE TABLE projects (
  id uuid,
  name text,
  synopsis text,
  vendor_id uuid,
  vendor_name text,
  agency_id uuid,
  agency_name text,
  PRIMARY KEY (id)
);
But with this approach, I can only have one vendor and one agency per project.
I am hesitant to use maps/lists/sets for modeling this data, as collections are a relatively new feature in CQL 3. I am also worried about data consistency: vendor names change frequently, and I would like projects to reflect the latest name of the vendor.
If vendor names change frequently, it's probably not the best idea to denormalize the schema as you described: you would have to update all vendor/agency records after each name change.
You can create typical normalized tables for projects, vendors and agencies and do the joins at the application level:
CREATE TABLE projects (
id uuid,
name text,
vendor_id list<uuid>,
agency_id list<uuid>,
PRIMARY KEY (id));
CREATE TABLE vendors (
id uuid,
name text,
PRIMARY KEY (id));
CREATE TABLE agencies (
id uuid,
name text,
PRIMARY KEY (id));
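With the normalized tables above, the application reads the project row plus the referenced vendor and agency rows, then stitches them together itself. A minimal sketch of that application-level join in Node.js, assuming the three result sets have already been fetched from Cassandra (the function and variable names here are placeholders, not part of any driver API):

```javascript
// Application-level join: combine a project row with its vendor and
// agency rows, as fetched from the three normalized tables.
function joinProject(projectRow, vendorRows, agencyRows) {
  // Index the lookup tables by id so each reference is O(1).
  const vendorsById = new Map(vendorRows.map(v => [v.id, v]));
  const agenciesById = new Map(agencyRows.map(a => [a.id, a]));
  return {
    id: projectRow.id,
    name: projectRow.name,
    // Empty id lists naturally yield empty arrays, which covers the
    // "project with no vendors or agencies" case.
    vendors: (projectRow.vendor_id || []).map(id => vendorsById.get(id)),
    agencies: (projectRow.agency_id || []).map(id => agenciesById.get(id)),
  };
}
```

Because a vendor's name lives only in the vendors table, a rename is a single-row update and every project automatically reflects the latest name.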
P.S. The not-yet-released Cassandra 2.1 will support user-defined types, so you will be able to do this:
CREATE TYPE vendor (
id uuid,
name text);
CREATE TYPE agency (
id uuid,
name text);
CREATE TABLE projects (
id uuid,
name text,
vendors list<vendor>,
agencies list<agency>,
PRIMARY KEY (id));
Related
I am trying to create a one to many relationship between a user and their expenses.
public void onCreate(SQLiteDatabase db) {
    db.execSQL("create table users(user_id INTEGER primary key autoincrement, username TEXT, password TEXT)");
    db.execSQL("create table expense(expense_Id INTEGER primary key autoincrement, user_id INTEGER, expenseName TEXT, expenseAmount REAL, etDate TEXT, reminder TEXT, expenseNote TEXT, foreign key (user_id) references users(user_id))");
}
I am using this Postgres npm package in my node application. https://www.npmjs.com/package/postgres
I created two tables:
CREATE TABLE Person(id SERIAL PRIMARY KEY, name varchar(255), address varchar(255));
CREATE TABLE Product(product_id SERIAL PRIMARY KEY, product_name varchar(255), price int, person_id int, CONSTRAINT fk_ FOREIGN KEY(person_id) REFERENCES person(id));
I want to insert related rows using a WITH clause, so I am doing it like this:
WITH inserted_id AS (
insert into person(name, address) VALUES('Krishna', 'Bangalore, India') RETURNING id)
insert into product(product_name, price, person_id) values('mobile', 125, (select id from inserted_id))
The queries above work. But I am trying to implement this in my Node.js application with multiple insertions, so I tried this:
const test = await sql`WITH var_person_id AS (
INSERT INTO person ${sql(
personList,
'name',
'address'
)} RETURNING id
)
INSERT INTO product ${sql(
productList,
'product_name',
'price',
'person_id'
)}`
where,
personList = [{ name: 'Arjun', address: 'Mumbai-India' }, { name: 'Karn', address: 'Delhi-India' }]
productList = [{ product_name: 'Fan', price: 220 }, { product_name: 'Cycle', price: 350 }]
Since an id is generated in the person table for each insertion, how do I insert that id value into the product table's person_id column when there are multiple insertions?
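One way to line the rows up: with a multi-row `INSERT ... VALUES ... RETURNING id`, PostgreSQL returns one id per inserted row, in insertion order, so the n-th returned id belongs to the n-th person. The pairing itself is plain JavaScript; `pairPersonIds` below is a hypothetical helper (not part of the postgres package), and it assumes `productList[i]` belongs to `personList[i]`:

```javascript
// Pair each generated person id with its corresponding product row.
// returnedIds is the RETURNING result, e.g. [{ id: 1 }, { id: 2 }],
// in the same order as the personList that was inserted.
function pairPersonIds(returnedIds, productList) {
  if (returnedIds.length !== productList.length) {
    throw new Error('each product must correspond to exactly one inserted person');
  }
  return productList.map((product, i) => ({ ...product, person_id: returnedIds[i].id }));
}
```

The paired list can then be handed to the second `sql(...)` helper with the `person_id` column included, with both statements run as separate awaited queries inside the package's `sql.begin` transaction helper so the two inserts commit or roll back together.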
I have a many-to-many relationship set up with services and service_categories. Each has a table, and there is a third table (a junction table) called service_service_categories to handle the relationship. I have created them like this:
CREATE TABLE services(
service_id SERIAL,
name VARCHAR(255),
summary VARCHAR(255),
profileImage VARCHAR(255),
userAgeGroup VARCHAR(255),
userType TEXT,
additionalNeeds TEXT[],
experience TEXT,
location POINT,
price NUMERIC,
PRIMARY KEY (service_id),
UNIQUE (name)
);
CREATE TABLE service_categories(
service_category_id SERIAL,
name TEXT,
description VARCHAR(255),
PRIMARY KEY (service_category_id),
UNIQUE (name)
);
CREATE TABLE service_service_categories(
service_id INT NOT NULL,
service_category_id INT NOT NULL,
PRIMARY KEY (service_id, service_category_id),
FOREIGN KEY (service_id) REFERENCES services(service_id) ON UPDATE CASCADE,
FOREIGN KEY (service_category_id) REFERENCES service_categories(service_category_id) ON UPDATE CASCADE
);
Now, in my application I would like to add a service_category to a service from a select list for example, at the same time as I create or update a service. In my node js I have this post route set up:
// Create a service
router.post('/', async( req, res) => {
try {
console.log(req.body);
const { name, summary } = req.body;
const newService = await pool.query(
'INSERT INTO services(name,summary) VALUES($1,$2) RETURNING *',
[name, summary]
);
res.json(newService);
} catch (err) {
console.log(err.message);
}
})
How should I change this code to also add a row to the service_service_categories table, given that the new service has not been created yet and so has no serial id?
If any one could talk me through the approach for this I would be grateful.
Thanks.
You can do this in the database by adding a trigger to the services table to insert a row into the service_service_categories that fires on row insert. The "NEW" keyword in the trigger function represents the row that was just inserted, so you can access the serial ID value.
https://www.postgresqltutorial.com/postgresql-triggers/
Something like this:
CREATE TRIGGER insert_new_service_trigger
AFTER INSERT
ON services
FOR EACH ROW
EXECUTE PROCEDURE insert_new_service();
Then your trigger function looks something like this (noting that the trigger function needs to be created before the trigger itself):
CREATE OR REPLACE FUNCTION insert_new_service()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
-- check to see if service_id has been created
IF NEW.service_id NOT IN (SELECT service_id FROM service_service_categories) THEN
INSERT INTO service_service_categories(service_id)
VALUES(NEW.service_id);
END IF;
RETURN NEW;
END;
$$;
However, in your example data structure there doesn't seem to be a good way to link the service_categories.service_category_id serial value to this new row; you may need to change the schema a bit to accommodate it.
I managed to get it working to a point with multiple inserts and a small schema change on the services table. I added a column, category_id INT:
ALTER TABLE services
ADD COLUMN category_id INT;
Then in my node query I did this and it worked:
const newService = await pool.query(
`
with ins1 AS
(
INSERT INTO services (name,summary,category_id)
VALUES ($1,$2,$3) RETURNING service_id, category_id
),
ins2 AS
(
INSERT INTO service_service_categories (service_id,service_category_id) SELECT service_id, category_id FROM ins1
)
select * from ins1
`,
[name, summary, category_id]
);
Ideally I want to support multiple categories, so the category_id column on the services table would become category_ids INT[], an array of ids.
How would I turn the second insert into a loop over each integer in the array, so that it creates a new service_service_categories row for each id?
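One option is to do the fan-out in JavaScript before the query ever runs: expand the new service's id and its array of category ids into one junction row per category, then hand that list to the insert. A small sketch (`buildJunctionRows` is a hypothetical helper, pure JS, no database needed):

```javascript
// Build one service_service_categories row per category id in the array.
function buildJunctionRows(serviceId, categoryIds) {
  return categoryIds.map(categoryId => ({
    service_id: serviceId,
    service_category_id: categoryId,
  }));
}
```

Alternatively, PostgreSQL can fan the array out itself inside the CTE chain with `unnest`, e.g. `INSERT INTO service_service_categories (service_id, service_category_id) SELECT service_id, unnest(category_ids) FROM ins1;`, which avoids the round trip entirely.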
I'm making a simple multiplayer game using Postgres as the database (and Node as the backend, if that helps). I made a users table, which contains all of the user accounts, and an equipped table, which contains all of the equipped items a user has. users has a one-to-many relationship with equipped.
I'm running into the situation where I need the data from both tables structured like so:
[
{
user_id: 1,
user_data...
equipped: [
{ user_id: 1, item_data... },
...
],
},
{
user_id: 2,
user_data...
equipped: [
{ user_id: 2, item_data... },
...
],
},
]
Is there a way to get this data in a single query? Is it a good idea to get it in a single query?
EDIT: Here are my schemas:
CREATE TABLE IF NOT EXISTS users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(100) UNIQUE NOT NULL,
password VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
last_login TIMESTAMP,
authenticated BOOLEAN NOT NULL DEFAULT FALSE,
reset_password_hash UUID
);
CREATE TABLE IF NOT EXISTS equipment (
equipment_id SERIAL PRIMARY KEY NOT NULL,
inventory_id INTEGER NOT NULL REFERENCES inventory (inventory_id) ON DELETE CASCADE,
user_id INTEGER NOT NULL REFERENCES users (user_id) ON DELETE CASCADE,
slot equipment_slot NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
CONSTRAINT only_one_item_per_slot UNIQUE (user_id, slot)
);
Okay, so what I was looking for was closer to a PostgreSQL JSON aggregate; I just didn't know what to search for.
Based on my very limited SQL experience, the "classic" way to handle this would just to do a simple JOIN query on the database like so:
SELECT users.username, equipment.slot, equipment.inventory_id
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id;
This is nice and simple, but I would need to merge these tables in my server before sending them off.
Thankfully postgres lets you aggregate rows into a JSON array, which is exactly what I needed (thanks #j-spratt). My final* query looks like:
SELECT users.username,
json_agg(json_build_object('slot', equipment.slot, 'inventory_id', equipment.inventory_id))
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id
GROUP BY users.username;
Which returns in exactly the format I was looking for.
Here is my model:
var traders = sequelize.define('traders', {
.....
}, {});
It has a many-to-many self-association:
traders.belongsToMany(models.traders,{
as:'feedbackClient',
through:'feedback'
});
The idea is that one trader can give feedback to another trader on each successful trade.
But when I sync, it generates the table with this SQL query:
Executing (default): CREATE TABLE IF NOT EXISTS "feedbacks" ("id" SERIAL, "rating" "public"."enum_feedbacks_rating", "comment" VARCHAR(255), "traderId" INTEGER REFERENCES "traders" ("id") ON DELETE CASCADE ON UPDATE CASCADE, "feedbackClientId" INTEGER REFERENCES "traders" ("id") ON DELETE CASCADE ON UPDATE CASCADE, "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, UNIQUE ("traderId", "feedbackClientId"), PRIMARY KEY ("id"));
How can I remove this constraint?
UNIQUE ("traderId", "feedbackClientId")
I want to be able to add multiple records with the same combination of traderId and feedbackClientId.
I found a solution; please post your answers if you have better ones.
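For reference, one commonly suggested approach (a sketch only, not verified against your Sequelize version) is to pass `through` as an object with `unique: false`, which asks Sequelize not to add the composite unique key when it defines the join table:

```javascript
// Sketch: `unique: false` in the through options asks Sequelize to skip
// the UNIQUE ("traderId", "feedbackClientId") constraint on the join table.
traders.belongsToMany(models.traders, {
  as: 'feedbackClient',
  through: { model: 'feedback', unique: false },
});
```

Note that sync will not drop a constraint that already exists, so on an existing database you may still need to drop it manually with an ALTER TABLE.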