how to do multiple insert in WITH clause in Postgres in nodejs - node.js

I am using this Postgres npm package in my node application. https://www.npmjs.com/package/postgres
I created two tables,
CREATE TABLE Person(id SERIAL PRIMARY KEY, name varchar(255), address varchar(255));
CREATE TABLE Product(product_id SERIAL PRIMARY KEY, product_name varchar(255), price int, person_id int, CONSTRAINT fk_ FOREIGN KEY(person_id) REFERENCES Person(id));
I want to insert rows into both tables using a WITH clause, so I do it like this:
WITH inserted_id AS (
  INSERT INTO person(name, address) VALUES ('Krishna', 'Bangalore, India') RETURNING id
)
INSERT INTO product(product_name, price, person_id)
VALUES ('mobile', 125, (SELECT id FROM inserted_id));
The above query works. But in my node.js application I need to insert multiple rows at once, so I tried this:
const test = await sql`WITH var_person_id AS (
INSERT INTO person ${sql(
personList,
'name',
'address'
)} RETURNING id
)
INSERT INTO product ${sql(
productList,
'product_name',
'price',
'person_id'
)}`
where
personList = [{ name: 'Arjun', address: 'Mumbai-India' }, { name: 'Karn', address: 'Delhi-India' }]
productList = [{ product_name: 'Fan', price: 220 }, { product_name: 'Cycle', price: 350 }]
Since person.id is generated for each inserted row, how do I get each generated id into the product table's person_id column when there are multiple insertions?
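One workable approach (a sketch, not from the question; attachPersonIds is a hypothetical helper): since RETURNING id hands the generated ids back to node, you can run the person insert first, pair the returned ids with the product rows by position in JavaScript, and then run the product insert.

```javascript
// Two round trips instead of one CTE. Assumes productList[i] belongs
// to personList[i]; for unrelated orderings you would need an explicit key.
function attachPersonIds(insertedPeople, products) {
  return products.map((product, i) => ({
    ...product,
    person_id: insertedPeople[i].id,
  }));
}

// Usage with the `postgres` package (not run here):
// const people = await sql`INSERT INTO person ${sql(personList, 'name', 'address')} RETURNING id`;
// const rows = attachPersonIds(people, productList);
// await sql`INSERT INTO product ${sql(rows, 'product_name', 'price', 'person_id')}`;
```

Wrapping both queries in sql.begin(...) would keep the two inserts atomic, matching the behaviour of the single WITH statement.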

Related

How do I post data from req.body into a CQL UDT column using the Node.js driver?

I am new to Cassandra and need your help.
After creating a collection table using the CQL console, I am able to create new records and read them, but the POST operation using cassandra-driver in node.js is not working; it only works when I use the CQL console.
I created this table:
CREATE TYPE event_info (
type text,
pagePath text,
ts text,
actionName text
);
CREATE TABLE journey_info_5 (
id uuid PRIMARY KEY,
user_id text,
session_start_ts timestamp,
event FROZEN<event_info>
);
Code for the POST operation:
export const pushEvent = async (req, res) => {
  const pushEventQuery = `INSERT INTO user_journey.userjourney (id, user_id, session_start_ts, events)
    VALUES ( ${types.TimeUuid.now()}, ${req.body.user_id}, ${types.TimeUuid.now()},
    { ${req.body.type}, ${req.body.pagePath}, ${req.body.ts}, ${req.body.actionName} } } );`;
  try {
    await client.execute(pushEventQuery);
    res.status(201).json("new record added successfully");
  } catch (error) {
    res.status(404).send({ message: error });
    console.log(error);
  }
};
It is giving errors. How can I take the data from the user and post it into this collection? Any ideas would help.
The issue is that your CQL statement is invalid. The format for inserting values in a user-defined type (UDT) column is:
{ fieldname1: 'value1', fieldname2: 'value2', ... }
Note that the column names in your schema don't match up with the CQL statement in your code so I'm reposting the schema here for clarity:
CREATE TYPE community.event_info (
type text,
pagepath text,
ts text,
actionname text
)
CREATE TABLE community.journey_info_5 (
id uuid PRIMARY KEY,
event frozen<event_info>,
session_start_ts timestamp,
user_id text
)
Here's the CQL statement I used to insert a UDT into the table (formatted for readability):
INSERT INTO journey_info_5 (id, user_id, session_start_ts, event)
VALUES (
now(),
'thierry',
totimestamp(now()),
{
type: 'type1',
pagePath: 'pagePath1',
ts: 'ts1',
actionName: 'actionName1'
}
);
For reference, see Inserting or updating data into a UDT column. Cheers!
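On the node.js side, a safer fix than hand-building the CQL string is a parameterized statement; with { prepare: true } the driver can bind a plain JS object to the UDT column. A sketch (buildPushEvent is a hypothetical helper, and the id is passed in so the caller can generate it, e.g. with Uuid.random() from cassandra-driver):

```javascript
// Hypothetical helper: build a parameterized INSERT for the journey table.
// UDT keys are lowercase to match the event_info fields as stored by CQL.
function buildPushEvent(body, id) {
  const query =
    'INSERT INTO journey_info_5 (id, user_id, session_start_ts, event) ' +
    'VALUES (?, ?, ?, ?)';
  const params = [
    id,                 // uuid primary key, generated by the caller
    body.user_id,       // text
    new Date(),         // session_start_ts timestamp
    {                   // plain object mapped to the frozen UDT
      type: body.type,
      pagepath: body.pagePath,
      ts: body.ts,
      actionname: body.actionName,
    },
  ];
  return { query, params };
}

// Usage (not run here):
// const { query, params } = buildPushEvent(req.body, types.Uuid.random());
// await client.execute(query, params, { prepare: true });
```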

How to insert new rows to a junction table Postgres

I have a many-to-many relationship set up between services and service_categories. Each has a table, and there is a third (junction) table to handle the relationship, called service_service_categories. I created them like this:
CREATE TABLE services(
service_id SERIAL,
name VARCHAR(255),
summary VARCHAR(255),
profileImage VARCHAR(255),
userAgeGroup VARCHAR(255),
userType TEXT,
additionalNeeds TEXT[],
experience TEXT,
location POINT,
price NUMERIC,
PRIMARY KEY (service_id),
UNIQUE (name)
);
CREATE TABLE service_categories(
service_category_id SERIAL,
name TEXT,
description VARCHAR(255),
PRIMARY KEY (service_category_id),
UNIQUE (name)
);
CREATE TABLE service_service_categories(
service_id INT NOT NULL,
service_category_id INT NOT NULL,
PRIMARY KEY (service_id, service_category_id),
FOREIGN KEY (service_id) REFERENCES services(service_id) ON UPDATE CASCADE,
FOREIGN KEY (service_category_id) REFERENCES service_categories(service_category_id) ON UPDATE CASCADE
);
Now, in my application I would like to add a service_category to a service (from a select list, for example) at the same time as I create or update a service. In my node.js app I have this POST route set up:
// Create a service
router.post('/', async( req, res) => {
try {
console.log(req.body);
const { name, summary } = req.body;
const newService = await pool.query(
'INSERT INTO services(name,summary) VALUES($1,$2) RETURNING *',
[name, summary]
);
res.json(newService);
} catch (err) {
console.log(err.message);
}
})
How should I change this code to also add a row to the service_service_categories table, given that the new service has not been created yet and so has no serial id?
If anyone could talk me through the approach for this I would be grateful.
Thanks.
You can do this in the database by adding a trigger to the services table that fires on row insert and adds a row to service_service_categories. The "NEW" keyword in the trigger function represents the row that was just inserted, so you can access the serial ID value.
https://www.postgresqltutorial.com/postgresql-triggers/
Something like this:
CREATE TRIGGER insert_new_service_trigger
AFTER INSERT
ON services
FOR EACH ROW
EXECUTE PROCEDURE insert_new_service();
Then your trigger function looks something like this (noting that the trigger function needs to be created before the trigger itself):
CREATE OR REPLACE FUNCTION insert_new_service()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
-- check to see if service_id has been created
IF NEW.service_id NOT IN (SELECT service_id FROM service_service_categories) THEN
INSERT INTO service_service_categories(service_id)
VALUES(NEW.service_id);
END IF;
RETURN NEW;
END;
$$;
However, with your example data structure there doesn't seem to be a good way to link the service_categories.service_category_id serial value to this new row; you may need to change the schema a bit to accommodate it.
I managed to get it working to a point with multiple inserts and a small schema change on the services table: I added a column category_id INT:
ALTER TABLE services
ADD COLUMN category_id INT;
Then in my node query I did this and it worked:
const newService = await pool.query(
`
with ins1 AS
(
INSERT INTO services (name,summary,category_id)
VALUES ($1,$2,$3) RETURNING service_id, category_id
),
ins2 AS
(
INSERT INTO service_service_categories (service_id,service_category_id) SELECT service_id, category_id FROM ins1
)
select * from ins1
`,
[name, summary, category_id]
);
Ideally I want multiple categories, so the category_id column on the services table would become category_ids INT[], an array of ids.
How would I make the second insert loop over each integer in the array, so it creates a new service_service_categories row for each id in the array?
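One way to answer that last question without an explicit loop (a sketch, assuming node-postgres and that the category ids are passed as an int[] query parameter rather than stored on services): unnest($3::int[]) expands the array inside the second CTE, producing one junction row per id. buildCreateService is a hypothetical helper.

```javascript
// Hypothetical: build the query config for pool.query; categoryIds e.g. [1, 4, 7].
function buildCreateService(name, summary, categoryIds) {
  const text = `
    WITH ins1 AS (
      INSERT INTO services (name, summary)
      VALUES ($1, $2)
      RETURNING service_id
    ),
    ins2 AS (
      INSERT INTO service_service_categories (service_id, service_category_id)
      SELECT ins1.service_id, cat_id
      FROM ins1, unnest($3::int[]) AS cat_id
    )
    SELECT * FROM ins1`;
  return { text, values: [name, summary, categoryIds] };
}

// Usage (not run here):
// const newService = await pool.query(buildCreateService(name, summary, categoryIds));
```

This also removes the need for the category_id column on services, since the ids only ever flow into the junction table.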

How can I define an index for a table in node.js (adonis)

I have this table:
ClassBookHistoric
-----------------
Id (Primary Key),
Class_Id (Unique key, foreign key of table Class),
BookUnit_Id (Unique key, foreign key of table BookUnits)
Book_Id
Status
I need to run queries like this: SELECT ... WHERE Class_Id = (parameter) AND Book_Id = (parameter) AND Status = 1 (active).
I'm studying indexes, and I think it is necessary to create an index on the columns I use to search (Class_Id, Book_Id and Status) to increase performance. Is there a way to create an index over a group of columns: (Class_Id, Book_Id, Status)? If so, how can I create such a composite index in node.js/adonis?
Adonis.js uses knex.js to define column types and other modifiers, as you can see from the docs.
So, as an example based on your schema (not fully working, just a demonstration):
'use strict'
const Schema = use('Schema')
class ClassBookHistoric extends Schema {
  up () {
    this.create('class_books', (table) => {
      table.increments()
      table.integer('class_id').notNullable().unique()
      table.integer('book_unit_id').notNullable().unique()
      table.index(['class_id', 'book_unit_id'], 'class_book_index')
      table.timestamps()
    })
  }
  down () {
    this.drop('class_books')
  }
}
module.exports = ClassBookHistoric
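The demo indexes only two columns; the query in the question filters on three. A possible follow-up migration covering all three search columns (a sketch under the assumption that the class_books table also carries book_id and status integer columns):

```js
'use strict'
const Schema = use('Schema')

// Hypothetical migration: add a composite index matching
// WHERE class_id = ? AND book_id = ? AND status = ?
class AddClassBookStatusIndex extends Schema {
  up () {
    this.table('class_books', (table) => {
      table.index(['class_id', 'book_id', 'status'], 'class_book_status_index')
    })
  }
  down () {
    this.table('class_books', (table) => {
      table.dropIndex(['class_id', 'book_id', 'status'], 'class_book_status_index')
    })
  }
}
module.exports = AddClassBookStatusIndex
```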

How to structure nested arrays with postgresql

I'm making a simple multiplayer game using postgres as a database (and node as the BE if that helps). I made the table users which contains all of the user accounts, and a table equipped, which contains all of the equipped items a user has. users has a one -> many relationship with equipped.
I'm running into the situation where I need the data from both tables structured like so:
[
{
user_id: 1,
user_data...
equipped: [
{ user_id: 1, item_data... },
...
],
},
{
user_id: 2,
user_data...
equipped: [
{ user_id: 2, item_data... },
...
],
},
]
Is there a way to get this data in a single query? Is it a good idea to get it in a single query?
EDIT: Here are my schemas:
CREATE TABLE IF NOT EXISTS users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(100) UNIQUE NOT NULL,
password VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
last_login TIMESTAMP,
authenticated BOOLEAN NOT NULL DEFAULT FALSE,
reset_password_hash UUID
);
CREATE TABLE IF NOT EXISTS equipment (
equipment_id SERIAL PRIMARY KEY NOT NULL,
inventory_id INTEGER NOT NULL REFERENCES inventory (inventory_id) ON DELETE CASCADE,
user_id INTEGER NOT NULL REFERENCES users (user_id) ON DELETE CASCADE,
slot equipment_slot NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
CONSTRAINT only_one_item_per_slot UNIQUE (user_id, slot)
);
Okay, so what I was looking for was PostgreSQL's JSON aggregation, but I didn't know what to search for.
Based on my very limited SQL experience, the "classic" way to handle this would just to do a simple JOIN query on the database like so:
SELECT users.username, equipment.slot, equipment.inventory_id
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id;
This is nice and simple, but I would need to merge these tables in my server before sending them off.
Thankfully postgres lets you aggregate rows into a JSON array, which is exactly what I needed (thanks #j-spratt). My final* query looks like:
SELECT users.username,
json_agg(json_build_object('slot', equipment.slot, 'inventory_id', equipment.inventory_id))
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id
GROUP BY users.username;
Which returns in exactly the format I was looking for.
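One caveat worth noting (an assumption about the data, not raised in the original answer): with a LEFT JOIN, a user who has no equipment still produces one joined row of NULLs, so json_agg returns [{"slot": null, "inventory_id": null}] rather than []. A possible refinement using an aggregate FILTER clause:

```sql
SELECT users.username,
       COALESCE(
         json_agg(json_build_object('slot', equipment.slot,
                                    'inventory_id', equipment.inventory_id))
           FILTER (WHERE equipment.user_id IS NOT NULL),
         '[]'::json
       ) AS equipped
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id
GROUP BY users.username;
```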

Modeling nested data with possible null values in cassandra

I have a situation similar to the one described in the question Nested data modelling in Cassandra?
I have a project entity for which my app needs to be able to display project specific information, including the agencies and vendors participating in the project. The project entity could be described as follows:
{
  "id": "7162fe80-1e44-11e4-8c21-0800200c9a66",
  "name": "Test Project",
  "synopsis": "Lorem Text goes here",
  "agencies": [{
    "id": "c3e28810-1e44-11e4-8c21-0800200c9a66",
    "name": "Test Agency"
  }],
  "vendors": [{
    "id": "1c0ba760-1e45-11e4-8c21-0800200c9a66",
    "name": "Test Vendor"
  }]
}
However, sometimes the project might not have any vendors or agencies (or might have one of the entities, but not the other):
{
  "id": "7162fe80-1e44-11e4-8c21-0800200c9a66",
  "name": "Test Project",
  "synopsis": "Lorem Text goes here",
  "agencies": [],
  "vendors": []
}
What would be a good way of modeling this data?
I have tried the following schemas but all seem to have issues:
Schema 1:
CREATE TABLE projects (
id uuid,
name text,
synopsis text,
vendor_id uuid,
vendor_name text,
agency_id uuid,
agency_name text,
PRIMARY KEY (id, vendor_id, agency_id)
);
But with this approach, I can't have projects with no vendors or agencies (vendor_id or agency_id cannot be null).
Schema 2:
CREATE TABLE projects (
id uuid,
name text,
synopsis text,
vendor_id uuid,
vendor_name text,
agency_id uuid,
agency_name text,
PRIMARY KEY (id)
);
But with this approach, I can only have one vendor and one agency per project.
I am hesitant to use maps/lists/sets for modeling this data as this seems to be a new feature in CQL 2/3. I am also worried about "data consistency". For example, vendor names frequently change, and I would like projects to reflect the "latest name" of the vendor.
If vendor names change frequently, it's probably not the best idea to de-normalize the schema as you described: you would have to update all vendor/agency records after each name change.
You can create typical normalized tables for projects, vendors and agencies and do the joins at the application level:
CREATE TABLE projects (
id uuid,
name text,
vendor_id list<uuid>,
agency_id list<uuid>,
PRIMARY KEY (id));
CREATE TABLE vendors (
id uuid,
name text,
PRIMARY KEY (id));
CREATE TABLE agencies (
id uuid,
name text,
PRIMARY KEY (id));
PS. The not-yet-released Cassandra 2.1 will have support for user-defined types, so you will be able to do this:
CREATE TYPE vendor (
id uuid,
name text);
CREATE TYPE agency (
id uuid,
name text);
CREATE TABLE projects (
id uuid,
name text,
vendors list<vendor>,
agencies list<agency>,
PRIMARY KEY (id));