better-sqlite3 SqliteError: NOT NULL constraint failed - node.js

I'm trying to insert my username and password hash into my SQLite database, but I always get this error message: SqliteError: NOT NULL constraint failed: Users.id
This error suggests that my query doesn't match my Users table, but I can't find the issue.
My dump for Users table:
CREATE TABLE IF NOT EXISTS "Users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"username" varchar(50) NOT NULL,
"hash" varchar(60) NOT NULL
);
My Node code:
var Database = require('better-sqlite3')
var db = new Database('db.sqlite')
var sql = 'INSERT INTO Users (username, hash) VALUES (?,?)'
db.prepare(sql).run(username, hash)
The id field is set to be an auto-incrementing integer and the primary key, so it should be set automatically for every new entry.
The error message says that the NOT NULL constraint on Users.id failed, but id doesn't even have a NOT NULL constraint defined.
If I instead use the command
INSERT INTO Users (username, hash) VALUES ('testname','testhash1234');
directly in the SQLite3 database it all works fine.
Am I doing something wrong or is this a bug in better-sqlite3?
How can I get around this problem?
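One quick check (a sketch, not a definitive diagnosis): CREATE TABLE IF NOT EXISTS never alters a table that already exists, so if db.sqlite was created earlier with an older schema, the id column the Node process sees may not match the dump above. better-sqlite3 can read the live schema straight out of sqlite_master:
// Print the schema the Node process actually sees; if it differs from
// the dump above, the file predates the current CREATE TABLE statement.
var row = db.prepare("SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'Users'").get()
console.log(row.sql)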

Related

I can't get sqlite3 insert to work on nodejs

I can't get the following code to work. The SQL statement works when I test it with the sqlite binaries, but trying to run it via the node.js sqlite3 library always results in the following error. Can someone who has used the library before please help me?
[Error: SQLITE_RANGE: column index out of range
Emitted 'error' event on Statement instance at:
] {
errno: 25,
code: 'SQLITE_RANGE'
}
db.serialize(() => {
db.run("CREATE TABLE IF NOT EXISTS account(id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT, password TEXT, email TEXT UNIQUE)");
db.run("INSERT INTO account(firstname, lastname, password, email) VALUES(#firstname, #lastname, #password, #email)", {firstname, lastname, password, email});
response.send('Successfully registered account');
response.end();
});
Since you are not passing the primary key in the INSERT statement, you either need to make the primary key auto-increment, or pass it in the INSERT yourself.
db.run("CREATE TABLE IF NOT EXISTS account(id INTEGER PRIMARY KEY AUTOINCREMENT, firstname TEXT, lastname TEXT, password TEXT, email TEXT UNIQUE)");

How to insert new rows to a junction table Postgres

I have a many-to-many relationship set up between services and service_categories. Each has a table, and there is a third table to handle the relationship (a junction table) called service_service_categories. I have created them like this:
CREATE TABLE services(
service_id SERIAL,
name VARCHAR(255),
summary VARCHAR(255),
profileImage VARCHAR(255),
userAgeGroup VARCHAR(255),
userType TEXT,
additionalNeeds TEXT[],
experience TEXT,
location POINT,
price NUMERIC,
PRIMARY KEY (service_id),
UNIQUE (name)
);
CREATE TABLE service_categories(
service_category_id SERIAL,
name TEXT,
description VARCHAR(255),
PRIMARY KEY (service_category_id),
UNIQUE (name)
);
CREATE TABLE service_service_categories(
service_id INT NOT NULL,
service_category_id INT NOT NULL,
PRIMARY KEY (service_id, service_category_id),
FOREIGN KEY (service_id) REFERENCES services(service_id) ON UPDATE CASCADE,
FOREIGN KEY (service_category_id) REFERENCES service_categories(service_category_id) ON UPDATE CASCADE
);
Now, in my application I would like to add a service_category to a service from a select list, for example, at the same time as I create or update a service. In my Node.js app I have this POST route set up:
// Create a service
router.post('/', async( req, res) => {
try {
console.log(req.body);
const { name, summary } = req.body;
const newService = await pool.query(
'INSERT INTO services(name,summary) VALUES($1,$2) RETURNING *',
[name, summary]
);
res.json(newService);
} catch (err) {
console.log(err.message);
}
})
How should I change this code to also add a row to the service_service_categories table, given that the new service has not been created yet and so has no serial number?
If anyone could talk me through the approach for this I would be grateful.
Thanks.
You can do this in the database by adding a trigger to the services table that fires on row insert and inserts a row into service_service_categories. The NEW keyword in the trigger function represents the row that was just inserted, so you can access the serial ID value.
https://www.postgresqltutorial.com/postgresql-triggers/
Something like this:
CREATE TRIGGER insert_new_service_trigger
AFTER INSERT
ON services
FOR EACH ROW
EXECUTE PROCEDURE insert_new_service();
Then your trigger function looks something like this (noting that the trigger function needs to be created before the trigger itself):
CREATE OR REPLACE FUNCTION insert_new_service()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
-- check to see if service_id has been created
IF NEW.service_id NOT IN (SELECT service_id FROM service_service_categories) THEN
INSERT INTO service_service_categories(service_id)
VALUES(NEW.service_id);
END IF;
RETURN NEW;
END;
$$;
However, with your example data structure there doesn't seem to be a good way to link the service_categories.service_category_id serial value to this new row - you may need to change the schema a bit to accommodate it.
I managed to get it working, to a point, with multiple inserts and a small schema change: in the services table I added a column category_id INT:
ALTER TABLE services
ADD COLUMN category_id INT;
Then in my node query I did this and it worked:
const newService = await pool.query(
`
with ins1 AS
(
INSERT INTO services (name,summary,category_id)
VALUES ($1,$2,$3) RETURNING service_id, category_id
),
ins2 AS
(
INSERT INTO service_service_categories (service_id,service_category_id) SELECT service_id, category_id FROM ins1
)
select * from ins1
`,
[name, summary, category_id]
);
Ideally I want to have multiple categories, so the category_id column on the services table would become category_ids INT[], an array of ids.
How would I turn the second insert into a loop over the integers in the array, so it creates a new service_service_categories row for each id in the array?
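One way to do that without a loop in JavaScript (a sketch, assuming the third parameter arrives as an integer array, e.g. [1, 2, 3]): unnest the array inside the second CTE, which emits one junction row per category id and lets you drop the category_id column from services entirely:
const newService = await pool.query(
  `
  WITH ins1 AS (
    INSERT INTO services (name, summary)
    VALUES ($1, $2) RETURNING service_id
  ),
  ins2 AS (
    -- unnest($3::int[]) produces one row per array element
    INSERT INTO service_service_categories (service_id, service_category_id)
    SELECT ins1.service_id, cat_id
    FROM ins1, unnest($3::int[]) AS cat_id
  )
  SELECT * FROM ins1
  `,
  [name, summary, category_ids]
);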

Remove all items in table with Prisma2 and Jest

I would like to know how I can remove all items in a table with Prisma2 and Jest.
I read the CRUD documentation and tried this:
user.test.js
....
import { PrismaClient } from "@prisma/client"
beforeEach(async () => {
const prisma = new PrismaClient()
await prisma.user.deleteMany({})
})
...
But I get this error:
Invalid `prisma.user.deleteMany()` invocation:
The change you are trying to make would violate the required relation 'PostToUser' between the `Post` and `User` models.
My Database
CREATE TABLE User (
id INTEGER PRIMARY KEY AUTO_INCREMENT NOT NULL,
name VARCHAR(255),
email VARCHAR(255) UNIQUE NOT NULL,
password VARCHAR(255) NOT NULL
);
CREATE TABLE Post (
id INTEGER PRIMARY KEY AUTO_INCREMENT NOT NULL,
title VARCHAR(255) NOT NULL,
createdAt TIMESTAMP NOT NULL DEFAULT now(),
content TEXT,
published BOOLEAN NOT NULL DEFAULT false,
fk_user_id INTEGER NOT NULL,
CONSTRAINT `fk_user_id` FOREIGN KEY (fk_user_id) REFERENCES User(id) ON DELETE CASCADE
);
schema.prisma
model Post {
content String?
createdAt DateTime @default(now())
fk_user_id Int
id Int @default(autoincrement()) @id
published Boolean @default(false)
title String
author User @relation(fields: [fk_user_id], references: [id])
@@index([fk_user_id], name: "fk_user_id")
}
model User {
email String @unique
id Int @default(autoincrement()) @id
name String?
password String @default("")
Post Post[]
Profile Profile?
}
You are violating the foreign key constraint between Post and User.
You cannot delete a User before deleting its Posts:
beforeEach(async () => {
const prisma = new PrismaClient()
await prisma.post.deleteMany({where: {...}}) //delete posts first
await prisma.user.deleteMany({})
})
Or set CASCADE deletion on the foreign key; this way, when you delete a User, its Posts will be deleted automatically.
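In newer Prisma versions (referential actions, preview from 2.26 and generally available in 3.x), the cascade can be declared in the schema itself; a sketch against the Post model above:
model Post {
  // ...other fields as above
  fk_user_id Int
  author     User @relation(fields: [fk_user_id], references: [id], onDelete: Cascade)
}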
Another way to do it: the following removes all rows and their dependent rows, and resets the ids as well. This way you can iterate over all the tables, and the order doesn't matter.
prisma.$executeRaw(`TRUNCATE TABLE ${table} RESTART IDENTITY CASCADE;`)
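For example, a minimal sketch of that iteration, assuming a PostgreSQL database (RESTART IDENTITY CASCADE is Postgres syntax) and tables named after the models:
// order doesn't matter: CASCADE follows the foreign keys
const tables = ["Post", "User"];
for (const table of tables) {
  await prisma.$executeRaw(`TRUNCATE TABLE "${table}" RESTART IDENTITY CASCADE;`);
}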

How to structure nested arrays with postgresql

I'm making a simple multiplayer game using Postgres as the database (and Node as the back end, if that helps). I made the table users, which contains all of the user accounts, and a table equipped, which contains all of the equipped items a user has. users has a one-to-many relationship with equipped.
I'm running into the situation where I need the data from both tables structured like so:
[
{
user_id: 1,
user_data...
equipped: [
{ user_id: 1, item_data... },
...
],
},
{
user_id: 2,
user_data...
equipped: [
{ user_id: 2, item_data... },
...
],
},
]
Is there a way to get this data in a single query? Is it a good idea to get it in a single query?
EDIT: Here are my schemas:
CREATE TABLE IF NOT EXISTS users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(100) UNIQUE NOT NULL,
password VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
last_login TIMESTAMP,
authenticated BOOLEAN NOT NULL DEFAULT FALSE,
reset_password_hash UUID
);
CREATE TABLE IF NOT EXISTS equipment (
equipment_id SERIAL PRIMARY KEY NOT NULL,
inventory_id INTEGER NOT NULL REFERENCES inventory (inventory_id) ON DELETE CASCADE,
user_id INTEGER NOT NULL REFERENCES users (user_id) ON DELETE CASCADE,
slot equipment_slot NOT NULL,
created_on TIMESTAMP NOT NULL DEFAULT NOW(),
CONSTRAINT only_one_item_per_slot UNIQUE (user_id, slot)
);
Okay, so what I was looking for was closer to PostgreSQL's JSON aggregation, but I didn't know what to search for.
Based on my very limited SQL experience, the "classic" way to handle this would be a simple JOIN query like so:
SELECT users.username, equipment.slot, equipment.inventory_id
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id;
This is nice and simple, but I would need to merge these rows in my server before sending them off.
Thankfully Postgres lets you aggregate rows into a JSON array, which is exactly what I needed (thanks @j-spratt). My final* query looks like:
SELECT users.username,
json_agg(json_build_object('slot', equipment.slot, 'inventory_id', equipment.inventory_id))
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id
GROUP BY users.username;
Which returns exactly the format I was looking for.
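One caveat: with the LEFT JOIN, users who have no equipment come back with a single [{"slot": null, "inventory_id": null}] entry rather than an empty array. If that matters, a FILTER clause (PostgreSQL 9.4+) plus COALESCE cleans it up:
SELECT users.username,
       COALESCE(
         json_agg(json_build_object('slot', equipment.slot,
                                    'inventory_id', equipment.inventory_id))
           FILTER (WHERE equipment.user_id IS NOT NULL),
         '[]'::json
       ) AS equipped
FROM users
LEFT JOIN equipment ON users.user_id = equipment.user_id
GROUP BY users.username;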

How will I know whether the record was a duplicate or was inserted successfully?

Here is my CQL table:
CREATE TABLE user_login (
userName varchar PRIMARY KEY,
userId uuid,
fullName varchar,
password text,
blocked boolean
);
I have this datastax java driver code
PreparedStatement prepareStmt= instances.getCqlSession().prepare("INSERT INTO "+ AppConstants.KEYSPACE+".user_info(userId, userName, fullName, bizzCateg, userType, blocked) VALUES(?, ?, ?, ?, ?, ?);");
batch.add(prepareStmt.bind(userId, userData.getEmail(), userData.getName(), userData.getBizzCategory(), userData.getUserType(), false));
PreparedStatement pstmtUserLogin = instances.getCqlSession().prepare("INSERT INTO "+ AppConstants.KEYSPACE+".user_login(userName, userId, fullName, password, blocked) VALUES(?, ?, ?, ?, ?) IF NOT EXIST");
batch.add(pstmtUserLogin.bind(userData.getEmail(), userId, userData.getName(), passwordEncoder.encode(userData.getPwd()), false));
instances.getCqlSession().executeAsync(batch);
The problem here is that if I remove IF NOT EXIST everything works fine, but if I put it back it simply does not insert the record into the table, nor does it throw any error.
So how will I know that I am inserting a duplicate userName?
I am using Cassandra 2.0.1.
Use INSERT... IF NOT EXISTS, then you can use ResultSet#wasApplied() to check the outcome:
ResultSet rs = session.execute("insert into user (name) values ('foo') if not exists");
System.out.println(rs.wasApplied());
Notes:
this CQL query is a lightweight transaction, which carries performance implications. See this article for more information.
your example only has one statement, so you don't need a batch
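Applied to the question's pstmtUserLogin (with the statement corrected to IF NOT EXISTS), a sketch might look like this, synchronous for brevity and with names taken from the question's code:
// Execute the LWT insert on its own, outside the batch, and check the outcome.
ResultSet rs = instances.getCqlSession().execute(
    pstmtUserLogin.bind(userData.getEmail(), userId, userData.getName(),
        passwordEncoder.encode(userData.getPwd()), false));
if (!rs.wasApplied()) {
    // a row with this userName already exists
}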
It looks like you need an ACID transaction, and Cassandra, simply put, is not ACID. You have absolutely no guarantee that in the interval between checking whether the username exists and creating it, it will not be created by someone else.
Besides that, in CQL, INSERT and UPDATE do the same thing: they both write a "new" record, marking the old ones deleted. Whether or not there are old records does not matter.
If you want to authenticate or create a new user on the fly, I suppose you could work with a composite key of username + password, and do your query as an UPDATE with WHERE username = ? AND password = ?.
That way, if the user gives you a wrong password your query fails.
If the user is new, they can't give a "wrong" password, and so their account is created.
You can then test for a field like "alreadysubscribed", which you only set after the first login; it will be missing for a just-created user.
