I can't get sqlite3 insert to work on Node.js

I can't get the following code to work. The SQL statement works when I test it with the sqlite3 binary, but trying to run it via the Node.js sqlite3 library always results in the following error. Can someone who has used the library before please help me?
[Error: SQLITE_RANGE: column index out of range
Emitted 'error' event on Statement instance at:
] {
errno: 25,
code: 'SQLITE_RANGE'
}
db.serialize(() => {
  db.run("CREATE TABLE IF NOT EXISTS account(id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT, password TEXT, email TEXT UNIQUE)");
  db.run("INSERT INTO account(firstname, lastname, password, email) VALUES(#firstname, #lastname, #password, #email)", {firstname, lastname, password, email});
  response.send('Successfully registered account');
  response.end();
});

Since you are not passing the primary key in the INSERT statement, you either need to make the primary key auto-increment or pass it in the INSERT statement yourself. Note that SQLite spells it AUTOINCREMENT, with no underscore:
db.run("CREATE TABLE IF NOT EXISTS account(id INTEGER PRIMARY KEY AUTOINCREMENT, firstname TEXT, lastname TEXT, password TEXT, email TEXT UNIQUE)");

Related

How do I post data from req.body into a CQL UDT column using the Node.js driver?

I am new to Cassandra and need your help.
After creating a collection table using the CQL console, I am able to create new records and read them, but the POST operation using cassandra-driver in Node.js is not working; it only works when I use the CQL console.
I created the table:
CREATE TYPE event_info (
    type text,
    pagePath text,
    ts text,
    actionName text
);
CREATE TABLE journey_info_5 (
    id uuid PRIMARY KEY,
    user_id text,
    session_start_ts timestamp,
    event FROZEN<event_info>
);
Code for the POST operation:
export const pushEvent = async (req, res) => {
  const pushEventQuery = `INSERT INTO user_journey.userjourney (id, user_id, session_start_ts, events)
    VALUES ( ${types.TimeUuid.now()}, ${req.body.user_id}, ${types.TimeUuid.now()},
    { ${req.body.type}, ${req.body.pagePath}, ${req.body.ts}, ${req.body.actionName} } } );`;
  try {
    await client.execute(pushEventQuery);
    res.status(201).json("new record added successfully");
  } catch (error) {
    res.status(404).send({ message: error });
    console.log(error);
  }
};
It is giving errors. How can I get data from the user and post it into this collection? Please help me if you have any ideas.
The issue is that your CQL statement is invalid. The format for inserting values in a user-defined type (UDT) column is:
{ fieldname1: 'value1', fieldname2: 'value2', ... }
Note that the column names in your schema don't match up with the CQL statement in your code so I'm reposting the schema here for clarity:
CREATE TYPE community.event_info (
type text,
pagepath text,
ts text,
actionname text
)
CREATE TABLE community.journey_info_5 (
id uuid PRIMARY KEY,
event frozen<event_info>,
session_start_ts timestamp,
user_id text
)
Here's the CQL statement I used to insert a UDT into the table (formatted for readability):
INSERT INTO journey_info_5 (id, user_id, session_start_ts, event)
VALUES (
    now(),
    'thierry',
    totimestamp(now()),
    {
        type: 'type1',
        pagePath: 'pagePath1',
        ts: 'ts1',
        actionName: 'actionName1'
    }
);
For reference, see Inserting or updating data into a UDT column. Cheers!
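If you want to run the same insert from Node.js rather than cqlsh, one approach (a sketch, not from the original thread; the keyspace, contact point and data-center names are assumptions) is to bind a plain JavaScript object to the UDT column with a prepared statement. With prepare: true the driver reads the table metadata and maps the object's fields onto event_info:
import cassandra from 'cassandra-driver';

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],   // assumed
  localDataCenter: 'datacenter1', // assumed
  keyspace: 'community',          // keyspace from the answer's schema
});

const query = `INSERT INTO journey_info_5 (id, user_id, session_start_ts, event)
               VALUES (?, ?, ?, ?)`;

export const pushEvent = async (req, res) => {
  // A plain object is mapped onto the frozen<event_info> UDT column by
  // field name when the statement is prepared.
  const event = {
    type: req.body.type,
    pagepath: req.body.pagePath,
    ts: req.body.ts,
    actionname: req.body.actionName,
  };
  const params = [cassandra.types.Uuid.random(), req.body.user_id, new Date(), event];
  try {
    await client.execute(query, params, { prepare: true });
    res.status(201).json('new record added successfully');
  } catch (error) {
    console.log(error);
    res.status(500).send({ message: error.message });
  }
};
Binding parameters this way also removes the need to build CQL strings out of req.body, which is what broke the original query.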

How to insert new rows into a junction table in Postgres

I have a many-to-many relationship set up between services and service_categories. Each has a table, and there is a third table to handle the relationship (a junction table) called service_service_categories. I created them like this:
CREATE TABLE services(
    service_id SERIAL,
    name VARCHAR(255),
    summary VARCHAR(255),
    profileImage VARCHAR(255),
    userAgeGroup VARCHAR(255),
    userType TEXT,
    additionalNeeds TEXT[],
    experience TEXT,
    location POINT,
    price NUMERIC,
    PRIMARY KEY (service_id),
    UNIQUE (name)
);
CREATE TABLE service_categories(
    service_category_id SERIAL,
    name TEXT,
    description VARCHAR(255),
    PRIMARY KEY (service_category_id),
    UNIQUE (name)
);
CREATE TABLE service_service_categories(
    service_id INT NOT NULL,
    service_category_id INT NOT NULL,
    PRIMARY KEY (service_id, service_category_id),
    FOREIGN KEY (service_id) REFERENCES services(service_id) ON UPDATE CASCADE,
    FOREIGN KEY (service_category_id) REFERENCES service_categories(service_category_id) ON UPDATE CASCADE
);
Now, in my application I would like to add a service_category to a service from a select list, for example, at the same time as I create or update a service. In my Node.js code I have this POST route set up:
// Create a service
router.post('/', async (req, res) => {
  try {
    console.log(req.body);
    const { name, summary } = req.body;
    const newService = await pool.query(
      'INSERT INTO services(name, summary) VALUES($1, $2) RETURNING *',
      [name, summary]
    );
    res.json(newService);
  } catch (err) {
    console.log(err.message);
  }
});
How should I change this code to also add a row to the service_service_categories table, given that the new service has not been created yet and so has no serial number?
If anyone could talk me through the approach for this I would be grateful.
Thanks.
You can do this in the database by adding a trigger to the services table that fires on row insert and inserts a row into service_service_categories. The NEW keyword in the trigger function represents the row that was just inserted, so you can access the serial ID value.
https://www.postgresqltutorial.com/postgresql-triggers/
Something like this:
CREATE TRIGGER insert_new_service_trigger
AFTER INSERT
ON services
FOR EACH ROW
EXECUTE PROCEDURE insert_new_service();
Then your trigger function looks something like this (noting that the trigger function needs to be created before the trigger itself):
CREATE OR REPLACE FUNCTION insert_new_service()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
    -- check to see if service_id has been created
    IF NEW.service_id NOT IN (SELECT service_id FROM service_service_categories) THEN
        INSERT INTO service_service_categories(service_id)
        VALUES(NEW.service_id);
    END IF;
    RETURN NEW;
END;
$$;
However, with your example data structure there doesn't seem to be a good way to link the service_categories.service_category_id serial value to this new row, so you may need to change the schema a bit to accommodate that.
I managed to get it working, to a point, with multiple inserts and a small change to the services table schema: I added a column category_id INT:
ALTER TABLE services
ADD COLUMN category_id INT;
Then in my node query I did this and it worked:
const newService = await pool.query(
  `
  WITH ins1 AS (
    INSERT INTO services (name, summary, category_id)
    VALUES ($1, $2, $3)
    RETURNING service_id, category_id
  ),
  ins2 AS (
    INSERT INTO service_service_categories (service_id, service_category_id)
    SELECT service_id, category_id FROM ins1
  )
  SELECT * FROM ins1
  `,
  [name, summary, category_id]
);
Ideally I want to support multiple categories, so the category_id column on the services table would become category_ids INT[], an array of IDs.
How would I turn the second insert into a foreach (one insert per integer in the array), so it creates a new service_service_categories row for each ID in the array?
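One way to fan the second insert out over an array, sketched here on the assumption that the route receives category_ids as an integer array in the request body, is to cross join the first CTE with unnest($3::int[]). Postgres then emits one junction row per array element, with no trigger and no extra services column needed:
const newService = await pool.query(
  `
  WITH ins1 AS (
    INSERT INTO services (name, summary)
    VALUES ($1, $2)
    RETURNING service_id
  ),
  ins2 AS (
    -- one junction row per element of the $3 array
    INSERT INTO service_service_categories (service_id, service_category_id)
    SELECT ins1.service_id, cat_id
    FROM ins1, unnest($3::int[]) AS cat_id
  )
  SELECT * FROM ins1
  `,
  [name, summary, category_ids] // category_ids assumed, e.g. [1, 4, 7]
);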

Queries from NodeJS to PostgreSQL DB don't properly show UTF8 characters

I'm working on a project, and my mother tongue is Spanish, so inside my database I'm using characters such as "ñ" and "é". When I'm using the psql shell those characters show properly on the terminal, but when I make a query using node-postgres those characters don't show up; instead I get ¤ or ¢.
In my database I have both client_encoding and server_encoding set to UTF8, and I even checked with node-postgres, using a query, to confirm they were still set to UTF8 and hadn't been changed for some other reason.
My database connection is set up like this:
const { Pool } = require("pg");

const db = new Pool({
  user: user,
  password: password,
  host: "localhost",
  port: 5432,
  database: my_database,
});

module.exports = db;
And the code for my query is like this:
const router = require("express").Router(),
  db = require("../database/db");

// GET A PLACE ROUTE
router.get("/", async (req, res) => {
  try {
    const place = await db.query("SELECT * FROM places");
    console.log(place.rows[0].name);
    res.status(200).json({
      status: "success",
      data: {
        place_name: place.rows[0].name,
      },
    });
  } catch (error) {
    console.error(error.message);
    res.status(500).send("Error del servidor");
  }
});
And now, if the name of the place is for example "Salón de peñas", it shows up in both the console.log and my JSON response as "Sal¢n de pe¤as".
At first I thought the problem was that I hadn't correctly set up my JSON response charset, but then I sent these characters directly in a response and they showed up correctly. The problem is only when these characters come from my database. I checked the database encoding (both client and server) and they're set to UTF8, and, as I said before, these characters display correctly when I'm using the psql shell.
I'm basically having the exact same problem as this question, which didn't get an answer.
I think I found a workaround for this problem, which may be hard to explain, but I'll try my best.
So, I realized that when, for example, a new user registers and they have ñ or ó in their name, inside the database it shows as ├▒ and ├│. BUT if I do a query from my server and send a JSON response, the "├▒" and "├│" characters disappear and "ñ" and "ó" show instead.
With this weird behavior in mind, I thought about inserting all my places into the database through my backend instead of through the psql shell. That would be really annoying, because I would need to create a route to insert data into my places table, do a POST request with Postman for every row I need to insert, and then delete that POST route because I don't really need it.
After that I realised that maybe using the \i command (the command to execute a .sql file) from the psql shell might behave like inserting data from the server with a POST request. And it did! So now I have a database.sql file with this inside:
CREATE DATABASE my_database;
\c my_database;
SET client_encoding = 'UTF8';
create extension if not exists "uuid-ossp";
CREATE TABLE users(
    user_id UUID DEFAULT uuid_generate_v4(),
    name VARCHAR(50) NOT NULL,
    email VARCHAR(100) NOT NULL UNIQUE,
    password VARCHAR(255) NOT NULL,
    PRIMARY KEY (user_id)
);
CREATE TABLE places(
    place_id SERIAL,
    name VARCHAR(50) NOT NULL,
    address VARCHAR(50) NOT NULL,
    PRIMARY KEY (place_id)
);
INSERT INTO places(name, address) VALUES ('Salón de peñas', 'Calle 242');
INSERT INTO places(name, address) VALUES ('another place', 'another adress');
INSERT INTO places(name, address) VALUES ('another place', 'another adress');
INSERT INTO places(name, address) VALUES ('another place', 'another adress');
Now, if I need to add another row to my places table, I need to create a .sql file and execute it. That may be annoying, but it's only needed when there are special characters in the row.
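Since inserting through the backend preserved the characters, a lighter-weight alternative to a throwaway .sql file per row is a one-off parameterized insert run from Node. A sketch, assuming the same pool module shown above (the require path is an assumption; adjust it to where the module lives):
const db = require("./database/db"); // path assumed

(async () => {
  // Parameterized insert sent over the node-postgres connection, mirroring
  // the POST-route behaviour that displayed the characters correctly.
  await db.query(
    "INSERT INTO places(name, address) VALUES ($1, $2)",
    ["Salón de peñas", "Calle 242"]
  );
  await db.end(); // close the pool so the script can exit
})();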

better-sqlite3 SqliteError: NOT NULL constraint failed

I'm trying to get my username and password hash into my SQLite database, but I always get the error message: SqliteError: NOT NULL constraint failed: Users.id
This error indicates that my query is not matching my Users table, but I can't find the issue.
My dump for the Users table:
CREATE TABLE IF NOT EXISTS "Users" (
    "id" INTEGER PRIMARY KEY AUTOINCREMENT,
    "username" varchar(50) NOT NULL,
    "hash" varchar(60) NOT NULL
);
My Node code:
var Database = require('better-sqlite3')
var db = new Database('db.sqlite')
var sql = 'INSERT INTO Users (username, hash) VALUES (?,?)'
db.prepare(sql).run(username, hash)
The id field is set to be an auto-incrementing integer and the primary key, so it should set itself automatically for every new entry.
The error message says that the NOT NULL constraint on Users.id failed, but id doesn't even have a NOT NULL constraint defined.
If I instead use the command
INSERT INTO Users (username, hash) VALUES ('testname','testhash1234');
directly in the SQLite3 database it all works fine.
Am I doing something wrong or is this a bug in better-sqlite3?
How can I get around this problem?
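One quick diagnostic, offered here only as a sketch: since the hypothesis is that the query doesn't match the table, ask the database file the Node process actually opened for its stored schema. If db.sqlite isn't the file that was dumped above, or its Users table was created with an older definition, that would explain the constraint failure:
const Database = require('better-sqlite3');
const db = new Database('db.sqlite');

// Print the CREATE TABLE statement stored in the database file itself,
// to compare against the dump shown in the question.
const row = db
  .prepare("SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'Users'")
  .get();
console.log(row ? row.sql : 'No Users table in this file');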

Instagram timeline data model in Cassandra

I want to design a timeline (home feed) like Instagram's, but most samples, like "twissandra-j", use the schema below:
-- Users user is following
CREATE TABLE following (
    username text,
    followed text,
    PRIMARY KEY (username, followed)
);
-- Users who follow user
CREATE TABLE followers (
    username text,
    following text,
    PRIMARY KEY (username, following)
);
-- Materialized view of tweets created by user
CREATE TABLE userline (
    tweetid timeuuid,
    username text,
    body text,
    PRIMARY KEY (username, tweetid)
);
-- Materialized view of tweets created by user, and users she follows
CREATE TABLE timeline (
    username text,
    tweetid timeuuid,
    posted_by text,
    body text,
    PRIMARY KEY (username, tweetid)
);
In this design, every new post is inserted once per follower into timeline. If a user has 10k followers and 1000 users are working with the application, the program fails. Is there a better way?
// Insert the tweet into follower timelines
for (String follower : getFollowers(username)) {
    execute("INSERT INTO timeline (username, tweetid, posted_by, body) VALUES ('%s', %s, '%s', '%s')",
        follower,
        id.toString(),
        username,
        body);
}
I guess one of these two solutions/suggestions could help:
1) First suggestion: insert into timeline in batch mode, with for example 1000 INSERT statements per batch.
execute("
BEGIN BATCH
INSERT INTO timeline (username, tweetid, posted_by, body) VALUES ('%s', %s, '%s', '%s')", follower, id.toString(), username, body);
INSERT INTO timeline (username, tweetid, posted_by, body) VALUES ('%s', %s, '%s', '%s')", follower, id.toString(), username, body);
INSERT INTO timeline (username, tweetid, posted_by, body) VALUES ('%s', %s, '%s', '%s')", follower, id.toString(), username, body);
...
// n statements
APPLY BATCH");
Batching multiple statements saves network exchanges between the client and server, and between the server coordinator and the replicas.
One more thing: batches are atomic by default (in Cassandra 1.2 and later). In the context of a Cassandra batch operation, atomic means that if any part of the batch succeeds, all of it will; otherwise none of it will.
2) Second suggestion: perform the inserts into timeline asynchronously (with a success callback in the front-end):
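A minimal sketch of the asynchronous approach, written with the Node.js cassandra-driver for illustration (the thread's snippet is Java; client, the followers list, and the function name here are assumptions):
const cassandra = require('cassandra-driver');
const { types } = cassandra;

// Fan the tweet out to all follower timelines concurrently.
// `client` is a connected cassandra-driver Client and `followers` is the
// array of follower usernames (both assumed, as in the thread).
async function fanOutToTimelines(client, followers, username, body) {
  const query =
    'INSERT INTO timeline (username, tweetid, posted_by, body) VALUES (?, ?, ?, ?)';
  const tweetId = types.TimeUuid.now();

  // Issue every insert without awaiting each one, then wait for the whole
  // set; any failure surfaces through the rejected promise.
  await Promise.all(
    followers.map((follower) =>
      client.execute(query, [follower, tweetId, username, body], { prepare: true })
    )
  );
}
For very large follower counts it is worth capping the number of in-flight requests rather than firing all 10k at once; the Node.js driver also ships an executeConcurrent helper for that purpose.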
And of course, maybe you can combine both of them.
