Assuming I have a master entity (let's have it be a user) and a detail entity (let's go with addresses), how can I filter the master entity over values from its details?
Going with the user/address example, let's say we want to find all users who have an address in Berlin. In SQL, the query would probably be something like this:
SELECT * FROM user AS u
WHERE EXISTS (
    SELECT 0 FROM address AS a
    WHERE a.user_id = u.id
      AND a.city = 'Berlin'
)
To make this query in SQL, one has to use an alias (user AS u) and use it later in the a.user_id = u.id part. Is it possible to do something similar in diesel?
EDIT: I'm using diesel 2.0, rust 1.64 and here's the relevant section from schema.rs:
diesel::table! {
addresses (id) {
id -> Int4,
user_id -> Int4,
city -> Nullable<Varchar>,
}
}
diesel::table! {
users (id) {
id -> Int4,
name -> Varchar,
}
}
diesel::joinable!(addresses -> users (user_id));
diesel::allow_tables_to_appear_in_same_query!(
addresses,
users,
);
For this specific query an alias is not required, as each table appears exactly once. Generally speaking, diesel does provide an alias! macro, which lets you define table aliases for later use.
As for the corresponding query: such queries translate more or less literally into diesel's DSL using the provided functions:
let sub_query = addresses::table
    .select(0.into_sql::<diesel::sql_types::Integer>())
    .filter(addresses::user_id.eq(users::id))
    .filter(addresses::city.eq("Berlin"));

let result = users::table
    .filter(diesel::dsl::exists(sub_query))
    .load::<CompatibleType>(&mut conn)?;
I have a many to many relationship set up with services and service_categories. Each has a table, and there is a third table to handle the relationship (junction table) called service_service_categories. I have created them like this:
CREATE TABLE services(
service_id SERIAL,
name VARCHAR(255),
summary VARCHAR(255),
profileImage VARCHAR(255),
userAgeGroup VARCHAR(255),
userType TEXT,
additionalNeeds TEXT[],
experience TEXT,
location POINT,
price NUMERIC,
PRIMARY KEY (service_id),
UNIQUE (name)
);
CREATE TABLE service_categories(
service_category_id SERIAL,
name TEXT,
description VARCHAR(255),
PRIMARY KEY (service_category_id),
UNIQUE (name)
);
CREATE TABLE service_service_categories(
service_id INT NOT NULL,
service_category_id INT NOT NULL,
PRIMARY KEY (service_id, service_category_id),
FOREIGN KEY (service_id) REFERENCES services(service_id) ON UPDATE CASCADE,
FOREIGN KEY (service_category_id) REFERENCES service_categories(service_category_id) ON UPDATE CASCADE
);
Now, in my application I would like to add a service_category to a service from a select list for example, at the same time as I create or update a service. In my node js I have this post route set up:
// Create a service
router.post('/', async( req, res) => {
try {
console.log(req.body);
const { name, summary } = req.body;
const newService = await pool.query(
'INSERT INTO services(name,summary) VALUES($1,$2) RETURNING *',
[name, summary]
);
res.json(newService);
} catch (err) {
console.log(err.message);
}
})
How should I change this code to also add a row to the service_service_categories table, when the new service has not yet been created and therefore has no serial id yet?
If anyone could talk me through the approach for this I would be grateful.
Thanks.
You can do this in the database by adding a trigger to the services table to insert a row into the service_service_categories that fires on row insert. The "NEW" keyword in the trigger function represents the row that was just inserted, so you can access the serial ID value.
https://www.postgresqltutorial.com/postgresql-triggers/
Something like this:
CREATE TRIGGER insert_new_service_trigger
AFTER INSERT
ON services
FOR EACH ROW
EXECUTE PROCEDURE insert_new_service();
Then your trigger function looks something like this (noting that the trigger function needs to be created before the trigger itself):
CREATE OR REPLACE FUNCTION insert_new_service()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
BEGIN
-- check to see if service_id has been created
IF NEW.service_id NOT IN (SELECT service_id FROM service_service_categories) THEN
INSERT INTO service_service_categories(service_id)
VALUES(NEW.service_id);
END IF;
RETURN NEW;
END;
$$;
However, in your example data structure there doesn't seem to be a good way to link the service_categories.service_category_id serial value to this new row; you may need to change the structure a bit to accommodate it.
I managed to get it working to a point with multiple inserts and by changing the schema a bit on the services table, adding a category_id INT column:
ALTER TABLE services
ADD COLUMN category_id INT;
Then in my node query I did this and it worked:
const newService = await pool.query(
`
with ins1 AS
(
INSERT INTO services (name,summary,category_id)
VALUES ($1,$2,$3) RETURNING service_id, category_id
),
ins2 AS
(
INSERT INTO service_service_categories (service_id,service_category_id) SELECT service_id, category_id FROM ins1
)
select * from ins1
`,
[name, summary, category_id]
);
Ideally I want to allow multiple categories, so the category_id column on the services table would become category_ids INT[], an array of ids.
How would I put the second insert into a loop over each integer in the array, so it creates a new service_service_categories row for each id in the array?
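One way to avoid a loop in application code entirely is to let Postgres expand the array inside the second CTE with unnest(). This is only a sketch based on the schema above (the helper name insertServiceWithCategories and the $3::int[] cast are my own; adjust column names to match your tables):

```javascript
// Sketch: build a parameterized query that inserts one service and then
// fans its new service_id out across every id in an int[] of categories.
const insertServiceWithCategories = (name, summary, categoryIds) => ({
  text: `
    WITH ins1 AS (
      INSERT INTO services (name, summary)
      VALUES ($1, $2)
      RETURNING service_id
    ),
    ins2 AS (
      -- unnest($3::int[]) yields one row per category id,
      -- so each id gets its own junction-table row
      INSERT INTO service_service_categories (service_id, service_category_id)
      SELECT ins1.service_id, unnest($3::int[]) FROM ins1
    )
    SELECT * FROM ins1
  `,
  values: [name, summary, categoryIds],
});

// The resulting object can be passed straight to pool.query(...), e.g.:
// await pool.query(insertServiceWithCategories('Tutoring', 'Maths help', [1, 2, 3]));
```

This keeps the whole operation in one round trip and one transaction, with the array passed as a single parameter.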
I have this table:
ClassBookHistoric
-----------------
Id (Primary Key),
Class_Id (Unique key, foreign key of table Class),
BookUnit_Id (Unique key, foreign key of table BookUnits)
Book_Id
Status
I need to make queries to search data like this: SELECT ... WHERE Class_Id = (parameter) AND Book_Id = (parameter) AND Status = 1 (active)
I'm studying indexes and I think it is necessary to create an index on the columns I will use to search data (Class_Id, Book_Id and Status) to improve performance. Is there a way to create an index over a group of columns: (Class_Id, Book_Id, Status)? If it's possible, how can I create such a composite index in node.js/adonis?
Adonis.js uses knex.js to define column types and other modifiers, as you can see from the docs.
So, as an example based on your schema (not fully working, just to demonstrate):
'use strict'
const Schema = use('Schema')
class ClassBookHistoric extends Schema {
up () {
this.create('class_books', (table) => {
table.increments()
table.integer('class_id').notNullable().unique()
table.integer('book_unit_Id').notNullable().unique()
table.index(['class_id','book_unit_Id'], 'class_book_index');
table.timestamps()
})
}
down () {
this.drop('class_books')
}
}
module.exports = ClassBookHistoric
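For reference, table.index with all three columns from the question produces a composite index equivalent to something like the following SQL (the table and column names here are assumptions based on the question's schema):

```sql
-- Composite index over the three columns used in the WHERE clause
CREATE INDEX class_book_status_index
  ON class_book_historics (class_id, book_id, status);
```

With the columns in this order, the index can also serve queries that filter on class_id alone or on (class_id, book_id).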
If I try to get all users with a certain role like this:
_db.Select<UserAuthCustom>(x => x.Roles.Contains("Bloggers"));
Then it works as expected and returns the users.
If I try to do it with the query builder like so:
var q = _db.From<UserAuthCustom>()
.Where(x => x.Roles.Contains("Bloggers"))
.Limit(1);
Then it throws an exception because it thinks "bloggers" is a column and has translated this into something like WHERE bloggers IN (null).
Is it possible to do something like LIKE '%\"Blogger\"%' on the blobbed field?
You can't use typed queries against blobbed columns like the Roles collection, which is serialized into a single column using the configured complex type serializer; this defaults to JSV format for all RDBMSs except PostgreSQL, which uses JSON.
If you want to perform server-side queries on the Roles collection I'd recommend persisting them in distinct tables:
container.Register<IAuthRepository>(c =>
new OrmLiteAuthRepository<UserAuthCustom, UserAuthDetails>(c.Resolve<IDbConnectionFactory>()) {
UseDistinctRoleTables = true
});
That way you can use a standard join query to select all users in a specific role:
var q = db.From<UserAuthCustom>()
.Join<UserAuthRole>((u,r) => r.UserAuthId == u.Id)
.Where<UserAuthRole>(x => x.Role == "Bloggers");
Alternatively you would need to create a Custom SQL query against the blobbed Roles column as a string, e.g.:
q.Where("Roles LIKE @role", new { role = "%Blogger%" });
Or using typed column names in Custom SQL Expressions:
q.Where(q.Column<UserAuthCustom>(x => x.Roles) + " LIKE @role",
    new { role = "%Blogger%" });
I am using the nodejs pg package. I have created some simple parameterized queries using the following format:
var client = new Client({user: 'brianc', database: 'test'});
client.on('drain', client.end.bind(client)); //disconnect client when all queries are finished
client.connect();
var query = client.query({
text: 'SELECT name FROM users WHERE email = $1',
values: ['brianc@example.com']
}, function(err, result) {
console.log(result.rows[0].name) // output: brianc
});
But now I have some more complex queries to write where I am creating a copy of a record with a new name and description like the following:
var sNewName = 'new name', sNewDescription = 'new description';
INSERT INTO testtable (
name,
description,
col3name,
col4name,
col5name
) (
SELECT
sNewName,
sNewDescription,
col3name,
col4name,
col5name
FROM testtable
WHERE
id = 24
) RETURNING *;
On the pg wiki they say the following regarding Parameterized Queries:
A parameterized query allows you "pass arguments" to a query, providing a barrier to SQL injection attacks.
Parameters may not be DDL:
select name from emp where emp_id=$1 – legal
select $1 from emp where emp_id=$2 – illegal – column cannot be parameter
select name from $1 where emp_id=$2 – illegal – table cannot be parameter
select name from $1.emp where emp_id=$2 – illegal – schema cannot be parameter
How then, is it possible to do the above query for copying a record, as a parameterized query?
I am using postgresql 9.5.3 and pg 6.1.2.
Thank you for your time.
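Going by the rules quoted above, the copy query is actually fine as a parameterized query: the table and column names stay literal in the SQL text, and only the three values (the new name, the new description, and the source row's id) travel through the $n placeholders. A sketch under those assumptions (the helper name copyRecordQuery is made up; the columns follow the example above):

```javascript
// Sketch: identifiers stay hard-coded in the SQL text; only values are
// parameterized, so nothing illegal (column/table names) is a parameter.
const copyRecordQuery = (newName, newDescription, sourceId) => ({
  text: `
    INSERT INTO testtable (name, description, col3name, col4name, col5name)
    SELECT $1, $2, col3name, col4name, col5name
    FROM testtable
    WHERE id = $3
    RETURNING *
  `,
  values: [newName, newDescription, sourceId],
});

// Usable with the same client.query(...) pattern as the simple example:
// client.query(copyRecordQuery('new name', 'new description', 24), callback);
```

The INSERT ... SELECT shape is unchanged; the only difference from the literal version is that sNewName, sNewDescription, and the id become $1, $2, and $3.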
I am still learning Neo4j, using the browser console with REST transactions to perform queries. Given the following scenario, how would one go about accomplishing this task:
I have 3 users in the database
2 users are connected with a relationship :Met label.
The 3rd user does not have any relationship connections
I want to be able to create a Cypher query to do the following:
IFF a :Met relationship exists between the user with whom we are making the query context and the desired user, return all of the properties for the desired user.
If no relationship exists between the user with whom we are making the query context and the desired user, only return back a public subset of data (avatar, first name, etc.)
Is there a way to execute a single query which checks whether this relationship exists and, if so, returns all User information, and if not, returns only a subset of properties?
Thanks all!
In this query, p1 is the "query context", and p2 is all other people. A result row will only have the bar property if p1 and p2 have met.
MATCH (p1:Person { name: 'Fred' }),(p2:Person)
USING INDEX p1:Person(name)
WHERE p1 <> p2
RETURN
CASE WHEN (p1)-[:Met]-(p2)
THEN { name: p2.name, foo: p2.foo, bar: p2.bar }
ELSE { name: p2.name, foo: p2.foo }
END AS result;
For efficiency, this query assumes that you have first created an index on :Person(name).
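On the Neo4j versions where the USING INDEX hint above is supported, that index can be created beforehand with the legacy syntax:

```cypher
CREATE INDEX ON :Person(name)
```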
You could do something like this, whereby you match the first person and then optionally match the second person connected to the first via the :MET relationship. If the relationship exists, the result set you return can include more sensitive data.
match (p1:Person {name: '1'})
with p1
optional match (p1)-[r:MET]->(p2:Person {name: '2'})
with p1, case when r is not null then
{ name: p1.name, birthday: p1.birthday }
else
{ name: p1.name }
end as data
return data
EDIT:
Or maybe this is a better fit instead: match both users and, if a relationship exists, return more data for the second person.
match (p1:Person {name: '1'}), (p2:Person {name: '2'})
with p1, p2
optional match (p1)-[r:MET]->(p2)
with p2, case when r is not null then
{ name: p2.name, birthday: p2.birthday }
else
{ name: p2.name }
end as data
return data