Get all rows if keyword is null else return matching - node.js

I have a plpgsql function in Postgres. It works fine when the keyword is not null and returns the matching results, but when the keyword is null I want to ignore the filter and return arbitrary rows.
CREATE OR REPLACE FUNCTION get_all_companies(_keyword varchar(255))
RETURNS TABLE(
   id INTEGER,
   name VARCHAR,
   isactive boolean
) AS $$
BEGIN
   RETURN QUERY
   SELECT c.id, c.name, c.isactive
   FROM   companydetail AS c
   WHERE  c.name ~* _keyword
   LIMIT  50;
END; $$
LANGUAGE plpgsql;

Check whether the parameter is NULL or empty:
RETURN QUERY
SELECT c.id, c.name, c.isactive
FROM   companydetail AS c
WHERE  _keyword IS NULL
   OR  _keyword = ''::varchar(255)
   OR  c.name ~* _keyword
LIMIT  50;

@jahuuar provided a simple and elegant solution that solves this with a single SELECT (also skipping empty strings if you need that). You don't need plpgsql, or even a function, for this.
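For reference, a minimal sketch of that single-query approach as a plain SQL function (my own sketch mirroring the snippet above, not @jahuuar's exact answer):

CREATE OR REPLACE FUNCTION get_all_companies(_keyword varchar(255))
RETURNS TABLE(id integer, name varchar, isactive boolean) AS
$func$
   SELECT c.id, c.name, c.isactive
   FROM   companydetail c
   WHERE  _keyword IS NULL
      OR  _keyword = ''
      OR  c.name ~* _keyword
   LIMIT  50;
$func$ LANGUAGE sql;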
If you stay with plpgsql, you can optimize performance:
CREATE OR REPLACE FUNCTION get_all_companies(_keyword varchar(255))
RETURNS TABLE(id INTEGER, name VARCHAR, isactive boolean) AS
$func$
BEGIN
   IF _keyword <> '' THEN  -- excludes NULL and empty string
      RETURN QUERY
      SELECT c.id, c.name, c.isactive
      FROM   companydetail AS c
      WHERE  c.name ~* _keyword
      LIMIT  50;
   ELSE
      RETURN QUERY
      SELECT c.id, c.name, c.isactive
      FROM   companydetail AS c
      LIMIT  50;
   END IF;
END
$func$ LANGUAGE plpgsql;
Postgres can use separate, optimized plans for the two distinct queries this way: a trigram GIN index scan for the first query (you need the matching index, of course - see links below) and a sequential scan for the second. And PL/pgSQL saves query plans when the function is executed repeatedly in the same session.
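For the first branch to use an index at all, the table needs a trigram index. A minimal sketch (the index name is my own; pg_trgm and gin_trgm_ops are the actual extension and operator class):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX companydetail_name_trgm_idx
   ON companydetail
   USING gin (name gin_trgm_ops);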
Related:
Best way to check for "empty or null value"
Difference between LIKE and ~ in Postgres
PostgreSQL LIKE query performance variations
Difference between language sql and language plpgsql in PostgreSQL functions

Related

MssqlRow to json string without knowing structure and data type at compile time [duplicate]

Using PostgreSQL I can produce multiple rows of JSON objects:
select (select ROW_TO_JSON(_) from (select c.name, c.age) as _) as jsonresult from employee as c
This gives me this result:
{"age":65,"name":"NAME"}
{"age":21,"name":"SURNAME"}
But in SqlServer when I use the FOR JSON AUTO clause it gives me an array of json objects instead of multiple rows.
select c.name, c.age from customer c FOR JSON AUTO
[{"age":65,"name":"NAME"},{"age":21,"name":"SURNAME"}]
How can I get the same result format in SQL Server?
By constructing separate JSON in each individual row:
SELECT (SELECT [age], [name] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
FROM customer
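Applied to the sample data from the question, each row comes back with its own JSON object:

{"age":65,"name":"NAME"}
{"age":21,"name":"SURNAME"}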
There is an alternative form that doesn't require you to know the table structure (but likely has worse performance because it may generate a large intermediate JSON):
SELECT [value] FROM OPENJSON(
(SELECT * FROM customer FOR JSON PATH)
)
Without needing to know the structure, and with better performance:
SELECT c.id, jdata.*
FROM customer c
CROSS APPLY (
    SELECT * FROM customer jc WHERE jc.id = c.id
    FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
) jdata (jdata)
Same as Barak Yellin's answer, but lazier:
1 - Create this proc:
CREATE PROC PRC_SELECT_JSON(@TBL VARCHAR(100), @COLS VARCHAR(1000) = 'D.*') AS BEGIN
    EXEC('
        SELECT X.O FROM ' + @TBL + ' D
        CROSS APPLY (
            SELECT ' + @COLS + '
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
        ) X (O)
    ')
END
2 - Use it with either all columns or specific columns:
CREATE TABLE #TEST ( X INT, Y VARCHAR(10), Z DATE )
INSERT #TEST VALUES (123, 'TEST1', GETDATE())
INSERT #TEST VALUES (124, 'TEST2', GETDATE())
EXEC PRC_SELECT_JSON '#TEST'
EXEC PRC_SELECT_JSON '#TEST', 'X, Y'
If you're using PHP, add SET NOCOUNT ON; as the first statement in the proc (why?).
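A sketch of where that statement goes (same proc as above, just with the extra line):

ALTER PROC PRC_SELECT_JSON(@TBL VARCHAR(100), @COLS VARCHAR(1000) = 'D.*') AS BEGIN
    SET NOCOUNT ON;  -- suppress "N rows affected" messages that can confuse some client drivers
    EXEC('
        SELECT X.O FROM ' + @TBL + ' D
        CROSS APPLY (
            SELECT ' + @COLS + '
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
        ) X (O)
    ')
END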

Is it possible to chain subsequent queries' where clauses in Dapper based on the results of a previous query in the same connection?

Is it possible to use .QueryMultiple (or some other method) in Dapper and use the results of each former query in the where clause of the next query, without having to run each query individually, get the id, then .Query again, get the id, and so on?
For example,
string sqlString = @"select tableA_id from tableA where tableA_lastname = @lastname;
                     select tableB_id from tableB WHERE tableB_id = tableA_id";
db.QueryMultiple(sqlString, new { lastname = "smith" });
Is something like this possible with Dapper or do I need a view or stored procedure to accomplish this? I can use multiple joins for one SQL statement, but in my real query there are 7 joins, and I didn't think I should return 7 objects.
Right now I'm just using object.
You can store the result of each previous query in a table variable, select from it to return the rows, and reuse it in the next query. For example:
DECLARE @TableA AS TABLE (
    tableA_id INT
    -- ... all other columns you need
);

INSERT @TableA
SELECT tableA_id
FROM   tableA
WHERE  tableA_lastname = @lastname;

SELECT *               -- first result set: the rows from tableA
FROM   @TableA;

SELECT tableB_id       -- second result set: the matching rows from tableB
FROM   tableB
JOIN   @TableA a ON tableB_id = a.tableA_id;

SQLAlchemy: Referencing labels in SELECT subqueries

I'm trying to figure out how to replicate the below query in SQLAlchemy
SELECT c.company_id AS company_id,
       (SELECT policy_id FROM associative_table at WHERE at.company_id = c.company_id) AS policy_id_ref,
       (SELECT `default` FROM policy p WHERE p.policy_id = policy_id_ref) AS `default`
FROM company c;
Note that this is a stripped down, basic example of what I'm really dealing with. The actual schema supports data and relationship versioning that requires the subqueries to include additional conditions, sorting, and limiting, making it impractical (if not impossible) for them to be joins.
The crux of the problem is in how the second subquery relies on policy_id_ref -- the value obtained from the first subquery. In SQLAlchemy, this is effectively what I have now:
ct = aliased(classes.company)
at = aliased(classes.associative_table)
pt = aliased(classes.policy)
policy_id_ref = session.query(at.policy_id).\
filter(at.company_id == ct.company_id).\
label('policy_id_ref')
policy_default = session.query(pt.default).\
filter(pt.id == 'policy_id_ref').\
label('default')
query = session.query(ct.company_id,policy_id_ref,policy_default)
The pull from the "company" table works fine as does the first subquery that retrieves the "policy_id_ref" column. The problem is the second subquery that has to reference that "policy_id_ref" column. I don't know how to write its filter in such a way that it literally renders "policy_id_ref" in the resulting query, to match the label of the first subquery.
Suggestions?
Thanks in advance
You can write your query as
select(
Companies.company_id,
AssociativeTable.policy_id.label('policy_id_ref'),
Policy.default.label('policy_default'),
).select_from(
Companies,
).join(
AssociativeTable,
AssociativeTable.company_id == Companies.company_id,
).join(
Policy,
AssociativeTable.policy_id == Policy.id
)
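That renders roughly the following SQL (table and column names are assumed from the model classes above):

SELECT companies.company_id,
       associative_table.policy_id AS policy_id_ref,
       policy."default" AS policy_default
FROM companies
JOIN associative_table ON associative_table.company_id = companies.company_id
JOIN policy ON associative_table.policy_id = policy.id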
But if you need to reference a label from a subquery, use literal_column:
from sqlalchemy import func, select, literal_column
from sqlalchemy.dialects.postgresql import JSONB  # JSONB needs its own import

session.query(
    func.array_agg(
        literal_column('batch_info'),
        type_=JSONB,  # tell SQLAlchemy the type of the aggregated column
    ).label('history')
).select_from(
    select(
        func.jsonb_build_object(
            'batch_id', AccountingQueueBatch.id,
            'batch_label', AccountingQueueBatch.label,
        ).label('batch_info')
    ).select_from(
        AccountingQueueBatch,
    ).subquery()  # SQLAlchemy 1.4+ requires an explicit subquery here
)
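Roughly the SQL this renders (assuming the table behind AccountingQueueBatch is named accounting_queue_batch):

SELECT array_agg(batch_info) AS history
FROM (
    SELECT jsonb_build_object('batch_id', accounting_queue_batch.id,
                              'batch_label', accounting_queue_batch.label) AS batch_info
    FROM accounting_queue_batch
) AS anon_1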

Does Cassandra allow user defined function in where clause?

I created a user defined function fStringToDouble which takes a string as an argument and returns a double. This user defined function works fine in a select statement.
SELECT applieddatetime, fStringToDouble(variablevalue) from my_table WHERE locationid='xyz' and applieddatetime >= '2016-08-22' AND applieddatetime < '2016-08-23' ;
When I put this user defined function in the where clause, I get a syntax error: "no viable alternative at input".
SELECT applieddatetime FROM my_table WHERE locationid='xyz' AND applieddatetime >= '2016-08-22' AND applieddatetime < '2016-08-23' AND fStringToDouble(variablevalue) < 6.0;
What is wrong with the above query? Is there any built-in function to cast String to Double in Cassandra?
You cannot use user defined functions in WHERE clauses; only a limited set of comparison and range operators is allowed there.
If you want to know more about what you can do in WHERE clauses, you can have a look at this post: http://www.datastax.com/dev/blog/a-deep-look-to-the-cql-where-clause
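A common workaround (a sketch, not the only option) is to apply the UDF in the select list and do the < 6.0 comparison client-side:

-- fetch the converted value; filter on it in application code
SELECT applieddatetime, fStringToDouble(variablevalue) AS value_as_double
FROM my_table
WHERE locationid = 'xyz'
  AND applieddatetime >= '2016-08-22'
  AND applieddatetime < '2016-08-23';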

How to optimize DELETE .. NOT IN .. SUBQUERY in Firebird

I have this kind of delete query:
DELETE
FROM SLAVE_TABLE
WHERE ITEM_ID NOT IN (SELECT ITEM_ID FROM MASTER_TABLE)
Is there any way to optimize this?
You can use EXECUTE BLOCK to scan the detail table sequentially and delete the records that have no matching master record.
EXECUTE BLOCK
AS
    DECLARE VARIABLE C CURSOR FOR
        (SELECT d.id
         FROM detail d
         LEFT JOIN master m ON d.master_id = m.id
         WHERE m.id IS NULL);
    DECLARE VARIABLE I INTEGER;
BEGIN
    OPEN C;
    WHILE (1 = 1) DO
    BEGIN
        FETCH C INTO :I;
        IF (ROW_COUNT = 0) THEN
            LEAVE;
        DELETE FROM detail WHERE id = :I;
    END
    CLOSE C;
END
(NOT) IN can usually be optimized by using (NOT) EXISTS instead.
DELETE
FROM SLAVE_TABLE st
WHERE NOT EXISTS (SELECT 1 FROM MASTER_TABLE m WHERE m.ITEM_ID = st.ITEM_ID)
I am not sure what you are trying to do here, but to me this query indicates that you should be using foreign keys to enforce this kind of constraint, not running queries to clean up the mess afterwards.
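For instance, a foreign key with cascading deletes would keep SLAVE_TABLE consistent automatically (the constraint name below is made up):

ALTER TABLE SLAVE_TABLE
    ADD CONSTRAINT FK_SLAVE_MASTER
    FOREIGN KEY (ITEM_ID) REFERENCES MASTER_TABLE (ITEM_ID)
    ON DELETE CASCADE;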
