How can this PostgreSQL query:
CREATE INDEX idx_lu_suggest_street_query_street ON fr_suggest_street_query (lower(f_unaccent(street))) INCLUDE (street);
be written in SQLAlchemy? So far I have tried:
Index(
    "idx_suggest_street_street",
    sa.func.lower(sa.func.f_unaccent("street")).label("street"),
    postgresql_ops={
        "street": "text_pattern_ops",
    },
)
But I am missing the INCLUDE part. How can I achieve this?
UPDATE:
I achieved the INCLUDE part using:
postgresql_include=["street"],
Still, when I run:
SELECT
indexname,
indexdef
FROM
pg_indexes
WHERE
tablename = 'lu_suggest_street_query';
The index that is created using sqlalchemy looks like:
CREATE INDEX idx_suggest_street_street_text_pattern ON public.lu_suggest_street_query USING btree (lower(f_unaccent('street'::text))) INCLUDE (street)
But it should be:
CREATE INDEX idx_suggest_street_street_text_pattern ON public.lu_suggest_street_query USING btree (lower(f_unaccent((street)::text))) INCLUDE (street)
I need to mention that I am using SQLAlchemy declarative metadata.
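The 'street'::text literal appears because the string "street" is passed to sa.func, which binds it as a value rather than a column reference. A sketch of the fix, assuming an illustrative declarative model (class and column names here are placeholders): pass the mapped column instead of the string, so SQLAlchemy renders an identifier.

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# model name and columns are illustrative -- adapt to your declarative metadata
class SuggestStreetQuery(Base):
    __tablename__ = "lu_suggest_street_query"
    id = sa.Column(sa.Integer, primary_key=True)
    street = sa.Column(sa.Text)

# passing the mapped Column (not the string "street") makes SQLAlchemy
# render an identifier, so f_unaccent receives the column, not a literal
idx = sa.Index(
    "idx_suggest_street_street",
    sa.func.lower(sa.func.f_unaccent(SuggestStreetQuery.street)),
    postgresql_include=["street"],
)
```

The resulting DDL should contain lower(f_unaccent(street)) with street as a bare column reference, plus the INCLUDE (street) clause.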
I am trying to run a simple SQL Query using Airflow provider Snowflake (1.3.0)
SnowflakeOperator(
    task_id='task',
    snowflake_conn_id='snowflake_conn',
    parameters={
        "table": "dummy_table",
    },
    sql=["delete from %(table)s"],
    autocommit=True,
    dag=dag,
)
The SQL being rendered is delete from 'dummy'. I want to get rid of the quotes, but I have tried everything and nothing seems to work.
To parametrize a table name, IDENTIFIER should be used:
To use an object name specified in a literal or variable, use IDENTIFIER().
sql=["delete from IDENTIFIER(%(table)s)"],
The query DELETE FROM 'dummy' is not correct, but DELETE FROM IDENTIFIER('dummy') will work.
CREATE TABLE dummy(id INT);
DELETE FROM 'dummy';
-- Syntax error: unexpected ''dummy''. (line 4)
DELETE FROM IDENTIFIER('dummy');
-- number of rows deleted: 0
If you are using parameters, then the binding is handled by the underlying driver, which escapes string values as quoted literals (hence the quotes). You can find more information about it in How to render a .sql file with parameters in MySqlOperator in Airflow?
Alternatively, you can use Airflow rendering (Jinja engine) with params:
SnowflakeOperator(
    task_id='task',
    snowflake_conn_id='snowflake_conn',
    params={
        "table": "dummy_table",
    },
    sql=["delete from {{ params.table }}"],
    autocommit=True,
    dag=dag,
)
This will be rendered by the Jinja engine before execution, thus the query that will be submitted to Snowflake is:
delete from dummy_table
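Airflow passes the sql field through its Jinja engine before execution; the rendering step itself can be reproduced with jinja2 directly (a sketch outside of Airflow, just to show what the template produces):

```python
from jinja2 import Template

# Airflow templates the `sql` field with Jinja; `params` is part of the context
rendered = Template("delete from {{ params.table }}").render(
    params={"table": "dummy_table"}
)
print(rendered)  # delete from dummy_table
```

Because the substitution happens as plain text templating, no quoting is added around the table name.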
I'm new to node-postgres and trying to pass column names as parameters on a table alias, but it's not working. How can I parameterize the simple query below? Take something like
const pgQuery = 'SELECT t1.YEAR as year, t1.CODE as code FROM data t1;';
and instead do something like
const pgQuery = 'SELECT t1.$1 as year, t1.$2 as code FROM data t1;';
const values = ['YEAR', 'CODE'];
Per the docs here:
https://node-postgres.com/features/queries
PostgreSQL does not support parameters for identifiers. If you need to have dynamic database, schema, table, or column names (e.g. in DDL statements) use pg-format package for handling escaping these values to ensure you do not have SQL injection!
Which takes you here:
https://www.npmjs.com/package/pg-format
Repo:
https://github.com/datalanche/node-pg-format
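What pg-format does for identifiers can be sketched in a few lines. This is only an illustration of the escaping idea behind its %I placeholder (use the real library in production, as it handles many more cases):

```javascript
// Minimal sketch of identifier quoting as done by pg-format's %I:
// double any embedded double quotes, then wrap the name in double quotes
function quoteIdent(name) {
  return '"' + String(name).replace(/"/g, '""') + '"';
}

const columns = ['YEAR', 'CODE'];
const pgQuery =
  `SELECT t1.${quoteIdent(columns[0])} AS year, ` +
  `t1.${quoteIdent(columns[1])} AS code FROM data t1;`;
// -> SELECT t1."YEAR" AS year, t1."CODE" AS code FROM data t1;
```

Identifiers are escaped into the SQL text, while ordinary values should still go through $1-style parameters.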
I want to store data in the following structure:
"id" : 100, -- primary key
"data" : [
{
"imei" : 862304021502870,
"details" : [
{
"start" : "2018-07-24 12:34:50",
"end" : "2018-07-24 12:44:34"
},
{
"start" : "2018-07-24 12:54:50",
"end" : "2018-07-24 12:56:34"
}
]
}
]
So how do I create the table schema in Cassandra for this? Thanks in advance.
There are several approaches to this, depending on your requirements for data access and modification - for example, whether you need to modify individual fields, or always update the record at once:
Declare the imei/details structure as a user-defined type (UDT), and then declare the table like this:
create table tbl (
    id int primary key,
    data set<frozen<details_udt>>);
But this is relatively hard to support in the long term, especially if you add more nested objects with different types. Plus, you can't update individual fields of the frozen records that you must use for nested collections/UDTs - with this table structure you need to replace the complete record inside the set.
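For the first approach, the UDTs referenced above might be declared like this (a sketch; type names and field types are assumptions inferred from the JSON sample):

```sql
-- inner type for one start/end interval
CREATE TYPE period_udt (
    start timestamp,
    end   timestamp
);

-- outer type holding the imei and its list of intervals
CREATE TYPE details_udt (
    imei    bigint,
    details list<frozen<period_udt>>
);
```

Note that the nested type must be frozen, which is exactly why individual fields inside it cannot be updated in place.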
Another approach - just do explicit serialization/deserialization of data into/from JSON or another format, and use a table structure like this:
create table tbl (
    id int primary key,
    data text);
The type of the data field depends on the format you'll use - you can also use blob to store binary data. But in this case you'll need to update/fetch the complete field. You can simplify things by using the Java driver's custom codecs, which take care of the conversion between your Java data structure and the desired format. See the example in the documentation for conversion to/from JSON.
I have an Object that maps column names to values. The columns to be updated are not known beforehand and are decided at run-time.
e.g. map = {col1: "value1", col2: "value2"}.
I want to execute an UPDATE query, updating a table with those columns to the corresponding values. Can I do the following? If not, is there an elegant way of doing it without building the query manually?
db.none('UPDATE mytable SET $1 WHERE id = 99', map)
is there an elegant way of doing it without building the query manually?
Yes, there is, by using the helpers for SQL generation.
You can pre-declare a static object like this:
const cs = new pgp.helpers.ColumnSet(['col1', 'col2'], {table: 'mytable'});
And then use it like this, via helpers.update:
const sql = pgp.helpers.update(data, cs) + /* WHERE clause with the condition */;
// and then execute it:
db.none(sql).then(data => {}).catch(error => {})
This approach will work with both a single object and an array of objects, and you will just append the update condition accordingly.
See also: PostgreSQL multi-row updates in Node.js
What if the column names are not known beforehand?
For that see: Dynamic named parameters in pg-promise, and note that a proper answer would depend on how you intend to cast types of such columns.
Something like this:
map = {col1: "value1", col2: "value2", id: "existingId"}
db.none("UPDATE mytable SET col1=${col1}, col2=${col2} where id=${id}", map)
I have searched a lot about sorting elements by the sum of votes (in another model), like I do in SQL here:
SELECT item.*
FROM item
LEFT JOIN (
    SELECT
        vote.item,
        SUM(vote.value) AS rating
    FROM vote
    GROUP BY vote.item
) AS res ON item.id = res.item
ORDER BY res.rating DESC;
Is there a way to do it via Waterline methods?
I think you can't do the left join with plain Waterline methods, but you can use the .query() method to execute your raw SQL.
The Sails MySQL adapter makes sum('field') conflict with sort('field'). It will generate a SQL query like:
SELECT SUM(table.field) AS field FROM table ORDER BY table.field;
But I want:
SELECT SUM(table.field) AS field FROM table ORDER BY field;
It is the same as:
SELECT SUM(table.field) AS f FROM table ORDER BY f;
My solution is to use lodash.sortBy() to process the results: https://lodash.com/docs/4.16.4#sortBy
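That post-processing step can be sketched as follows (shown with plain Array.prototype.sort so it runs without lodash; _.sortBy() plus a reverse would be equivalent):

```javascript
// rows as they might come back from the adapter, each carrying a summed field
const rows = [{ field: 3 }, { field: 10 }, { field: 1 }];

// sort descending by the aggregated value in JS instead of in SQL
const sorted = [...rows].sort((a, b) => b.field - a.field);
// -> [{ field: 10 }, { field: 3 }, { field: 1 }]
```

Spreading into a new array keeps the original result set untouched, which matters if other code still iterates it in fetch order.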