select column_name::date, count(*) from table_name group by column_name::date
What is the equivalent of this SQL query in Sequelize?
I couldn't find what to do when there is a "double colon" (a cast) in a PostgreSQL query.
Thanks to a_horse_with_no_name's comment, I decided to use
sequelize.literal("cast(time_column_name as date)")
in the grouping section, and the final code takes this form:
ModelName.findAndCountAll({
attributes: [
[sequelize.literal("cast(time_column_name as date)"), "time_column_name"],
],
group: sequelize.literal("cast(time_column_name as date)"),
})
So it generates two SQL queries (because of the findAndCountAll() function):
SELECT count(*) AS "count"
FROM "table_name"
GROUP BY cast(time_column_name as date);
AND
SELECT cast(time_column_name as date) AS "time_column_name"
FROM "table_name"
GROUP BY cast(time_column_name as date);
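For intuition, here is a plain-JavaScript sketch of what that GROUP BY cast(time_column_name as date) computes, a count per calendar day, over made-up in-memory rows (the data and the `day` alias are hypothetical):

```javascript
// Sketch: per-day counts, the same aggregation the generated SQL performs.
// Rows here are hypothetical; timestamps are ISO-8601 strings.
const rows = [
  { time_column_name: "2021-05-10T08:30:00Z" },
  { time_column_name: "2021-05-10T17:45:00Z" },
  { time_column_name: "2021-05-11T09:00:00Z" },
];

const countsByDay = rows.reduce((acc, row) => {
  const day = row.time_column_name.slice(0, 10); // like cast(... as date)
  acc[day] = (acc[day] || 0) + 1;
  return acc;
}, {});

console.log(countsByDay); // → { '2021-05-10': 2, '2021-05-11': 1 }
```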
In my query, I want to use Sequelize with a left join and a subquery.
select category.id, category.name, images.tag_id
from table.category
left outer join (select tag_id, category_id, s3_image_url, alt_tag from table.images as images where images.tag_id = 23 and images.name ILIKE '%fresh%') as images on category.id = images.category_id
where category.name in ('fruit', 'leaf','tree')
I want to convert this query into Sequelize.
After my conversion attempt it's still not working. Is it possible to use a left join and a subquery in the same Sequelize query? If so, please share an example.
I need to get data on a daily/weekly/monthly basis, so I used the date_trunc() function to get this type of record. I wrote the psql query, but I need to convert it into TypeORM code, as I'm new to the TypeORM stack. Below is the query:
select date_trunc('day', e."createdAt") as production_to_month, count(id) as count from events e
where e."createdAt" between '2021-05-10' and '2021-05-17' and e."type" = 'LOGIN'
group by date_trunc('day', e."createdAt")
order by date_trunc('day', e."createdAt") asc
I need to convert this to TypeORM.
You need to use the query builder to achieve the desired result.
The query would be:
return await getRepository(Events)
.createQueryBuilder("e")
.select(`date_trunc('day', e."createdAt")`, "production_to_month")
.addSelect("count(id)", "count")
.where(`e."createdAt" between '2021-05-10' and '2021-05-17'`)
.andWhere(`e."type" = 'LOGIN'`)
.groupBy(`date_trunc('day', e."createdAt")`)
.orderBy(`date_trunc('day', e."createdAt")`, "ASC")
.printSql()
.getRawMany();
My table schema is:
CREATE TABLE users
(user_id BIGINT PRIMARY KEY,
user_name text,
email_ text);
I inserted below rows into the table.
INSERT INTO users(user_id, email_, user_name)
VALUES(1, 'abc@test.com', 'ABC');
INSERT INTO users(user_id, email_, user_name)
VALUES(2, 'abc@test.com', 'ZYX ABC');
INSERT INTO users(user_id, email_, user_name)
VALUES(3, 'abc@test.com', 'Test ABC');
INSERT INTO users(user_id, email_, user_name)
VALUES(4, 'abc@test.com', 'C ABC');
To search the user_name column using the LIKE operator with '%...%' patterns, I created this index:
CREATE CUSTOM INDEX idx_users_user_name ON users (user_name)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {
'mode': 'CONTAINS',
'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
'case_sensitive': 'false'};
Problem 1:
When I execute the query below, it returns only 3 records instead of 4.
select *
from users
where user_name like '%ABC%';
Problem 2:
When I use the query below, it gives this error:
ERROR: com.datastax.driver.core.exceptions.InvalidQueryException:
ORDER BY with 2ndary indexes is not supported.
Query:
select *
from users
where user_name like '%ABC%'
ORDER BY user_name ASC;
My requirement is to filter on user_name and order by user_name.
The first query does work correctly for me using cassandra:latest, which is now cassandra:3.11.3. You might want to double-check the inserted data (or just recreate it from scratch using the CQL statements you provided).
The second one gives you enough info - ordering by secondary indexes is not possible in Cassandra. You might have to sort the result set in your application.
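Sorting in the application can be sketched like this in plain JavaScript (the row shape here is hypothetical; adapt it to whatever your driver returns):

```javascript
// Sketch: sort the rows matched by the '%ABC%' query in the application,
// since Cassandra cannot ORDER BY a secondary-indexed column.
// The rows below are hypothetical sample data.
const rows = [
  { user_id: 2, user_name: "ZYX ABC" },
  { user_id: 1, user_name: "ABC" },
  { user_id: 3, user_name: "Test ABC" },
  { user_id: 4, user_name: "C ABC" },
];

rows.sort((a, b) => a.user_name.localeCompare(b.user_name));

console.log(rows.map(r => r.user_name));
// → [ 'ABC', 'C ABC', 'Test ABC', 'ZYX ABC' ]
```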
That being said, I would not recommend running this setup in real apps. At any additional scale (when you have many records) this will be disastrous performance-wise. I won't go into much detail since maybe you already understand this and SO is not a wiki/documentation site, so here is a link.
I have a 100-million-row table, and I would like to know how many unique values I have in the CTAC column.
I tried :
SELECT COUNT(*)
FROM ( SELECT CTAC
FROM my_table
GROUP BY CTAC
HAVING COUNT(*) > 1)
but this gives me an error :
sql.AnalysisException : cannot recognize input near '<EOF>' in subquery source
Can we do a subquery in Spark? If so, how?
Which query should I use to solve my problem?
Try it differently:
println(dataFrame.select("CTAC").distinct.count)
(Your original query fails to parse because the subquery in FROM has no alias, which Spark's SQL parser requires. Also note that HAVING COUNT(*) > 1 would count only the values that appear more than once, not all unique values.)
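For intuition, the distinct count that dataFrame.select("CTAC").distinct.count performs can be sketched in plain JavaScript over a hypothetical in-memory sample:

```javascript
// Sketch: count distinct values, the same result distinct.count produces.
// The sample values are made up for illustration.
const ctacValues = ["A", "B", "A", "C", "B", "A"];
const uniqueCount = new Set(ctacValues).size;

console.log(uniqueCount); // → 3
```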
I want to create a temporary table in a LINQ query. I have searched for a solution but haven't succeeded.
Here "Node" is the temporary table (a recursive common table expression), and "Organization.TblOrganizationUnits" is a table in my database.
In LINQ, how can I create a temporary table and perform the different joins and the union operation of the query below?
My SQL query is:
string query = string.Format(@"WITH Node (OrganizationUnitId, UnitName, ParentUnitId)
AS (
SELECT Organization.TblOrganizationUnits.OrganizationUnitId, Organization.TblOrganizationUnits.UnitName , Organization.TblOrganizationUnits.ParentUnitId
FROM Organization.TblOrganizationUnits
WHERE OrganizationUnitId ={0}
UNION ALL
SELECT Organization.TblOrganizationUnits.OrganizationUnitId, Organization.TblOrganizationUnits.UnitName, Organization.TblOrganizationUnits.ParentUnitId
FROM Organization.TblOrganizationUnits
INNER JOIN Node
ON Organization.TblOrganizationUnits.ParentUnitId = Node.OrganizationUnitId
)
SELECT OrganizationUnitId, UnitName,ParentUnitId FROM Node
where OrganizationUnitId not in (SELECT ParentUnitId FROM Node)
option (maxrecursion 0); ", OrganizationUnitId);
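For intuition, here is what the recursive CTE computes, sketched in plain JavaScript over hypothetical in-memory rows: it collects the subtree rooted at a given unit, then keeps only the leaves (units whose id is not any row's parent):

```javascript
// Sketch: the recursive-CTE logic over made-up organization units.
// Row shape and data are hypothetical.
const units = [
  { id: 1, name: "Root", parentId: null },
  { id: 2, name: "A", parentId: 1 },
  { id: 3, name: "B", parentId: 1 },
  { id: 4, name: "A1", parentId: 2 },
];

// Recursive part of the CTE: all descendants of a unit.
function descendants(rootId) {
  const direct = units.filter(u => u.parentId === rootId);
  return direct.flatMap(u => [u, ...descendants(u.id)]);
}

// Anchor (the root unit) plus the recursive members.
const subtree = [units.find(u => u.id === 1), ...descendants(1)];

// Final SELECT: keep units whose id is not any subtree row's parent.
const parentIds = new Set(subtree.map(u => u.parentId));
const leaves = subtree.filter(u => !parentIds.has(u.id));

console.log(leaves.map(u => u.name)); // → [ 'A1', 'B' ]
```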