Subselect as a column in a DB2 query

I wrote a query that uses a subselect of the same original select, but with a different condition. I cannot execute it because this error appears:
"A COLUMN OR EXPRESSION IDENTIFIED IN A HAVING CLAUSE IS NOT VALID SQLCODE = -119, SQLSTATE = 42803, DRIVER = 4.22.36 "
How can I correct this query?
SELECT A.COL1,
count(A.COL2) AS YYY,
sum(A.COL3) AS XXX,
(SELECT count(COL1) AS WWW
from SCH.TAB01
where COL4=A.COL4
AND COL5=A.COL5
AND COL1=A.COL1
AND COL4 = date(days(current date) - 1)
AND COL3 > 5
GROUP BY COL1) AS OOO
from SCH.TAB01 as A
where A.COL4 = date(days(current date) - 1)
GROUP BY A.COL1

Remove the GROUP BY COL1 in the subselect. A subselect used as a column must return a single value, so the plain count(COL1) aggregate is enough on its own.
BTW you don’t need date(days(current date) - 1); a simple CURRENT DATE - 1 DAY would suffice.
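Applying both suggestions, the corrected query would look like this (an untested sketch of the answer's fix, using the question's own tables and columns):
SELECT A.COL1,
count(A.COL2) AS YYY,
sum(A.COL3) AS XXX,
(SELECT count(COL1)
from SCH.TAB01
where COL4=A.COL4
AND COL5=A.COL5
AND COL1=A.COL1
AND COL4 = CURRENT DATE - 1 DAY
AND COL3 > 5) AS OOO
from SCH.TAB01 as A
where A.COL4 = CURRENT DATE - 1 DAY
GROUP BY A.COL1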

Related

Compare blank string and null spark sql

I am writing an SQL query that joins two tables. The problem I am facing is that the column on which I am joining is blank ("" or " ") in one table and null in the other.
Table A
id | col
1  | (blank)
2  | (blank)
3  | SG
Table B
id | col
a  | null
b  | null
c  | SG
source_alleg = spark.sql("""
SELECT A.*,B.COL as COLB FROM TABLEA A LEFT JOIN TABLEB B
ON A.COL = B.COL
""")
For my use case, blank values and null are the same. I want to do something like trim(a.col) that converts blank values to null, so that the join finds all the matches.
Output:
id | col                  | colb
1  | either null or blank | either null or blank
2  | either null or blank | either null or blank
3  | SG                   | SG
In SQL, NULLs are ignored during a join unless you use an outer join or full join.
More information: https://www.geeksforgeeks.org/difference-between-left-right-and-full-outer-join/
If you want to convert the nulls to a string, you can just use an if:
select
if(isnull(trim(col1)),"yourstring", col1),
if(isnull(trim(col2)),"yourstring", col2)
from T;
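If blanks and NULLs should simply behave the same in the join (as the question asks), another option is to normalize both sides with nullif(trim(...)): blanks become NULL, fall out of the equality match like any NULL, and survive the left join with a null colb, while real values such as SG still match. A sketch using the question's own tables:
source_alleg = spark.sql("""
SELECT A.*, B.col AS colb
FROM TABLEA A
LEFT JOIN TABLEB B
ON nullif(trim(A.col), '') = nullif(trim(B.col), '')
""")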

How to select only the very next string after the target string, regardless of punctuation?

I have a df that looks like this:
id query
1 select * from table1 where col1 = 1
2 select a.columns FROM table2 a
I want to select only the string (the table, if you know SQL) after the string FROM into a new column. FROM can be spelled with different capitalizations (i.e. From, from, FROM, etc.).
How do I select the string directly after the FROM, and nothing beyond it?
I tried:
df['tableName'] = df['query'].str.extract('[^from]*$')
but this is not working. I am not sure if I should make the entire df lowercase right off the bat.
New df should look like this:
id query tableName
1 select * from table1 where col1 = 1 table1
2 select a.columns FROM table2 a table2
Thank you in advance.
You can try
df['tableName'] = df['query'].str.extract('(?i)from ([^ ]*)')
(?i) means ignore case.
print(df)
id query tableName
0 1 select * from table1 where col1 = 1 table1
1 2 select a.columns FROM table2 a table2
This will get you your answer without a regex and accounts for all capitalizations of "from":
df['Table_Name'] = df['query'].apply(lambda x : x.lower().split('from')[1]).apply(lambda x : x.split()[0])

Hive: How to use conditional statements to execute a different query based on a result

I have the query select col1, col2 from view1 and I want to execute it only when (select columnvalue from table1) > 0; otherwise, do nothing.
if (select columnvalue from table1) > 0
select col1, col2 from view1
else
do nothing
How can I achieve this in a single Hive query?
If the check query returns a scalar value (a single row), you can cross join with the check result and filter using the > 0 condition:
with check_query as (
select count(*) cnt
from table1
)
select *
from view1 t
cross join check_query c
where c.cnt>0
;
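Applied to the question's own check, the same pattern would look like this (a sketch assuming select columnvalue from table1 returns a single row):
with check_query as (
select columnvalue
from table1
)
select t.col1, t.col2
from view1 t
cross join check_query c
where c.columnvalue > 0
;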

Filling in NULLS with previous records - Netezza SQL

I am using Netezza SQL on Aginity Workbench and have the following data:
id DATE1 DATE2
1 2013-07-27 NULL
2 NULL NULL
3 NULL 2013-08-02
4 2013-09-10 2013-09-23
5 2013-12-11 NULL
6 NULL 2013-12-19
I need to fill in all the NULL values in DATE1 with preceding values in the DATE1 field that are filled in. With DATE2, I need to do the same, but in reverse order. So my desired output would be the following:
id DATE1 DATE2
1 2013-07-27 2013-08-02
2 2013-07-27 2013-08-02
3 2013-07-27 2013-08-02
4 2013-09-10 2013-09-23
5 2013-12-11 2013-12-19
6 2013-12-11 2013-12-19
I only have read access to the data, so creating tables or views is out of the question.
How about this?
select
id
,last_value(date1 ignore nulls) over (
order by id
rows between unbounded preceding and current row
) date1
,first_value(date2 ignore nulls) over (
order by id
rows between current row and unbounded following
) date2
from table1 -- table name assumed; the question doesn't name the source table
You can manually calculate this as well, rather than relying on the windowing functions.
with chain as (
select
this.*,
prev.date1 prev_date1,
case when prev.date1 is not null then abs(this.id - prev.id) else null end prev_distance,
next.date2 next_date2,
case when next.date2 is not null then abs(this.id - next.id) else null end next_distance
from
Table1 this
left outer join Table1 prev on this.id >= prev.id
left outer join Table1 next on this.id <= next.id
), min_distance as (
select
id,
min(prev_distance) min_prev_distance,
min(next_distance) min_next_distance
from
chain
group by
id
)
select
chain.id,
chain.prev_date1,
chain.next_date2
from
chain
join min_distance on
min_distance.id = chain.id
and chain.prev_distance = min_distance.min_prev_distance
and chain.next_distance = min_distance.min_next_distance
order by chain.id
If you're unable to calculate the distance between IDs by subtraction, just replace the ordering scheme by a row_number() call.
I think Netezza supports the order by clause for max() and min(). So, you can do:
select max(date1) over (order by id) as date1,
min(date2) over (order by id desc) as date2
. . .
EDIT:
In Netezza, you may be able to do this with last_value() and first_value():
select last_value(date1 ignore nulls) over (order by id rows between unbounded preceding and 1 preceding) as date1,
first_value(date2 ignore nulls) over (order by id rows between 1 following and unbounded following) as date2
Netezza doesn't seem to support IGNORE NULLs on LAG(), but it does on these functions.
I've only tested this in Oracle so hopefully it works in Netezza:
Fiddle:
http://www.sqlfiddle.com/#!4/7533f/1/0
select id,
coalesce(date1, t1_date1, t2_date1) as date1,
coalesce(date2, t3_date2, t4_date2) as date2
from (select t.*,
t1.date1 as t1_date1,
t2.date1 as t2_date1,
t3.date2 as t3_date2,
t4.date2 as t4_date2,
row_number() over(partition by t.id order by t.id) as rn
from tbl t
left join tbl t1
on t1.id < t.id
and t1.date1 is not null
left join tbl t2
on t2.id > t.id
and t2.date1 is not null
left join tbl t3
on t3.id < t.id
and t3.date2 is not null
left join tbl t4
on t4.id > t.id
and t4.date2 is not null
order by t.id) x
where rn = 1
Here's a way to fill in the NULL dates using self-joins: take the greatest preceding date1 and the smallest following date2. This query should work on most databases:
select t1.id, max(t2.date1), min(t3.date2)
from tbl t1
join tbl t2 on t1.id >= t2.id
join tbl t3 on t1.id <= t3.id
group by t1.id
http://www.sqlfiddle.com/#!4/acc997/2

Cassandra: Query with where clause containing greather- or lesser-than (< and >)

I'm using Cassandra 1.1.2. I'm trying to convert an RDBMS application to Cassandra. In my RDBMS application I have the following table, called table1:
| Col1 | Col2 | Col3 | Col4 |
Col1: String (primary key)
Col2: String (primary key)
Col3: Bigint (index)
Col4: Bigint
This table contains over 200 million records. The most frequently used query is something like:
Select * from table where col3 < 100 and col3 > 50;
In Cassandra I used the following statements to create the table:
create table table1 (primary_key varchar, col1 varchar,
col2 varchar, col3 bigint, col4 bigint, primary key (primary_key));
create index on table1(col3);
I changed the primary key to an extra column (I calculate the key inside my application).
After importing a few records, I tried to execute the following CQL:
select * from table1 where col3 < 100 and col3 > 50;
The result is:
Bad Request: No indexed columns present in by-columns clause with Equal operator
The query select col1,col2,col3,col4 from table1 where col3 = 67 works.
Google said there is no way to execute that kind of query. Is that right? Any advice on how to create such a query?
Cassandra indexes don't actually support sequential access; see http://www.datastax.com/docs/1.1/ddl/indexes for a good quick explanation of where they are useful. But don't despair; the more classical way of using Cassandra (and many other NoSQL systems) is to denormalize, denormalize, denormalize.
It may be a good idea in your case to use the classic bucket-range pattern, which lets you use the recommended RandomPartitioner and keep your rows well distributed around your cluster, while still allowing sequential access to your values. The idea in this case is that you would make a second dynamic columnfamily mapping (bucketed and ordered) col3 values back to the related primary_key values. As an example, if your col3 values range from 0 to 10^9 and are fairly evenly distributed, you might want to put them in 1000 buckets of range 10^6 each (the best level of granularity will depend on the sort of queries you need, the sort of data you have, query round-trip time, etc). Example schema for cql3:
CREATE TABLE indexotron (
rangestart int,
col3val int,
table1key varchar,
PRIMARY KEY (rangestart, col3val, table1key)
);
When inserting into table1, you should also insert a corresponding row in indexotron, with rangestart set to the start of the bucket that col3val falls into (with buckets of 10^6, rangestart = int(col3val / 1000000) * 1000000). Then when you need to enumerate all rows in table1 with col3 > X, you need to query up to 1000 buckets of indexotron, but all the col3vals within each bucket will be sorted. Example query to find all table1.primary_key values for which table1.col3 < 4021 (using a bucket size of 1000 here to keep the example readable):
SELECT * FROM indexotron WHERE rangestart = 0 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 1000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 2000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 3000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 4000 AND col3val < 4021 ORDER BY col3val;
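For completeness, the corresponding write path might look like this (a hedged sketch reusing the bucket size of 1000 from the example above; the table1 column values are made up):
INSERT INTO table1 (primary_key, col1, col2, col3, col4)
VALUES ('some_key', 'a', 'b', 4020, 99);
-- mirror col3 into the index table; rangestart = int(4020 / 1000) * 1000 = 4000
INSERT INTO indexotron (rangestart, col3val, table1key)
VALUES (4000, 4020, 'some_key');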
If col3 is always known to hold small values/ranges, you may be able to get away with a simpler table that also maps back to the initial table, e.g.:
create table table2 (col3val int, table1key varchar,
primary key (col3val, table1key));
and use
insert into table2 (col3val, table1key) values (55, 'foreign_key');
insert into table2 (col3val, table1key) values (55, 'foreign_key3');
select * from table2 where col3val = 51;
select * from table2 where col3val = 52;
...
Or
select * from table2 where col3val in (51, 52, ...);
This may be OK if you don't have too large a range. (You could get the same effect with your secondary index as well, but secondary indexes aren't generally recommended.) You could theoretically parallelize it "locally on the client side" as well.
It seems the "Cassandra way" is to have some key like "userid" that you use as the first part of all your queries, so you may need to rethink your data model. Then you can have queries like select * from table1 where userid='X' and col3val > 3, and it can work (assuming a clustering key on col3val).
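A minimal sketch of that kind of model (the userid partition key and the table name are illustrative assumptions, not part of the question's schema):
create table table1_by_user (
userid varchar,
col3val bigint,
col4 bigint,
primary key (userid, col3val)
);
-- range predicates on the clustering column work within a single partition
select * from table1_by_user where userid = 'X' and col3val > 50 and col3val < 100;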
