Nested subquery in FOR ALL ENTRIES - subquery

A consultant sent me this code example; here is what he expects to get:
SELECT m1~vbeln_im m1~vbelp_im m1~mblnr smbln
INTO CORRESPONDING FIELDS OF TABLE lt_mseg
FROM mseg AS m1
INNER JOIN mseg AS m2 ON m1~mblnr = m2~smbln
AND m1~mjahr = m2~sjahr
AND m1~zeile = m2~smblp
FOR ALL ENTRIES IN lt_vbfa
WHERE
AND m2~bwart = '102'
AND 0 = ( select SUM( ( CASE
when SHKZG = 'S' THEN 1
when SHKZG = 'H' THEN -1
else 0
END ) *MENGE ) MENGE
into lt_mseg-summ
from mseg
where
VBELN_IM = m1~vbeln_im
and VBELP_IM = m1~vbelp_im
).
The problem is that I don't see how this can work in current syntax. I thought about running the inner SELECT separately and using its result as a condition for the main one, but is there a proper way to write this nested construction?
As I understand it, the main query should only return rows for which the nested statement evaluates to 0. The problem here is the CASE inside the nested statement. Is that even possible in ABAP? In my opinion this check could also be done outside the main SQL query.
Any suggestions are welcome.

The logic that you were given is Native SQL (as opposed to Open SQL) and has some shortcomings that you need to be aware of:
- the statement you are showing has to be placed between EXEC SQL and ENDEXEC.
- the logic is platform dependent.
- there is no syntax checking performed between EXEC SQL and ENDEXEC.
- the execution bypasses the database buffering process, so it's slower.
Personally, I would look for a better-performing way to capture the data outside of this Native SQL approach.
If you want to move forward with this type of logic, below are a couple of links which should be helpful. There is an example select using a nested select with a case statement.
Test Program
Example Logic
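To give a rough idea, a nested SELECT with a CASE expression of the kind shown in those links could look something like the sketch below (plain SQL only, reusing the table and field names from your snippet; the exact syntax depends on the underlying database, and the statement would have to sit between EXEC SQL and ENDEXEC):
-- sketch: keep only items whose signed quantities sum to zero
SELECT m1.vbeln_im, m1.vbelp_im
  FROM mseg m1
 WHERE 0 = ( SELECT SUM( CASE
                           WHEN shkzg = 'S' THEN menge
                           WHEN shkzg = 'H' THEN -menge
                           ELSE 0
                         END )
               FROM mseg
              WHERE vbeln_im = m1.vbeln_im
                AND vbelp_im = m1.vbelp_im )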

This is probably what you need; it works at least since ABAP 7.50.
SELECT vbeln UP TO 100 ROWS
FROM vbfa
INTO TABLE @DATA(lt_vbfa).
DATA(rt_vbeln) = VALUE range_vbeln_va_tab(
  FOR GROUPS val OF <line> IN lt_vbfa
  GROUP BY ( low = <line>-vbeln ) WITHOUT MEMBERS
  ( sign = 'I' option = 'EQ' low = val-low ) ).
SELECT m1~vbeln_im, m1~vbelp_im, m1~mblnr, m2~smbln
INTO TABLE @DATA(lt_mseg)
FROM mseg AS m1
JOIN mseg AS m2
ON m1~mblnr = m2~smbln
AND m1~mjahr = m2~sjahr
AND m1~zeile = m2~smblp
WHERE m2~bwart = '102'
AND m1~vbeln_im IN ( SELECT vbelv FROM vbfa WHERE vbelv IN @rt_vbeln )
GROUP BY m1~vbeln_im, m1~vbelp_im, m1~mblnr, m2~smbln
HAVING SUM( CASE m1~shkzg WHEN 'H' THEN 1 WHEN 'S' THEN -1 ELSE 0 END * m1~menge ) = 0.
Yes, combining aggregation and FOR ALL ENTRIES in one SELECT is impossible, but you can trick the system with a range table and a subquery. Also, you don't need three joins to summarize reversed documents; your SUM subquery is redundant here.
If you need to select documents not only by delivery number but also by item position, this will certainly be more complicated.

Related

Cosmos DB paginated query with custom order by clause

I want to do a select query in Cosmos DB that returns a maximum number of results (say 50) and then gives me the continuation token so I can continue the search where I left off.
Now let's say my query has three equality conditions in my WHERE clause, e.g.
where prop1 = "a" and prop2 = "w" and prop3 = "g"
In the results that are returned, I want the records that satisfy prop1 = "a" to appear first, followed by the results that have prop2 = "w" followed by the ones with prop3 = "g".
Why do I need it? I could pull all the data into my application and sort it there, but obviously I can't fetch every record, as that would mean pulling in too much data. And if I can't order the results this way in Cosmos itself, the page of results I get back might contain only records that don't have prop1 = "a" at all. I could keep retrying until I get the ones with prop1 = "a" (which I need because I want to show those to the user first), but I might have to pull a hundred pages before the first such record appears, since I have a huge dataset sitting in my Cosmos DB.
How can I handle this scenario in Cosmos? Thanks!
So if I am understanding your question correctly, you want to accomplish this:
SELECT * FROM c
WHERE
c.prop1 = 'a'
AND
c.prop2 = 'b'
AND
c.prop3 = 'c'
ORDER BY
c.prop1, c.prop2, c.prop3
OFFSET 0 LIMIT 25
Luckily, you can now do this in Cosmos DB SQL. But there is a caveat: you have to set up a composite index on your collection to allow for this.
So, for this collection, my composite index would cover /prop1, /prop2 and /prop3, all ascending.
Now, if I wanted to change it to this:
SELECT * FROM c
WHERE
c.prop1 = 'a'
AND
c.prop2 = 'b'
AND
c.prop3 = 'c'
ORDER BY
c.prop1 DESC, c.prop2, c.prop3
OFFSET 0 LIMIT 25
I could add another composite index to cover that use case. You can see in your settings that it's an array of arrays, so you can add as many combinations as you'd like.
This should get you to where you need to be if I understood your question correctly.

How to add a condition in the "ON" clause of a "LEFT JOIN" in PonyORM

I want to run the following query in PonyORM.
SELECT af.AppFormID, af.AppFormTitle, ra.CreateGrant, ra.ReadGrant, ra.UpdateGrant, ra.DeleteGrant, ra.PrintGrant
FROM public.appforms as af left join public.roleaccesses as ra
on af.appformid = ra.appformid and ra.roleid = 2
If you look at the last part of the code, I added a condition in the "ON" clause.
I tried to write the following code in python.
query= orm.left_join((af.AppFormID, af.AppFormTitle, ra.CreateGrant, ra.ReadGrant, ra.UpdateGrant, ra.DeleteGrant, ra.PrintGrant) for af in AppForms for ra in af.RoleAccess if ra.RoleID.RoleID == id)
But, "if" is known as "WHERE" cause. How can I solve this problem?
Thank for any help.
The current version of PonyORM (0.7.6) does not support additional conditions in the LEFT JOIN clause. But I agree that this would be a useful feature.
Please add a new issue here: https://github.com/ponyorm/pony/issues
I do not guarantee that it will be implemented quickly, but eventually we will add it.
Until then, you need to write a raw SQL query:
role_id = 2
rows = db.select("""
SELECT af.AppFormID, af.AppFormTitle,
ra.CreateGrant, ra.ReadGrant,
ra.UpdateGrant, ra.DeleteGrant, ra.PrintGrant
FROM public.appforms as af
LEFT JOIN public.roleaccesses as ra
ON af.appformid = ra.appformid and ra.roleid = $role_id
""")

Kentico Repeater with Custom Query

OK Here we go.
Using Kentico 11/Portal Engine (no hot fixes)
I have a table that holds content-only page types. One field of importance is a date and time field.
I am trying to get rows out of this table that match certain month and year criteria. For instance, give me all records where Month = 2 and Year = 2018. These arguments will be passed via the query string.
I have a custom stored procedure that should receive two int (or string) arguments and return a collection of all matching rows.
I am using a RepeaterWithCustomQuery to call the procedure and handle the resulting rows. The query string arguments are named "year" and "monthnumber".
The Query
Me.PR.PREDetailSelect
When my webpart is set up in this configuration, I get an error.
In my Query, I have tried:
EXEC Proc_Custom_PRDetails @MonthNumber = ##ORDERBY##; @Year = ##WHERE##
EXEC Proc_Custom_PRDetails @MonthNumber = ##ORDERBY##, @Year = ##WHERE##
EXEC Proc_Custom_PRDetails @MonthNumber = ##ORDERBY## @Year = ##WHERE##
Any help would be appreciated (Thanks in advance Brendan). Lastly, don't get too caught up in the names of specific objects as I tried to change names to protect the innocent.
Those query macros are not meant to be used with stored procedures. The system generates the dummy condition 1=1 if you don't pass anything, so it won't break a SQL statement like the one below:
SELECT ##TOPN## ##COLUMNS##
FROM View_CMS_Tree_Joined AS V
INNER JOIN CONTENT_MenuItem AS C
ON V.DocumentForeignKeyValue = C.MenuItemID AND V.ClassName = N'CMS.MenuItem'
WHERE ##WHERE##
ORDER BY ##ORDERBY##
You need to convert your stored procedure into a plain SQL statement so that you can use these SQL macros, or call the stored procedure without parameters.
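As a rough sketch only (CONTENT_PREvent and EventDate are made-up placeholders for your actual page type table and date column), the converted query could look like this:
SELECT ##TOPN## ##COLUMNS##
FROM CONTENT_PREvent          -- placeholder: your content-only page type table
WHERE ##WHERE##
ORDER BY ##ORDERBY##
The webpart's WHERE condition property would then resolve to something like MONTH(EventDate) = 2 AND YEAR(EventDate) = 2018, built from your query string values.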
If you look at the View_CMS_Tree_Joined example above, ##TOPN## and ##WHERE## are not usable here because the system will adjust them, but you can use ##ORDERBY## and ##COLUMNS##; both must be present (I think they are passed as is):
exec proc_test ##ORDERBY##, ##COLUMNS##
Honestly, I would advise against doing this; you won't gain much by calling a stored procedure anyway.

Automapper Project().To Error A query body must end with a select

I'm trying to prevent a query against an entity from bringing back more columns than necessary. It should only bring the columns specified in the target model.
Below is my code, built following some examples to achieve my goal, but I get the syntax error "A query body must end with a select clause or a group clause" on the query line.
var studentEventsModel = from c in DbContext.StudentEvent.Project().To<StudentEventViewModel>();
Please let me know what I’m doing wrong.
public IEnumerable<StudentEventViewModel> GetStudentEventsListViewModel()
{
Mapper.CreateMap<StudentEvent, StudentEventViewModel>();
var studentEventsModel = from c in DbContext.StudentEvent.Project().To<StudentEventViewModel>();
return studentEventsModel;
}
As @hometoast mentioned, you may add a select at the end of your query, like this:
var studentEventsModel =
from c in DbContext.StudentEvent.Project().To<StudentEventViewModel>() select c;
or alternatively you may use the lambda (method) syntax, like this:
var studentEventsModel = DbContext.StudentEvent.Project().To<StudentEventViewModel>();
As to why you are seeing that error: a query expression must end with a select or group clause, as mentioned in the documentation:
A query expression must begin with a from clause and must end with a
select or group clause. Between the first from clause and the last
select or group clause, it can contain one or more of these optional
clauses: where, orderby, join, let and even additional from clauses.
You can also use the into keyword to enable the result of a join or
group clause to serve as the source for additional query clauses in
the same query expression.

Speeding up SQL Query with CASE in JOIN

I'm working on a DB2 database, using SQL in VBA to build some custom reports. I came up with the query below, but it took about 45 minutes to run and returned 80,000 rows... the table BILLPRC probably has twice that in total before the WHERE clause. I'm just wondering if there is a better way to write the query to speed it up. This might be a pretty ambiguous question, so I can explain further if you need more info.
SELECT b.BLCO# || RIGHT('00000' || b.BLACCT, 5) Acct,
       b.BLRECT Type,
       b.BLREC# Record,
       i.INAME || ' ' || i.INAME2 Desc,
       b.BLPPGM# Promo,
       SUBSTR(b.BLPEFFD, 4, 2) || SUBSTR(b.BLPEFFD, 6, 2) || SUBSTR(b.BLPEFFD, 2, 2) Eff,
       SUBSTR(b.BLPENDD, 4, 2) || SUBSTR(b.BLPENDD, 6, 2) || SUBSTR(b.BLPENDD, 2, 2) Exp,
       CASE WHEN (p.PPPRC1 = '0') THEN (p.PPPRC2)
            WHEN (p.PPPRC1 != '0') THEN (p.PPPRC1) END Price,
       CASE WHEN (p.PPPRC1 = '0') THEN (p.PPCST2)
            WHEN (p.PPPRC1 != '0') THEN (p.PPCST1) END Allow,
       i.ILASTC Cost
FROM QS36F.BILLPRC b
LEFT JOIN QS36F.PROMO p
ON b.BLREC# || b.BLPPGM# = p.PPREC || p.PPPGM#
LEFT JOIN QS36F.ITEM i
ON CASE
WHEN (b.BLRECT = 'I') AND (b.BLREC# = i.IMFGR || i.ICOLOR || i.IPATT) THEN 1
WHEN (b.BLRECT = 'P') AND (b.BLREC# = i.IPRCCD) THEN 1
END = 1
WHERE (b.BLPPGM# != '') AND (b.BLSTS != 'H')
ORDER BY (b.BLACCT)
I have a feeling there isn't, given the number of records and the CASE expressions being evaluated.
You will likely get better performance by taking your WHERE predicate (b.BLPPGM# != '') AND (b.BLSTS != 'H') and rewriting it into the join condition(s).
Try not to concatenate your fields used for joining, if possible. For example, instead of
LEFT JOIN QS36F.PROMO p
ON b.BLREC# || b.BLPPGM# = p.PPREC || p.PPPGM#
you could say
LEFT JOIN QS36F.PROMO p
ON b.BLREC# = p.PPREC
and b.BLPPGM# = p.PPPGM#
or equivalently
LEFT JOIN QS36F.PROMO p
ON (b.BLREC#, b.BLPPGM#) = (p.PPREC, p.PPPGM#)
adding in the logic from your WHERE clause:
LEFT JOIN QS36F.PROMO p
ON (b.BLREC#, b.BLPPGM#) = (p.PPREC, p.PPPGM#)
AND b.BLPPGM# != ''
AND b.BLSTS != 'H'
Of course, this assumes that the corresponding fields are the same type, or compatible types, and ideally the fields will be the same size. DB2 can often automatically convert one type to another compatible type. String types are often compatible with each other, and numeric types often with each other.
Then, most importantly, make sure you have the right indexes available. (I'm not too familiar with the S/36 mode environment, but I'm assuming you can create indexes over it.) Indexes can also be built over a derived column, i.e. an expression such as a concatenation.
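For example (the index names here are made up, and the exact column choices should follow how you actually join and filter):
-- hypothetical index names; adjust the columns to match your join and filter usage
CREATE INDEX QS36F.PROMO_IX1 ON QS36F.PROMO (PPREC, PPPGM#);
CREATE INDEX QS36F.BILLPRC_IX1 ON QS36F.BILLPRC (BLPPGM#, BLSTS, BLACCT);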
For reporting purposes, use a materialized query table (MQT), unless you really can't afford the CPU time. I know it's a bit lazy, but MQTs are meant for when you are okay with, for instance, hourly updated results and don't want to start normalizing and optimizing your database. You have working logic, and you apparently understand it, so go with it.
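A minimal sketch of such an MQT (the table name is made up, the inner SELECT is abbreviated, and the exact options available depend on your DB2 version and platform):
-- hypothetical user-maintained MQT over the report data
CREATE TABLE QS36F.BILLPRC_RPT AS
  ( SELECT b.BLCO#, b.BLACCT, b.BLRECT, b.BLREC#, b.BLPPGM#, b.BLPEFFD, b.BLPENDD
      FROM QS36F.BILLPRC b
     WHERE b.BLPPGM# != ''
       AND b.BLSTS != 'H' )   -- or your full report SELECT
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  MAINTAINED BY USER;
-- repopulate it on whatever schedule is acceptable for the report
REFRESH TABLE QS36F.BILLPRC_RPT;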
This looks like it's DB2 for i... you might want to tag it appropriately.
Assuming it is, have you used the "Run & Explain" option in iNav's Run SQL scripts to see if the DB recommends any new indexes?
