Case when not equal in jOOQ

I have two CASE statements in an ORDER BY.
One sorts FIELDB DESC if FIELDA matches a pre-defined list of values. (Unrelated, but since I see no way to express WHEN IN ("val1", "val2"), I define each value as its own WHEN.)
queryBuilder.addOrderBy(case_(MYTABLE.FIELDA)
    .when("val1", MYTABLE.FIELDB)
    .when("val2", MYTABLE.FIELDB)
    .desc());
However, I need to sort everything that doesn't match those values by FIELDB ASC.
The problem is, I can't see a way to convert this SQL to jOOQ:
CASE WHEN `mytable`.`fielda` != "val1" THEN `mytable`.`fieldb` END ASC

The case_(value) API is used to model the standard SQL <simple case>, not the <searched case>. Your expression produces:
CASE fielda WHEN val1 THEN fieldb WHEN val2 THEN fieldb END
It doesn't model what you seemingly intended (the <searched case>):
CASE WHEN fielda = val1 THEN fieldb WHEN fielda = val2 THEN fieldb END
Why not just use the API for the <searched case>, then, instead?
when(MYTABLE.FIELDA.ne("val1"), MYTABLE.FIELDB)
Both syntaxes are also documented in the manual.
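For completeness, here is a minimal sketch of that approach applied to the original ORDER BY (assuming a static import of org.jooq.impl.DSL.when and the same generated MYTABLE metadata as in the question):
// Rows where FIELDA <> 'val1' sort by FIELDB ascending; matching rows
// yield NULL from the CASE expression and group together.
queryBuilder.addOrderBy(
    when(MYTABLE.FIELDA.ne("val1"), MYTABLE.FIELDB).asc());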

Related

Reference join on a combined key in Azure Stream Analytics

I'm trying to filter data based on a reference table with a combined key. I actually found a solution that seems to work:
SELECT
    i.id
  , i.timestamp
  , i.PropertyName
  , i.PropertyValue
FROM iothub AS i
LEFT JOIN Reference AS R
  ON CONCAT(i.id, '|', i.PropertyName) = R.uid
WHERE R.keepIt = 1
But if I do this, I get a warning that my query contains a JOIN with no key selector, which will be translated into a CROSS JOIN.
I tested the method and it seems to produce the correct results, but I'm afraid there may be side effects later on from the CROSS JOIN. Or can I just ignore this Azure warning, as it does not apply in my case?
The CONCAT(i.id, '|', i.PropertyName) = R.uid is not a key selector, since the left side of the equality is an expression and not a column reference.
So this will be translated into a CROSS JOIN followed by a filter, as the warning suggests.
This is only a warning and does not affect the functional correctness of the result.
You can project the expression as a column before doing the reference data join, and then it will be a proper key lookup join. Here is what your example query would look like:
SELECT
    i.id
  , i.timestamp
  , i.PropertyName
  , i.PropertyValue
FROM (SELECT id, timestamp, PropertyName, PropertyValue,
             uid = CONCAT(id, '|', PropertyName)
      FROM iothub) AS i
LEFT JOIN Reference AS R
  ON i.uid = R.uid
WHERE R.keepIt = 1
Of course, the sub-select can also be put into a separate query step, as sketched below.
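In Stream Analytics, a separate step is expressed with a WITH clause. A minimal sketch of the same query split into a named step (reusing the iothub and Reference inputs from the example) could look like this:
-- Step 1: project the combined key as a regular column.
WITH iothubKeyed AS (
    SELECT id, timestamp, PropertyName, PropertyValue,
           CONCAT(id, '|', PropertyName) AS uid
    FROM iothub
)
-- Step 2: the join key is now a plain column reference, i.e. a key lookup join.
SELECT
    i.id
  , i.timestamp
  , i.PropertyName
  , i.PropertyValue
FROM iothubKeyed AS i
LEFT JOIN Reference AS R
  ON i.uid = R.uid
WHERE R.keepIt = 1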

jOOQ Query OrderBy as String

I'm getting the order by clause as a String from the application configuration.
Example
String orderByString = "NAME DESC, NUMBER ASC";
Now I want to use this order by in jOOQ query:
Result<KampagneRecord> records = repository.dsl()
    .selectFrom(KAMPAGNE)
    .orderBy(orderByString)
    .fetch();
Unfortunately orderBy does not accept a String.
Is there a way to add the order by clause to the query?
You could use the fact that jOOQ does not validate your plain SQL templating, and just wrap your string in DSL.field(String):
Result<KampagneRecord> records = repository.dsl()
    .selectFrom(KAMPAGNE)
    .orderBy(field(orderByString))
    .fetch();
Of course, you will have to make sure that syntactical correctness is guaranteed, and SQL injection is prevented.
Some edge cases that rely on jOOQ being able to transform your SQL's ORDER BY clause might stop working, but in your simple query example, this would not apply.
An alternative solution, in very simple cases, is to preprocess your string. It seems as though this would work:
// Needed imports:
// import static org.jooq.impl.DSL.field;
// import java.util.List;
// import java.util.stream.Collectors;
// import java.util.stream.Stream;
// import org.jooq.Field;
// import org.jooq.SortField;
// import org.jooq.SortOrder;

String orderByString = "NAME DESC, NUMBER ASC";

// Split the string into individual "FIELD [ASC|DESC]" expressions,
// then map each one to a jOOQ SortField.
List<SortField<?>> list =
Stream.of(orderByString.split(","))
      .map(String::trim)
      .map(s -> s.split(" +"))
      .map(s -> {
          Field<?> field = field(s[0]);
          return s.length == 1
              ? field.sortDefault()
              : field.sort("DESC".equalsIgnoreCase(s[1])
                    ? SortOrder.DESC
                    : SortOrder.ASC);
      })
      .collect(Collectors.toList());

System.out.println(list);
This list can now be passed to the orderBy() clause.
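For example, plugging the parsed list into the original query (a sketch, reusing the repository and KAMPAGNE names from the question):
Result<KampagneRecord> records = repository.dsl()
    .selectFrom(KAMPAGNE)
    .orderBy(list)
    .fetch();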

How to aggregate a field in BQL for a complex query

I have a BQL query joining three tables as follows:
foreach (PXResult<GLTran, Branch, xTACOpenSourceDetail> rec in
    PXSelectJoin<GLTran,
        InnerJoin<Branch,
            On<GLTran.branchID, Equal<Branch.branchID>>,
        InnerJoin<xTACOpenSourceDetail,
            On<Branch.branchCD, Equal<xTACOpenSourceDetail.string03>,
                And<xTACOpenSourceDetail.openSourceName, Equal<Constants.openSourceName>,
                And<xTACOpenSourceDetail.dataID, Equal<Constants.privateer>>>>>>,
        Where<Branch.branchCD, NotEqual<Required<Branch.branchCD>>,
            And<GLTran.posted, Equal<True>,
            And<GLTran.ledgerID, Equal<Required<GLTran.ledgerID>>,
            And<GLTran.tranDate, GreaterEqual<Required<GLTran.tranDate>>>>>>,
        OrderBy<Asc<xTACOpenSourceDetail.string01, Asc<GLTran.batchNbr>>>>
    .Select(Base, osdBranch.String03, ledger.LedgerID, tacsmlm.Date01))
I want to add one aggregated field, namely the sum of GLTran.CuryDebitAmt grouped by GLTran.BatchNbr and Branch.BranchCD.
I can easily do this in SQL using the SUM OVER functionality as follows:
SELECT SUM(GLTran.CuryDebitAmt) OVER (PARTITION BY GLTran.BatchNbr, Branch.BranchCD) AS 'BatchTotal'
    , GLTran.*
    , Branch.*
    , xTACOpenSourceDetail.*
FROM GLTran
INNER JOIN Branch
    ON GLTran.branchID = Branch.branchID
    AND Branch.CompanyID = GLTran.CompanyID
INNER JOIN xTACOpenSourceDetail
    ON Branch.branchCD = xTACOpenSourceDetail.string03
    AND xTACOpenSourceDetail.openSourceName = 'TAC FM Map Company Branch'
    AND xTACOpenSourceDetail.dataID = 'Privateer'
    AND xTACOpenSourceDetail.CompanyID = GLTran.CompanyID
WHERE Branch.branchCD <> '000 0000'
    AND GLTran.posted = 1
    AND GLTran.ledgerID = 6
    AND GLTran.tranDate >= '08/03/2017'
    AND GLTran.CompanyID = 2
ORDER BY xTACOpenSourceDetail.string01 ASC
    , GLTran.batchNbr ASC
...but I have no idea how to add this single summed field in BQL. Any help is appreciated.
You will use a PXSelectGroupBy, and in the Aggregate of your BQL indicate which fields should "SUM" their values. Any field not called out will return its MAX value.
If you search for SUM< in the Acumatica source, you can find plenty of BQL examples. Here is one from ARPaymentEntry: only two fields (curyAdjdAmt and adjAmt) will contain a SUM, while all other fields returned will be the MAX.
SOAdjust other = PXSelectGroupBy<SOAdjust,
    Where<SOAdjust.voided, Equal<False>,
        And<SOAdjust.adjdOrderType, Equal<Required<SOAdjust.adjdOrderType>>,
        And<SOAdjust.adjdOrderNbr, Equal<Required<SOAdjust.adjdOrderNbr>>,
        And<Where<SOAdjust.adjgDocType, NotEqual<Required<SOAdjust.adjgDocType>>,
            Or<SOAdjust.adjgRefNbr, NotEqual<Required<SOAdjust.adjgRefNbr>>>>>>>>,
    Aggregate<GroupBy<SOAdjust.adjdOrderType,
        GroupBy<SOAdjust.adjdOrderNbr,
        Sum<SOAdjust.curyAdjdAmt,
        Sum<SOAdjust.adjAmt>>>>>>
    .Select(this, adj.AdjdOrderType, adj.AdjdOrderNbr, adj.AdjgDocType, adj.AdjgRefNbr);
Another alternative would be to create a PXProjection that computes the sums by group, and then reference the projection DAC instead of the base table in your regular BQL select (see the sketch below). I don't know how the two options compare performance-wise; it's just another option.
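A minimal sketch of that projection approach, assuming a hypothetical GLTranBatchTotal DAC (the class name, grouping fields, and attribute choices are illustrative, not taken from the question):
// Hypothetical projection DAC: one row per (BatchNbr, BranchID) with the summed amount.
[PXProjection(typeof(Select4<GLTran,
    Aggregate<GroupBy<GLTran.batchNbr,
        GroupBy<GLTran.branchID,
        Sum<GLTran.curyDebitAmt>>>>>))]
[Serializable]
public class GLTranBatchTotal : IBqlTable
{
    public abstract class batchNbr : IBqlField { }
    [PXDBString(15, IsUnicode = true, BqlField = typeof(GLTran.batchNbr))]
    public virtual string BatchNbr { get; set; }

    public abstract class branchID : IBqlField { }
    [PXDBInt(BqlField = typeof(GLTran.branchID))]
    public virtual int? BranchID { get; set; }

    // Because of the Sum<> in the projection, this field carries the group total.
    public abstract class curyDebitAmt : IBqlField { }
    [PXDBDecimal(BqlField = typeof(GLTran.curyDebitAmt))]
    public virtual decimal? CuryDebitAmt { get; set; }
}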

SQLAlchemy: Referencing labels in SELECT subqueries

I'm trying to figure out how to replicate the below query in SQLAlchemy
SELECT c.company_id AS company_id,
    (SELECT policy_id FROM associative_table at WHERE at.company_id = c.company_id) AS policy_id_ref,
    (SELECT `default` FROM policy p WHERE p.policy_id = policy_id_ref) AS `default`
FROM company c;
Note that this is a stripped-down, basic example of what I'm really dealing with. The actual schema supports data and relationship versioning that requires the subqueries to include additional conditions, sorting, and limiting, making it impractical (if not impossible) for them to be joins.
The crux of the problem is in how the second subquery relies on policy_id_ref -- the value obtained from the first subquery. In SQLAlchemy, this is effectively what I have now:
ct = aliased(classes.company)
at = aliased(classes.associative_table)
pt = aliased(classes.policy)

policy_id_ref = session.query(at.policy_id).\
    filter(at.company_id == ct.company_id).\
    label('policy_id_ref')

policy_default = session.query(pt.default).\
    filter(pt.id == 'policy_id_ref').\
    label('default')

query = session.query(ct.company_id, policy_id_ref, policy_default)
The pull from the "company" table works fine as does the first subquery that retrieves the "policy_id_ref" column. The problem is the second subquery that has to reference that "policy_id_ref" column. I don't know how to write its filter in such a way that it literally renders "policy_id_ref" in the resulting query, to match the label of the first subquery.
Suggestions?
Thanks in advance
You can write your query as
select(
Companies.company_id,
AssociativeTable.policy_id.label('policy_id_ref'),
Policy.default.label('policy_default'),
).select_from(
Companies,
).join(
AssociativeTable,
AssociativeTable.company_id == Companies.company_id,
).join(
Policy,
AssociativeTable.policy_id == Policy.id
)
but in case you need a reference to a label from a subquery, use literal_column:
from sqlalchemy import func, select, literal_column
from sqlalchemy.dialects.postgresql import JSONB

session.query(
    func.array_agg(
        literal_column('batch_info'),
        type_=JSONB,
    ).label('history')
).select_from(
    select(
        func.jsonb_build_object(
            'batch_id', AccountingQueueBatch.id,
            'batch_label', AccountingQueueBatch.label,
        ).label('batch_info')
    ).select_from(
        AccountingQueueBatch,
    )
)
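Applied to the original example, a minimal sketch (reusing the ct/at/pt aliases from the question) would render the first subquery's label verbatim into the second subquery's WHERE clause:
from sqlalchemy import literal_column

policy_id_ref = session.query(at.policy_id).\
    filter(at.company_id == ct.company_id).\
    label('policy_id_ref')

# literal_column() emits the text as-is (no quoting, no bind parameter),
# so the rendered SQL matches the label of the first subquery.
policy_default = session.query(pt.default).\
    filter(pt.id == literal_column('policy_id_ref')).\
    label('default')

query = session.query(ct.company_id, policy_id_ref, policy_default)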

Is it legit to store CQL tuples with null components in Cassandra 3.x

I have to store a protocol buffer structure in Cassandra 3.x. It is defined in a .proto file as:
message Attribute
{
    required string key = 1;
    oneof value {
        int64 integerValue = 2;
        float floatValue = 3;
        string stringValue = 4;
    }
}
To store multiple Attributes, I was thinking about this CQL definition:
CREATE TABLE ... attributes map<text, tuple<int, float, text>> ...
and in each tuple, 2 of the 3 components would actually be null. I haven't tested this syntax yet, but are there any downsides to this approach? Maybe there is a better way, i.e. User Defined Types?
Let's try this out. I'll start with a simple table containing a valuemap column of type map<text, frozen<tuple<int,float,text>>> as you have above:
CREATE TABLE tupleTest (
    key text,
    value text,
    valuemap map<text, FROZEN<tuple<int,float,text>>>,
    PRIMARY KEY (key));
I'll INSERT some data:
INSERT INTO tupletest (key,value,valuemap) VALUES ('1','A',{'a':(0,0.0,'hi')});
INSERT INTO tupletest (key,value,valuemap) VALUES ('2','B',{'b':(0,null,'hi')});
INSERT INTO tupletest (key,value,valuemap) VALUES ('3','C',{'c':(null,null,'hi')});
And then I'll SELECT it, just to see:
aploetz@cqlsh:stackoverflow> SELECT * FROM tupletest ;

 key | value | valuemap
-----+-------+---------------------------
   3 |     C | {'c': (None, None, 'hi')}
   2 |     B | {'b': (0, None, 'hi')}
   1 |     A | {'a': (0, 0, 'hi')}

(3 rows)
The main apprehension about explicitly INSERTing NULL values into Cassandra is that in "normal" columns they actually create tombstones. But since we are not setting an entire column to NULL, merely an element in a tuple (nested inside a map), this is not the case. In fact, they show up as None. And when I view the underlying SSTables, I also do not see evidence that a tombstone has been written.
Normally, I'd say that explicitly INSERTing a NULL into Cassandra is a terrible, terrible idea. But in this case, it shouldn't cause you any issues. Now, as to whether or not this is considered "legit" or good practice...well, my data modeling senses do not approve. I would find another way to represent the absence of a value in a tuple type (the user defined type you mention is one option; see the sketch below), as someone (the developer who follows you) could see this and interpret it as being "ok" to explicitly INSERT NULLs into other column values.
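For instance, a minimal sketch of the UDT alternative raised in the question (the type and column names are illustrative): only the field that applies is ever bound, so no explicit NULLs need to be written.
-- Hypothetical UDT mirroring the protobuf 'oneof'; set only one field per value.
CREATE TYPE attribute_value (
    intvalue bigint,
    floatvalue float,
    stringvalue text);

CREATE TABLE attributes_by_key (
    key text,
    attributes map<text, FROZEN<attribute_value>>,
    PRIMARY KEY (key));

-- Unset UDT fields are simply absent from the literal.
INSERT INTO attributes_by_key (key, attributes)
VALUES ('1', {'a': {stringvalue: 'hi'}});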
