Acumatica - Printing Report Output Multiple Times

I have a Sales Order printing report with an Order Number parameter. Whatever order number we select gets printed to PDF, but currently only one copy is printed. I need the same page to appear twice in that PDF output.
Can anybody please help me with how to achieve this?

Add a number view, create a DAC for it, and cross join to it in the report, using a formula/parameter to restrict how many copies are produced (see the sketch after the view definition below).
Note that I found you MUST explicitly cast the number to an integer, or you get a runtime error in the report interpreter.
For example, SQL Server allows a view such as:
Create View NumberView as
-- Each CTE cross joins the previous level with itself, squaring the row count:
-- L0 = 2 rows, L1 = 4, L2 = 16, L3 = 256, L4 = 65536, giving numbers 1 to 65536.
WITH L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
L1 AS (SELECT 1 AS c FROM L0 AS A, L0 AS B),
L2 AS (SELECT 1 AS c FROM L1 AS A, L1 AS B),
L3 AS (SELECT 1 AS c FROM L2 AS A, L2 AS B),
L4 AS (SELECT 1 AS c FROM L3 AS A, L3 AS B),
Numbers AS (SELECT ROW_NUMBER() OVER (ORDER BY c) AS Number FROM L4)
SELECT CONVERT(INT, Numbers.Number) AS Nbr,
CONVERT(CHAR(1), '') AS TstVal
FROM Numbers
GO
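Conceptually, the report's cross join to this view is what repeats each order. A minimal sketch of the idea in plain SQL (SOOrder/OrderNbr and the hard-coded copy count of 2 are placeholders here; in the actual report you would cross join the NumberView DAC and restrict Nbr with a formula or a copies parameter):
SELECT o.OrderNbr, n.Nbr AS CopyNbr
FROM SOOrder o
CROSS JOIN NumberView n
WHERE n.Nbr <= 2 -- each order appears twice; swap 2 for a copies parameter
ORDER BY o.OrderNbr, n.Nbr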

Related

How to include 'partition by' in the TD15 PIVOT function?

Right now I have a query like this:
SELECT a, b,
SUM (CASE WHEN measure_name = 'ABC' THEN measure_qty END) OVER (PARTITION BY a, b ) AS ABCPIVOT
FROM data_app.work_test
Now that TD15 supports direct pivoting, how do I include this PARTITION BY in the PIVOT function?

DELETE FROM (SELECT ...) SAP HANA

How come this does not work and what is a workaround?
DELETE FROM
(SELECT
PKID
, a
, b)
Where a > 1
There is a Syntax Error at "(".
DELETE FROM (TABLE) where a > 1 gives the same syntax error.
I need to delete specific rows that are flagged using a rank function in my select statement.
It turns out that HANA's DELETE FROM only accepts a plain table name, not a subquery or a parenthesised expression. I have now put the table immediately after DELETE FROM, with the WHERE restrictions on the DELETE itself and a small series of self-joins of the table inside the subquery:
DELETE FROM TABLE1
WHERE x IN
(SELECT A.x
FROM (SELECT x, r1.y, r2.y, DENSE_RANK() OVER (PARTITION by r1.y, r2.y ORDER by x) AS RANK
FROM TABLE2 r0
INNER JOIN TABLE1 r1 on r0.x = r1.x
INNER JOIN TABLE1 r2 on r0.x = r2.x
WHERE r1.y = foo and r2.y = bar
) AS A
WHERE A.RANK > 1
)
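Restated generically (a sketch only, reusing the question's TABLE1/PKID/a/b names and assuming PKID uniquely identifies a row), the pattern is: rank the rows inside a subquery, then delete the keys of every row ranked after the first in its group:
DELETE FROM TABLE1
WHERE PKID IN
(SELECT PKID
FROM (SELECT PKID,
ROW_NUMBER() OVER (PARTITION BY a, b ORDER BY PKID) AS rn
FROM TABLE1) ranked
WHERE ranked.rn > 1 -- rows ranked 2..n within each (a, b) group
)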

PostgreSQL - Returning the results of multiple arbitrary sub-queries

Like the title of the question suggests, I'm attempting to take a number of arbitrary sub-queries and combine them into a single, large query.
Ideally, I'd like the data to be returned as a single record, with each column being the result of one of the sub-queries, e.g.:
| sub-query 1 | sub-query 2 | ...
|-----------------|-----------------|-----------------
| (array of rows) | (array of rows) | ...
The sub-queries themselves are built using Knex.js in a Node app and are completely arbitrary. I've come fairly close to a proper solution, but I've hit a snag.
My current implementation has the final query like so:
SELECT
array_agg(sub0.*) as s0,
array_agg(sub1.*) as s1,
...
FROM
(...) as sub0,
(...) as sub1,
...
;
This mostly works, but causes huge numbers of duplicates in the output. During my testing, I found that each record is duplicated a number of times equal to how many records would have been returned without the duplicates. For example, a sub-query that should return 10 records would instead return 100 (each record being duplicated 10 times).
I've yet to figure out why this occurs or how to fix the query to not get the issue.
So far, I've only been able to determine that:
The number of records returned by each sub-query is correct when it is run separately.
The duplicates are not caused by intersections between the sub-queries (i.e. rows that exist in more than one sub-query).
Thanks in advance.
The duplication comes from the comma in your FROM clause, which is an implicit cross join between the derived tables. Instead, just place the arbitrary queries in the select list:
with sq1 as (
values (1, 'x'),(2, 'y')
), sq2 as (
values ('a', 3), ('b', 4), ('c', 5)
)
select
(select array_agg(s.*) from (select * from sq1) s) as s0,
(select array_agg(s.*) from (select * from sq2) s) as s1
;
s0 | s1
-------------------+---------------------------
{"(1,x)","(2,y)"} | {"(a,3)","(b,4)","(c,5)"}
Alternatively, you can add a row_number to the sub-queries and use that column for a full outer join (instead of a cross join):
SELECT
array_agg(sub0.*) as s0,
array_agg(sub1.*) as s1
FROM
(SELECT row_number() OVER (), * FROM (VALUES (1, 'x'),(2, 'y')) t) as sub0
FULL OUTER JOIN
(SELECT row_number() OVER (), * FROM (VALUES ('a', 3), ('b', 4), ('c', 5)) t1) as sub1
ON sub0.row_number=sub1.row_number
;
s0 | s1
----------------------------+---------------------------------
{"(1,1,x)","(2,2,y)",NULL} | {"(1,a,3)","(2,b,4)","(3,c,5)"}
(1 row)
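To see where the original duplicates came from, here is a minimal reproduction using the same toy VALUES lists: the 2-row sub0 cross joined with the 3-row sub1 yields 6 rows before aggregation, so array_agg sees every sub0 row 3 times and every sub1 row twice.
SELECT
array_agg(sub0.*) as s0,
array_agg(sub1.*) as s1
FROM
(VALUES (1, 'x'), (2, 'y')) as sub0, -- 2 rows
(VALUES ('a', 3), ('b', 4), ('c', 5)) as sub1 -- 3 rows
;
-- s0 and s1 each contain 6 elements instead of 2 and 3.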

Computing skew in Hive?

This paper defines sample skew as
s = E[(X - E(X))^3] / Var(X)^(3/2)
What's the easiest way to compute this in Hive?
I imagine a two-pass algorithm: one pass gets E(X) and Var(X), the other computes E[(X - E(X))^3] and rolls it up.
I think you are on the right track with a two-step approach, especially if you are strictly using Hive. Here is one way to accomplish this in two steps, or as one query with a subquery:
Calculate E(X) using the OVER () clause so we can avoid aggregating the data (this is so we can later calculate E[X-E(X)]):
select x, avg(x) over () as e_x
from table;
Using the above as a subquery, calculate Var(x) and E[X-E(X)] which will aggregate the data and produce the final statistic:
select pow(avg(x - e_x), 3)/sqrt(pow(variance(x), 3))
from (select x, avg(x) over () as e_x
from table) tb
;
The above formula isn't correct, at least for Pearson's skew: it cubes the average of (x - e_x), which is zero by definition, instead of averaging the cubes.
The following works at least with Impala:
with d as (select somevar as x from yourtable where what>2),
agg as (select avg(x) as m,STDDEV_POP(x) as s,count(*) as n from d),
sk as (select avg(pow(((x-m)/s),3)) as skew from d,agg)
select skew,m,s,n from agg,sk;
I tested it via:
with dual as (select 1.0 as x),
d as (select 1*x as x from dual
union select 2*x from dual
union select 4*x from dual
union select 8*x from dual
union select 16*x from dual
union select 32*x from dual), -- generates 1, 2, 4, 8, 16, 32
agg as (select avg(x) as m,STDDEV_POP(x) as s,count(*) as n from d),
sk as (select avg(pow(((x-m)/s),3)) as skew from d,agg)
select skew,m,s,n from agg,sk;
And it gives the same answer as R:
require(moments)
skewness(c(1,2,4,8,16,32)) #gives 1.095221
See https://en.wikipedia.org/wiki/Skewness#Pearson.27s_moment_coefficient_of_skewness
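For Hive itself, the same pattern should carry over, since avg, stddev_pop and pow are all Hive built-ins and Hive supports WITH and CROSS JOIN. A sketch (yourtable/somevar are the same placeholders as above, untested in Hive):
with d as (select somevar as x from yourtable),
agg as (select avg(x) as m, stddev_pop(x) as s from d)
select avg(pow((d.x - agg.m) / agg.s, 3)) as skew
from d cross join agg;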

How to compute median value of sales figures?

In MySQL, while there is an AVG function, there is no MEDIAN. So I need a way to compute the median value for sales figures.
I understand that the median is the middle value. However, it isn't clear to me how to handle a list with an even number of items. How do you determine which value to select as the median, or is further computation needed? Thanks!
I'm a fan of including an explicit ORDER BY statement:
SELECT t1.val as median_val
FROM (
SELECT @rownum:=@rownum+1 as row_number, d.val
FROM data d, (SELECT @rownum:=0) r
WHERE 1
-- put some where clause here
ORDER BY d.val
) as t1,
(
SELECT count(*) as total_rows
FROM data d
WHERE 1
-- put same where clause here
) as t2
WHERE 1
AND t1.row_number=floor(total_rows/2)+1;
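Note that floor(total_rows/2)+1 returns the upper of the two middle values when the row count is even. A variant (a sketch, reusing the same hypothetical data table and val column) that averages the two middle rows for even counts, which is the usual definition of the median:
SELECT AVG(t1.val) as median_val
FROM (
SELECT @rownum:=@rownum+1 as row_num, d.val
FROM data d, (SELECT @rownum:=0) r
WHERE 1
-- put some where clause here
ORDER BY d.val
) as t1,
(
SELECT count(*) as total_rows
FROM data d
WHERE 1
-- put same where clause here
) as t2
WHERE 1
-- odd count: both expressions pick the single middle row;
-- even count: they pick the two middle rows, which AVG() then averages
AND t1.row_num IN (floor((total_rows+1)/2), floor((total_rows+2)/2));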
