PL/SQL Join Collection Object problems

I am working with an Oracle 11g database, release 11.2.0.3.0 - 64 bit production
I have written the following procedure which uses a cursor to collect batches of benefit_ids (which are simply of type NUMBER) from a table called benefit_info. For each benefit_id within each batch, I need to obtain the associated customers and then perform various calculations etc. So far I have the following:
CREATE OR REPLACE PROCEDURE ben_correct(in_bulk_collect_limit IN PLS_INTEGER DEFAULT 1000)
IS
    TYPE ben_identity_rec IS RECORD
    (
        life_scd_id NUMBER,
        benefit_id  NUMBER
    );
    TYPE ben_identity_col IS TABLE OF ben_identity_rec INDEX BY PLS_INTEGER;
    life_col ben_identity_col;
    ben_id NUMBER;

    CURSOR benefit_cur
    IS
        SELECT benefit_id FROM benefit_info;

    TYPE benefit_ids_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
    benefit_ids benefit_ids_t;

    PROCEDURE get_next_set_of_incoming(out_benefit_ids OUT NOCOPY benefit_ids_t)
    IS
    BEGIN
        FETCH benefit_cur
        BULK COLLECT INTO out_benefit_ids
        LIMIT in_bulk_collect_limit;
    END;
BEGIN
    OPEN benefit_cur;
    LOOP
        get_next_set_of_incoming(benefit_ids);
        /*
        The code below is too slow as each benefit_id is considered
        individually. Want to change the FOR LOOP into a LEFT JOIN of benefit_ids.
        */
        FOR indx IN 1 .. benefit_ids.count LOOP
            ben_id := benefit_ids(indx);
            SELECT c.life_scd_id, c.benefit_id
            BULK COLLECT INTO life_col
            FROM customer c
            WHERE c.benefit_id = ben_id;
            -- Now do further processing with life_col
        END LOOP;
        EXIT WHEN benefit_ids.count = 0;
    END LOOP;
    CLOSE benefit_cur;
END;
/
As indicated in the code above, the FOR indx IN 1 .. benefit_ids.count LOOP is VERY slow, particularly as there are millions of benefit_ids. However, I am aware I can replace the entire FOR LOOP with something like:
SELECT c.life_scd_id, c.benefit_id
BULK COLLECT INTO life_col
FROM customer c
LEFT JOIN table(benefit_ids) b
WHERE b.benefit_id IS NOT NULL;
However, for that to work I think I need to declare an object type at the schema level, since I believe that in a SELECT query you can only join real tables or collections of schema-level object types. Therefore, from the procedure I remove
TYPE benefit_ids_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
and instead at the schema level I have defined
CREATE OR REPLACE TYPE ben_id FORCE AS OBJECT
(
    benefit_id NUMBER
);
CREATE OR REPLACE TYPE benefit_ids_t FORCE AS TABLE OF ben_id;
My revised code essentially becomes:
CREATE OR REPLACE PROCEDURE ben_correct(in_bulk_collect_limit IN PLS_INTEGER DEFAULT 1000)
IS
    sql_str VARCHAR2(1000);
    TYPE ben_identity_rec IS RECORD
    (
        life_scd_id NUMBER,
        benefit_id  NUMBER
    );
    TYPE ben_identity_col IS TABLE OF ben_identity_rec INDEX BY PLS_INTEGER;
    life_col ben_identity_col;

    CURSOR benefit_cur
    IS
        SELECT benefit_id FROM benefit_info;

    -- benefit_ids_t has now been declared at schema level
    benefit_ids benefit_ids_t;

    PROCEDURE get_next_set_of_incoming(out_benefit_ids OUT NOCOPY benefit_ids_t)
    IS
    BEGIN
        FETCH benefit_cur
        BULK COLLECT INTO out_benefit_ids
        LIMIT in_bulk_collect_limit;
    END;
BEGIN
    OPEN benefit_cur;
    LOOP
        get_next_set_of_incoming(benefit_ids);
        sql_str := 'SELECT c.life_scd_id, c.benefit_id
                    FROM customer c
                    LEFT JOIN table(benefit_ids) b
                    WHERE b.benefit_id IS NOT NULL';
        EXECUTE IMMEDIATE sql_str BULK COLLECT INTO life_col;
        -- Now do further processing with life_col
        EXIT WHEN benefit_ids.count = 0;
    END LOOP;
    CLOSE benefit_cur;
END;
/
However, this generates ORA-24344 and PLS-00386 errors, i.e. "type mismatch found at 'OUT_BENEFIT_IDS' between FETCH cursor and INTO variables".
I sort of understand that it is complaining that benefit_ids_t is now a table of ben_id objects, each of which wraps a NUMBER, which isn't quite the same as a table of numbers.
I've tried various attempts at resolving the issues, but I can't seem to quite get it right. Any help would be gratefully appreciated.
Also, any general comments to improve are welcome.

You don't need your table type to be of an object containing a number field; it can just be a table of numbers:
CREATE OR REPLACE TYPE benefit_ids_t FORCE AS TABLE OF NUMBER;
Or you can use a built-in type like sys.odcinumberlist, but having your own type under your control isn't a bad thing.
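For illustration, here is a minimal sketch of the built-in type in use, assuming the benefit_info table from the question; note that a collection of a scalar type exposes its elements through the COLUMN_VALUE pseudo-column:
DECLARE
    benefit_ids sys.odcinumberlist;
BEGIN
    -- fetch a small sample into the built-in VARRAY of NUMBER
    SELECT benefit_id
    BULK COLLECT INTO benefit_ids
    FROM benefit_info
    WHERE ROWNUM <= 10;

    -- scalar collections expose their elements as COLUMN_VALUE
    FOR r IN (SELECT column_value FROM TABLE(benefit_ids)) LOOP
        dbms_output.put_line(r.column_value);
    END LOOP;
END;
/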
You don't want to use dynamic SQL though; this:
sql_str := 'SELECT c.life_scd_id, c.benefit_id
FROM customer c
LEFT JOIN table(benefit_ids) b
WHERE b.benefit_id IS NOT NULL';
EXECUTE IMMEDIATE sql_str BULK COLLECT INTO life_col;
won't work because benefit_ids isn't in scope when that dynamic statement is executed. You can just do it statically:
SELECT c.life_scd_id, c.benefit_id
BULK COLLECT INTO life_col
FROM table(benefit_ids) b
JOIN customer c
ON c.benefit_id = b.column_value;
which is closer to what you had in your original code.
Your EXIT is also in the wrong place: as written, the loop will try to process rows even when the final fetch doesn't find any. I wouldn't bother with the separate fetch procedure at all; it's easier to follow with the fetch directly in the loop:
BEGIN
    OPEN benefit_cur;
    LOOP
        FETCH benefit_cur
        BULK COLLECT INTO benefit_ids
        LIMIT in_bulk_collect_limit;

        EXIT WHEN benefit_ids.count = 0;

        SELECT c.life_scd_id, c.benefit_id
        BULK COLLECT INTO life_col
        FROM table(benefit_ids) b
        JOIN customer c
        ON c.benefit_id = b.column_value;

        -- Now do further processing with life_col
    END LOOP;
    CLOSE benefit_cur;
END;
If you really did want your object type, you could keep it, but you would need to make your cursor return instances of that object via its default constructor:
CURSOR benefit_cur
IS
SELECT ben_id(benefit_id) FROM benefit_info;
The customer query join would then be:
SELECT c.life_scd_id, c.benefit_id
BULK COLLECT INTO life_col
FROM table(benefit_ids) b
JOIN customer c
ON c.benefit_id = b.benefit_id;
As it's an object type you can refer to its field name, benefit_id, rather than the generic column_value you get from a table of a scalar type.

Related

How to use a cursor in MemSQL?

Hi team, how can we use a cursor in MemSQL? I have not seen any code using a cursor in a MemSQL procedure.
Is there another way to get cursor-style processing?
SingleStore DB (formerly known as MemSQL) has a COLLECT function with which you can achieve read-only cursor functionality by calling COLLECT and iterating over the values in the resulting array. The array may be processed forwards, backwards or in an arbitrary order.
We have several use cases with sample SQL queries in our official documentation. I have included the first below as a reference:
Example 1: Using COLLECT with Static Queries
In the following example, COLLECT uses a query type variable, whose definition SELECT * from t is static.
DROP DATABASE IF EXISTS memsql_docs_example;
CREATE DATABASE memsql_docs_example;
USE memsql_docs_example;
CREATE TABLE t(id INT, name TEXT);
CREATE TABLE output_log(msg TEXT);
INSERT INTO t VALUES (1, 'red'), (2, 'green'), (3, 'blue');

DELIMITER //
CREATE OR REPLACE PROCEDURE p() AS
DECLARE
    qry QUERY(id INT, name TEXT) = SELECT id, name FROM t;
    arr ARRAY(RECORD(id INT, name TEXT));
    _id INT;
    _name TEXT;
BEGIN
    arr = COLLECT(qry);
    FOR x IN arr LOOP
        _id = x.id;
        _name = x.name;
        INSERT INTO output_log VALUES(CONCAT('[', _id, ', ', _name, ']'));
    END LOOP;
END //
DELIMITER ;

CALL p();
SELECT * FROM output_log ORDER BY msg;
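With the sample data above, output_log should end up containing the rows [1, red], [2, green] and [3, blue].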
Cheers!
David, Education Delivery Specialist
SingleStore DB (formerly MemSQL)

inserting a row into an oracle table with a sequence in one query

I do not know the right way to insert a new row into an Oracle 11g table with a new sequence ID. Should I write a trigger on my table, something like this, before the insert operation:
CREATE OR REPLACE TRIGGER "myTableTrigger"
BEFORE INSERT ON "myTable"
FOR EACH ROW
WHEN (NEW."Id" IS NULL OR NEW."Id" = 0)
BEGIN
    SELECT "myTable_SEQ".nextval INTO :NEW."Id" FROM dual;
END;
Or should I get the new sequence ID with ExecuteScalar() and then insert with that ID? But that way executes two separate queries.
I do not know if there is any other option. Can I do it in one query?
It is common to use a trigger to do this, but in 11g you don't need the SELECT from DUAL:
CREATE OR REPLACE TRIGGER "myTableTrigger"
BEFORE INSERT ON "myTable"
FOR EACH ROW
WHEN (NEW."Id" IS NULL OR NEW."Id" = 0)
BEGIN
    :NEW."Id" := "myTable_SEQ".nextval;
END;
Of course, you don't have to use a trigger; you can just use the sequence in your insert statement:
insert into "MyTable" ("Id", ...) values ("myTable_SEQ".nextval, ...);
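If the application also needs the generated key back, a RETURNING clause fetches it in the same statement, avoiding the second query. A minimal sketch, where the "Name" column is made up for illustration:
DECLARE
    l_id "myTable"."Id"%TYPE;
BEGIN
    -- "Name" is a hypothetical column; RETURNING avoids a second round trip
    INSERT INTO "myTable" ("Id", "Name")
    VALUES ("myTable_SEQ".nextval, 'example')
    RETURNING "Id" INTO l_id;
    dbms_output.put_line('New Id: ' || l_id);
END;
/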

How to optimize DELETE .. NOT IN .. SUBQUERY in Firebird

I have this kind of delete query:
DELETE
FROM SLAVE_TABLE
WHERE ITEM_ID NOT IN (SELECT ITEM_ID FROM MASTER_TABLE)
Is there any way to optimize this?
You can use EXECUTE BLOCK to sequentially scan the detail table and delete the records for which no master record matches.
EXECUTE BLOCK
AS
    DECLARE VARIABLE C CURSOR FOR
        (SELECT d.id
         FROM detail d
         LEFT JOIN master m ON d.master_id = m.id
         WHERE m.id IS NULL);
    DECLARE VARIABLE I INTEGER;
BEGIN
    OPEN C;
    WHILE (1 = 1) DO
    BEGIN
        FETCH C INTO :I;
        IF (ROW_COUNT = 0) THEN
            LEAVE;
        DELETE FROM detail WHERE id = :I;
    END
    CLOSE C;
END
(NOT) IN can usually be optimized by using (NOT) EXISTS instead.
DELETE
FROM SLAVE_TABLE S
WHERE NOT EXISTS (SELECT 1 FROM MASTER_TABLE M WHERE M.ITEM_ID = S.ITEM_ID)
I am not sure what you are trying to do here, but to me this query suggests that you should be using foreign keys to enforce this kind of constraint, rather than running queries to clean up the mess afterwards.
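For instance, a foreign key along these lines (assuming MASTER_TABLE.ITEM_ID is its primary or unique key; the constraint name is made up) would stop orphaned rows from appearing in the first place:
ALTER TABLE SLAVE_TABLE
    ADD CONSTRAINT FK_SLAVE_MASTER
    FOREIGN KEY (ITEM_ID)
    REFERENCES MASTER_TABLE (ITEM_ID)
    ON DELETE CASCADE; -- optional: remove child rows together with their master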

How can I return a CSV string from PL/SQL table type in Oracle

I have defined a PL/SQL table type variable and added some data to it.
create or replace type varTableType as table of varchar2(32767);

my_table varTableType := varTableType();
...
my_table := some_function();
Now I have this my_table table type variable with several thousand records.
I have to select only the records ending with a specific character, say 'a', and get the results in a comma-separated string.
I think the COLLECT function could do this, but I do not understand exactly how.
I am using Oracle 10g.
Without getting into the question of why you are using a table type rather than a table (or temporary table), you can do it like this:
declare
    my_table varTableType;
    i varchar2(32767);
begin
    my_table := new varTableType('bbbb', 'ccca', 'ddda', 'eee', 'fffa', 'gggg');

    select rtrim(xmlagg(xmlelement(e, column_value || ','))
                 .extract('//text()'), ',')
    into i
    from table(my_table)
    where column_value like '%a';

    dbms_output.put_line(i);
end;
There are more ways to concatenate rows, such as WM_CONCAT (if enabled) or LISTAGG (since 11gR2), but the basic idea of
select column_value
from table(my_table)
where column_value like '%a';
stays the same.
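For reference, on 11gR2 or later (not the 10g release in the question) the LISTAGG version of the same select-into would look something like this:
select listagg(column_value, ',') within group (order by column_value)
into i
from table(my_table)
where column_value like '%a';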
There is another way, without SQL:
declare
    my_table varTableType;
    i varchar2(32767);
begin
    my_table := new varTableType('bbbb', 'ccca', 'ddda', 'eee', 'fffa', 'gggg');

    FOR j IN my_table.first .. my_table.last LOOP
        IF my_table(j) like '%a' THEN
            IF i IS NOT NULL THEN
                i := i || ',';
            END IF;
            i := i || my_table(j);
        END IF;
    END LOOP;

    dbms_output.put_line(i);
end;
See this blog post by Tim Hall for a number of ways to do this, depending on Oracle version.

how to convert csv to table in oracle

How can I make a package that returns results in table format when passed in CSV values?
select * from table(schema.mypackage.myfunction('one, two, three'))
should return
one
two
three
I tried something from Ask Tom but that only works with SQL types.
I am using Oracle 11g. Is there something built-in?
The following works; invoke it as
select * from table(splitter('a,b,c,d'))
create or replace function splitter(p_str in varchar2) return sys.odcivarchar2list
is
    v_tab sys.odcivarchar2list := new sys.odcivarchar2list();
begin
    with cte as (
        select level ind
        from dual
        connect by level <= regexp_count(p_str, ',') + 1
    )
    select regexp_substr(p_str, '[^,]+', 1, ind)
    bulk collect into v_tab
    from cte;
    return v_tab;
end;
/
Alas, in 11g we still have to hand-roll our own PL/SQL tokenizers using SQL types. In 11gR2 Oracle gave us an aggregating function (LISTAGG) to concatenate results into a CSV string, so perhaps a future release will provide the reverse capability.
If you don't want to create a SQL type just for this, you can use the built-in SYS.DBMS_DEBUG_VC2COLL, like this:
create or replace function string_tokenizer
    (p_string in varchar2
    ,p_separator in varchar2 := ',')
    return sys.dbms_debug_vc2coll
is
    return_value SYS.DBMS_DEBUG_VC2COLL;
    pattern varchar2(250);
begin
    pattern := '[^('''||p_separator||''')]+';

    select trim(regexp_substr(p_string, pattern, 1, level)) token
    bulk collect into return_value
    from dual
    where regexp_substr(p_string, pattern, 1, level) is not null
    connect by regexp_instr(p_string, pattern, 1, level) > 0;

    return return_value;
end string_tokenizer;
/
Here it is in action:
SQL> select * from table (string_tokenizer('one, two, three'))
2 /
COLUMN_VALUE
----------------------------------------------------------------
one
two
three
SQL>
Acknowledgement: this code is a variant of some code I found on Tanel Poder's blog.
Here is another solution using a regular expression matcher, entirely in SQL.
SELECT regexp_substr('one,two,three','[^,]+', 1, level) abc
FROM dual
CONNECT BY regexp_substr('one,two,three', '[^,]+', 1, level) IS NOT NULL
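One caveat with the '[^,]+' pattern: it silently skips empty elements, so 'one,,three' yields two rows rather than three. If empty tokens matter, a pattern along the lines of '([^,]*)(,|$)' together with regexp_substr's subexpression argument would be needed instead.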
For optimal performance, it is best to avoid hierarchical (CONNECT BY) queries in the splitter function.
The following splitter function performs a good deal better when applied to larger data volumes:
CREATE OR REPLACE FUNCTION row2col(p_clob_text IN VARCHAR2)
    RETURN sys.dbms_debug_vc2coll PIPELINED
IS
    next_new_line_indx PLS_INTEGER;
    remaining_text VARCHAR2(20000);
    next_piece_for_piping VARCHAR2(20000);
BEGIN
    remaining_text := p_clob_text;
    LOOP
        next_new_line_indx := instr(remaining_text, ',');
        next_piece_for_piping :=
            CASE
                WHEN next_new_line_indx <> 0 THEN
                    TRIM(SUBSTR(remaining_text, 1, next_new_line_indx - 1))
                ELSE
                    TRIM(SUBSTR(remaining_text, 1))
            END;
        remaining_text := SUBSTR(remaining_text, next_new_line_indx + 1);
        PIPE ROW(next_piece_for_piping);
        EXIT WHEN next_new_line_indx = 0 OR remaining_text IS NULL;
    END LOOP;
    RETURN;
END row2col;
/
This performance difference can be observed below (I used the splitter function given earlier in this discussion).
SQL> SET TIMING ON
SQL>
SQL> WITH SRC AS (
2 SELECT rownum||',a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z'||rownum txt
3 FROM DUAL
4 CONNECT BY LEVEL <=10000
5 )
6 SELECT NULL
7 FROM SRC, TABLE(SYSTEM.row2col(txt)) t
8 HAVING MAX(t.column_value) > 'zzz'
9 ;
no rows selected
Elapsed: 00:00:00.93
SQL>
SQL> WITH SRC AS (
2 SELECT rownum||',a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z'||rownum txt
3 FROM DUAL
4 CONNECT BY LEVEL <=10000
5 )
6 SELECT NULL
7 FROM SRC, TABLE(splitter(txt)) t
8 HAVING MAX(t.column_value) > 'zzz'
9 ;
no rows selected
Elapsed: 00:00:14.90
SQL>
SQL> SET TIMING OFF
SQL>
I don't have 11g installed to play with, but there is a PIVOT and UNPIVOT operation for converting columns to rows / rows to columns, that may be a good starting point.
http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-pivot.html
(Having actually done some further investigation, this doesn't look suitable for this case - it works with actual rows / columns, but not sets of data in a column).
There is also DBMS_UTILITY.comma_to_table (and table_to_comma) for converting CSV lists into PL/SQL tables. There are some limitations (handling linefeeds, etc.), but it may be a good starting point.
My inclination would be to use the TYPE approach, with a simple function that calls comma_to_table and then pipes out each entry in the result with PIPE ROW (unfortunately, DBMS_UTILITY.comma_to_table is a procedure, so it cannot be called from SQL directly).
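A minimal sketch of that approach (csv_to_rows is a made-up name; bear in mind that comma_to_table was designed for lists of identifiers, so tokens that are not valid Oracle names may raise an error):
CREATE OR REPLACE FUNCTION csv_to_rows (p_list IN VARCHAR2)
    RETURN sys.dbms_debug_vc2coll PIPELINED
IS
    l_count PLS_INTEGER;
    l_tab   dbms_utility.uncl_array;
BEGIN
    -- let the built-in procedure do the parsing...
    dbms_utility.comma_to_table(list => p_list, tablen => l_count, tab => l_tab);
    -- ...then pipe each token out so the function is callable from SQL
    FOR i IN 1 .. l_count LOOP
        PIPE ROW (TRIM(l_tab(i)));
    END LOOP;
    RETURN;
END csv_to_rows;
/
It can then be invoked as select * from table(csv_to_rows('one, two, three')).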
