In Sybase I need to convert row values into a single column of varchar type - SAP ASE

Can you give me a query that concatenates the varchar values from several rows into a single column, with any delimiter?
e.g.
table with 2 columns
col1 |col2
1 | m10
1 | m31
2 | m20
2 | m50
Now I want the output as:
col1| col2
1|m10:m31
2|m20:m50

Do you always have matched pairs, no more, no fewer? That is, would
select
    col1,
    count(*)
from table
group by col1
having count(*) <> 2
give you zero results?
If so, you can just self-join:
declare @delimiter varchar(1)
set @delimiter = ':'
select
    t1.col1, t1.col2 + @delimiter + t2.col2
from tablename t1
inner join tablename t2
    on  t1.col1 = t2.col1
    and t1.col2 < t2.col2  -- '<' rather than '<>', so each pair appears once instead of twice
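To sanity-check the join logic, here is the same self-join run against SQLite from Python (the table name and in-memory database are just for the demo; SQLite concatenates strings with || rather than +):

```python
import sqlite3

# Same self-join idea, demonstrated against an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 INT, col2 TEXT);
    INSERT INTO t VALUES (1,'m10'),(1,'m31'),(2,'m20'),(2,'m50');
""")
rows = conn.execute("""
    SELECT t1.col1, t1.col2 || ':' || t2.col2
    FROM t AS t1
    JOIN t AS t2
      ON  t1.col1 = t2.col1
      AND t1.col2 < t2.col2     -- '<' keeps each pair exactly once
    ORDER BY t1.col1
""").fetchall()
print(rows)   # [(1, 'm10:m31'), (2, 'm20:m50')]
```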

One way to do it is with a cursor, which lets you fetch one row at a time. The pseudo-code would be:
if actual_col1 = last_col1
then
    col2_value = col2_value + delimiter + actual_col2
else
    insert into #temptable values (last_col1, col2_value)
    col2_value = actual_col2
end
Check the cursor documentation to see how to declare and use them.
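The pseudo-code above, written out as a runnable Python sketch (the function name, delimiter default, and sample data are my own; a real cursor would fetch the rows in col1 order):

```python
def concat_groups(rows, delimiter=":"):
    """rows: iterable of (col1, col2) pairs, pre-sorted by col1."""
    result = []
    last_col1, col2_value = None, None
    for actual_col1, actual_col2 in rows:
        if actual_col1 == last_col1:
            # Same group: keep appending.
            col2_value = col2_value + delimiter + actual_col2
        else:
            if last_col1 is not None:
                # Group changed: flush the accumulated string.
                result.append((last_col1, col2_value))
            last_col1, col2_value = actual_col1, actual_col2
    if last_col1 is not None:
        result.append((last_col1, col2_value))   # flush the final group
    return result

print(concat_groups([(1, "m10"), (1, "m31"), (2, "m20"), (2, "m50")]))
# [(1, 'm10:m31'), (2, 'm20:m50')]
```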

Use this solution if you are on SAP SQL Anywhere or IQ, which provide the LIST() aggregate (plain ASE does not):
SELECT col1, LIST(col2, ':') AS col2 FROM table_name GROUP BY col1;
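For comparison, SQLite and MySQL spell the same aggregate GROUP_CONCAT; a quick check from Python (note that GROUP_CONCAT does not guarantee the order of the concatenated values):

```python
import sqlite3

# LIST(col2, ':') corresponds to GROUP_CONCAT(col2, ':') in SQLite/MySQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 INT, col2 TEXT);
    INSERT INTO t VALUES (1,'m10'),(1,'m31'),(2,'m20'),(2,'m50');
""")
rows = conn.execute(
    "SELECT col1, GROUP_CONCAT(col2, ':') FROM t GROUP BY col1 ORDER BY col1"
).fetchall()
print(rows)
```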

Please use the logic below; the temp table #t1 ends up holding the final result.
create table #t123(a char(2), b char(2))
go
create table #t1(a char(2), c char(100) default '')
go
Insert into #t123 values ('a','1')
Insert into #t123 values ('a','2')
Insert into #t123 values ('a','3')
Insert into #t123 values ('b','1')
Insert into #t123 values ('c','1')
Insert into #t123 values ('d','1')
Insert into #t123 values ('d','1')
go
insert into #t1 (a) Select distinct a from #t123
go
Select distinct row_id = identity(8), a into #t1234 from #t123
go
Declare @a int, @b int, @c int, @d int, @f char(2), @g char(2)
Select @a = min(row_id), @b = max(row_id) from #t1234
While @a <= @b
Begin
    Select @f = a, @g = '' from #t1234 where row_id = @a
    Update #t1 set c = '' where a = @f
    Select row_id = identity(8), b into #t12345 from #t123 where a = @f
    Select @c = min(row_id), @d = max(row_id) from #t12345
    While @c <= @d
    Begin
        Select @g = b from #t12345 where row_id = @d
        Update #t1 set c = @g + ' ' + c where a = @f -- change delimiter here
        Select @d = @d - 1
    End
    Drop table #t12345
    Select @a = @a + 1
End
go
Select * from #t1 -- final table with transposed values
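The same nested-loop idea, restated as a short Python sketch so the logic is easier to follow (the key point is that the inner loop counts row_id down and prepends each value, so the final string comes out in ascending order; the sample data matches the #t123 inserts above):

```python
rows = [('a', '1'), ('a', '2'), ('a', '3'), ('b', '1'),
        ('c', '1'), ('d', '1'), ('d', '1')]

result = {}
for key in sorted({k for k, _ in rows}):     # outer loop: each distinct key
    vals = [v for k, v in rows if k == key]
    acc = ''
    for v in reversed(vals):                 # inner loop: highest row_id first
        acc = v + ' ' + acc                  # prepend, as in the UPDATE
    result[key] = acc.rstrip()               # drop the trailing delimiter
print(result)   # {'a': '1 2 3', 'b': '1', 'c': '1', 'd': '1 1'}
```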


Insert new rows, continue existing rowset row_number count

I'm attempting to perform a sort of upsert operation in U-SQL, where I pull data every day from a file and compare it with yesterday's data, which is stored in a table in Data Lake Storage.
I have created an ID column in the DL table using ROW_NUMBER(), and it is this "counter" I wish to continue when appending new rows to the old dataset. E.g.:
Last inserted row in DL table could look like this:
ID | Column1 | Column2
---+------------+---------
10 | SomeValue | 1
I want the next rows to have the following ascending ids
11 | SomeValue | 1
12 | SomeValue | 1
How would I go about making sure that the next X rows continue the ID count, so that each new row's ID is 1 greater than the last?
You could use ROW_NUMBER, then add it to the max value from the original table (i.e. using CROSS JOIN and MAX). A simple demo of the technique:
DECLARE @outputFile string = @"\output\output.csv";

@originalInput =
    SELECT *
    FROM ( VALUES
        ( 10, "SomeValue 1", 1 )
        ) AS x ( id, column1, column2 );

@newInput =
    SELECT *
    FROM ( VALUES
        ( "SomeValue 2", 2 ),
        ( "SomeValue 3", 3 )
        ) AS x ( column1, column2 );

@output =
    SELECT id, column1, column2
    FROM @originalInput
    UNION ALL
    SELECT (int)(x.id + ROW_NUMBER() OVER()) AS id, column1, column2
    FROM @newInput
    CROSS JOIN ( SELECT MAX(id) AS id FROM @originalInput ) AS x;

OUTPUT @output
TO @outputFile
USING Outputters.Csv(outputHeader:true);
You will have to be careful if the original table is empty and add some additional conditions / null checks but I'll leave that up to you.
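The offset technique itself is language-agnostic; here is a minimal Python restatement (data values invented) that also shows the empty-table guard mentioned above:

```python
original = [(10, "SomeValue", 1)]                # existing rows: (id, column1, column2)
new_rows = [("SomeValue", 1), ("SomeValue", 1)]  # rows to append, no id yet

# MAX(id) over the existing rows; default=0 covers the empty-table case.
max_id = max((row[0] for row in original), default=0)

# ROW_NUMBER() analogue: 1-based position of each new row, added to max_id.
appended = [(max_id + n, col1, col2)
            for n, (col1, col2) in enumerate(new_rows, start=1)]

print(original + appended)
# [(10, 'SomeValue', 1), (11, 'SomeValue', 1), (12, 'SomeValue', 1)]
```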

DELETE FROM (SELECT ...) SAP HANA

How come this does not work and what is a workaround?
DELETE FROM
(SELECT
PKID
, a
, b)
Where a > 1
There is a Syntax Error at "(".
DELETE FROM (TABLE) where a > 1 gives the same syntax error.
I need to delete specific rows that are flagged using a rank function in my select statement.
I have now put a table name immediately after the DELETE FROM, with WHERE restrictions on the DELETE and a small series of self-joins of the table:
DELETE FROM TABLE1
WHERE x IN
    (SELECT A.x
     FROM (SELECT r0.x, r1.y AS y1, r2.y AS y2,
                  DENSE_RANK() OVER (PARTITION BY r1.y, r2.y ORDER BY r0.x) AS RANK
           FROM TABLE2 r0
           INNER JOIN TABLE1 r1 ON r0.x = r1.x
           INNER JOIN TABLE1 r2 ON r0.x = r2.x
           WHERE r1.y = foo AND r2.y = bar
          ) AS A
     WHERE A.RANK > 1
    )
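The same rank-then-delete pattern can be exercised against SQLite (3.25+ for window functions) from Python; the table, columns, and data values here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (x INT, y TEXT);
    INSERT INTO t1 VALUES (1,'foo'),(2,'foo'),(3,'foo'),(4,'bar');
""")
# Rank rows within each y-partition, then delete everything past the first.
conn.execute("""
    DELETE FROM t1
    WHERE x IN (
        SELECT A.x FROM (
            SELECT x, DENSE_RANK() OVER (PARTITION BY y ORDER BY x) AS rnk
            FROM t1
        ) AS A
        WHERE A.rnk > 1
    )
""")
remaining = conn.execute("SELECT x, y FROM t1 ORDER BY x").fetchall()
print(remaining)   # [(1, 'foo'), (4, 'bar')]
```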

How to select only one item from all search items?

I have a dataset as this:
Group Owner
ABC John
ABC
TTT
TTT
TTT
CBS Alen
CBS Tim
SGD
SGD
Now I need to search the dataset to find all groups whose Owner is empty in every row, like TTT and SGD (not ABC, because it has a row whose owner is John). But I only need to select one row per group, not all of them (preferably the first one). How could I do this using C#?
Since you haven't specified a database, I'll use MySQL 5.6 on sqlfiddle.com, but the approach would most likely be similar in any relational database.
First, let's set up the schema:
create table x (grp varchar(10), ownr varchar(10), row int);
insert into x (grp, ownr, row) values ('abc', 'john', 1);
insert into x (grp, ownr, row) values ('abc', '', 2);
insert into x (grp, ownr, row) values ('ttt', '', 3);
insert into x (grp, ownr, row) values ('ttt', '', 4);
insert into x (grp, ownr, row) values ('ttt', '', 5);
insert into x (grp, ownr, row) values ('cbs', 'alan', 6);
insert into x (grp, ownr, row) values ('cbs', 'tim', 7);
insert into x (grp, ownr, row) values ('sgd', '', 8);
insert into x (grp, ownr, row) values ('sgd', '', 9);
The first step is to get a list of the groups that you don't want in the output. The query for that would be:
select distinct grp from x where ownr <> ''
grp
---
abc
cbs
So then you simply ask for all rows with a group other than those (I'll also order by row here):
select * from x where grp not in (
select distinct grp from x where ownr <> ''
) order by row
and that gets you all the other rows:
grp ownr row
--- ---- ---
ttt 3
ttt 4
ttt 5
sgd 8
sgd 9
Now here's where it becomes slightly unclear what you want. If you just want the first of that overall set, you can simply use a limiting clause such as:
select * from x where grp not in (
select distinct grp from x where ownr <> ''
) order by row limit 1
grp ownr row
--- ---- ---
ttt 3
If, however, you need the first of each group, it can be done with an aggregating clause as follows:
select grp, '' as ownr, min(row) as row
from x where grp not in (
select distinct grp from x where ownr <> ''
) group by grp
grp ownr row
--- ---- ---
sgd 8
ttt 3
Obviously I've made certain assumptions about things like:
what an empty owner is;
what you consider the "first" of each subset to be; and
which database you're using for a back end.
But the general approach should remain the same even if those assumptions need to be modified.
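Since the question mentions C#, note that the heavy lifting stays in SQL regardless of the client language; here is a condensed, runnable version of the whole approach, using Python's built-in SQLite driver purely as a stand-in database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (grp TEXT, ownr TEXT, row INT);
    INSERT INTO x VALUES
        ('abc','john',1),('abc','',2),
        ('ttt','',3),('ttt','',4),('ttt','',5),
        ('cbs','alan',6),('cbs','tim',7),
        ('sgd','',8),('sgd','',9);
""")
# First row of each group whose owner is empty on EVERY row of the group.
rows = conn.execute("""
    SELECT grp, MIN(row) AS row
    FROM x
    WHERE grp NOT IN (SELECT DISTINCT grp FROM x WHERE ownr <> '')
    GROUP BY grp
    ORDER BY grp
""").fetchall()
print(rows)   # [('sgd', 8), ('ttt', 3)]
```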

SQL String Concatenation

I have a table with columns A, B, C. My requirement is to concatenate the values of columns A and B and save the result into column C.
Note: All columns are of Varchar datatype.
For e.g:
If A = 100 and B = 200, C should be 100200
If A = 0 and B = 200, C should be 0200
If A = NULL AND B = NULL, C should be NULL
If A = NULL and B = 01, C should be 01
If A = 01 and B = NULL, C should be 01
Any ideas how this can be achieved using SQL? If only one of the column values is NULL, the result should not be NULL.
What I have so far is:
select A+B C from myTable;
-- return non NULL value when concatenating NULL and non-NULL values
SET CONCAT_NULL_YIELDS_NULL OFF
-- prepare sample data
CREATE TABLE #t (
A varchar(15),
B varchar(15),
C varchar(15)
)
INSERT INTO #t (A, B) VALUES
('100', '200'),
('0', '200'),
(NULL, '200'),
(NULL, NULL),
(NULL, '01'),
('01', NULL)
-- concatenate data
UPDATE #t SET
C = A + B
-- show
SELECT * FROM #t
-- clean up
DROP TABLE #t
Maybe this will help:
Some test data:
DECLARE #tbl TABLE(A VARCHAR(10),B VARCHAR(10))
INSERT INTO #tbl
SELECT '100','200' UNION ALL
SELECT '0','200' UNION ALL
SELECT NULL, NULL UNION ALL
SELECT NULL,'01' UNION ALL
SELECT '01',NULL
The query:
SELECT
tbl.A,
tbl.B,
(
CASE
WHEN tbl.A IS NULL AND tbl.B IS NULL
THEN NULL
ELSE ISNULL(tbl.A,'')+ISNULL(tbl.B,'')
END
) AS C
FROM
#tbl AS tbl
declare #T table(A varchar(10), B varchar(10), C varchar(10))
insert into #T(A, B) values
('100' , '200'),
('0' , '200'),
( null , null),
( null , '01'),
('01' , null)
update #T
set C = case when A is not null or B is not null
then isnull(A,'')+isnull(B,'')
end
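The CASE/ISNULL logic ports directly to other engines; checked here against SQLite (IFNULL standing in for ISNULL, and || for +):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (A TEXT, B TEXT, C TEXT);
    INSERT INTO t (A, B) VALUES
        ('100','200'), ('0','200'), (NULL,NULL), (NULL,'01'), ('01',NULL);
""")
# NULL only when both inputs are NULL; otherwise treat NULL as ''.
conn.execute("""
    UPDATE t SET C = CASE
        WHEN A IS NULL AND B IS NULL THEN NULL
        ELSE IFNULL(A,'') || IFNULL(B,'')
    END
""")
out = [r[0] for r in conn.execute("SELECT C FROM t ORDER BY rowid").fetchall()]
print(out)   # ['100200', '0200', None, '01', '01']
```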

lose fewer digits when making a string from a table in SQL Server

I have a table with doubles like 0.681672875510799
so for example a dummy table:
a1 a2
-------------------------
1 0.681672875510799
NULL 1
NULL NULL
NULL NULL
NULL NULL
NULL NULL
When I do
DECLARE @CommaString nvarchar(max)
SET @CommaString = ''
SELECT @CommaString =
    STUFF(
        (SELECT ',' + CAST([col] AS VARCHAR) FROM (
            SELECT [a1] AS col FROM [ta] UNION ALL
            SELECT [a2] AS col FROM [ta]
        ) alldata FOR XML PATH('') ), 1, 1, '' )
PRINT @CommaString;
This prints:
1,0.681673,1
so I am losing several decimals, which are also important, How do I modify the code to get
1,0.681672875510799,1 instead of 1,0.681673,1?
In your inner query:
SELECT ',' + CAST([col] AS VARCHAR)
FROM (
    SELECT [a1] AS col FROM [ta]
    UNION ALL
    SELECT CAST([a2] AS DECIMAL(18,15)) AS col FROM [ta]
) alldata
Casting the FLOAT to a DECIMAL works for me (SQL Server 2008 R2). You may have to tweak the (18,15) to work with your data.
Just noticed one more thing that works (and probably more consistently):
SELECT ',' + CONVERT(varchar(max), col, 128)
FROM (
    SELECT [a1] AS col FROM [ta]
    UNION ALL
    SELECT [a2] AS col FROM [ta]
) alldata
Your problem is that you are casting to varchar without specifying the size; you need to do CAST([col] AS VARCHAR(max)).
DECLARE @CommaString nvarchar(max)
SET @CommaString = ''
SELECT @CommaString =
    STUFF(
        (SELECT ',' + cast(CAST([col] as decimal(22,19)) as varchar(30)) FROM (
            SELECT [a1] AS col FROM [#ta] UNION ALL
            SELECT [a2] AS col FROM [#ta]
        ) alldata FOR XML PATH('') ), 1, 1, '' )
PRINT @CommaString;
The problem is that you will get a lot of zeroes as decimals even for the integer values. You probably need to do some other transformation if you care about that.
EDIT: Including my table definition:
create table #ta
(
a1 int,
a2 float
)
EDIT: changed my table definition again from decimal to float for column b and added double casting in my query.
It now produces: 1.0000000000000000000,1.0000000000000000000,0.6816728755107990500
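The root cause is the same in any language: the float holds roughly 15-17 significant digits, and a short default conversion discards most of them. A plain Python illustration:

```python
x = 0.681672875510799   # the double from the question

short = f"{x:.6f}"      # keep only 6 digits, like the lossy CAST
full = repr(x)          # shortest round-trippable text, nothing lost

print(short)   # 0.681673
print(full)    # all significant digits, e.g. 0.681672875510799
```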
