Lose fewer digits when making a string from a table in SQL Server

I have a table with doubles like 0.681672875510799. For example, a dummy table:
a1 a2
-------------------------
1 0.681672875510799
NULL 1
NULL NULL
NULL NULL
NULL NULL
NULL NULL
When I do
DECLARE @CommaString nvarchar(max)
SET @CommaString = ''
SELECT @CommaString =
    STUFF(
        (SELECT ',' + CAST([col] AS VARCHAR) FROM (
            SELECT [a1] AS col FROM [ta] UNION ALL
            SELECT [a2] AS col FROM [ta]
        ) alldata FOR XML PATH('')), 1, 1, '')
PRINT @CommaString;
This prints:
1,0.681673,1
so I am losing several decimal places, which are also important. How do I modify the code to get
1,0.681672875510799,1 instead of 1,0.681673,1?

In your inner query:
SELECT ',' + CAST([col] AS VARCHAR)
FROM (
    SELECT [a1] AS col FROM [ta]
    UNION ALL
    SELECT CAST([a2] AS DECIMAL(18,15)) AS col FROM [ta]
) alldata
Casting the FLOAT to a DECIMAL works for me (SQL Server 2008 R2). You may have to tweak the (18,15) to work with your data.
Just noticed one more thing that works (and probably more consistently):
SELECT ',' + CONVERT(varchar(max), col, 128)
FROM (
    SELECT [a1] AS col FROM [ta]
    UNION ALL
    SELECT [a2] AS col FROM [ta]
) alldata
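Put back into the original statement, a sketch of the full query (assuming the dummy table [ta] from the question; style 128 is a legacy float-to-string style, so verify it behaves this way on your SQL Server version):
DECLARE @CommaString nvarchar(max)
SET @CommaString = ''
SELECT @CommaString =
    STUFF(
        (SELECT ',' + CONVERT(varchar(max), [col], 128) FROM (
            SELECT [a1] AS col FROM [ta] UNION ALL
            SELECT [a2] AS col FROM [ta]
        ) alldata FOR XML PATH('')), 1, 1, '')
PRINT @CommaString;  -- the float value should keep its full precision instead of being rounded to 0.681673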

Your problem is that you are casting to varchar without specifying the size; you need to do CAST([col] AS VARCHAR(max)).
DECLARE @CommaString nvarchar(max)
SET @CommaString = ''
SELECT @CommaString =
    STUFF(
        (SELECT ',' + CAST(CAST([col] AS decimal(22,19)) AS varchar(30)) FROM (
            SELECT [a1] AS col FROM [#ta] UNION ALL
            SELECT [a2] AS col FROM [#ta]
        ) alldata FOR XML PATH('')), 1, 1, '')
PRINT @CommaString;
The problem is that you will get a lot of zeroes as decimals even for the integer values. You probably need to do some other transformation if you care about that.
EDIT: Including my table definition:
create table #ta
(
a1 int,
a2 float
)
EDIT: changed my table definition again from decimal to float for column a2 and added double casting in my query.
It now produces: 1.0000000000000000000,1.0000000000000000000,0.6816728755107990500
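If the extra zeroes on the integer values are a concern (as noted above), one option is to branch on whether the value is whole before formatting it. A rough sketch of the inner part against the same #ta table (the FLOOR test and the bigint cast are assumptions added here):
SELECT ',' +
    CASE WHEN col = FLOOR(col)
         THEN CAST(CAST(col AS bigint) AS varchar(30))           -- whole values: print without decimals
         ELSE CAST(CAST(col AS decimal(22,19)) AS varchar(30))   -- otherwise keep the full precision
    END
FROM (
    SELECT [a1] AS col FROM [#ta] UNION ALL
    SELECT [a2] AS col FROM [#ta]
) alldata
FOR XML PATH('')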

Related

Cassandra selecting by reverse order of clustering order

I want to select rows in both ASC and DESC order, but Cassandra clustering orders are fixed.
I use ScyllaDB.
My imaginary problem scenario:
I have a table :
CREATE TABLE tbl(A text , B text , C text , primary key(A,B,C))
After inserting data, my table is:
Now I want to select the top 1 (or x) item of row (A - B - 3),
and after that select the bottom 1 (or x) item of row (A - B - 3).
The order of C is ASC and it's fixed!
Now I try to select the bottom 1 item:
SELECT * FROM tbl WHERE A='A' AND B='B' AND C > '3' LIMIT 1 ;
But selecting the top of (A-B-3) is my problem:
SELECT * FROM tbl WHERE A='A' AND B='B' AND C < '3' ???
Is there any solution for selecting the top of an item in Cassandra?
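For reference, within a single partition CQL does allow reversing the clustering order at query time with ORDER BY. A sketch of fetching the item just above (A, B, 3), assuming the tbl definition above (the full reversed clustering order B DESC, C DESC is used because ORDER BY has to follow the declared clustering columns):
SELECT * FROM tbl
WHERE A = 'A' AND B = 'B' AND C < '3'
ORDER BY B DESC, C DESC
LIMIT 1;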

Unpivot and Pivot does not return data

I'm trying to return data as columns.
I've written this unpivot and pivot query:
select StockItemCode, barcode, barcode2 from (
    select StockItemCode, col + cast(seq as varchar(20)) col, value from (
        select
            (select min(StockItemCode)
             from RTLBarCode t2
             where t.StockItemCode = t2.StockItemCode) StockItemCode,
            cast(BarCode as varchar(20)) barcode,
            row_number() over(partition by StockItemCode order by StockItemCode) seq
        from RTLBarCode t
    ) d
    unpivot(value for col in (barcode)) unpiv
) src
pivot (max(value) for col in (barcode, barcode2)) piv;
But the problem is that only the "Barcode2" field returns a value (the barcode field returns NULL when in fact there is a value).
SAMPLE DATA
I have a Table called RTLBarCode
It has a field called Barcode and a field called StockItemCode
For StockItemCode = 10 I have 2 rows with a Barcode value of 5014721112824 and 0000000019149.
Can anyone see where I am going wrong?
Many thanks
You are indexing your barcode column in unpiv. This results in col values barcode1 and barcode2.
But then you are pivoting on barcode instead of barcode1, so no value is found and the aggregate returns NULL.
The correct statement would be:
select StockItemCode, barcode1, barcode2 from
(
select StockItemCode, col+cast(seq as varchar(20)) col, value
from
(
select
(select min(StockItemCode)from RTLBarCode t2 where t.StockItemCode = t2.StockItemCode) StockItemCode,
cast(BarCode as varchar(20)) barcode,
row_number() over(partition by StockItemCode order by StockItemCode) seq
from RTLBarCode t
) d
unpivot(value for col in (barcode)) unpiv
) src
pivot (max(value) for col in (barcode1, barcode2)) piv
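To sanity-check this against the sample data in the question (the CREATE TABLE and INSERT statements here are a hypothetical reconstruction; only the two barcodes for StockItemCode = 10 were given):
create table RTLBarCode (StockItemCode int, BarCode varchar(20))
insert into RTLBarCode values (10, '5014721112824')
insert into RTLBarCode values (10, '0000000019149')
-- The corrected query above should now return a single row for StockItemCode 10
-- with both barcode1 and barcode2 populated (which value lands in which column
-- depends on row_number()'s tie-breaking).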

T-SQL, joining two tables on an integer column in the first table and a comma-separated string column in the other

I have a situation like this (T-SQL):
Table 1: dbo.Printers
EmulationID EmulationDescription PrinterID Name
34,15,2 NULL 12 HP 1234
15,2 NULL 13 IBM 321
15 NULL 14 XYZ
Table 2: dbo.Emulations
EmulationID Description
34 HP
15 IBM
2 Dell
The EmulationID column is an nvarchar (Unicode string) in the dbo.Printers table and an integer in the dbo.Emulations table.
Now I have to UPDATE the EmulationDescription column in the dbo.Printers table using a lookup on the dbo.Emulations table through the EmulationID column.
I need to get data like this in the dbo.Printers table:
EmulationID EmulationDescription PrinterID Name
34,15,2 HP,IBM,Dell 12 HP 1234
15,2 IBM,Dell 13 IBM 321
15 IBM 14 XYZ
Can someone help me, in detail, with how to get this resolved?
I created the user-defined function dbo.ParseIdListToTable to convert the string data in one row into multiple rows. However, I do not know how to proceed further: how exactly to join and then update.
Any suggestion will be greatly appreciated.
You could do something like this:
CREATE FUNCTION [dbo].[CSVToTable] (@InStr VARCHAR(MAX))
RETURNS @TempTab TABLE (id int not null)
AS
BEGIN
    -- Ensure input ends with a comma
    SET @InStr = REPLACE(@InStr + ',', ',,', ',')
    DECLARE @SP INT
    DECLARE @VALUE VARCHAR(1000)
    WHILE PATINDEX('%,%', @InStr) <> 0
    BEGIN
        SELECT @SP = PATINDEX('%,%', @InStr)
        SELECT @VALUE = LEFT(@InStr, @SP - 1)
        SELECT @InStr = STUFF(@InStr, 1, @SP, '')
        INSERT INTO @TempTab(id) VALUES (@VALUE)
    END
    RETURN
END
GO
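For reference, the splitter can be sanity-checked on its own first (a hypothetical quick test):
SELECT id FROM dbo.CSVToTable('34,15,2')
-- returns three rows: 34, 15, 2
With that in place, the lookup query: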
DECLARE @Description VARCHAR(1000)
SELECT P.EmulationID,
       (SELECT @Description = COALESCE(@Description + ',', '') + QUOTENAME(Description)
        FROM dbo.Emulations
        WHERE EmulationID IN (SELECT * FROM dbo.CSVToTable(P.EmulationID))) AS 'Emulation Description',
       P.PrinterID,
       P.Name
FROM dbo.Printers P
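Note that assigning @Description inside a column subquery like this will not actually compile in T-SQL. A sketch of a working alternative, swapping the variable-concatenation trick for the same FOR XML PATH('') aggregation used in the first question (it assumes the dbo.CSVToTable splitter above; the question's dbo.ParseIdListToTable would work the same way), and doing the UPDATE the question asks for:
UPDATE P
SET EmulationDescription =
    STUFF((SELECT ',' + E.Description
           FROM dbo.Emulations E
           WHERE E.EmulationID IN (SELECT id FROM dbo.CSVToTable(P.EmulationID))
           FOR XML PATH('')), 1, 1, '')
FROM dbo.Printers P
-- Note: the order of the concatenated descriptions is not guaranteed to match
-- the order of the IDs in the EmulationID string.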

In Sybase I need to convert row values into a single column of varchar type

Can you give me a query that converts the row values, which are of type varchar, into a single column with any delimiter?
E.g., a table with 2 columns:
col1 |col2
1 | m10
1 | m31
2 | m20
2 | m50
Now I want the output as:
col1| col2
1|m10:m31
2|m20:m50
Do you always have matched pairs, no more no less?
select
col1,
count(*)
from table
group by col1
having count(*) <> 2
would give you zero results?
if so, you can just self join...
declare @delimiter varchar(1)
set @delimiter = ':'
select
    t1.col1, t1.col2 + @delimiter + t2.col2
from tablename t1
inner join tablename t2
    on t1.col1 = t2.col1
    and t1.col2 < t2.col2  -- '<' rather than '<>' so each pair appears only once
One way to do that is using cursors.
With the cursor you can fetch a row at a time!
Pseudo-code would be:
if actual_col1 = last_col1
then col2_value = col2_value + actual_col2
else
insert into #temptable value(col1, col2_value)
col2_value = actual_col2
end
Check HERE to know how to use them.
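A rough sketch of that cursor idea in Sybase-style T-SQL (the #result table, the variable names, and the final flush after the loop are assumptions added here; the exact cursor and batch syntax may need tweaking for your ASE version):
create table #result (col1 int, col2 varchar(255))

declare @c1 int, @last_c1 int, @c2 varchar(40), @acc varchar(255)
declare grp_cur cursor for
    select col1, col2 from tablename order by col1, col2

open grp_cur
fetch grp_cur into @c1, @c2
select @last_c1 = @c1, @acc = @c2

fetch grp_cur into @c1, @c2
while @@sqlstatus = 0
begin
    if @c1 = @last_c1
        select @acc = @acc + ':' + @c2                -- same col1: append with delimiter
    else
    begin
        insert into #result values (@last_c1, @acc)   -- col1 changed: flush the finished group
        select @last_c1 = @c1, @acc = @c2
    end
    fetch grp_cur into @c1, @c2
end

insert into #result values (@last_c1, @acc)           -- flush the last group
close grp_cur
deallocate cursor grp_cur

select * from #result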
Use this solution (the list() aggregate is available in Sybase SQL Anywhere and IQ):
SELECT col1, list(col2, ':') as col2 FROM table_name GROUP BY col1;
Please use the logic below; the table #t1 will be the final table.
create table #t123(a char(2), b char(2))
go
create table #t1(a char(2), c char(100) default '')
go
Insert into #t123 values ('a','1')
Insert into #t123 values ('a','2')
Insert into #t123 values ('a','3')
Insert into #t123 values ('b','1')
Insert into #t123 values ('c','1')
Insert into #t123 values ('d','1')
Insert into #t123 values ('d','1')
go
insert into #t1 (a) Select distinct a from #t123
go
Select distinct row_id = identity(8), a into #t1234 from #t123
go
Declare @a int, @b int, @c int, @d int, @e int, @f char(2), @g char(2), @h char(2)
Select @a = min(row_id), @b = max(row_id) from #t1234
While @a <= @b
Begin
    Select @f = a, @h = '', @g = '' from #t1234 where row_id = @a
    Update #t1 set c = '' where a = @f
    Select row_id = identity(8), b into #t12345 from #t123 where a = @f
    Select @c = min(row_id), @d = max(row_id) from #t12345
    While @c <= @d
    begin
        Select @g = b from #t12345 where row_id = @d
        Update #t1 set c = @g + ' ' + c where a = @f  -- change delimiter here if needed
        Select @d = @d - 1
    End
    Drop table #t12345
    Select @a = @a + 1
End
go
Select * from #t1 -- final table with transposed values

1 = 1 returns False in T-SQL - Why?

Please look at the snippet below
DECLARE @p__linq__0 datetime
SET @p__linq__0 = '2012-02-01 00:00:00'
SELECT (STR(CAST(DATEPART(day, @p__linq__0) AS float)))
SELECT
    InvoicingActivityStartDay,
    (STR(CAST(DATEPART(day, @p__linq__0) AS float))),
    CASE WHEN STR(CAST(DATEPART(day, @p__linq__0) AS float)) = InvoicingActivityStartDay THEN 'EQUAL' ELSE 'NOT EQUAL' END
FROM INVOICEMETADATA
This was the rough SQL Translation of a Linq-to-Entities query I had in my application. The two possible values for InvoicingActivityStartDay are 1 and 20.
This snippet results in rows like this:
InvoicingActivityStartDay Column1 Column2
1 1 NOT EQUAL
20 1 NOT EQUAL
I understand why it returns NOT EQUAL for the second row; but why does it return NOT EQUAL for the first row where 1 = 1?
Is InvoicingActivityStartDay a string? SELECT STR(CAST(DATEPART (day, getdate()) AS float)) returns a string. Are you expecting an integer comparison?
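For what it's worth, STR() also right-justifies its result in a 10-character field by default, which is the likely culprit if InvoicingActivityStartDay is stored as a string. A small demonstration (expected results shown in the comments, assuming the default STR length):
DECLARE @p datetime
SET @p = '2012-02-01 00:00:00'

SELECT '[' + STR(CAST(DATEPART(day, @p) AS float)) + ']'         -- '[         1]' : padded to 10 characters
SELECT CASE WHEN STR(CAST(DATEPART(day, @p) AS float)) = '1'
            THEN 'EQUAL' ELSE 'NOT EQUAL' END                    -- NOT EQUAL: the leading spaces matter
SELECT CASE WHEN LTRIM(STR(CAST(DATEPART(day, @p) AS float))) = '1'
            THEN 'EQUAL' ELSE 'NOT EQUAL' END                    -- EQUAL once the padding is trimmed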
