In Snowflake, the NUMBER data type supports up to 38 digits of precision. To store values with more than 38 digits, I have used the VARCHAR data type. I have two tables:
table1:
Block varchar
numberstart varchar
numberend varchar
table2:
number varchar
I want to see the block details from table1 where table2.number lies between table1.numberstart and table1.numberend.
Here I have two issues:
String comparison gives the wrong output.
I cannot convert the values to numbers using CAST or TO_NUMBER, because the strings are more than 38 digits long.
You could LPAD the values to a common length and then perform a string comparison:
SELECT *
FROM table2 t2
JOIN table1 t1
ON LPAD(t2.number, 50, '0') BETWEEN LPAD(t1.numberstart, 50, '0')
AND LPAD(t1.numberend, 50, '0');
db<>fiddle demo
This is a simplified case; it will not work if negative numbers or fractions are involved.
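The padding trick can be sketched in plain Python (hypothetical values; Snowflake's LPAD(x, 50, '0') corresponds to left-padding with zeros to a fixed width here):

```python
# Lexicographic comparison of numeric strings of unequal length gives
# wrong answers: "99" sorts after "500" because '9' > '5'.
start, end, value = "99", "1000", "500"
print(start <= value <= end)        # False, even though 99 <= 500 <= 1000

# Left-pad every value with zeros to a common width first,
# which is what LPAD(x, 50, '0') does in the query:
WIDTH = 50

def pad(s: str) -> str:
    return s.rjust(WIDTH, "0")

print(pad(start) <= pad(value) <= pad(end))   # True
```

Once all strings share the same width, lexicographic order matches numeric order, so BETWEEN behaves correctly.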
I have a Delta table as my input. After reading it and writing the output as a CSV file, I see scientific notation being displayed if the number of digits exceeds 7.
E.g. the Delta table has the column value
a = 22409595, which is of double data type.
O/P: the CSV is generated as
a = 2.2409595E7
I have tried all the possible methods, such as format_number, casting, etc., but unfortunately I haven't succeeded. Using format_number only works if I have a single record in my output; it does not work for multiple records.
Any help on this will be appreciated ☺️ Thanks in advance.
I reproduced this and was able to remove the scientific notation by converting the columns to decimal in the dataframe. Please follow the demonstration below:
This is my Delta table:
You can see I have numbers with more than 7 digits in all columns.
Generating scientific notation in the dataframe:
Cast the columns to DecimalType in the dataframe, which lets you specify the precision.
Give the maximum digit count as the precision. Here I have given 10, as the maximum number of digits in my numbers is 10.
Save this dataframe as CSV, which gives you the desired values.
My Source code:
%sql
CREATE TABLE table1 (col1 double,col2 double,col3 double);
insert into table1 values (22409595,12241226,17161224),(191919213,191919213,191919213);
%python
from pyspark.sql.types import DecimalType

sqldf = spark.sql("select * from table1")
for col in sqldf.columns:
    sqldf = sqldf.withColumn(col, sqldf[col].cast(DecimalType(10, 0)))
sqldf.show()
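The underlying behaviour can be illustrated in plain Python. Spark renders doubles the way Java's Double.toString does, switching to scientific notation above 10^7; Python floats only switch much later, so the sketch below forces the notation with an E format for illustration, and shows how a fixed-point decimal (the analogue of the DecimalType(10, 0) cast) keeps the plain digits:

```python
from decimal import Decimal

x = 22409595.0  # a double-typed value, as in the Delta table example

# Forcing scientific notation, similar to what the CSV writer produces:
print(f"{x:.7E}")                              # 2.2409595E+07

# Converting to a fixed-point decimal with scale 0 keeps all digits in
# plain notation, which is what the DecimalType cast achieves:
print(Decimal(str(x)).quantize(Decimal("1")))  # 22409595
```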
I have the following query:
SELECT T1.C1,
CAST((SUM(1) OVER (ORDER BY T1.C2 ROWS UNBOUNDED PRECEDING) + T2.C3 (FORMAT '--(37)9') AS VARCHAR(20) )) AS RESULT
FROM
T1
CROSS JOIN
T2;
What would be the equivalent for (FORMAT '--(37)9') in Azure Synapse Analytics?
This is a really weird query (including SUM(1) instead of COUNT(*)).
It returns an integer as a left-aligned string.
Unless T2.C3 is a decimal with fractional digits you can simply remove the FORMAT and CAST to VARCHAR(20).
If T2.C3 has a fraction, CAST to BIGINT or DECIMAL(38,0) first.
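As a sketch of what the rewritten expression computes (hypothetical data; the running SUM(1) OVER (ORDER BY C2 ROWS UNBOUNDED PRECEDING) is just a row count, to which C3 is added before the cast to a string):

```python
# Hypothetical stand-ins for T1.C2 (already in ORDER BY order) and T2.C3
c2_values = [10, 20, 30]
c3 = 5.25            # a decimal with a fractional part

# Running row count + C3, truncated to an integer first (the
# CAST to BIGINT step), then rendered as a string:
results = [str(int(i + c3)) for i in range(1, len(c2_values) + 1)]
print(results)       # ['6', '7', '8']
```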
I have a table with a few key columns created as nvarchar(80), i.e. Unicode.
I can list the full dataset with a SELECT * statement (Table1) and can confirm the values I need to filter are there.
However, I can't get any results from that table if I filter rows using alphabetic characters as input on any column.
The columns in Table1 store values in Cyrillic characters.
I know it must have to do with character encoding: what I see in the result list is not what I use as input characters.
The Unicode nvarchar type should resolve this character-type mismatch automatically.
What do you suggest I do in order to get results?
Thank you very much.
Paulo
I have a few things I want to accomplish with Presto. I am currently getting some data in the following formats:
date 16-Jan-2018
num 1000
I want to write a query that can convert these values to
2018-01-16
1,000
For the date you could do the following:
select date_parse('date 16-Jan-2018','date %d-%b-%Y')
For the second field, you would have to split it first with split(string, delimiter), then cast the second array element to INTEGER.
Here is the full answer:
SELECT date_parse(date_string,'date %d-%b-%Y') as parsed_date,
CAST(
split(int_string, ' ')[2] AS INTEGER
) as parsed_int
FROM (VALUES ('date 16-Jan-2018', 'int 1000'))
AS t(date_string, int_string)
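The same two conversions can be sketched in plain Python (strptime accepts the same %d-%b-%Y directives as Presto's date_parse; the thousands-separator formatting for the second field is an extra step not shown in the SQL above):

```python
from datetime import datetime

date_string = "date 16-Jan-2018"
num_string = "num 1000"

# Parse the date, matching the literal "date " prefix in the format string:
parsed_date = datetime.strptime(date_string, "date %d-%b-%Y").date()
print(parsed_date)            # 2018-01-16

# Split on the space and cast the second element to an integer,
# then add the thousands separator the question asked for:
parsed_int = int(num_string.split(" ")[1])
print(f"{parsed_int:,}")      # 1,000
```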
This question already has answers here:
Oracle LISTAGG() for querying use
(2 answers)
Closed 9 years ago.
I have to pass a string of numbers (like 234567, 678956, 345678) to a stored procedure. The SP will split the string on the comma delimiter, take each value (e.g. 234567), do a look-up in another table to get the corresponding value from another column, and build a string.
For instance, if I have a table, TableA, with three columns (Column1, Column2, and Column3) and data as follows:
1 123456 XYZ
2 345678 ABC
I would pass a string of numbers to the stored procedure, for instance '123456', '345678'. It would then split this string of numbers, take the first number (123456), do a look-up in TableA, and get the matching value from Column3, i.e. 'XYZ'.
I need to loop through the table with the split string of numbers ('123456', '345678') and return the concatenated string, like "XYZ ABC".
I am trying to do it in Oracle 11g.
Any suggestions would be helpful.
It's almost always more efficient to do everything in a single statement if at all possible, i.e. don't use a function if you can avoid it.
There is a little trick you can use to solve this using REGEXP_SUBSTR() to turn your string into something usable.
with the_string as (
select '''123456'', ''345678''' as str
from dual
)
, the_values as (
select regexp_substr( regexp_replace(str, '[^[:digit:],]')
, '[^,]+', 1, level ) as val
from the_string
connect by regexp_substr( regexp_replace(str, '[^[:digit:],]')
, '[^,]+', 1, level ) is not null
)
select the_values.val, t1.c
from t1
join the_values
on t1.b = the_values.val
This works by removing everything but the digits you require and something to split them on: the comma. You then split on the comma and use a hierarchical query to turn the string into rows, which you can then join to.
Here's a SQL Fiddle as a demonstration.
Please note that this is highly inefficient when used on large datasets. It would probably be better if you passed the values to your procedure as normal parameters...
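The string-mangling part of the trick can be sketched in plain Python (with a hypothetical dictionary standing in for the lookup table t1):

```python
import re

# The input as it arrives: one string of quoted, comma-separated numbers
s = "'123456', '345678'"

# Step 1: remove everything except digits and commas
# (the regexp_replace(str, '[^[:digit:],]') step)
cleaned = re.sub(r"[^\d,]", "", s)
print(cleaned)                # 123456,345678

# Step 2: split on the comma (what the CONNECT BY / REGEXP_SUBSTR
# hierarchical query does, one level per element)
values = cleaned.split(",")

# Step 3: join each value against the lookup table (t1.b -> t1.c)
lookup = {"123456": "XYZ", "345678": "ABC"}   # hypothetical t1 contents
print(" ".join(lookup[v] for v in values))    # XYZ ABC
```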