Converting bigint into scientific number in Hive - string

I am trying to convert a bigint to a scientific number in Hive using the cast function, as below:
select cast(805454539 as float) from table_name;
The above query gives me 805454528.
However, I am looking for something like 8.05454539E8.

To convert a float to its scientific notation string representation you can use the printf() function (8 in this example is the number of decimal places):
select printf('%1.8e',cast(805454539 as double))
result:
8.05454539e+08
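Applied to an actual bigint column rather than a literal, the same call would look like the sketch below (table_name and bigint_col are placeholders for the asker's table and column):
select printf('%1.8e', cast(bigint_col as double)) from table_name;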

Related

How to remove scientific notation while generating CSV files as output?

I have a delta table which is my input; after reading it and generating the output as a CSV file, I see scientific notation being displayed if the number of digits exceeds 7.
E.g.: the delta table has a column value
a = 22409595, which is a double data type.
Output: the CSV is being generated as
a = 2.2409595E7.
I have tried all the possible methods such as format_number, casting, etc., but unfortunately I haven't succeeded. Using format_number only works if I have a single record in my output; it does not work for multiple records.
Any help on this will be appreciated ☺️ thanks in advance.
I reproduced this and was able to remove the scientific notation by converting the values to decimal in the dataframe.
Please follow the demonstration below:
This is my Delta table:
You can see, I have numbers with more than 7 digits in all columns.
Generating scientific notation in the dataframe:
Cast the columns to Decimal type in the dataframe, which lets you specify the precision.
Give the count of maximum digits. Here, I have given 10, as the maximum number of digits in my numbers is 10.
Save this dataframe as CSV, which gives you the desired values.
My Source code:
%sql
CREATE TABLE table1 (col1 double,col2 double,col3 double);
insert into table1 values (22409595,12241226,17161224),(191919213,191919213,191919213);
%python
from pyspark.sql.types import DecimalType

sqldf = spark.sql("select * from table1")

# Cast every column to DecimalType so values are not rendered in scientific notation
for col in sqldf.columns:
    sqldf = sqldf.withColumn(col, sqldf[col].cast(DecimalType(10, 0)))
sqldf.show()
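The save-to-CSV step described above is not shown in the source code; a minimal sketch of it, assuming a Databricks-style output path (the path /tmp/output is a placeholder):
%python
# Write the decimal-cast dataframe out as CSV; replace the path with your own location
sqldf.write.mode("overwrite").option("header", True).csv("/tmp/output")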

How to cast String into int in PostgreSQL

I want to cast a string into an integer.
I have a table like this:
Have:
ID Salary
1 "$1,000"
2 "$2,000"
Want:
ID Salary
1 1000
2 2000
My query:
Select Id, cast(substring(Salary,2, length(salary)) as int)
from have
I am getting an error:
ERROR: invalid input syntax for type integer: "1,000"
SQL state: 22P02
Can anyone please provide some guidance on this?
Remove all non-digit characters, then cast the result to an integer:
regexp_replace(salary, '[^0-9]+', '', 'g')::int
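Plugged into the query from the question (assuming the table is named have, as in the original query), it would look like:
select id,
       regexp_replace(salary, '[^0-9]+', '', 'g')::int as salary
from have;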
But instead of trying to convert the value every time you select it, fix your database design and convert the column to a proper integer. Never store numbers in text columns.
alter table bad_design
alter salary type int using regexp_replace(salary, '[^0-9]+', '', 'g')::int;

How to compare numeric strings in Snowflake

In Snowflake, the NUMBER data type supports up to 38 digits.
To store values with more than 38 digits, I have used the VARCHAR data type.
I have two tables:
table1:
Block varchar
numberstart varchar
numberend varchar
table2:
number varchar
I want to see the block details from table1 where table2.number lies between table1.numberstart and table1.numberend.
Here I have two issues:
1. String comparison gives the wrong output.
2. I cannot convert the values to numbers using cast or to_number, because the string values are more than 38 digits.
You could LPAD the values to a common size and then perform a string comparison:
SELECT *
FROM table2 t2
JOIN table1 t1
  ON LPAD(t2.number, 50, '0') BETWEEN LPAD(t1.numberstart, 50, '0')
                                  AND LPAD(t1.numberend, 50, '0');
db<>fiddle demo
This is a simplified case; it will not work if negative numbers or fractions are involved.
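To see why the padding is needed, here is a minimal illustration comparing raw strings with padded ones (the values are made up):
SELECT '9' > '10' AS raw_compare,           -- TRUE: lexicographic order, numerically wrong
       LPAD('9', 2, '0') > '10' AS padded;  -- FALSE: '09' < '10', numerically correct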

How to convert FORMAT '--(37)9' from Teradata to Azure Synapse?

I have the following query:
SELECT T1.C1,
CAST((SUM(1) OVER (ORDER BY T1.C2 ROWS UNBOUNDED PRECEDING) + T2.C3 (FORMAT '--(37)9') AS VARCHAR(20) )) AS RESULT
FROM T1
CROSS JOIN T2;
What would be the equivalent for (FORMAT '--(37)9') in Azure Synapse Analytics?
This is a really weird query (including SUM(1) instead of COUNT(*)).
It's returning an integer as a left-aligned string.
Unless T2.C3 is a decimal with fractional digits, you can simply remove the FORMAT and CAST to VARCHAR(20).
If T2.C3 has a fraction, CAST it to BIGINT or DECIMAL(38,0) first.
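A sketch of what that would look like in Synapse, assuming T2.C3 is an integer type (otherwise add the extra cast described above):
SELECT T1.C1,
       CAST(SUM(1) OVER (ORDER BY T1.C2 ROWS UNBOUNDED PRECEDING) + T2.C3 AS VARCHAR(20)) AS RESULT
FROM T1
CROSS JOIN T2;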

Converting values in Athena Presto

I have a few things I want to accomplish with Presto. I am currently getting some data in the following formats:
date 16-Jan-2018
num 1000
I want to write a query that can convert these values to
2018-01-16
1,000
For the date you could do the following:
select date_parse('date 16-Jan-2018','date %d-%b-%Y')
For the second field, you would have to split it first with split(string, delimiter), then cast the second array element to INTEGER.
Here is the full answer:
SELECT date_parse(date_string, 'date %d-%b-%Y') AS parsed_date,
       CAST(split(int_string, ' ')[2] AS INTEGER) AS parsed_int
FROM (VALUES ('date 16-Jan-2018', 'int 1000'))
     AS t(date_string, int_string)
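If a plain date rather than a timestamp is needed (the question shows 2018-01-16), one option is to additionally cast the parsed value, for example:
SELECT CAST(date_parse(date_string, 'date %d-%b-%Y') AS DATE) AS parsed_date
FROM (VALUES ('date 16-Jan-2018')) AS t(date_string)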
