Converting decimal to specific scientific notation format - python-3.x

I'm trying to convert decimal to a specific scientific notation format:
-2.802479940 ==> -.2802479940E+01
3.796137791 ==> 0.3796137791E+01
And so on. Basically, no leading zero on negative numbers and a leading zero
on positive numbers. I've been able to print in the big-E scientific notation,
but I have no idea how to force the leading character to be a '-' or a '0' only.

This is kind of ugly (there aren't any built-in string-formatting options for it as far as I know), but it should do what you want it to do:
import re

nums = [-2.802479940, 3.796137791, -0.012, 0.035, 0, 100, -200]
decimal_digits = 10
for n in nums:
    num = ('{:.' + str(decimal_digits - 1) + 'E}').format(n)
    # increment the exponent
    exp_search = re.search(r'E(\+|-)\d+', num).group()
    exp = int(exp_search[1:])
    new_exp = exp + 1
    # recompute the sign: incrementing can flip e.g. E-01 to E+00
    sign = '+' if new_exp >= 0 else '-'
    exponent = 'E' + sign + '{:02}'.format(abs(new_exp))
    # move the decimal place over
    sig_search = re.search(r'-?\d\.\d+', num).group()
    stripped = sig_search.replace('.', '')
    if stripped[0] == '-':
        significand = stripped.replace('-', '-.')
    else:
        significand = '0.' + stripped
    # glue the significand and exponent back together
    new_num = significand + exponent
    print('{:>20}{:>20}'.format(num, new_num))
    assert float(num) == float(new_num)
Output:
-2.802479940E+00 -.2802479940E+01
3.796137791E+00 0.3796137791E+01
-1.200000000E-02 -.1200000000E-01
3.500000000E-02 0.3500000000E-01
0.000000000E+00 0.0000000000E+01
1.000000000E+02 0.1000000000E+03
-2.000000000E+02 -.2000000000E+03
This doesn't perform any arithmetic, just string operations, so it shouldn't introduce any floating-point representation issues.

Here's C code to do what you want:
#include <stdio.h>

void numfmt(double d, char* buf) {
    int exp = 0;
    int cont = 1;
    char sign = '0';
    if (d < 0) {
        sign = '-';
        d = -d;
    }
    if (d == 0.0)   /* zero can't be normalized; avoid an infinite loop */
        cont = 0;
    while (cont) {
        if (d >= 1.0) {
            exp++;
            d /= 10.0;
        } else if (d < 0.1) {
            exp--;
            d *= 10.0;
        } else
            cont = 0;
    }
    /* d is now in [0.1, 1.0); print it, then overwrite the leading
       '0' with '0' or '-' as required */
    sprintf(buf, "%12.10fE%+03d", d, exp);
    *buf = sign;
}

To convert any number into scientific notation you need to know two things: the exponent and the decimal part, like this: Number = Dec E exponent (or Number = Dec * 10^exponent).
The log10() function is the key here, because it gives you the exponent of Number. To get the exponent you have this relation:
exponent = log10(Number/Dec)
but since we don't know the Dec part yet, we simplify it to:
exponent = log10(Number)
This will not give an integer exponent, only the real exponent for which 10^exponent = Number. To get the right integer exponent you need to round down with the floor() function.
That's why you change the relation to:
exponent = int(floor(log10(Number)))  # int() converts the floating value into an integer exponent
Then, using this exponent, you can easily find the decimal part with the relation:
Dec = Number / 10^exponent
Now you have a way to change any number* into scientific notation. If you need a leading zero, you just have to divide the decimal part by 10 and increment the exponent.
* log10(x) is only defined for strictly positive x, so if you have a negative number you must convert it into a positive one before passing it as an argument to log10(), and if the number is 0 you should handle it as a special case (return Dec = 0 and exponent = 0).
Here is an example of this in Python:
from math import floor, log10

def Decompose(originalvalue):
    if originalvalue != 0.0:
        calcul = abs(originalvalue)
        exponent = int(floor(log10(calcul)))     # get the integer exponent
        value = originalvalue / 10.0**exponent   # get the scientific decimal part
        # To get a leading zero, verify whether the decimal part is >= 1 in
        # magnitude, and in that case modify the value and exponent to
        # eliminate it.
        if abs(value) >= 1:
            value *= 10**-1
            exponent += 1
    else:
        exponent = 0
        value = 0.0
    return [value, exponent]
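For illustration, here is a quick check of Decompose against the target format (a minimal sketch; the formatting line is my own, not part of the answer above):
value, exponent = Decompose(-2.802479940)
print(value, exponent)  # -0.280247994 1
print('{:.10f}E{:+03d}'.format(value, exponent).replace('-0.', '-.'))  # -.2802479940E+01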

Here is a solution I came up with, having the same problem.
def converter(number):
    number_str = '{:.14e}'.format(number)
    number_split = number_str.split('.')
    int_part = number_split[0]
    exp_part = number_split[1][-3:]
    dec_part = number_split[1][:-6]
    int_part_numeric = int(int_part)
    if number == 0:
        return '0.000000000000E+00'
    if int_part_numeric == 0:
        sign = '0'
        new_exp_part = exp_part
        new_dec_part = dec_part
    elif int_part_numeric < 0:
        sign = '-'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part.strip('-') + dec_part[:-1]
    elif int_part_numeric > 0:
        sign = '0'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part + dec_part[:-1]
    return sign + '.' + new_dec_part + 'E' + new_exp_part
It could probably be simplified a bit though...
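For example, a quick check against the sample inputs (a minimal driver using only the function above):
for n in [-2.802479940, 3.796137791, 0.0]:
    print(converter(n))
# -.280247994000E+01
# 0.379613779100E+01
# 0.000000000000E+00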

Related

fmt::format how to always print first n non-zero digits with leading zeroes

Given three floating point numbers
a = 1.0
b = 1.23456
c = 1.23456e-12
How can I always print the first n non-zero digits, with leading zeroes if required? Desired output for n=6:
1.00000
1.23456
0.00000000000123456
I tried fmt::format("{:.6f}", number) but this returns 0.00000 in the last case.
There is no built-in way of doing this but you could do it in several steps:
1. Format using the exponent notation (e).
2. Extract the exponent.
3. Format using the fixed notation (f), adjusting the precision to account for the exponent.
For example:
#include <fmt/core.h>
#include <cstdlib>  // for atoi

void print(double n) {
    auto s = fmt::format("{:e}", n);
    auto exp = atoi(s.substr(s.find('e') + 1).c_str());
    int precision = 6;
    if (exp < 0) precision -= exp;
    fmt::print("{:.{}f}\n", n, precision - 1);
}

int main() {
    print(1.0);
    print(1.23456);
    print(1.23456e-12);
}
This prints:
1.00000
1.23456
0.00000000000123456
https://godbolt.org/z/16n8s7TG3
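For comparison, the same three steps can be sketched in Python's format mini-language (the first_n_digits name and its digits parameter are mine, purely illustrative):
def first_n_digits(n: float, digits: int = 6) -> str:
    # Steps 1-2: format in exponent notation and extract the decimal exponent.
    exp = int(format(n, 'e').rsplit('e', 1)[1])
    # Step 3: widen the fixed-notation precision for small magnitudes.
    precision = digits - 1 - min(exp, 0)
    return format(n, '.{}f'.format(precision))

for x in (1.0, 1.23456, 1.23456e-12):
    print(first_n_digits(x))  # 1.00000 / 1.23456 / 0.00000000000123456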

Signed float to hexadecimal number

How can I convert a float to a specific hexadecimal format:
1 bit for the sign, 15 bits for the integer value, and the remaining 16 bits for the values after the decimal point?
Example output should be ffff587a for -0.6543861, fff31a35 for -12.897631, 006bde10 for 107.8674316, 003bd030 for 59.8132324
I have written a program that can do the unsigned conversion, I am stuck at the signed part. Could anyone guide me on how I can achieve this in a very compact way?
def convert(num):
    binary2 = ""
    Int = int(num)
    fract = num - Int
    binary = '{:16b}'.format(Int & 0b1111111111111111)
    for i in range(16):
        fract *= 2
        fract_bit = int(fract)
        if fract_bit == 1:
            fract -= fract_bit
            binary2 += '1'
        else:
            binary2 += '0'
    return int(binary + binary2, 2)

value = 107.867431640625
x = convert(value)
hex(x)
output: 0x6bde10
This is simply the Q16.16 fixed-point format. To convert a floating-point number to this format, simply multiply it by 2^16 (in Python, 1<<16 or 65536) and convert the product to an integer:
y = int(x * (1<<16))
To show its 32-bit two's complement representation, add 2^32 if it is negative and then convert it to hexadecimal:
y = hex(y + (1<<32 if y < 0 else 0))
For example, the following prints “0xfff31a35”:
#!/usr/bin/python
x=-12.897631
y = int(x * (1<<16))
y = hex(y + (1<<32 if y < 0 else 0))
print(y)
This conversion truncates. If you want rounding, you can add .5 inside the int or you can add additional code for other types of rounding. You may also want to add code to handle overflows.
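Putting the pieces together as a sketch (the to_q16_16_hex name is mine; this variant rounds half away from zero instead of truncating):
def to_q16_16_hex(x: float) -> str:
    # Scale by 2**16, rounding half away from zero rather than truncating.
    y = int(x * (1 << 16) + (0.5 if x >= 0 else -0.5))
    # Show negative results in 32-bit two's complement.
    if y < 0:
        y += 1 << 32
    return hex(y)

print(to_q16_16_hex(-12.897631))   # 0xfff31a35
print(to_q16_16_hex(107.8674316))  # 0x6bde10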

Is there a more efficient way to reverse an integer number (both positive and negative) in Scala?

I am writing a program in Scala that takes an integer and reverses it. For example, an input of 30 returns an output of 3. This program must also work for negative numbers; for instance, an input of -89 returns an output of -98. Also, if the reversal has a leading 0, it should be truncated (30 becomes 3). This is the code I have written.
import io.StdIn._

val twoDigitNumber: Int = takeInput()
println("The reversal is " + reverse(twoDigitNumber))

// define a function named reverse to handle the actual reverse process for -ve and +ve numbers
def reverse(x: Integer): Integer = {
  // 4 possibilities: +ve, 1st digit 0; -ve, 1st digit zero; -ve, 1st digit not zero; +ve, 1st digit not zero
  if (x > 0 && x.toString.reverse.charAt(0) == '0') {
    x.toString.reverse.substring(1).toInt
  } else if (x < 0 && x.toString.substring(1).reverse.charAt(0) == '0') {
    ('-' + x.toString.substring(1).reverse.substring(1)).toInt
  } else if (x < 0 && x.toString.substring(1).reverse.charAt(0) != '0') {
    ('-' + x.toString.substring(1).reverse).toInt
  } else {
    x.toString.reverse.toInt
  }
}

// reads an integer number
def takeInput(): Int = {
  print("Enter a two-digit integer number: ")
  readInt()
}
Is there a more efficient way to do this?
The shortest I found:
x.signum * x.abs.toString.reverse.toInt
It can be like below, considering x is your integer input:
val reverseOutput = if (x > 0) x.toString.reverse.toInt else -1 * ((x * -1).toString.reverse.toInt)
def reverseANumber(n: Int): Int = {
  def _reverseANumber(i: Int, i1: Int): Int = i match
    case 0 => i1
    case i =>
      val n = i % 10
      val n1 = n * math.pow(10, ((math.log10(i) + 1).toInt) - 1).toInt
      _reverseANumber(i / 10, i1 + n1)
  _reverseANumber(n, 0)
}

Method to convert binary number to decimal

I tried to write a program that converts a binary number to a decimal but am getting a number of errors and I cannot figure out where I am going wrong.
// Takes exponent from the user and calculates 2 ** exponent
int power2(int exponent) {
    result = 2 ** exponent
    return result
}

// Converts binary number to decimal
int binary2decimal(String binary) {
    result = 0
    count = 0
    for (i = binary.length(); i-- > 0;) {
        int d = Integer.parseInt(binary.charAt(i))
        if (d == 1) {
            result = result + power2(count)
        }
        count++
    }
    return result
}

binary2decimal("101110")
Change
int d = Integer.parseInt(binary.charAt(i))
to
int d = Integer.parseInt("${binary[i]}")
and it will work.
An alternative implementation to yours is:
int binary2decimal2(String binary) {
    binary.reverse()
          .toList()
          .indexed()
          .collect { Integer idx, String val -> Integer.parseInt(val) * (2 ** idx) }
          .sum()
}
Assuming you want the shortest route to a clean solution, use:
Integer.parseInt(String base2num, int radix), where radix = 2.
See http://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#parseInt(java.lang.String,%20int)
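For what it's worth, Python's built-in int accepts the same kind of radix argument, so a quick cross-language sanity check looks like this:
# Python analogue of Integer.parseInt(s, 2): int() with an explicit base.
assert int('101110', 2) == 46
# format() with 'b' converts back to a binary string.
assert format(46, 'b') == '101110'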

Convert binary (integer and fraction) from VHDL to decimal, negative value in C code

I have 14-bit data that is fed from an FPGA written in VHDL. A Nios II processor reads the 14-bit data from the FPGA and does some processing tasks, where the Nios II system is programmed in C code.
The 14-bit data can be positive, zero or negative. In the Altera compiler, I can only define the data to be 8, 16 or 32 bits, so I define it as 16-bit data.
First, I need to check if the data is negative; if it is, I need to pad the first two MSBs with bit '1' so the system treats it as a negative value instead of a positive one.
Second, I need to compute the real value of this binary representation into a decimal value of BOTH the integer and fraction parts.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I could convert a binary number (consisting of both integer and fraction) to decimal values.
To be specific, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]) {
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;
    for (bi = 0; binary[bi] != '\0'; bi++) {
        if (binary[bi] == '.') {
            dot = bi;
        }
        len++;
    }
    if (dot == -1)
        dot = len;
    for (i = dot; i >= 0; i--) {
        if (binary[i] == '1') {
            result += (double) pow(2, (dot - i - 1));
        }
    }
    for (i = dot; binary[i] != '\0'; i++) {
        if (binary[i] == '1') {
            result += 1.0 / (double) pow(2.0, (double)(i - dot));
        }
    }
    return result;
}

int main()
{
    char bin[] = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";
    printf("%s -> %f\n", bin, convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering if this code can be used to check for negative value? I did try with a binary string of 11111101.11 and it gives the output of 253.75...
I have two questions:
1. What modifications do I need to make in order to read a negative value? I know that I can test the bit (as below) to check whether the MSB is 1; if it is, the value is negative...
if (data14 & 0x2000) // if true, it is a negative value
The issue is, since it involves a fraction part (not only an integer), I'm a bit unsure whether the method still works...
2. If the binary number is originally not in string format, is there any way I could convert it to a string? The binary number is originally fed from an FPGA block written in VHDL, say 14 bits, with the MSB as the sign bit, the following 6 bits the magnitude of the integer part and the last 6 bits the magnitude of the fractional part. I need the decimal value in C code for the Altera Nios II processor.
OK, so I'm focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume the binary representation you have for your signed number is two's complement, but I'm not really sure, according to your comments, that your input matches the one used by that algorithm.
First pad the 2 MSBs to get a 16-bit representation:
data16 = (data14 & 0x2000) ? (data14 | 0xC000) : data14;
If the value is positive it remains unchanged, and if negative this gives the correct two's complement representation on 16 bits.
For the fractional part everything is the same as in the algorithm you mentioned in your question.
For the integer part everything is the same except the treatment of the MSB.
For an unsigned number the MSB (i.e. bit[15]) represents pow(2, 15-6) (6 is the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2, 15-6), meaning the algorithm becomes
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
or, said differently, if you don't want the extra * by the conditional ±1:
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    if (9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention, given your format, when dot == 10 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes
for (i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if (10 != dot || i)
        {
            result += (double) pow(2, (dot - i - 1));
        }
        else
        {
            // result -= (double) pow(2, (dot - i - 1));
            // Due to your number format, i == 0 and dot == 10, so
            result -= 512;
        }
    }
}
WARNING: in Brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I'm not sure this algorithm is suited to your case.
I think this should work:
#include <stdint.h>

float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend in, since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;
    /* convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware it may significantly increase the size of your application. If you don't need the float and want to keep the application small, you should write your own conversion function.
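As a cross-check of the scaling, here is a hypothetical Python decoder for the same 14-bit format (assuming two's complement with 6 fractional bits, as described above; the sample word is made up):
def decode_14bit(raw: int) -> float:
    # Sign-extend from 14 bits (two's complement), then scale by 2**-6.
    if raw & 0x2000:
        raw -= 1 << 14
    return raw / 64.0

print(decode_14bit(0b11111101110100))  # -2.1875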
