fmt::format: how to always print the first n non-zero digits with leading zeroes

Given three floating point numbers
a = 1.0
b = 1.23456
c = 1.23456e-12
How can I always print the first n non-zero digits, with leading zeroes if required? Desired output for n = 6:
1.00000
1.23456
0.00000000000123456
I tried fmt::format("{:.6f}", number) but this returns 0.000000 in the last case.

There is no built-in way of doing this but you could do it in several steps:
Format using the exponent notation (e).
Extract the exponent.
Format using the fixed notation (f) adjusting the precision to account for the exponent.
For example:
#include <fmt/core.h>
#include <cstdlib> // for atoi

void print(double n) {
    // Format in exponent notation to find the decimal exponent.
    auto s = fmt::format("{:e}", n);
    auto exp = atoi(s.substr(s.find('e') + 1).c_str());
    // Widen the precision for negative exponents so the first
    // significant digits survive fixed formatting.
    int precision = 6;
    if (exp < 0) precision -= exp;
    fmt::print("{:.{}f}\n", n, precision - 1);
}

int main() {
    print(1.0);
    print(1.23456);
    print(1.23456e-12);
}
This prints:
1.00000
1.23456
0.00000000000123456
https://godbolt.org/z/16n8s7TG3
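For comparison, the same three steps can be sketched in Python with format() (print_sig is just an illustrative name, not part of the answer above):

def print_sig(n, digits=6):
    # Steps 1-2: format in exponent notation and extract the exponent.
    exp = int(format(n, 'e').split('e')[1])
    # Step 3: widen the fixed precision to account for a negative exponent.
    precision = digits - 1
    if exp < 0:
        precision -= exp
    print(format(n, '.{}f'.format(precision)))

print_sig(1.0)          # 1.00000
print_sig(1.23456)      # 1.23456
print_sig(1.23456e-12)  # 0.00000000000123456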

Related

Signed float to hexadecimal number

How to convert float to a specific format in hexadecimal:
1 bit for the sign, 15 bits for the integer value, and the remaining 16 bits for the values after the decimal point.
Example output should be ffff587a for -0.6543861, fff31a35 for -12.897631, 006bde10 for 107.8674316, 003bd030 for 59.8132324
I have written a program that can do the unsigned conversion, but I am stuck at the signed part. Could anyone guide me on how I can achieve this in a very compact way?
def convert(num):
    binary2 = ""
    Int = int(num)
    fract = num - Int
    binary = '{:16b}'.format(Int & 0b1111111111111111)
    for i in range(16):
        fract *= 2
        fract_bit = int(fract)
        if fract_bit == 1:
            fract -= fract_bit
            binary2 += '1'
        else:
            binary2 += '0'
    return int(binary + binary2, 2)

value = 107.867431640625
x = convert(value)
hex(x)
output: 0x6bde10
This is simply the Q16.16 fixed-point format. To convert a floating-point number to this format, simply multiply it by 2^16 (in Python, 1<<16 or 65536) and convert the product to an integer:
y = int(x * (1<<16))
To show its 32-bit two’s complement representation, add 2^32 if it is negative and then convert it to hexadecimal:
y = hex(y + (1<<32 if y < 0 else 0))
For example, the following prints “0xfff31a35”:
#!/usr/bin/python
x=-12.897631
y = int(x * (1<<16))
y = hex(y + (1<<32 if y < 0 else 0))
print(y)
This conversion truncates. If you want rounding, you can add .5 inside the int or you can add additional code for other types of rounding. You may also want to add code to handle overflows.
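Putting the steps together, a minimal sketch (to_q16_16_hex is an illustrative name; note that Python's round uses round-half-to-even rather than the "add .5" rounding mentioned above):

def to_q16_16_hex(x):
    # Scale by 2**16 and round to nearest (the int() above truncates instead).
    y = round(x * (1 << 16))
    # Masking with 0xFFFFFFFF is equivalent to adding 2**32 when negative.
    y &= 0xFFFFFFFF
    return '{:08x}'.format(y)

print(to_q16_16_hex(-12.897631))  # fff31a35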

find the number of ways you can form a string of size N, given an unlimited number of 0s and 1s

The question below was asked in the Atlassian company online test. I don't have test cases; this is the question as I took it from this link:
find the number of ways you can form a string of size N, given an unlimited number of 0s and 1s. But
you cannot have D consecutive 0s or T consecutive 1s. N, D, T are given as inputs.
Please help me with this problem; any approach for how to proceed with it?
My approach for the above question: I simply applied recursion, tried all possibilities, and then memoized the results using a hash map.
But it seems to me there must be some combinatoric approach that can solve this question in less time and space. For debugging purposes I am also printing the strings generated during recursion; if there is a flaw in my approach, please do tell me.
#include <bits/stdc++.h>
using namespace std;

unordered_map<string, int> dp;

int recurse(int d, int t, int n, int oldd, int oldt, string s)
{
    if (d <= 0)
        return 0;
    if (t <= 0)
        return 0;
    cout << s << "\n";
    if (n == 0 && d > 0 && t > 0)
        return 1;
    string h = to_string(d) + " " + to_string(t) + " " + to_string(n);
    if (dp.find(h) != dp.end())
        return dp[h];
    int ans = 0;
    ans += recurse(d - 1, oldt, n - 1, oldd, oldt, s + '0')
         + recurse(oldd, t - 1, n - 1, oldd, oldt, s + '1');
    return dp[h] = ans;
}

int main()
{
    int n, d, t;
    cin >> n >> d >> t;
    dp.clear();
    cout << recurse(d, t, n, d, t, "") << "\n";
    return 0;
}
You are right: instead of generating strings, it is worth considering a combinatorial approach using (a kind of) dynamic programming.
A "good" sequence of length K may end with 1..D-1 zeros or 1..T-1 ones.
To make a good sequence of length K+1, you can append a zero to every sequence except those already ending in D-1 zeros, producing 2..D-1 trailing zeros from the first kind of precursor and 1 trailing zero from the second kind.
Similarly, you can append a one to all sequences of the first kind, and to all sequences of the second kind except those ending in T-1 ones, producing 1 trailing one from the first kind of precursor and 2..T-1 trailing ones from the second kind.
Make two tables,
Zeros[N][D] and Ones[N][T]
Fill the first row with zeros, except for Zeros[1][1] = 1 and Ones[1][1] = 1.
Fill row by row using the rules above:
Zeros[K][1] = Sum(Ones[K-1][C=1..T-1])
for C in 2..D-1:
    Zeros[K][C] = Zeros[K-1][C-1]
Ones[K][1] = Sum(Zeros[K-1][C=1..D-1])
for C in 2..T-1:
    Ones[K][C] = Ones[K-1][C-1]
The result is the sum of the last row in both tables.
Also note that you really need only two active rows of the table, so you can shrink storage to Zeros[2][D] after debugging.
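A minimal Python sketch of this table filling, keeping only the current row as suggested (count_good and the array layout are my naming choices; it assumes D, T >= 2):

def count_good(n, d, t):
    # Counts binary strings of length n with no run of d zeros or t ones.
    # zeros[c]: strings of the current length ending in exactly c zeros (1 <= c <= d-1)
    # ones[c]:  strings of the current length ending in exactly c ones  (1 <= c <= t-1)
    zeros = [0] * d
    ones = [0] * t
    zeros[1] = ones[1] = 1   # length-1 strings "0" and "1"
    for _ in range(n - 1):
        new_zeros = [0] * d
        new_ones = [0] * t
        new_zeros[1] = sum(ones[1:])     # append a 0 after a run of ones
        for c in range(2, d):
            new_zeros[c] = zeros[c - 1]  # extend a run of zeros
        new_ones[1] = sum(zeros[1:])     # append a 1 after a run of zeros
        for c in range(2, t):
            new_ones[c] = ones[c - 1]    # extend a run of ones
        zeros, ones = new_zeros, new_ones
    return sum(zeros[1:]) + sum(ones[1:])

print(count_good(3, 3, 2))  # 4: "001", "010", "100", "101"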
This can be solved using dynamic programming; I'll give a recursive solution. It is similar to generating a binary string.
The states are:
i: the ith character that we need to insert into the string.
cnt: the number of consecutive equal characters before position i.
bit: the character that was repeated cnt times before i; its value is either 0 or 1.
Base case: return 1 when we reach n, since we index from 0 to n-1.
Define the size of the dp array accordingly. The time complexity is O(2 · N · max(D, T)).
#include <bits/stdc++.h>
using namespace std;

int dp[1000][1000][2];
int n, d, t;

int count(int i, int cnt, int bit) {
    if (i == n) {
        return 1;
    }
    int &ans = dp[i][cnt][bit];
    if (ans != -1) return ans;
    ans = 0;
    if (bit == 0) {
        ans += count(i + 1, 1, 1);
        if (cnt != d - 1) {
            ans += count(i + 1, cnt + 1, 0);
        }
    } else {
        // bit == 1
        ans += count(i + 1, 1, 0);
        if (cnt != t - 1) {
            ans += count(i + 1, cnt + 1, 1);
        }
    }
    return ans;
}

signed main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    cin >> n >> d >> t;
    memset(dp, -1, sizeof dp);
    cout << count(0, 0, 0);
    return 0;
}

Method to convert binary number to decimal

I tried to write a program that converts a binary number to a decimal but am getting a number of errors and I cannot figure out where I am going wrong.
// Takes exponent from the user and calculates 2 ** exponent
int power2(int exponent) {
    result = 2 ** exponent
    return result
}

// Converts binary number to decimal
int binary2decimal(String binary) {
    result = 0
    count = 0
    for (i = binary.length(); i-- > 0;) {
        int d = Integer.parseInt(binary.charAt(i))
        if (d == 1) {
            result = result + power2(count)
        }
        count++
    }
    return result
}

binary2decimal("101110")
Change
int d = Integer.parseInt(binary.charAt(i))
to
int d = Integer.parseInt("${binary[i]}")
and it will work. (charAt returns a char, while Integer.parseInt expects a String; the GString "${binary[i]}" supplies one.)
An alternative implementation to yours is:
int binary2decimal2(String binary) {
    binary.reverse()
          .toList()
          .indexed()
          .collect { Integer idx, String val -> Integer.parseInt(val) * (2 ** idx) }
          .sum()
}
Assuming you want the shortest route to a clean solution, use Integer.parseInt(String base2num, int radix) with radix = 2; for example, Integer.parseInt("101110", 2) returns 46.
See http://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#parseInt(java.lang.String,%20int)

Convert binary (integer and fraction) from VHDL to decimal, negative value in C code

I have 14-bit data that is fed from an FPGA in VHDL. The Nios II processor reads the 14-bit data from the FPGA and does some processing tasks, where the Nios II system is programmed in C code.
The 14-bit data can be positive, zero or negative. In the Altera compiler, I can only define the data to be 8, 16 or 32 bits, so I define it as 16-bit data.
First, I need to check if the data is negative; if it is negative, I need to pad the first two MSBs with '1' so the system detects it as a negative value instead of a positive value.
Second, I need to compute the real value of this binary representation as a decimal value with BOTH integer and fraction parts.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I can convert a binary number (consisting of both integer and fraction) to a decimal value.
To be specific, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]){
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;
    for (bi = 0; binary[bi] != '\0'; bi++){
        if (binary[bi] == '.'){
            dot = bi;
        }
        len++;
    }
    if (dot == -1)
        dot = len;
    for (i = dot; i >= 0; i--){
        if (binary[i] == '1'){
            result += (double) pow(2, (dot - i - 1));
        }
    }
    for (i = dot; binary[i] != '\0'; i++){
        if (binary[i] == '1'){
            result += 1.0 / (double) pow(2.0, (double)(i - dot));
        }
    }
    return result;
}

int main()
{
    char bin[] = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";
    printf("%s -> %f\n", bin, convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering if this code can be used to check for a negative value? I tried a binary string of 11111101.11 and it gives the output 253.75...
I have two questions:
What modifications do I need to make in order to read a negative value?
I know that I can test the bit (as below) to check if the MSB is 1; if it is 1, I know it is a negative value...
if (14bit_data & 0x2000) // if true, it is a negative value
The issue is, since it involves a fractional part (not only an integer), it confuses me a bit whether the method still works...
If the binary number is not originally in string format, is there any way I can convert it to a string? The binary number is originally fed from an FPGA block written in VHDL, say 14 bits, with the MSB as the sign bit, the following 6 bits as the magnitude of the integer part and the last 6 bits as the magnitude of the fractional part. I need the decimal value in C code for the Altera Nios II processor.
OK, so I am focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume that the binary representation you have for your signed number is two's complement, but I am not really sure, according to your comments, that your input is the same as the one used by the algorithm.
First pad the 2 MSBs to get a 16-bit representation:
16bit_data = (14_bit_data & 0x2000) ? (14_bit_data | 0xC000) : 14_bit_data;
If the value is positive it remains unchanged; if negative, this is the correct two's complement representation on 16 bits.
For the fractional part everything is the same as in the algorithm you mentioned in your question.
For the integer part everything is the same except the treatment of the MSB.
For an unsigned number the MSB (i.e. bit[15]) represents pow(2,15-6) (6 is the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2,15-6), meaning that the algorithm becomes
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
or, said differently, if you don't want the conditional * operator:
/* integer part operation */
while (p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    if (9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention, considering your format, if dot == 11 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes
for (i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if (11 != dot || i)
        {
            result += (double) pow(2, (dot - i - 1));
        }
        else
        {
            // result -= (double) pow(2,(dot-i-1));
            // Due to your number format, i == 0 and dot == 11, so
            result -= 512;
        }
    }
}
WARNING: in Brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I am not sure that this algorithm is suited to your case.
I think this should work:
#include <stdint.h>

float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend in, since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;
    /* Convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware it may significantly increase the size of your application. If you don't need the float and want to keep the application small, you should write your own conversion function.
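For reference, a minimal Python sketch of the same sign-extend-and-scale conversion (convert_14bit is an illustrative name; it follows the 6-fractional-bit layout assumed in the answer above):

def convert_14bit(raw):
    # raw: the 14-bit two's-complement word as a non-negative int (0..0x3FFF)
    if raw & 0x2000:        # sign bit set
        raw -= 1 << 14      # sign-extend: reinterpret as negative
    return raw / 64.0       # 6 fractional bits, so divide by 2**6

print(convert_14bit(0x3FFF))  # -0.015625, i.e. -1/64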

Converting decimal to specific scientific notation format

I'm trying to convert decimal to a specific scientific notation format:
-2.802479940 ==> -.2802479940E+01
3.796137791 ==> 0.3796137791E+01
And so on. Basically, no leading zero on negative numbers and a leading zero
on positive numbers. I've been able to print out in big-E scientific notation
but have no idea how to force the leading part to be a '-' or a '0' only.
This is kind of ugly (there aren't any built-in string-formatting options for it as far as I know), but it should do what you want it to do:
import re

nums = [-2.802479940, 3.796137791, -0.012, 0.035, 0, 100, -200]
decimal_digits = 10

for n in nums:
    num = ('{:.' + str(decimal_digits - 1) + 'E}').format(n)
    # increment exponent
    exp_search = re.search(r'E(\+|-)\d+', num).group()
    sign = exp_search[1]
    exp = int(exp_search[1:])
    new_exp = exp + 1
    exponent = 'E' + sign + '{:02}'.format(abs(new_exp))
    # move decimal place over
    sig_search = re.search(r'-?\d\.\d+', num).group()
    stripped = sig_search.replace('.', '')
    if stripped[0] == '-':
        significand = stripped.replace('-', '-.')
    else:
        significand = '0.' + stripped
    # glue significand and exponent back together
    new_num = significand + exponent
    print('{:>20}{:>20}'.format(num, new_num))
    assert float(num) == float(new_num)
Output:
-2.802479940E+00 -.2802479940E+01
3.796137791E+00 0.3796137791E+01
-1.200000000E-02 -.1200000000E-01
3.500000000E-02 0.3500000000E-01
0.000000000E+00 0.0000000000E+01
1.000000000E+02 0.1000000000E+03
-2.000000000E+02 -.2000000000E+03
This doesn't perform any arithmetic, just string operations, so it shouldn't introduce any issues with floating-point representations or whatever.
Here's C code to do what you want:
void numfmt(double d, char* buf) {
    /* Note: assumes d != 0; a zero input would loop forever below. */
    int exp = 0;
    int cont = 1;
    char sign = '0';
    if (d < 0) {
        sign = '-';
        d = -d;
    }
    /* Normalize d into [0.1, 1.0), tracking the decimal exponent. */
    while (cont) {
        if (d >= 1.0) {
            exp++;
            d /= 10.0;
        } else if (d < 0.1) {
            exp--;
            d *= 10.0;
        } else
            cont = 0;
    }
    /* "%12.10f" prints e.g. "0.2802479940"; the leading character is
       then overwritten with '-' or '0'. */
    sprintf(buf, "%12.10fE%+03d", d, exp);
    *buf = sign;
}
To convert any number into scientific notation you need to know two things: the exponent and the decimal part, like this: Number = Dec E exponent (or Number = Dec * 10^exponent).
The log10() function is really important here, because it gives you the exponent of Number. You have this relation:
exponent = log10(Number / Dec)
but, not knowing the Dec part, we simplify it to:
exponent = log10(Number)
This will not give the integer exponent, only the real exponent for which 10^exponent = Number. To get the right integer exponent you need the first lowest integer, via the floor() function.
That's why you need to change the relation to:
exponent = int(floor(log10(Number)))  # int() converts the floating value into an integer exponent
Then, using this exponent, you can easily find the decimal part with the relation:
Dec = Number / 10^exponent
Now you have a way to change any number* into scientific notation. If you need a leading zero, you just have to divide the decimal part by 10 and increment the exponent.
* log10(x) is defined only on strictly positive reals, so if you have a negative number you must convert it into a positive one before passing it as an argument to log10(), and if the number is 0 you should make an exception (return dec = 0 and exponent = 0).
Here is an example of this in Python:
from math import floor, log10

def Decompose(originalvalue):
    calcul = originalvalue
    if originalvalue != 0.0:
        if originalvalue < 0:
            calcul = -calcul
        exponent = int(floor(log10(calcul)))    # get the integer exponent
        value = originalvalue / 10.0**exponent  # get the scientific decimal part
        # To have a leading zero, check whether |value| >= 1 and, in that
        # case, adjust the value and exponent to eliminate it.
        # (abs() is needed so negative inputs are shifted too.)
        if abs(value) >= 1:
            value *= 10**-1
            exponent += 1
    else:
        exponent = 0
        value = 0.0
    return [value, exponent]
Here is a solution I came up with, having the same problem.
def converter(number):
    number_str = '{:.14e}'.format(number)
    number_split = number_str.split('.')
    int_part = number_split[0]
    exp_part = number_split[1][-3:]
    dec_part = number_split[1][:-6]
    int_part_numeric = int(int_part)
    if number == 0:
        return '0.000000000000E+00'
    if int_part_numeric == 0:
        sign = '0'
        new_exp_part = exp_part
        new_dec_part = dec_part
    elif int_part_numeric < 0:
        sign = '-'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part.strip('-') + dec_part[:-1]
    elif int_part_numeric > 0:
        sign = '0'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part + dec_part[:-1]
    return sign + '.' + new_dec_part + 'E' + new_exp_part
It could probably be simplified a bit though...
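For what it's worth, here is a shorter sketch along the same lines (converter2 is a hypothetical name; it keeps the 12-digit significand of the answer above and is not tested beyond these examples):

def converter2(number):
    # Format with 12 significant digits, then shift the decimal point one
    # place left and bump the exponent by one.
    if number == 0:
        return '0.000000000000E+00'
    mantissa, exp = '{:.11E}'.format(number).split('E')
    sign, digits = ('-', mantissa[1:]) if mantissa[0] == '-' else ('0', mantissa)
    return '{}.{}E{:+03d}'.format(sign, digits.replace('.', ''), int(exp) + 1)

print(converter2(-2.802479940))  # -.280247994000E+01
print(converter2(3.796137791))   # 0.379613779100E+01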
