Condition indexing numpy array of floats - python-3.x

import numpy as np
n = 10
xmin = 0
xmax = 1
dx = 1/n
x = np.arange(xmin-dx, xmax + 2*dx, dx)
print(x)
print(x <= 0.3)
The output of this code is the following:
[-0.1 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. 1.1]
[ True True True True False False False False False False False False
False]
Why is the element with value 0.3 not less than or equal to 0.3?
I tried the same with other comparisons and saw that -0.1 <= -0.1 and 0.1 <= 0.1 hold, while 0.2 is not less than or equal to 0.2.
I really do not understand what is happening here.

I got it. Never rely on exact comparisons between floats, because of round-off error: the element that prints as 0.3 is not exactly 0.3 but a value just above it (something like 0.30000000000000004), since dx = 0.1 cannot be represented exactly in binary and arange accumulates that error.
This will work:
print(x <= 0.3 + np.finfo(np.float64).eps)
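A slightly more robust variant (a sketch, not part of the original answer) is to compare with an explicit tolerance via np.isclose instead of adding a single eps:
# treat values within a small relative/absolute tolerance of 0.3 as equal
print((x <= 0.3) | np.isclose(x, 0.3))
# [ True  True  True  True  True False False False False False False False False]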

Related

How to convert floating point number to ratio of integers in python with high accuracy? [duplicate]

This question already has answers here:
How to convert a decimal number into fraction? (6 answers)
Is there a way to return a fully reduced ratio when calling .as_integer_ratio()? (2 answers)
Is there an alternative to the: as_integer_ratio(), for getting "cleaner" fractions? (2 answers)
Closed 1 year ago.
QUESTION:
I would like to convert floats into a ratio of integers in simplest form (not a duplicate of this question, see "EDIT" below). For example, 0.1 should give (1, 10), 0.66666... should give (2, 3), etc. In the code snippet below, I try doing this for x = 0.1, 0.2, ..., 1.0 using the built-in float.as_integer_ratio(); the method only works as expected for x = 0.5 and x = 1.0. Why does this approach fail for the other values of x, and what is a better method to do this? In case it is relevant, my use-case will have dx ~ 0.0005 = x[1] - x[0] for 0.0005 < x < 10.0.
CODE:
import numpy as np
f = np.vectorize(lambda x : x.as_integer_ratio())
x = np.arange(0.1, 1.1, 0.1)
nums, dens = f(x)
for xi, numerator, denominator in zip(x, nums, dens):
    print("\n .. {} = {} / {}\n".format(xi, numerator, denominator))
OUTPUT:
.. 0.1 = 3602879701896397 / 36028797018963968
.. 0.2 = 3602879701896397 / 18014398509481984
.. 0.30000000000000004 = 1351079888211149 / 4503599627370496
.. 0.4 = 3602879701896397 / 9007199254740992
.. 0.5 = 1 / 2
.. 0.6 = 5404319552844595 / 9007199254740992
.. 0.7000000000000001 = 6305039478318695 / 9007199254740992
.. 0.8 = 3602879701896397 / 4503599627370496
.. 0.9 = 8106479329266893 / 9007199254740992
.. 1.0 = 1 / 1
EDIT:
This is not really a duplicate. Both methods of the accepted answer in the original question fail a basic use-case from my MWE. To show that the Fraction module gives the same error:
import numpy as np
from fractions import Fraction
f = np.vectorize(lambda x : Fraction(x))
x = np.arange(0.1, 1.1, 0.1)
y = f(x)
print(y)
## OUTPUT
[Fraction(3602879701896397, 36028797018963968)
Fraction(3602879701896397, 18014398509481984)
Fraction(1351079888211149, 4503599627370496)
Fraction(3602879701896397, 9007199254740992) Fraction(1, 2)
Fraction(5404319552844595, 9007199254740992)
Fraction(6305039478318695, 9007199254740992)
Fraction(3602879701896397, 4503599627370496)
Fraction(8106479329266893, 9007199254740992) Fraction(1, 1)]
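Although the question was closed, a common way to get the "clean" ratios for data like this (a sketch, under the assumption that a bounded-denominator approximation is acceptable) is Fraction.limit_denominator(), which returns the closest fraction whose denominator does not exceed a given bound and thereby absorbs the binary round-off error:
from fractions import Fraction
import numpy as np

x = np.arange(0.1, 1.1, 0.1)
for xi in x:
    # The bound 10_000 is an illustrative choice, not from the original post;
    # for dx ~ 0.0005 a bound of a few thousand is enough.
    frac = Fraction(float(xi)).limit_denominator(10_000)
    print("{} = {} / {}".format(xi, frac.numerator, frac.denominator))
# prints 0.1 = 1 / 10, 0.2 = 1 / 5, 0.30000000000000004 = 3 / 10, ...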

Second argument of three mandatory

I have a function that mimics range(). I am stuck at one point. I need to be able to make the first (x) and third (step) arguments optional, but the middle
argument (y) mandatory. In the code below, everything works except the two commented out lines.
If I am only passing in one argument, how do I construct the function to accept the single passed in argument as the mandatory (y) argument?
I cannot do this: def float_range(x=0, y, step=1.0):
Non-default parameter cannot follow a default parameter.
def float_range(x, y, step=1.0):
    if x < y:
        while x < y:
            yield x
            x += step
    else:
        while x > y:
            yield x
            x += step

for n in float_range(0.5, 2.5, 0.5):
    print(n)
print(list(float_range(3.5, 0, -1)))
for n in float_range(0.0, 3.0):
    print(n)
# for n in float_range(3.0):
#     print(n)
Output:
0.5
1.0
1.5
2.0
[3.5, 2.5, 1.5, 0.5]
0.0
1.0
2.0
You have to use sentinel values:
def float_range(value, end=None, step=1.0):
    if end is None:
        start, end = 0.0, value
    else:
        start = value
    if start < end:
        while start < end:
            yield start
            start += step
    else:
        while start > end:
            yield start
            start += step

for n in float_range(0.5, 2.5, 0.5):
    print(n)
# 0.5
# 1.0
# 1.5
# 2.0

print(list(float_range(3.5, 0, -1)))
# [3.5, 2.5, 1.5, 0.5]

for n in float_range(0.0, 3.0):
    print(n)
# 0.0
# 1.0
# 2.0

for n in float_range(3.0):
    print(n)
# 0.0
# 1.0
# 2.0
By the way, numpy implements arange which is essentially what you are trying to reinvent, but it isn't a generator (it returns a numpy array)
import numpy
print(numpy.arange(0, 3, 0.5))
# [0. 0.5 1. 1.5 2. 2.5]
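If you want to keep a lazy generator but avoid the round-off that accumulates when step is added repeatedly, one option (a sketch, not part of the answer above) is to derive each value from an integer counter:
def float_range(value, end=None, step=1.0):
    # Same sentinel trick as above, but each value is computed as
    # start + i * step, so rounding errors do not accumulate.
    if end is None:
        start, end = 0.0, value
    else:
        start = value
    i = 0
    current = start
    while (step > 0 and current < end) or (step < 0 and current > end):
        yield current
        i += 1
        current = start + i * step

print(list(float_range(0.0, 3.0, 0.5)))
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]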

python scientific programming: why is the plot blank when the results are right?

I want to plot the integration results as described below, but the figure turns out to be blank. What is the reason? Please help me!
# -*- coding: utf-8 -*-
import matplotlib.pylab as plt
import numpy as np
import scipy as sp
from scipy.integrate import quad, dblquad, tplquad

x = np.arange(0, 1, 0.1)
print("x = ", x)

def f(x):
    return x

print("f(x) = ", f(x))

x_lower = 0
for x_upper in x:
    val, abserr = quad(f, x_lower, x_upper)
    print("integral value =", val, ", x_upper = ", x_upper, ", absolute error =", abserr)
    plt.plot(x_upper, val, 'b--')
plt.show()
The output (but the plot is blank!):
x = [ 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
f(x) = [ 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
integral value = 0.0 , x_upper = 0.0 , absolute error = 0.0
integral value = 0.005000000000000001 , x_upper = 0.1 , absolute error = 5.551115123125784e-17
integral value = 0.020000000000000004 , x_upper = 0.2 , absolute error = 2.2204460492503136e-16
integral value = 0.04500000000000001 , x_upper = 0.3 , absolute error = 4.996003610813205e-16
integral value = 0.08000000000000002 , x_upper = 0.4 , absolute error = 8.881784197001254e-16
integral value = 0.125 , x_upper = 0.5 , absolute error = 1.3877787807814457e-15
integral value = 0.18000000000000005 , x_upper = 0.6 , absolute error = 1.998401444325282e-15
integral value = 0.24500000000000005 , x_upper = 0.7 , absolute error = 2.720046410331634e-15
integral value = 0.32000000000000006 , x_upper = 0.8 , absolute error = 3.552713678800502e-15
integral value = 0.40499999999999997 , x_upper = 0.9 , absolute error = 4.496403249731884e-15
The reason you do not see anything in the plot is that you are plotting several line plots of only one single point each. Since a line needs a start and an end (meaning at least two points), the graph stays blank.
The easiest way of showing your points would be to replace 'b--' in your call to plt.plot() by marker="o":
plt.plot(x_upper, val, marker="o", color="b")
A different option is to first collect all the integration results in a list, and then plot the complete list in a line plot:
import matplotlib.pylab as plt
import numpy as np
from scipy.integrate import quad

x = np.arange(0, 1, 0.1)

def f(x):
    return x

x_lower = 0
vals = []
for x_upper in x:
    val, abserr = quad(f, x_lower, x_upper)
    vals.append(val)

plt.plot(x, vals, "b--")
plt.show()
I think I found the answer: each call plots only a single dot, not a line. If you use the format string 'bs' instead, you will see the figure is not really blank, just individual dots.

Python Dictionary, make range associated with its values and return them

Say, I have a dictionary
D = {'A':0.25,'C':0.25,'G':0.25,'T':0.25}
The sum of the dictionary values will always be one. I want to build a range for each key in D as follows:
The first key: (0, D[FirstKey]), that is (0, 0.25).
The second key: (end of the first key's range, that end + the second key's value), that is (0.25, 0.50).
The third key: (end of the second key's range, that end + the third key's value), that is (0.50, 0.75).
The range for the fourth key would be (0.75, 1).
The upper bound of the last key's range will always be 1, the sum of all the values together.
I then generate a random float between 0 and 1 and need to return the key whose range contains that float. For example, for the given order of dictionary D, if I generate 0.63 then I have to return the third key, G, because its range is (0.50, 0.75). As the dictionary is not ordered, I have to build the ranges following the dictionary's iteration order and return the key accordingly. So far I have coded the following for this problem:
import random

def W(D):
    vv = 0
    f = 0
    mer = ''
    ran = random.uniform(0,1)
    DI = D.items()
    for k,v in DI:
        mer = ''
        if (ran >= f) and (ran < D[k]+vv):
            mer = k
        vv += v
    return mer
My function never returns the third key when the generated float falls in the third key's range (0.50, 0.75); it returns the fourth key instead.
Dicts are unordered, so if you want to maintain some order you will need an OrderedDict. The following finds the key whose range contains the random value, building increasing ranges in key order; this requires you to add the keys in the order you want the ranges assigned:
from collections import OrderedDict

od = OrderedDict((('A', 0.25), ('C', 0.25), ('G', 0.25), ('T', 0.25)))

def W(od):
    ran = random.uniform(0, 1)
    tot = 0
    for k, v in od.items():
        if tot <= ran < v + tot:
            return k
        tot += v
Adding a print(ran, v, tot + v) in the loop:
In [36]: W(od)
0.13237220509287917 0.25 0.25
Out[36]: 'A'
In [37]: W(od)
0.22239648741773488 0.25 0.25
Out[37]: 'A'
In [38]: W(od)
0.2798873944681526 0.25 0.25
0.2798873944681526 0.25 0.5
Out[38]: 'C'
In [39]: W(od)
0.05933372630532163 0.25 0.25
Out[39]: 'A'
In [40]: W(od)
0.776438095223963 0.25 0.25
0.776438095223963 0.25 0.5
0.776438095223963 0.25 0.75
0.776438095223963 0.25 1.0
Out[40]: 'T'
If the values are not all the same, you will need to sort; you can use operator.itemgetter as the key to sort the items by value:
from operator import itemgetter

d = {'A': 0.35, 'C': 0.2, 'T': 0.3, 'G': 0.15}

def W(d):
    ran = random.uniform(0, 1)
    tot = 0
    # sort from lowest value to highest
    for k, v in sorted(d.items(), key=itemgetter(1)):
        if tot <= ran < v + tot:
            return k
        tot += v
Adding a print again:
In [55]: W(d)
0.15 0.24005200696606188 0.15
0.2 0.24005200696606188 0.35
Out[55]: 'C'
In [56]: W(d)
0.15 0.9860872247496385 0.15
0.2 0.9860872247496385 0.35
0.3 0.9860872247496385 0.6499999999999999
0.35 0.9860872247496385 0.9999999999999999
Out[56]: 'A'
In [57]: W(d)
0.15 0.5690026436736583 0.15
0.2 0.5690026436736583 0.35
0.3 0.5690026436736583 0.6499999999999999
Out[57]: 'T'
In [58]: W(d)
0.15 0.28507671431234327 0.15
0.2 0.28507671431234327 0.35
Out[58]: 'C'
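As an aside, on Python 3.6+ the whole weighted pick can be done with random.choices, which builds the cumulative ranges internally (a sketch, not part of the answers above):
import random

d = {'A': 0.35, 'C': 0.2, 'T': 0.3, 'G': 0.15}
# weights are relative, so they work whether or not they sum exactly to 1
key = random.choices(list(d), weights=list(d.values()), k=1)[0]
print(key)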

Loop through a decimal sequence

I am writing a loop in VBA for Excel, and I would like to loop through a sequence of decimal numbers, rather than integers.
For example:
For i = 1 To 10
'Do something
Next i
But rather than incrementing by 1, I would like to increment by 0.5 (or perhaps 5, or really any number other than 1).
Dim i as Single
For i = 1 To 10 Step 0.5
'
Next
But note you can get some unwanted numbers because floating-point values are not precise.
Sub a()
    For i = 1 To 10 Step 0.1
        Debug.Print i
    Next i
End Sub
You can use a decimal generator.
import decimal

def loop_decimal(loop_start_value, loop_stop_value, loop_step_value=1):
    # Input arguments error check
    if not loop_step_value:
        loop_step_value = 1
    loop_start_value = decimal.Decimal(str(loop_start_value))
    loop_step_value = decimal.Decimal(str(loop_step_value))
    loop_stop_value = decimal.Decimal(str(loop_stop_value))
    # Case: loop_step_value > 0
    if loop_step_value > 0:
        while loop_start_value < loop_stop_value:
            yield loop_start_value
            loop_start_value += loop_step_value
    # Case: loop_step_value < 0
    else:
        while loop_start_value > loop_stop_value:
            yield loop_start_value
            loop_start_value += loop_step_value
Calling the generator produces:
for x in loop_decimal(0.1, 1.0, 0.1):
    print(x)
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
To convert a float into an exact decimal value, pass it through str() before handing it to decimal.Decimal():
decimal.Decimal(str( float_value ))
If you pass None or 0.0 to the step argument, the default value of 1 will be used.
This works for incrementing and decrementing loops.
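For example, a decreasing sequence with the generator above (output assuming the exact Decimal steps shown):
for x in loop_decimal(1.0, 0.5, -0.1):
    print(x)
# 1.0
# 0.9
# 0.8
# 0.7
# 0.6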
