I want to use random.uniform to generate a float in the range [-2, 2], but never 0. This is how I do it with a loop:

from random import uniform

flag = True
while flag:
    if uniform(-2, 2) is not 0:
        flag = False

I am wondering, is there a better way to do it?
cheers
This is more something for Code Review, but very briefly:
from random import uniform

while True:
    if uniform(-2, 2) != 0.0:
        break
is probably the more Pythonic / standard way to do this (standard, in that this pattern occurs in other languages as well).
It's rare that a flag variable is necessary to break out of a (while) loop; perhaps when using a double loop.
Note: I changed your is not to !=, and your 0 to 0.0 (the latter is more so that it's clear we're comparing a float to a float).
Because you're comparing a float to an int, they'll never be the same object. Besides, comparing numbers using is is a bad idea:
>>> 2*3 is 6 # this may work, but don't rely on it
True
>>> 10*60 is 600 # this obviously doesn't work
False
>>> 0 is 0 # sure, this works...
True
>>> 0.0 is 0 # but this doesn't: float vs int
False
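A quick sketch of why == is the right comparison here:

```python
# Equality (==) compares values across numeric types;
# identity (is) compares objects, which is unreliable for numbers.
a = 0.0
b = 0

print(a == b)   # True: same value
print(a is b)   # False: different objects (float vs int)
```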
Of course, to answer the actual question of whether there are other ways to generate those random numbers: probably a dozen.
With a list comprehension inside a list comprehension*:
[val for val in [uniform(-2, 2) for i in range(10)] if val != 0]
Using numpy:

import numpy as np

vals = np.random.uniform(-2, 2, 10)
vals = vals[vals != 0]
* I don't want to call it nested, since I feel that belongs to a slightly different double list comprehension.
nums = [0, 5, 4, 12]
n = len(nums)
temp = 1
ans = []
for i in range(n):
    ans.append(temp)
    temp *= nums[i]
temp = 1
for i in range(n-1, -1):
    ans[i] *= temp
    temp *= nums[i]
    print("yes")
print(ans)
Given an integer array nums, return an array answer such that answer[i] is equal to the product of all the elements of nums except nums[i].
The product of any prefix or suffix of nums is guaranteed to fit in a 32-bit integer.
You must write an algorithm that runs in O(n) time and without using the division operation.
This is a solution for this LeetCode question, but my second for loop is not executing and I don't know why.
Using range like this results in zero iterations because the default "step" parameter is 1. With a positive step the range counts upward, but the start n-1 is already greater than the stop -1, so the range is empty. You want an explicit negative step:

range(n - 1, -1, -1)
also you most probably want

import math

n = [0, 5, 4, 12]
ans = []
for num in n:
    temp = n[:]  # create a copy
    temp.remove(num)  # remove the number you are on
    ans.append(math.prod(temp))  # use math.prod to multiply the rest together
print(ans)
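For reference, the question's intended O(n) prefix/suffix two-pass approach needs only the range fix; a sketch (the function name is mine, not from the original):

```python
def product_except_self(nums):
    # First pass: ans[i] holds the product of all elements before i.
    n = len(nums)
    ans = []
    temp = 1
    for i in range(n):
        ans.append(temp)
        temp *= nums[i]
    # Second pass (right to left): multiply in the product of all
    # elements after i. Note the explicit step of -1.
    temp = 1
    for i in range(n - 1, -1, -1):
        ans[i] *= temp
        temp *= nums[i]
    return ans

print(product_except_self([0, 5, 4, 12]))  # [240, 0, 0, 0]
```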
Could you give me a hint as to where the time-consuming part of this code is? It's my temporary solution for the kata Generate Numbers from Digits #2 from codewars.com.
Thanks!
from collections import Counter
from itertools import permutations

def proc_arrII(arr):
    length = Counter(arr).most_common()[-1][1]
    b = [''.join(x) for x in list(set(permutations(arr, length)))]
    max_count = [max(Counter(x).values()) for x in b]
    total = 0
    total_rep = 0
    maximum_pandigit = 0
    for i in range(len(b)):
        total += 1
        if max_count[i] > 1:
            total_rep += 1
        elif int(b[i]) > maximum_pandigit:
            maximum_pandigit = int(b[i])
    if maximum_pandigit == 0:
        return([total])
    else:
        return([total, total_rep, maximum_pandigit])
When posting this,
it would have been helpful to offer example input,
or link to the original question,
or include some python -m cProfile output.
Here is a minor item; it inflates the running time only very slightly.
In the expression [''.join(x) for x in list(set(permutations(arr, length)))]
there's no need to call list( ... ).
The join just needs an iterable, and a set works fine for that.
Here is a bigger item.
permutations already makes the promise that
"if the input elements are unique, there will be no repeat values in each permutation."
Seems like you want to dedup (with set( ... )) on the way in,
rather than on the way out,
for an algorithmic win -- reduced complexity.
The rest looks nice enough.
You might try benching without the elif clause,
using the expression max(map(int, b)) instead.
If there's any gain it would only be minor,
turning O(n) into O(n) with slightly smaller coefficient.
Similarly, you should just assign total = len(b) and be done with it,
no need to increment it that many times.
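Putting those minor items together (and leaving the algorithm otherwise as-is), a sketch of a cleaned-up version; this is only my reading of the original function's semantics:

```python
from collections import Counter
from itertools import permutations

def proc_arrII(arr):
    length = Counter(arr).most_common()[-1][1]
    # No list() needed: join happily consumes the set directly.
    b = [''.join(x) for x in set(permutations(arr, length))]
    total = len(b)  # no need to increment in a loop
    total_rep = sum(1 for s in b if max(Counter(s).values()) > 1)
    # Largest value among the candidates with no repeated digit,
    # mirroring the original elif branch.
    non_rep = [int(s) for s in b if max(Counter(s).values()) == 1]
    maximum_pandigit = max(non_rep, default=0)
    if maximum_pandigit == 0:
        return [total]
    return [total, total_rep, maximum_pandigit]
```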
Take a very large list that, for any number of reasons, does not fit in available memory together with all the rest to come; here: A = [2, -3, 10, 0.2]
Map the sign of its components: sign_A = list(map(lambda u: (abs(u)==u), A))
You get [True, False, True, True]
Do some logic where you need to operate on abs_A = [abs(e) for e in A]. So you flush A and you keep working with sign_A and abs_A. The logic yields components' indices of interest, say i, k, ... for the list abs_A.
The problem I have is when using the ternary operator (falsevalue, truevalue)[condition] to do some algebra on the signed components of A, e.g.:
abs_A[i]*(-1, 1)[sign_A[i]] + abs_A[k]**(-1, 1)[sign_A[k]]
# equivalently, can use:
# abs_A[i]*(-1, 1)[np.bool(sign_A[i])] + abs_A[k]**(-1, 1)[np.bool(sign_A[k])]
I get this warning:
DeprecationWarning: In future, it will be an error for 'np.bool_'
scalars to be interpreted as an index.
The warning indirectly tells me that there is probably a better, more "pythonesque" way than my snippet to do this. I found relevant posts (e.g. here and here) but no suggestion as to how I should deal with it. Pointers, anyone?
With lists, the boolean indexing works fine:
In [21]: A = [ 2, -3, 10, 0.2]
In [22]: sign_A = list(map(lambda u: (abs(u)==u), A))
In [23]: abs_A = [abs(e) for e in A]
In [24]: i=0; k=1
In [25]: abs_A[i]*(-1, 1)[sign_A[i]] + abs_A[k]**(-1, 1)[sign_A[k]]
Out[25]: 2.3333333333333335
We do get the warning if we try to index with a numpy boolean:
In [26]: abs_A[i]*(-1, 1)[np.array(sign_A)[i]]
/usr/local/bin/ipython3:1: DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index
#!/usr/bin/python3
Out[26]: 2
We can get around that by making the sign_A array integer right from the start:
In [27]: abs_A[i]*(-1, 1)[np.array(sign_A,dtype=int)[i]]
Out[27]: 2
If we start with an array:
In [28]: B = np.array(A)
the sign array - using where to map directly onto (-1,1) space
In [30]: sign_B = np.where(B>=0,1,-1)
In [31]: sign_B
Out[31]: array([ 1, -1, 1, 1])
the abs array:
In [32]: abs_B = np.abs(B)
the recreated array:
In [33]: abs_B*sign_B
Out[33]: array([ 2. , -3. , 10. , 0.2])
To avoid the warning, replace np.bool() with int():
abs_A[i]*(-1, 1)[int(sign_A[i])] + ...
An easy solution here is to use the syntax truevalue if condition else falsevalue in place of (falsevalue, truevalue)[condition].
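A sketch using the question's own names, with the conditional expression in place of the tuple indexing (plain multiplication by the sign, for simplicity):

```python
# Reconstruct signed values with a conditional expression instead of
# tuple indexing -- no integer cast of the boolean is needed.
A = [2, -3, 10, 0.2]
sign_A = [abs(u) == u for u in A]   # True for non-negative components
abs_A = [abs(e) for e in A]

i, k = 0, 1
result = (abs_A[i] if sign_A[i] else -abs_A[i]) + \
         (abs_A[k] if sign_A[k] else -abs_A[k])
print(result)  # 2 + (-3) = -1
```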
Expanding on the answer by DYZ from Mar 15 '19:
In this case, mapping the sign function to booleans and then using np.bool as an index triggered
DeprecationWarning: In future, it will be an error for 'np.bool_'
scalars to be interpreted as an index.
Using int() instead resolved the issue.
A wrapper approach is less readable but would have also worked: int(np.bool(sign_A[i]))
In the more general case of numpy's bitwise/logical operators, the same warning is triggered, e.g., by using an inequality check as an index:
result = X[np.less_equal(a, b)]
X could hold items of a type other than integer.
A suitable solution if X contained float items is:
result = float(X[np.less_equal(a, b)])
Alternatively, inside a function definition,

result = X[np.less_equal(a, b)]
return float(result)

can be used. The latter return form is how I resolved a warning triggered within a function definition; I was guided by DYZ's answer.
I have written a Sieve of Eratosthenes--I think--but it seems like it's not as optimized as it could be. It works, and it gets all the primes up to N, but not as quickly as I'd hoped. I'm still learning Python--coming from two years of Java--so if something isn't particularly Pythonic then I apologize:
def sieve(self):
    is_prime = [False, False, True, True] + [False, True] * ((self.lim - 4) // 2)
    for i in range(3, self.lim, 2):
        if i**2 > self.lim: break
        if is_prime[i]:
            for j in range(i * i, self.lim, i * 2):
                is_prime[j] = False
    return is_prime
I've looked at other questions similar to this one but I can't figure out how some of the more complicated optimizations would fit in to my code. Any suggestions?
EDIT: as requested, some of the other optimizations I've seen are stopping the iteration of the first for loop before the limit, and skipping by different numbers--which I think is wheel optimization?
EDIT 2: Here's the code that would utilize the method, for Padraic:
primes = sieve.sieve()
for i in range(0, len(primes)):
    if primes[i]:
        print("{:d} ".format(i), end='')
print()  # print a newline
a slightly different approach: use a bitarray to represent the odd numbers 3,5,7,... saving some space compared to a list of booleans.
this may save some space only and not help speedup...
from bitarray import bitarray

def index_to_number(i): return 2*i + 3
def number_to_index(n): return (n - 3) // 2

LIMIT_NUMBER = 50
LIMIT_INDEX = number_to_index(LIMIT_NUMBER) + 1

odd_primes = bitarray(LIMIT_INDEX)
# index   0 1 2 3
# number  3 5 7 9
odd_primes.setall(True)

for i in range(LIMIT_INDEX):
    if not odd_primes[i]:
        continue
    n = index_to_number(i)
    for m in range(n**2, LIMIT_NUMBER, 2*n):
        odd_primes[number_to_index(m)] = False

primes = [index_to_number(i) for i in range(LIMIT_INDEX)
          if odd_primes[i]]
primes.insert(0, 2)
print('primes: ', primes)
the same idea again; but this time let bitarray handle the inner loop using slice assignment. this may be faster.
for i in range(LIMIT_INDEX):
    if not odd_primes[i]:
        continue
    odd_primes[2*i**2 + 6*i + 3:LIMIT_INDEX:2*i+3] = False
(none of this code has been seriously checked! use with care)
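Since the code above is unchecked, here is a quick sanity-check sketch: it mirrors the same odd-only indexing scheme with a plain list (no bitarray dependency) and compares the result against a straightforward reference sieve. Both function names are mine, for illustration only:

```python
def simple_sieve(limit):
    # Plain Sieve of Eratosthenes over a list of booleans.
    is_prime = [False, False] + [True] * (limit - 2)
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit, i):
                is_prime[j] = False
    return [n for n, p in enumerate(is_prime) if p]

def odd_sieve(limit):
    # Same odd-only indexing as the bitarray version above:
    # index i represents the odd number 2*i + 3.
    size = (limit - 3) // 2 + 1
    odd = [True] * size
    for i in range(size):
        if not odd[i]:
            continue
        n = 2 * i + 3
        for m in range(n * n, limit, 2 * n):
            odd[(m - 3) // 2] = False
    return [2] + [2 * i + 3 for i in range(size) if odd[i]]

print(odd_sieve(50) == simple_sieve(50))  # True
```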
in case you are looking for a primes generator based on a different method (wheel factorizaition) have a look at this excellent answer.
I came across this code and it works, but I am not entirely sure when to use ast, or whether there are performance issues when it is used instead of getting the string value from input() and converting it to int.

import ast

cyper_key = ast.literal_eval(input("Enter the key (a value between 0 and 25) : "))
# this gets the user input as an int into the variable cyper_key

I read the docs and I understand what it does.
This can be used for safely evaluating strings containing Python
values from untrusted sources without the need to parse the values
oneself. It is not capable of evaluating arbitrarily complex
expressions, for example involving operators or indexing.
I am looking for an explanation of the above bold points.
When to use it.
ast.literal_eval(input()) would be useful if you expected a list (or something similar) by the user. For example '[1,2]' would be converted to [1,2].
If the user is supposed to provide a number ast.literal_eval(input()) can be replaced with float(input()), or int(input()) if an integer is expected.
Performance
Note that premature [micro-]optimization is the root of all evil. But since you asked:
To test the speed of ast.literal_eval(input()) and float(input()) you can use timeit.
Timing will vary based on the input given by the user.
Ints and floats are valid input, while anything else is invalid. With 50% ints, 40% floats and 10% random strings as input, float(input()) is about 12x faster.
With 10% ints, 10% floats and 80% random strings, float(input()) is about 6x faster.
import timeit as tt

lst_size = 10**5

# Set the percentages of input tried by user.
percentages = {'ints': .10,
               'floats': .10,
               'strings': .80}
assert 1 - sum(percentages.values()) < 0.00000001
ints_floats_strings = {k: int(v*lst_size) for k, v in percentages.items()}

setup = """
import ast

def f(x):
    try:
        float(x)
    except:
        pass

def g(x):
    try:
        ast.literal_eval(x)
    except:
        pass

l = [str(i) for i in range({ints})]
l += [str(float(i)) for i in range({floats})]
l += [']9' for _ in range({strings}//2)] + ['a' for _ in range({strings}//2)]
""".format(**ints_floats_strings)

stmt1 = """
for i in l:
    f(i)
"""
stmt2 = """
for i in l:
    g(i)
"""

reps = 10**1
t1 = tt.timeit(stmt1, setup, number=reps)
t2 = tt.timeit(stmt2, setup, number=reps)

print(t1)
print(t2)
print(t2/t1)
ast -> Abstract Syntax Trees
ast.literal_eval raises an exception if the input isn't a valid Python datatype, so the code won't be executed if it's not.
This AST link is useful for understanding ast.
If it's going to be used as an int, then just use:
cypher_key = int(input("Enter the key (a value between 0 and 25) : "))
Only use ast.literal_eval if you expect the user to be entering something like 10e7. If you want to handle different bases, you can use int(input(...), 0) to automatically divine the base. If it really is an integer value between 0 and 25, there's no reason to use ast.
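For illustration, int with base 0 infers the base from the literal's prefix, the same way Python source literals work:

```python
# Base 0 tells int() to read the base from the prefix:
# no prefix -> decimal, 0x -> hex, 0o -> octal, 0b -> binary.
print(int("25", 0))       # 25 (decimal)
print(int("0x19", 0))     # 25 (hexadecimal)
print(int("0o31", 0))     # 25 (octal)
print(int("0b11001", 0))  # 25 (binary)
```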
Running this in a python-3.x shell, I get no differences when I give correct input:
>>> cyper_key = ast.literal_eval(input("Enter the key (a value between 0 and 25) : "))
Enter the key (a value between 0 and 25) : 5
>>> cyper_key
5
However, when you give a string or something that cannot be converted, the error can be confusing and/or misleading:
>>> cyper_key = ast.literal_eval(input("Enter the key (a value between 0 and 25) : "))
Enter the key (a value between 0 and 25) : foo
Traceback (most recent call last):
File "python", line 3, in <module>
ValueError: malformed node or string: <_ast.Name object at 0x136c968>
However, this can be useful if you don't want to commit the input to either float or int: int() raises a ValueError on a float string, and float() gives you a floating-point value even for whole numbers.
Thus, I see no necessity for using ast to parse your input, but it can work as an alternative.
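For completeness, a sketch of a validating input loop that sidesteps the confusing literal_eval traceback entirely; the helper names and messages are illustrative, not from any of the answers above:

```python
def read_key(text):
    # Parse and validate one key value; raises ValueError on bad input.
    key = int(text)  # int() gives a clear error for non-integer strings
    if not 0 <= key <= 25:
        raise ValueError("key must be between 0 and 25")
    return key

def prompt_key():
    # Keep asking until the user supplies a valid key.
    while True:
        try:
            return read_key(input("Enter the key (a value between 0 and 25) : "))
        except ValueError as exc:
            print("Invalid key:", exc)
```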