How to call a function a specific number of times in Python

I am building a Python module which takes two numbers, generates random numbers between them a specific number of times, and adds them all into a single variable. Then I will use the // operator to find the nearest lower whole number (integer in Python). I have only written this much code so far:
import random
def randint(minno, maxno, nooftime):
Here minno is the minimum number, maxno is the maximum number, and nooftime is the number of times a random number will be generated and added to a common variable (a, to be specific).
Then I will divide a by nooftime using (a // nooftime) and print the floor quotient.
This module will be used for gaming purposes, such as generating positions for an enemy to appear, and for random map generation.

This basically re-implements random.randint:
from random import random
def randint(minno, maxno, nooftime):
    # Sum nooftime uniform draws from [minno, maxno), then floor-divide
    # by the count to get the floored average.
    a = sum(random() * (maxno - minno) + minno for _ in range(nooftime))
    return a // nooftime
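Note that random() produces uniform floats over [minno, maxno), not integers. If what you want is the floored average of integer draws, here is a minimal sketch using random.randint directly (the name randint_mean is hypothetical, chosen so it does not shadow the library function):

import random

def randint_mean(minno, maxno, nooftime):
    # Sum nooftime integer draws from [minno, maxno] inclusive,
    # then floor-divide by the count.
    total = sum(random.randint(minno, maxno) for _ in range(nooftime))
    return total // nooftime

print(randint_mean(1, 10, 5))  # e.g. the floored average of 5 draws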

Related

Algorithm that finds the k-greatest number in O(n*log(k))

I was wondering: given an unsorted array of any length n >= k, how would you find the k-greatest number in O(n*log(k)) time? For example, the k = 2 greatest number of an array containing the numbers 1 to 9 would be 8.
I'm trying to code this in Python, if you have an idea how to do it in that time complexity :)
My answer is not python-specific, however you should be able to implement the used concepts in python, or find libraries already implementing them.
The basic idea is to iterate over the list and store the current greatest, second greatest, ... , k-greatest number in a separate data structure. Since you will be iterating over all n entries in your array, the complexity of this is in O(n * insertion_step_complexity)
As seen above, the insertion step must not exceed a complexity of O(log(k)). To achieve this you can use an AVL tree, which has a complexity of O(log(m)) for inserting and deleting items, where m is the number of items currently stored in the tree.
An algorithm would look like this:
def find_k_greatest_number(k, array):
    avl_tree = ...  # initialize an empty AVL tree here
    avl_items = 0
    for number in array:
        if avl_items < k:
            # Fill the tree with the first k numbers unconditionally.
            avl_tree.insert(number)
            avl_items += 1
        elif number > avl_tree.smallest_number():
            # Evict the weakest of the current k candidates.
            avl_tree.delete_smallest_number()
            avl_tree.insert(number)
    return avl_tree.smallest_number()
Finding the smallest number in a sorted tree depends on its height. Since the AVL tree never holds more than k items, its height can't exceed log(k), so the complexity of finding the smallest number is O(log(k)).
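Python's standard library has no AVL tree, but its heapq module provides a binary min-heap with O(log(k)) push and pop, which yields the same overall O(n*log(k)) bound. A minimal runnable sketch of the idea above using heapq:

import heapq

def find_k_greatest_number(k, array):
    heap = []  # min-heap holding the k greatest numbers seen so far
    for number in array:
        if len(heap) < k:
            heapq.heappush(heap, number)      # O(log k)
        elif number > heap[0]:                # heap[0] is the smallest
            heapq.heapreplace(heap, number)   # pop smallest, push number
    return heap[0]  # the k-greatest number

print(find_k_greatest_number(2, [1, 2, 3, 4, 5, 6, 7, 8, 9]))  # -> 8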

Is there any way to make this code more efficient?

I have to write code that counts the elements with the maximum number of divisors between any two given numbers (A[0], A[1]), inclusive of both. The input is lines of space-separated numbers; the first line gives the number of cases. This code works correctly but takes some time to execute. Can anyone please help me make it more efficient?
import numpy as np
from sys import stdin
t = input()
for i in range(int(t)):
    if int(t) <= 100 and int(t) >= 1:
        divisor = []
        A = list(map(int, stdin.readline().split(' ')))
        def divisors(n):
            count = 0
            for k in range(1, int(n / 2) + 1):
                if n % k == 0:
                    count += 1
            return count
        for j in np.arange(A[0], A[1] + 1):
            divisor.append(divisors(j))
        print(divisor.count(max(divisor)))
Sample input:
2
2 9
1 10
Sample Output:
3
4
There is a way to calculate divisors from the prime factorisation of a number.
Given the prime factorisation, calculating divisors is faster than trial division (which you do here).
But the prime factorisation has to be fast. For small numbers, a pre-calculated list of prime numbers (easy to produce) makes both the factorisation and the divisor calculation fast. If you know the upper limit of the numbers you test (call it L), then you only need the primes up to sqrt(L). Given the prime factorisation of a number n = p_1^e_1 * p_2^e_2 * ... * p_k^e_k, the number of divisors is simply (1+e_1) * (1+e_2) * ... * (1+e_k).
Even more, you can pre-calculate and/or memoize the number of divisors of frequently used numbers up to some limit. This saves a lot of time at the cost of memory; otherwise you can calculate it directly (for example, using the previous method).
Apart from that, you can optimise the code a bit. For example, avoid repeating the int(t) cast (and similar) every iteration: do it once and store the result in a variable.
Numpy can be avoided altogether; it is superfluous here and I doubt it adds any speed advantage.
That should make your code faster, but always verify with real performance measurements.
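To make this concrete, here is a minimal sketch of the approach (the function names and the limit of 1000 are illustrative assumptions):

import math

def primes_up_to(limit):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def divisor_count(n, primes):
    # Divisor count from the prime factorisation:
    # n = p_1^e_1 * ... * p_k^e_k  ->  (1+e_1) * ... * (1+e_k)
    count = 1
    for p in primes:
        if p * p > n:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
    if n > 1:  # a leftover prime factor larger than sqrt(n)
        count *= 2
    return count

primes = primes_up_to(1000)  # enough for inputs up to 1000**2
print(divisor_count(36, primes))  # 36 = 2^2 * 3^2 -> 9 divisors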

Solving math with integers larger than any available integer data type

In some programming competitions where the numbers are larger than any available integer data type, we often use strings instead.
Question 1:
Given these large numbers, how to calculate e and f in the below expression?
(a/b) + (c/d) = e/f
note: GCD(e,f) = 1, i.e. they must be in minimised form. For example {e,f} = {1,2} rather than {2,4}.
Also, all a,b,c,d are large numbers known to us.
Question 2:
Can someone also suggest a way to find the GCD of two big numbers (bigger than any available integer type)?
I would suggest using full bytes or words rather than strings.
It is relatively easy to think in base 256 instead of base 10 and a lot more efficient for the processor to not do multiplication and division by 10 all the time. Ideally, choose a word size that is half the processor's natural word size, as that makes carry easy to implement. Of course thinking in base 64K or 4G is slightly more complex, but even better than base 256.
The only downside is generating the initial big numbers from the ASCII input, which you get for free in base 10. Using a larger word size you can make this more efficient by initially converting a run of digits into a single word (e.g. 9 digits at a time fit in a 4G word), then performing a long multiply of that single word into the correct offset in your large-integer format.
A compromise might be to run your engine in base 1 billion: this will still be 9 or 81 times more efficient than using base 10!
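Python's integers are already arbitrary precision, so this is purely illustrative, but a minimal sketch of base-1-billion "limbs" (least-significant limb first; the helper names are assumptions) shows how the carry propagation works:

BASE = 10**9  # each list element ("limb") holds 9 decimal digits

def to_limbs(s):
    # Parse a decimal string into limbs, least-significant limb first.
    return [int(s[max(0, i - 9):i]) for i in range(len(s), 0, -9)]

def add_limbs(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(total % BASE)   # keep the low 9 digits
        carry = total // BASE         # carry the rest into the next limb
    if carry:
        result.append(carry)
    return result

x = to_limbs("123456789012345678901234567890")
y = to_limbs("987654321098765432109876543210")
print(add_limbs(x, y))  # limbs of the sum, least-significant first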
The simplest way to solve this equation is to multiply a/b by d/d and c/d by b/b so they both have the common denominator b*d.
I think you will then need to prime factorise your big numbers e and f to find any common factors. Remember to test each factor again after dividing it out, in case it appears squared.
Of course, that means you have to write a prime-generating sieve. You only need to generate factors up to the square root, i.e. half the digits, of the smaller of e and f.
You could prime factorise b and d to get a lower initial denominator, but you will need to do it again anyway after the addition.
I think that the way to solve this is to separate the problem:
Process the input numbers as an array of characters (i.e. std::string)
Make a class where each object can store an std::list (or similar) that represents one of the large numbers, and can do the needed arithmetic with your data
You can then solve your problems normally, without having to worry about your large inputs causing overflow.
Here's a webpage that explains how you can have such an arithmetic class (with sample code in C++ showing addition).
Once you have such an arithmetic class, you no longer need to worry about how to store the data or any overflow.
I get the impression that you already know how to find the GCD when you don't have overflow issues, but just in case, here's an explanation of finding the GCD (with C++ sample code).
As for the specific math problem:
// given formula: a/b + c/d = e/f
// = ( ( a*d + b*c ) / ( b*d ) )
// Define some variables here to save on copying
// (I assume that your class that holds the
// large numbers is called "ARITHMETIC")
ARITHMETIC numerator = a*d + b*c;
ARITHMETIC denominator = b*d;
ARITHMETIC gcd = GCD( numerator , denominator );
// because we know that GCD(e,f) is 1, this implies:
ARITHMETIC e = numerator / gcd;
ARITHMETIC f = denominator / gcd;
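In Python specifically, integers are arbitrary precision out of the box, so the same math fits in a few lines; a minimal sketch using math.gcd (which implements the Euclidean algorithm):

from math import gcd

def add_fractions(a, b, c, d):
    # a/b + c/d = (a*d + b*c) / (b*d), then reduce by the GCD
    numerator = a * d + b * c
    denominator = b * d
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g  # (e, f) with GCD(e, f) = 1

print(add_fractions(1, 2, 1, 3))  # -> (5, 6)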

Very large float in python

I'm trying to construct a neural network for the MNIST database. When computing the softmax function I receive an error along the lines of "you can't store a float that size".
code is as follows:
def softmax(vector):  # REQUIRES a unidimensional numpy array
    adjustedVals = [0] * len(vector)
    totalExp = np.exp(vector)
    print("totalExp equals")
    print(totalExp)
    totalSum = totalExp.sum()
    for i in range(len(vector)):
        adjustedVals[i] = (np.exp(vector[i])) / totalSum
    return adjustedVals  # this throws back an error sometimes?!?!
After some research, most sources recommend using the decimal module. However, when I've experimented with it on the command line, that is:
from decimal import Decimal
import math
test = Decimal(math.exp(720))
I receive a similar error for math.exp of any value greater than 709:
OverflowError: (34, 'Numerical result out of range')
My conclusion is that even decimal cannot handle this number. Does anyone know of another method I could use to represent these very large floats?
There is a technique which makes the softmax function computationally feasible for this kind of value distribution in your vector. Namely, you can subtract the maximum value in the vector (let's call it x_max) from each of its elements. If you recall the softmax formula, this doesn't affect the outcome, as it amounts to multiplying the result by e^(x_max) / e^(x_max) = 1. This way the highest intermediate value you get is e^(x_max - x_max) = 1, so you avoid the overflow.
For additional explanation I recommend the following article: https://nolanbconaway.github.io/blog/2017/softmax-numpy
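A minimal sketch of that trick applied to the question's function (vectorised with numpy, which the question already imports as np):

import numpy as np

def softmax(vector):  # expects a one-dimensional numpy array
    shifted = vector - np.max(vector)  # the largest exponent is now 0
    exps = np.exp(shifted)             # every value lies in (0, 1]
    return exps / exps.sum()

print(softmax(np.array([720.0, 710.0, 1.0])))  # no overflow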
With a value above 709, math.exp exceeds the double-precision floating-point range and throws this overflow error.
If, instead of math.exp, you use numpy.exp for such large exponents, you will see that it evaluates to the special value inf (infinity).
All this aside, I wonder why you would want to produce such a big number. I'm not sure you are aware how big it is; just to give you an idea, the number of atoms in the observable universe is estimated to be around 10^80, and the number you are trying to produce is MUCH larger than that.

Python 3.x Homework help. Sequential number guessing game.

We are supposed to make a number guessing game where depending on what difficulty the player chooses the game generates 4 or 5 numbers and the player is given all but the last, which they have to guess in 3 tries. The numbers have to be equal distances apart, and the numbers have to be within the 1 - 100 range.
So far I know what it will look like roughly.
def guesses():
    # accumulate tries as long as guesses_taken < 3
    # let the user retry, or congratulate and offer to replay
    ...

def game_easy():
    # code for number generation, step value, etc.
    guesses()

def game_hard():
    # same code as easy mode, with the appropriate changes
    guesses()
For the random numbers, all I have so far is this
import random

guess_init = random.randint(1, 100)
step = random.randint(1, 20)
guess_init = guess_init + step

and the idea of just having it loop and add the step 4 or 5 times, respectively.
Where I'm stuck is: 1. how to ensure that none of the generated numbers exceed 100 (so it can't be, say, a step of 1 starting at 98), and 2. how to print all but the last number generated.
What I was thinking was assigning the last number generated to a variable that the player's input must match. But I was also thinking that once guess_init has run through the loop, it will already hold the value of the last number, and all I'll have to check is that user input == guess_init.
In your case you should read the random section of the Python Standard Library documentation. This is especially relevant:
random.randrange(start, stop[, step])
Return a randomly selected element from range(start, stop, step). This is equivalent to choice(range(start, stop, step)), but doesn’t actually build a range object.
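A minimal sketch of the generation step addressing both sticking points (names like make_sequence are hypothetical; easy mode with 4 numbers shown): choose the step first, then a start small enough that the last number stays within 100.

import random

def make_sequence(count=4, limit=100):
    # Equal-distance numbers within 1..limit: the last number is
    # start + step * (count - 1), so bound start accordingly.
    step = random.randrange(1, (limit - 1) // (count - 1) + 1)
    start = random.randrange(1, limit - step * (count - 1) + 1)
    return [start + step * i for i in range(count)]

numbers = make_sequence()
print(numbers[:-1])   # show the player all but the last number
answer = numbers[-1]  # the number the player has to guess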
