How to get the number of factors of a number within a range? - factorization

Generally I do prime factorization to get all the prime factors, and then use permutations and combinations to find all the factors.
For example: 1824 is the number I am trying to find the factors of. Now I need the number of factors of 1824 that are within 300.
Is there any trick?

One trick is to not search past the square root of the number whose factors you're looking for. For example, to find factors from 2-300, you only really need to search from 2 to ceil(sqrt(1824)), which is 2-43. Once you find a factor in the 2-43 range, divide it into 1824 to get the paired factor, which may be above 43.
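A minimal Python sketch of that idea (the function name and the limit parameter are just illustrative, not from the answer):

import math

def factors_up_to(n, limit):
    """Return the factors of n in [2, limit], searching only up to sqrt(n)."""
    found = set()
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            if d <= limit:
                found.add(d)
            paired = n // d              # the paired factor, possibly above sqrt(n)
            if paired <= limit:
                found.add(paired)
    return sorted(found)

print(factors_up_to(1824, 300))          # factors of 1824 between 2 and 300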

As a brute-force solution, you don't need to prime factorize the number for this. You could simply check all numbers in the range.
Let the range of numbers in which you wish to find factors be [range_start, range_end].
Simply iterate over these numbers in a loop and, for each number x, check whether number % x == 0; if so, x is a factor of the number.
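For instance, a rough Python sketch of that loop (variable names taken from the answer above):

def factors_in_range(number, range_start, range_end):
    """Brute force: test every candidate in [range_start, range_end]."""
    return [x for x in range(range_start, range_end + 1) if number % x == 0]

print(factors_in_range(1824, 2, 300))    # every factor of 1824 between 2 and 300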

Related

Time complexity of my backtracking to find the optimal solution of the maximum sum of non-adjacent elements

I'm trying to backtrack through the dynamic-programming memoization of the maximum sum of non-adjacent elements in order to construct the optimal solution that achieves the max sum.
Background:
Say the input list is [1,2,3,4,5].
The memoization should be [1,2,4,6,9]
And my maximum sum is 9, right?
My solution:
1. I find the first occurrence of the max sum in memo (as we may not have chosen the last item) [this is O(N)].
2. Then I find the previous item chosen by using this formula:
max_sum -= a_list[index]
In this example, 9 - 5 = 4; 4 is at index 2, so we can say that the previous item chosen is 3, which is also at index 2 in the input list.
3. I find the first occurrence of 4, which is at index 2 (I find the first occurrence for the same reason as in step 1: we may not have chosen that item in cases where there are multiple equal sums together) [also O(N), but...].
The issue:
The third step of my solution is done in a while loop. Say the non-adjacency constraint is 1; the maximum number of times we have to backtrack when the length of the list is 5 is 3, approximately N//2 times.
But the third step uses Python's index function, memo.index(that_previous_sum), to find the first occurrence of the previous sum, which is O(N).
So the total time complexity is about O(N//2 * N), which is O(N^2)!
Am I correct on the time complexity? Or am I wrong? Is there a more efficient way to backtrack the memoization list?
P.S. Sorry for the formatting if I did it wrong, thanks!
Solved:
I looped from the back, checking whether the item before it (toward the front of the list) is the same or not.
If it's the same, it's not the first occurrence. If not, it is the first occurrence.
Ta-da! No Python index function searching from the front; we now find it from the back.
So the total time complexity is no longer O(N//2 * N);
it is now O(N//2 + 1), which is O(N).
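Not the poster's exact code, but a minimal sketch of an equivalent O(N) reconstruction, assuming the standard recurrence memo[i] = max(memo[i-1], memo[i-2] + a_list[i]) (which reproduces the memo [1,2,4,6,9] above): instead of searching for first occurrences, it walks the memo backwards and compares adjacent entries to decide whether each item was chosen.

def max_nonadjacent_sum_with_items(a_list):
    """Build the memo, then backtrack over it once, from the back."""
    if not a_list:
        return 0, []
    memo = [a_list[0]] + [0] * (len(a_list) - 1)
    if len(a_list) > 1:
        memo[1] = max(a_list[0], a_list[1])
    for i in range(2, len(a_list)):
        memo[i] = max(memo[i - 1], memo[i - 2] + a_list[i])

    # Backtrack: if memo[i] == memo[i-1], item i was skipped; otherwise it
    # was chosen, so record it and jump two positions back.
    chosen, i = [], len(a_list) - 1
    while i >= 0:
        if i > 0 and memo[i] == memo[i - 1]:
            i -= 1
        else:
            chosen.append(a_list[i])
            i -= 2
    return memo[-1], chosen[::-1]

print(max_nonadjacent_sum_with_items([1, 2, 3, 4, 5]))   # (9, [1, 3, 5])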

Delete as few digits as possible to make a number divisible by 3

I was solving this question: we are given a number N, which can be very big; it can have up to 100,000 digits.
Now I want to know the most efficient way to find those digits, and I think that for big numbers I will need to delete at most 3 digits to make the number divisible by 3.
I know that a number is divisible by three if the sum of its digits is divisible by three, but I can't think of how we can use this.
My idea is to brute-force over the string and check, for each digit, whether deleting it makes the number divisible by 3, but my solution fails on complex examples. Please give me some hints.
Thanks in advance.
If the sum of the digits modulo 3 is equal to 1, you want to delete a single 1, 4, or 7. If the sum of the digits modulo 3 is equal to 2, you want to delete a single 2, 5, or 8.
If that can't be done, then you have to delete two digits.
To avoid scanning the list twice, you could remember the indices of up to two digits congruent to 1, and the indices of up to two digits congruent to 2, so when you compute the final modulus you know where to look.
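A rough Python sketch of this strategy (not the answerer's code; it returns the surviving digits as a string and does not worry about leading zeros):

def delete_for_divisibility_by_3(digits):
    """Remove as few digits as possible so the digit sum is divisible by 3."""
    r = sum(int(d) for d in digits) % 3
    if r == 0:
        return digits                              # already divisible by 3

    # Try deleting a single digit congruent to r (1, 4, 7 when r == 1).
    for i in range(len(digits) - 1, -1, -1):       # prefer a low-order digit
        if int(digits[i]) % 3 == r:
            return digits[:i] + digits[i + 1:]

    # Otherwise delete two digits, each congruent to 3 - r.
    need, removed = 3 - r, []
    for i in range(len(digits) - 1, -1, -1):
        if int(digits[i]) % 3 == need:
            removed.append(i)
            if len(removed) == 2:
                return "".join(d for j, d in enumerate(digits) if j not in removed)
    return ""                                      # no valid deletion exists

print(delete_for_divisibility_by_3("23"))          # "3": deleting the 2 suffices
print(delete_for_divisibility_by_3("1824"))        # "1824": digit sum 15 already works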
The number 3 has some special properties relative to a base-10 number system that you can leverage.
10 is 1 more than 9, and 9 is evenly divisible by 3, so the "1" in "10" acts as a sort of remainder from adding 1 to 9. As a result, if the sum of all digits in the number is evenly divisible by 3 then that number is also divisible by 3.
So if you begin by figuring out the modulo after adding all the digits, then you'll know whether the number is divisible by three (i.e. the sum gives a modulo of zero) or not. If not, then you can remove one digit at a time, recalculating the modulo of the remaining digits until you end up with a modulo of zero.
You should check what makes a number divisible by 3. Once you find it, you can divide the problem into smaller subproblems.

Representing a number in the octal system

I am not looking for help with my homework. I just need someone to show me the direction to take.
I know the answer theoretically; I am just stuck on how to prove it mathematically.
Here is the question:
Representing a number in the octal system requires, on average, about 10 percent more characters than in the decimal system.
How can I prove this mathematically?
Suppose you wanted to represent a given number x in both systems. In the decimal system, this will take on the order of log10(x) digits. In the octal system, it will take on the order of log8(x) digits.
For any a and b, loga(b) can be written as logc(b)/logc(a) for a given c. In particular, let c = 10. Therefore, log8(x) = log10(x)/log10(8) ~= 1.1*log10(x), which means log8(x) is about 1.1 times as large as log10(x) for any given x. Note that this result is exact aside from the rounding. What is not exact is approximating the number of digits by log10(x) and log8(x).
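A quick numerical check of that ratio (just an illustrative Python sketch, not part of the answer):

import math

# Exact change-of-base ratio: log8(x)/log10(x) = log(10)/log(8) for every x
print(math.log(10) / math.log(8))          # 1.1073...

# Digit counts for a sample number
x = 10 ** 30
print(len(str(x)), len(format(x, "o")))    # 31 decimal digits vs 34 octal digits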
The approximate number of decimal digits required to represent a number x is log10(x), and the number of octal digits is log8(x).
This means that the average ratio is log8(x)/log10(x).
As log8(x) = ln(x)/ln(8) and log10(x) = ln(x)/ln(10),
the average ratio is ln(10)/ln(8) = 1.1073...
Of course this is not a 100% exact demonstration. A real demonstration would define exactly the quantity we are trying to find (such as the average number of digits for numbers between 0 and n as n goes to infinity, etc.) and would compute the exact number of digits (which is an integer) rather than an approximation.

In Excel, how to round to the nearest Fibonacci number

In Excel, I would like to round to the nearest Fibonacci number.
I tried something like this (sorry, with a French Excel):
RECHERCHEH(C7;FIBO;1;VRAI) -- HLOOKUP(C7, FIBO, 1, TRUE)
where FIBO is a named range (0; 0,5; 1; 2; 3; 5; 8; etc.).
My problem is that this function rounds down to the smaller number rather than the nearest. For example, 12.8 is rounded to 8 and not 13.
Note: I just want to use an Excel formula, no VBA.
This will work:
=INDEX(FIBO,1, IF(C7>=(INDEX(FIBO,1,(MATCH(C7,FIBO,1)))+INDEX(FIBO,1,(MATCH(C7,FIBO,1)+1)))/2, MATCH(C7,FIBO,1)+1, MATCH(C7,FIBO,1)))
Define the target number Targ, relative to which we want to find the closest Fib number.
Define
n = INT(LN(Targ*SQRT(5))/LN((1+SQRT(5))/2))
It follows that Fib(n) <= Targ <= Fib(n+1)
where one can compute Fib(n) and Fib(n+1) via
Fib(n) = ROUND(((1+SQRT(5))/2)^n/SQRT(5),0)
Finally, find the closest Fib number to Targ using the computed values of Fib(n) and Fib(n+1).
Not as compact as the other solution presented since it requires a few helper formulas, but it requires no table for Fib numbers.
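Purely as an illustration of the same closed-form math (a Python sketch, not an Excel formula; the floating-point rounding is only reliable for moderate n):

import math

PHI = (1 + math.sqrt(5)) / 2

def fib(n):
    # Binet's approximation, rounded to the nearest integer
    return round(PHI ** n / math.sqrt(5))

def nearest_fib(targ):
    """Round targ to the nearest Fibonacci number via the log-based index n."""
    if targ <= 0:
        return 0
    n = int(math.log(targ * math.sqrt(5)) / math.log(PHI))   # Fib(n) <= targ <= Fib(n+1)
    lo, hi = fib(n), fib(n + 1)
    return lo if targ - lo <= hi - targ else hi

print(nearest_fib(12.8))    # 13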
I used a simpler nested-IF solution.
I calculated the midpoint between each pair of Fibonacci numbers and used that as the decision point. The following tests the value in A2 to produce the desired Fibonacci number:
=IF(A2>=30,40,IF(A2>=16.5,20,IF(A2>=10.5,13,IF(A2>=6.5,8,IF(A2>=4,5,IF(A2>=2.5,3,IF(A2>=1.5,2,IF(A2>=0.5,1,0))))))))

Probability question: Estimating the number of attempts needed to exhaustively try all possible placements in a word search

Would it be reasonable to systematically try all possible placements in a word search?
Grids commonly have dimensions of 15*15 (15 cells wide, 15 cells tall) and contain about 15 words to be placed, each of which can be placed in 8 possible directions. So in general it seems like you can calculate all possible placements by the following:
width*height*8_directions_to_place_word*number of words
So for such a grid it seems like we only need to try 15*15*8*15 = 27,000 placements, which doesn't seem that bad at all. I was expecting some huge number, so either the grid size and number of words are really small, or there is something fishy with my math.
Formally speaking, assuming that x is the number of rows and y is the number of columns, you should sum the number of possible placements over every possible direction for every word.
Inputs are: x, y, l (average length of a word), n (total words)
so you have
Horizontally, a word can start anywhere from 0 to x-l going right, or from l to x going left, for each row: 2x(x-l).
The same approach is used for vertical words: they can go from 0 to y-l going down, or from l to y going up, so it's 2y(y-l).
For diagonal words you should consider all possible start positions x*y and subtract l^2, since a rectangle of the field can't be used. As before, you multiply by 4 since there are 4 possible directions: 4*(x*y - l^2).
Then you multiply the whole result by the number of words included:
total = n*(2*x*(x-l) + 2*y*(y-l) + 4*(x*y - l^2))
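As a quick sanity check in Python, here are both counts for the 15x15, 15-word grid from the question; the average word length l = 5 is an assumed value, not something given in the post:

x, y = 15, 15      # grid rows and columns
n = 15             # number of words
l = 5              # assumed average word length

rough_estimate = x * y * 8 * n
refined = n * (2 * x * (x - l) + 2 * y * (y - l) + 4 * (x * y - l ** 2))

print(rough_estimate)   # 27000
print(refined)          # 21000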
