Function to find the least prime factor - factorization

Does PARI/GP have a function for finding the smallest prime factor of a t_INT or otherwise perform a partial factorization of an integer?
For example, if I have the number:
a=261432792226751124747858820445742044652814631500046047326053169701039080900441047539208779404889565067
it takes a long time to do factor(a) because a contains two huge prime factors. However, it is quite easy to find that 17 is a divisor of a.
Of course in this case I could have used just forprime(p=2,,a % p == 0 && return(p)) or similar trial division to find the factor. But if the least factor had had 20 decimal digits, say, that would be impractical, and I might have wanted to use the sophisticated methods of factor in that case.
So it would be ideal if I could call factor with some kind of flag saying I will be happy with any partial factorization, or saying that all I care about is the smallest non-trivial divisor, etc.

A very simple partial answer to my question is that factor has an optional argument lim, so you can just say:
factor(a, 10^5)
for example, and only the prime factors below 10^5 will appear individually in the result; the remaining cofactor greater than 10^5 is left as a single entry (and it can be composite!).
The optional argument to factorint is entirely different: a bit-wise "flag" that does not allow you to specify a limit. That was probably what confused me. As an example:
factorint(a, 1+8)
selects flags 1 ("avoid MPQS") and 8 ("don't run final ECM").
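For readers who prefer Python, a rough sketch of what "trial division up to a bound" amounts to (my own illustration, not how PARI implements factor) might look like this:

def partial_factor(n, bound):
    # Pull out all prime factors below `bound` by trial division and
    # return them together with whatever is left of n (which may be
    # prime or composite).
    factors = []
    d = 2
    while d < bound and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2    # after 2, try odd candidates only
    return factors, n

This mirrors the behaviour described above: primes below the bound are pulled out, and the trailing cofactor may still be composite.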

Related

Python: Time and space complexity of gcd and recursive iterations

I’m studying for mid-terms and this is one of the questions from a past year's paper at university. (Questions stated below)
Given Euclid’s algorithm, we can write the function gcd.
def gcd(a,b):
    if b == 0:
        return a
    else:
        return gcd(b, a%b)
[Reduced Proper Fraction]
Consider the fraction, n/d , where n and d are positive integers.
If n < d and GCD(n,d) = 1, it is called a reduced proper fraction.
If we list the set of reduced proper fractions for n <=8 in ascending order of size, we get:
1/8,1/7,1/6,1/5,1/4,2/7,1/3,3/8,2/5,3/7,1/2,4/7,3/5,5/8,2/3,5/7,3/4,4/5,5/6,6/7,7/8
It can be seen that there are 21 elements in this set.
Implement the function count_fraction that takes an integer n and returns the number of reduced proper fractions for n. Assuming that the order of growth (in time) for gcd is O(log n), what is the order of growth in terms of time and space for the function you wrote in Part (B), in terms of n? Explain your answer.
Suggested answer.
def count_fraction(n):
    if n==1:
        return 0
    else:
        new = 0
        for i in range(1,n):
            if gcd(i,n) == 1:
                new += 1
        return new + count_fraction(n-1)
The suggested answer is pretty strange, as this question in previous years was designed to test purely recursive or purely iterative solutions, but here it gives a mix. Nevertheless, I don’t understand why the suggested order of growth is given as such. (I will write it in the format: suggested answer, my answer, and questions on my fundamentals.)
Time: O(n log n), since it's roughly log 1 + log 2 + ... + log(n-1) + log n
My time: O(n^2 log n), since there are n recursive function calls, each call has up to n-1 iterations, and each iteration takes O(log n) time due to gcd.
Question 1: Time, in my opinion, is the number of iterations/recursions times the time taken for one iteration/recursion. It's actually my first time dealing with a mixed iterative/recursive solution, so I don't really know how they interact. Can someone tell me whether I'm right or wrong?
Space: O(n), since gcd is O(1) and this code is obviously linear recursion.
My space: O(n*log n). Since gcd is O(log n) and this code takes up O(n) space.
Question 2: Space, in my opinion, is the number of recursions times the space taken for one recursive call, OR the largest amount of space required among all iterations. In the first place, I would think gcd is O(log n), as I assume the recursion happens log n times. I want to ask whether the discrepancy is due to what my lecturer said.
(I don't really understand what my lecturer says about delayed operations for recursion on factorial, or about no new objects being formed in iterative code. How do you then accept the fact that there are NEW objects formed in recursion, and also no delayed operations in iteration?)
If you can clarify my doubt on why gcd is O(1) instead of O(log n), then, taking n*1 for the recursive case, I think I would agree with the answer.
I agree with your analysis of the running time. It should be O(n^2 log(n)), since you make up to n calls to gcd on each recursive call to count_fraction.
You're also partly right about the second question, but you get the conclusion wrong (and the supplied answer gets the right conclusion for the wrong reasons). The gcd function does indeed use O(log(n)) space, for the stack of the recursive calls. However, that space gets reused for each later call to gcd from count_fraction, so there's only ever one stack of size log(n). So there's no reason to multiply the log(n) by anything, only add it to whatever else might be using memory when the gcd calls are happening. Since there will also be a stack of size O(n) for the recursive calls of count_fraction, the smaller log(n) term can be dropped, so you say it takes O(n) space rather than O(n + log(n)).
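To make the space point concrete, here is a purely iterative version of count_fraction (my own sketch, not part of the assignment). The only recursion left is inside gcd, whose O(log n) stack is built, torn down, and reused for each call, so the O(n) recursion-depth term disappears entirely:

from math import gcd   # the recursive gcd above works too; math.gcd just avoids any recursion

def count_fraction_iter(n):
    # Same count as the recursive version, but looping over d instead of
    # recursing on n-1: no O(n) call stack for count_fraction itself.
    total = 0
    for d in range(2, n + 1):
        for i in range(1, d):
            if gcd(i, d) == 1:
                total += 1
    return total

print(count_fraction_iter(8))   # 21, matching the example above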
All in all, I'd say this is a really bad assignment to be trying to learn from. Almost everything in it has an error somewhere, from the description saying it's limiting n when it's really limiting d, to the answers you describe which are all at least partly wrong.

Any algorithm that uses a number as a feed for generating a random string?

I want to generate a random string with any fixed length (N) of my choice. With the same number fed to this algorithm, it should generate the same string. And with a small change to the number, like number+1, it should generate a completely different string (difficult to relate to the previous seed). It's ok if more than one number results in the same string. Any approaches for doing this?
By the way, I have a set of characters that I want to appear in the string, like A-Z a-z 0-9.
For example
Algorithm(54893450,4,"ABCDEFG0") -> A0GF
Algorithm(54893451,4,"ABCDEFG0") -> BDCG
I could generate each character one by one at random, but that would need N different seeds, one for each character. If I want to do it this way, the question might become "how to generate N numbers from one number" for the seeds.
The end goal is that I want to convert a GUID to something more readable on printed media and shorter. I don't care about conflict. (If the conflict did happen, I can still check the GUID for resolution)
Ok, thanks for the guidance @Jim Mischel. I read all the related pages and came to understand more about this.
http://blog.mischel.com/2017/05/30/how-not-to-generate-unique-codes/
http://blog.mischel.com/2017/06/02/a-broken-unique-key-generator/
http://blog.mischel.com/2017/06/10/how-did-this-happen/
http://blog.mischel.com/2017/06/20/how-to-generate-random-looking-keys/
https://ericlippert.com/2013/11/12/math-from-scratch-part-thirteen-multiplicative-inverses/
https://ericlippert.com/2013/11/14/a-practical-use-of-multiplicative-inverses/
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
In short, first I should use a sequential number. That is 1,2,3,4,... Very predictable, but it can turn into something random and hard to guess.
(Note that in my case this is not entirely possible, since each user will be generating their own ID locally, so I cannot run a global sequential number; hence I use a GUID. But I will make my own workaround to fit the GUID into this solution, probably with a simple modulo on the GUID to fit it into my desired range.)
With a sequential integer n I can get another, seemingly unrelated integer with a multiplication followed by a modulo. This might look like (n * x) % m, with x and m of my choice. Of course m has to be larger than the largest number I want to use, since the multiplication wraps around at the modulus.
This alone is a good start, as nearby values of n do not produce similar output. But we cannot take that for granted. For example, if my x is 4 and m is 16, the output can only ever be 0, 4, 8, or 12. To avoid this we choose x and m that are coprime to each other (having a greatest common divisor of 1). There are many obvious candidates, such as 100000 as m (which limits my output to 99999) and 2429 as x. If we choose two coprime numbers like this, not only are conflicts as rare as possible, it is also guaranteed that each input in the range produces a unique output.
We can learn from this example :
(n * 5) % 16
As 5 and 16 are coprime, we get a maximum-length sequence of unique numbers before it wraps around (length = 16) if we input the numbers sequentially from 0 to 16:
Input : 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16
Output : 0, 5, 10, 15, 4, 9, 14, 3, 8, 13, 2, 7, 12, 1, 6, 11, 0
We can see that the output is in a not-so-predictable sequence, and none of the outputs repeat except the last one (which wraps back around to 0). The sequence visits every available number.
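This sequence is easy to verify with a quick Python check (my own snippet):

print([(n * 5) % 16 for n in range(17)])
# [0, 5, 10, 15, 4, 9, 14, 3, 8, 13, 2, 7, 12, 1, 6, 11, 0]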
Now my very predictable sequential running number produces a sufficiently different-looking number, and it is also guaranteed not to conflict with any other input as long as it is in the range of m. What's left is to convert this number to a string of my choice via base conversion. If I have 5 characters "ABCDE" then I will use base-5.
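Putting the two steps together, a minimal sketch of the whole idea (my own code; the multiplier 2429 is an arbitrary choice, and any x coprime to len(charset)**length works) could look like this:

from math import gcd

def encode(n, length, charset):
    base = len(charset)
    m = base ** length                  # size of the output space
    x = 2429                            # arbitrary multiplier, must be coprime to m
    assert gcd(x, m) == 1
    k = (n * x) % m                     # scramble the sequential number
    chars = []
    for _ in range(length):             # base conversion, padded to `length` digits
        chars.append(charset[k % base])
        k //= base
    return "".join(reversed(chars))

print(encode(54893450, 4, "ABCDEFG0"))  # two adjacent inputs give
print(encode(54893451, 4, "ABCDEFG0"))  # different, unrelated-looking codes

Inputs only start colliding once n goes past m, which matches the "as long as it is in the range of m" caveat above.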
Only this is enough for my use case. But with the concept of the multiplicative inverse I can also find one more integer y which reverses that multiply-then-modulo transformation back to the original number. I still haven't understood that part fully, but it uses the Extended Euclidean Algorithm to find y.
Since my application does not need the reverse direction yet, I am fine with not understanding it for now. I will definitely try to understand that part.
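For what it's worth, the reverse direction can be sketched in a few lines (my own snippet). In Python 3.8+, pow(x, -1, m) computes the multiplicative inverse of x modulo m, which is exactly the y the Extended Euclidean Algorithm produces:

x, m = 5, 16
y = pow(x, -1, m)        # y = 13, because 5 * 13 = 65 = 1 (mod 16)
k = (7 * x) % m          # forward transform of n = 7 gives 3
n = (k * y) % m          # multiplying by the inverse recovers 7
print(y, k, n)           # 13 3 7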

Longest Palindromic Substring clarification

Approach #3 (Dynamic Programming) [Accepted]
To improve over the brute force solution, we first observe how we can
avoid unnecessary re-computation while validating palindromes.
Consider the case "ababa". If we already knew that "bab" is a palindrome, it is obvious that "ababa" must be a palindrome too, since the two left and right end letters are the same.
This yields a straightforward DP solution: we first initialize the one- and two-letter palindromes, and work our way up to find all three-letter palindromes, and so on...
Complexity Analysis
Time complexity: O(n^2). This gives us a runtime complexity of O(n^2).
Space complexity: O(n^2). It uses O(n^2) space to store the table.
I read the above solution to this problem online, and have some questions about it (if this isn't the correct forum to post on please let me know). This is my understanding of how to do this problem: save all the one-char palindromes. Then for each of these, if the char to the left equals the char to the right, keep it. If that condition isn't met, cease dealing with this substring. Continue this until end is reached.
Is this correct? If so, how does this translate to an O(N^2) algorithm? Is it because, in the worst case scenario, we have to run through the string N times to increment each potential palindrome by one char? This part isn't intuitive to me.
Your interpretation is correct.
In the worst case we need to check all substrings of increasing length. We first check all substrings of length 1, then all substrings of length 3, and so on. In addition, we need to take palindromes of the kind "abba" into account, so we also check all candidates with even length. So in the worst case, we need to validate every possible substring of the given input string.
Total number of substrings of a given string of length n is n(n + 1)/2
n * (n + 1) / 2 = n ^ 2 / 2 + n / 2 = O(n ^ 2)
Doing a single validation-step for a palindrome can be done in O(1), thus the total runtime is O(n ^ 2).
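For concreteness, here is a minimal Python sketch of that table-filling DP (my own code, not the quoted solution's): dp[i][j] records whether s[i..j] is a palindrome, and substrings are processed in order of increasing length.

def longest_palindrome(s):
    n = len(s)
    if n == 0:
        return ""
    dp = [[False] * n for _ in range(n)]    # dp[i][j]: s[i..j] is a palindrome
    best_start, best_len = 0, 1
    for i in range(n):                      # every single character is a palindrome
        dp[i][i] = True
    for length in range(2, n + 1):          # grow by substring length
        for i in range(n - length + 1):
            j = i + length - 1
            if s[i] == s[j] and (length == 2 or dp[i + 1][j - 1]):
                dp[i][j] = True
                if length > best_len:
                    best_start, best_len = i, length
    return s[best_start:best_start + best_len]

print(longest_palindrome("ababa"))          # "ababa"

The two nested loops are each bounded by n, giving the O(n^2) time, and the table itself is the O(n^2) space.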

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be, without more context.
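To make that concrete, running the loop on a four-letter list (my own test, mirroring the set_ described in the question) prints the 8 partitions it produces:

set_ = ['a', 'b', 'c', 'd']
for i in range(2**len(set_)//2):       # 8 iterations instead of 16
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)           # the low bit of i decides which set gets the item
        i >>= 1
    print(parts)
# i = 0 puts everything into parts[0]; i = 1 moves only 'a' into parts[1]; and so on.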
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it, because it's not new syntax. Just a list and constructors.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign; picture an 8-bit register shifting right by one, with the sign bit copied into the vacated top position.
Except of course that in Python you have arbitrary-precision integers, so the number is however long it needs to be instead of 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of a right shift by 1 as a flooring division by 2 (this is different from truncation: negative numbers are rounded towards negative infinity, e.g. -1 >> 1 == -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high; but instead of changing which bit it tests, it moves the bit it wants to test into the same position every time.
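The same read-the-low-bit-then-shift idiom in isolation (my own snippet):

i = 0b1011               # 11 in decimal
while i:
    print(i & 1)         # prints 1, 1, 0, 1 -- the bits of 11 from low to high
    i >>= 1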

Security: longer keys versus more available characters

I apologize if this has been answered before, but I was not able to find anything. This question was inspired by a comment on another security-related question here on SO:
How to generate a random, long salt for use in hashing?
The specific comment is as follows (sixth comment of accepted answer):
...Second, and more importantly, this will only return hexadecimal
characters - i.e. 0-9 and A-F. It will never return a letter higher
than an F. You're reducing your output to just 16 possible characters
when there could be - and almost certainly are - many other valid
characters.
– AgentConundrum Oct 14 '12 at 17:19
This got me thinking. Say I had some arbitrary series of bytes, with each byte being randomly distributed over its 2^8 possible values. Let this key be A. Now suppose I transformed A into its hexadecimal string representation, key B (e.g. 0xde 0xad 0xbe 0xef => "d e a d b e e f").
Some things are readily apparent:
len(B) = 2 len(A)
The symbols in B are limited to 2^(4) discrete values while the symbols in A range over 2^(8)
A and B represent the same 'quantities', just using different encoding.
My suspicion is that, in this example, the two keys will end up being equally secure (otherwise every password-cracking tool would just convert one representation to the other for quicker attacks). Beyond this contrived example, however, I suspect there is an important security moral to take away, especially when selecting a source of randomness.
So, in short, which is more desirable from a security stand point: longer keys or keys whose values cover more discrete symbols?
I am really interested in the theory behind this, so an extra bonus gold star (or at least my undying admiration) to anyone who can also provide the math / proof behind their conclusion.
If the number of different symbols usable in your password is x, and the length is y, then the number of different possible passwords (and therefore the strength against brute-force attacks) is x ** y. So you want to maximize x ** y. Either adding to x or adding to y will do that; which one makes the greater total depends on the actual numbers involved and what your practical limits are.
But generally, increasing x gives only polynomial growth while adding to y gives exponential growth. So in the long run, length wins.
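A rough numerical illustration of that point (my own numbers): start from 26 symbols at length 8, then compare growing the alphabet against growing the length.

x, y = 26, 8
print(x ** y)            # 208827064576, about 2.1e11
print((x + 10) ** y)     # 36 symbols, same length: about 2.8e12
print(x ** (y + 2))      # same 26 symbols, length 10: about 1.4e14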
Let's start with a binary string of length 8. The possible values are everything from 00000000 to 11111111. This gives us a keyspace of 2^8, or 256 possible keys. Now let's look at option A:
A: Adding one additional bit.
We now have a 9-bit string, so the possible values are between 000000000 and 111111111, which gives us a keyspace size of 2^9, or 512 keys. We also have option B, however.
B: Adding an additional value to the keyspace (NOT the keyspace size!):
Now let's pretend we have a trinary system, where the accepted numbers are 0, 1, and 2. Still assuming a string of length 8, we have 3^8, or 6561 keys...clearly much higher.
However! Trinary does not exist!
Let's look at your example. Please be aware I will be clarifying some of it, since you may have been confused about it. Begin with a 4-BYTE (or 32-bit) bitstring:
11011110 10101101 10111110 11101111 (this is, btw, the bitstring equivalent to 0xDEADBEEF)
Since our possible values for each digit are 0 or 1, the base of our exponent is 2. Since there are 32 bits, we have 2^32 as the strength of this key. Now let's look at your second key, DEADBEEF. Each "digit" can be a value from 0-9 or A-F, which gives us 16 values. We have 8 "digits", so the keyspace is 16^8...which also equals 2^32! So those keys are equal in strength (also, because they are the same thing).
But we're talking about REAL passwords, not just those silly little binary things. Consider an alphabetical password with only lowercase letters of length 8: we have 26 possible characters, and 8 of them, so the strength is 26^8, or 208.8 billion (takes about a minute to brute force). Adding one character to the length yields 26^9, or 5.4 trillion combinations: 20 minutes or so.
Let's go back to our 8-char string, but add one more possible character: the space. Now we have 27^8, which is about 282 billion....FAR LESS than what adding an additional character to the length gave us!
The proper solution, of course, is to do both: for instance, 27^9 is 7.6 trillion combinations, or about half an hour of cracking. An 8-character password using upper case, lower case, numbers, special symbols, and the space character would take around 20 days to crack....still not nearly strong enough. Add another character, and it's 5 years.
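The arithmetic above is easy to reproduce (my own snippet; the guess rate is an assumption picked to roughly match the "about a minute" figure for 26^8, and real cracking speeds vary enormously with the hash in use):

GUESSES_PER_SECOND = 3.5e9                       # assumed attack speed
for symbols, length in [(26, 8), (26, 9), (27, 8), (27, 9)]:
    keys = symbols ** length
    print(symbols, length, keys, round(keys / GUESSES_PER_SECOND / 60, 1), "minutes")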
As a reference, I usually make my passwords upwards of 16 characters, and they have at least one Cap, one space, one number, and one special character. Such a password at 16 characters would take several (hundred) trillion years to brute force.

Resources