Question about relations between two numbers - factorization

Is there any relation between two numbers' bits when one is divisible by the other? For example, what is the relation between the bits of 36 and the bit sequences of 9, 4, or 12; or between 10 (1010) and 5 (101); or 21 (10101) and 7 (00111)?
Thanks. I am sorry if some sentences are not correct, but I hope you understand what I mean.

I know this is not exactly what you're asking, but it may be helpful. There are many tricks for establishing binary-number divisibility by manipulating bits. For example, a binary number is divisible by three if the sum of its even-position bits minus the sum of its odd-position bits is zero modulo 3 (the binary analogue of the decimal rule for 11, since 2 ≡ -1 mod 3). Here's a link discussing binary divisibility.
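As an illustration, here is a minimal C sketch of that alternating-bit-sum test (my own example, not from the linked article):

#include <stdio.h>
#include <stdbool.h>

/* Divisibility-by-3 via the alternating bit sum: since 2 == -1 (mod 3),
 * n mod 3 equals the sum of its bits weighted +1 at even positions and
 * -1 at odd positions, taken mod 3. */
static bool divisible_by_3(unsigned int n)
{
    int diff = 0;
    for (int pos = 0; n != 0; n >>= 1, pos++)
        if (n & 1u)
            diff += (pos % 2 == 0) ? 1 : -1;
    return diff % 3 == 0;
}

int main(void)
{
    for (unsigned int n = 0; n <= 12; n++)
        printf("%2u is %sdivisible by 3\n", n, divisible_by_3(n) ? "" : "not ");
    return 0;
}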

Let's take the example of 36.
36 = 0010 0100
36 is 4 * 9, that is
4 = 0100
9 = 1001
If you multiply them longhand (as you would in decimal, but in base 2), each 1-bit of 1001 contributes a copy of 0100 shifted left by that bit's position:

   0100
 x 1001
 ------
   0100    (0100 x 1)
  0000     (0100 x 0, shifted left by 1)
 0000      (0100 x 0, shifted left by 2)
0100       (0100 x 1, shifted left by 3)
-------
0100100
So essentially 0100 x 1001 = 0010 0100 (you can repeat the same for any other pair of divisors of course)
Now, is there any special relation that will allow you to get all the divisors of 36 just by looking at its bits? The answer, alas, is no :)
EDIT: at least, there is no KNOWN relation; but who knows, maybe in the future some smart mathematician will find one. As of today, the answer is still no.

So you want to know if you can 'quickly' do Integer Factorization by just looking at the bits?
Good luck with that!

Obviously, the fact that a is a multiple of b can be recognized given the binary representations of a and b (it's what the hardware does when executing the following code
boolean isMultiple = a % b == 0;
) and hence there is such a relationship.
Ask a more specific question to get a more specific response ...

The easiest relation to see is that the number of consecutive 0s in the least significant bits gives the largest power of two that is a factor of your number n. There are apparently other tests, as DonnyD pointed out (I hadn't known that one), but I expect they're not going to scale very well. If they did, public key cryptography, as it's generally implemented, would quickly become a thing of the past.
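For instance, here is a small C sketch of that trailing-zeros relation (my own illustration):

#include <stdio.h>

/* The largest power of two dividing n is visible directly in the bits:
 * n = 2^k * (odd part), where k is the count of trailing zero bits. */
static unsigned int trailing_zeros(unsigned int n)
{
    unsigned int k = 0;
    if (n == 0)
        return 0;               /* 0 has no largest power-of-two factor */
    while ((n & 1u) == 0) {
        n >>= 1;
        k++;
    }
    return k;
}

int main(void)
{
    unsigned int n = 36;        /* 36 = 100100 in binary: two trailing zeros */
    unsigned int k = trailing_zeros(n);
    printf("largest power of two dividing %u is 2^%u = %u\n", n, k, 1u << k);
    return 0;
}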
That's not to say that such methods can't be discovered / invented. For instance, it's been shown that arbitrarily large numbers could be factored efficiently using quantum methods (Shor's algorithm), but nobody has yet been able to implement a working system at a meaningful scale.
The bottom line is that we've entrusted our online financial system and national security apparatus to PKI-based methods primarily because we assume that factoring arbitrarily large numbers is hard. But as Moron seemed to be implying in his answer, you're welcome to give it a whirl.

Memoization in J

Every time I use J's M. adverb, performance degrades considerably. Since I suspect Iverson and Hui are far smarter than I, I must be doing something wrong.
Consider the Collatz conjecture. There seem to be all sorts of opportunities for memoization here, but no matter where I place M., performance is terrible. For example:
hotpo =: -:`(>:@(3&*))@.(2&|) M.
collatz =: hotpo^:(>&1)^:a:"0
#@collatz 1+i.10000x
Without M., this runs in about 2 seconds on my machine. With M., well, I waited over ten minutes for it to complete and eventually gave up. I've also placed M. in other positions with similarly bad results, e.g.,
hotpo =: -:`(>:@(3&*))@.(2&|)
collatz =: hotpo^:(>&1)M.^:a:"0
#@collatz 1+i.10000x
Can someone explain the proper usage of M.?
The M. does nothing for you here.
Your code is constructing a chain, one link at a time:
-:`(>:@(3&*))@.(2&|)^:(>&1)^:a:"0 M. 5 5
5 16 8 4 2 1
5 16 8 4 2 1
Here, it remembers that 5 leads to 16, 16 leads to 8, 8 leads to 4, etc... But what does that do for you? It replaces some simple arithmetic with a memory lookup, but the arithmetic is so trivial that it's likely faster than the lookup. (I'm surprised your example takes 10 whole minutes, but that's beside the point.)
For memoization to make sense, it needs to replace a more expensive computation.
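(As an aside, here is a rough sketch in C, not J, of the kind of memoization that does pay off for Collatz: caching whole chain lengths, which are expensive to recompute, rather than individual steps. The cache size and types are arbitrary choices for illustration.)

#include <stdio.h>

#define CACHE_SIZE 100000

static unsigned int cache[CACHE_SIZE];   /* 0 means "not computed yet" */

/* Number of terms in the Collatz sequence from n down to 1, memoized. */
static unsigned int chain_length(unsigned long long n)
{
    if (n == 1)
        return 1;
    if (n < CACHE_SIZE && cache[n] != 0)
        return cache[n];
    unsigned int len = 1 + chain_length(n % 2 == 0 ? n / 2 : 3 * n + 1);
    if (n < CACHE_SIZE)
        cache[n] = len;
    return len;
}

int main(void)
{
    unsigned int longest = 0;
    for (unsigned long long n = 1; n <= 10000; n++) {
        unsigned int len = chain_length(n);
        if (len > longest)
            longest = len;
    }
    printf("longest chain among 1..10000: %u terms\n", longest);
    return 0;
}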
For this particular problem, you might want a function that takes an integer and returns a 1 if and when the sequence arrives at 1. For example:
-:`(>:@(3&*))@.(2&|)^:(>&1)^:_"0 M. 5 5
1 1
All I did was replace the ^:a: with ^:_, to discard the intermediate results. Even then, it doesn't make much difference, but you can use timespacex to see the effect:
timespacex '-:`(>:@(3&*))@.(2&|)^:(>&1)^:_"0 i.100000'
17.9748 1.78225e7
timespacex '-:`(>:@(3&*))@.(2&|)^:(>&1)^:_"0 M. i.100000'
17.9625 1.78263e7
Addendum: The placement of the M. relative to the "0 does seem to make a huge difference. I thought I might have made a mistake there, but a quick test showed that swapping them caused a huge performance loss in both time and space:
timespacex '-:`(>:@(3&*))@.(2&|)^:(>&1)^:_ M. "0 i.100000'
27.3633 2.41176e7
M. preserves the rank of the underlying verb, so the two are semantically equivalent, but I suspect with the "0 on the outside like this, the M. doesn't know that it's dealing with scalars. So I guess the lesson here is to make sure M. knows what it's dealing with. :)
BTW, if the Collatz conjecture turned out to be false, and you fed this code a counterexample, it would go into an infinite loop rather than produce an answer.
To actually detect a counterexample, you'd want to monitor the intermediate results until you found a cycle, and then return the lowest number in the cycle. To do this, you'd probably want to implement a custom adverb to replace ^:n.

Changing sign of a 64 bit floating point number in verilog?

I am working with 64-bit floating point numbers in Verilog for synthesis. Ideally I would like to compute -A*B, where A and B are the two numbers. I have already got A*B working, so is it okay now if I just flip the first bit (0 to 1 or 1 to 0) so that the result represents -(A*B)?
kinda like,
A[0]=~A[0];
Thanks in advance for any suggestion.
Yes! That's all there is to it. Just make sure the bit you flip really is the sign bit: in IEEE 754 the sign is the most significant bit, so with the common reg [63:0] declaration it's bit 63; A[0] is the sign bit only if A is declared as [0:63].
Keep in mind that negating 0 will give you -0. (They're different floating-point bit patterns.) Whether this matters will depend on your application.
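To illustrate the same trick in C (a sketch only, since the question is about Verilog; the helper name is my own):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* IEEE 754 double precision keeps the sign in the most significant bit
 * (bit 63), so negation is a single bit flip. */
static double flip_sign(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);   /* safe type-pun via memcpy */
    bits ^= (uint64_t)1 << 63;        /* toggle the sign bit */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    printf("%g %g %g\n", flip_sign(3.5), flip_sign(-2.0), flip_sign(0.0));
    /* prints: -3.5 2 -0   (note the negative zero) */
    return 0;
}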

Security: longer keys versus more available characters

I apologize if this has been answered before, but I was not able to find anything. This question was inspired by a comment on another security-related question here on SO:
How to generate a random, long salt for use in hashing?
The specific comment is as follows (sixth comment of accepted answer):
...Second, and more importantly, this will only return hexadecimal
characters - i.e. 0-9 and A-F. It will never return a letter higher
than an F. You're reducing your output to just 16 possible characters
when there could be - and almost certainly are - many other valid
characters.
– AgentConundrum Oct 14 '12 at 17:19
This got me thinking. Say I had some arbitrary series of bytes, each byte uniformly random over its 2^8 possible values. Call this key A. Now suppose I transformed A into its hexadecimal string representation, key B (e.g. 0xde 0xad 0xbe 0xef => "deadbeef").
Some things are readily apparent:
len(B) = 2 * len(A)
The symbols in B are limited to 2^4 discrete values while the symbols in A range over 2^8
A and B represent the same 'quantities', just using different encoding.
My suspicion is that, in this example, the two keys will end up being equally secure (otherwise every password cracking tool would just convert one representation to the other for quicker attacks). External to this contrived example, however, I suspect there is an important security moral to take away, especially when selecting a source of randomness.
So, in short, which is more desirable from a security stand point: longer keys or keys whose values cover more discrete symbols?
I am really interested in the theory behind this, so an extra bonus gold star (or at least my undying admiration) to anyone who can also provide the math / proof behind their conclusion.
If the number of different symbols usable in your password is x, and the length is y, then the number of different possible passwords (and therefore the strength against brute-force attacks) is x ** y. So you want to maximize x ** y. Adding to either x or y will do that; which one makes the greater total depends on the actual numbers involved and what your practical limits are.
But generally, increasing x gives only polynomial growth while adding to y gives exponential growth. So in the long run, length wins.
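To make that concrete, here is a small C sketch (my own illustration) comparing keyspaces by their size in bits, using log2(x^y) = y * log2(x):

#include <stdio.h>
#include <math.h>

/* Entropy of a keyspace with x symbols and length y, in bits.
 * Growing y adds a fixed number of bits per character; growing x
 * only nudges the per-character contribution logarithmically. */
static double entropy_bits(double x, double y)
{
    return y * log2(x);
}

int main(void)   /* compile with -lm */
{
    printf("26 symbols, length 8: %.1f bits\n", entropy_bits(26, 8));
    printf("27 symbols, length 8: %.1f bits\n", entropy_bits(27, 8));
    printf("26 symbols, length 9: %.1f bits\n", entropy_bits(26, 9));
    return 0;
}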
Let's start with a binary string of length 8. The possible combinations are all the bit patterns from 00000000 to 11111111. This gives us a keyspace of 2^8, or 256 possible keys. Now let's look at option A:
A: Adding one additional bit.
We now have a 9-bit string, so the possible values are between 000000000 and 111111111, which gives us a keyspace size of 2^9, or 512 keys. We also have option B, however.
B: Adding an additional value to the symbol set (NOT to the keyspace size!):
Now let's pretend we have a trinary system, where the accepted numbers are 0, 1, and 2. Still assuming a string of length 8, we have 3^8, or 6561 keys...clearly much higher.
However! Trinary hardware does not (practically) exist!
Let's look at your example. Please be aware that I will be clarifying parts of it that may have been confusing. Begin with a 4-BYTE (or 32-bit) bitstring:
11011110 10101101 10111110 11101111 (this is, btw, the bitstring equivalent to 0xDEADBEEF)
Since our possible values for each digit are 0 or 1, the base of our exponent is 2. Since there are 32 bits, we have 2^32 as the strength of this key. Now let's look at your second key, DEADBEEF. Each "digit" can be a value from 0-9, or A-F. This gives us 16 values. We have 8 "digits", so our exponent is 16^8...which also equals 2^32! So those keys are equal in strength (also, because they are the same thing).
But we're talking about REAL passwords, not just those silly little binary things. Consider an alphabetical password with only lowercase letters of length 8: we have 26 possible characters, and 8 of them, so the strength is 26^8, or 208.8 billion (takes about a minute to brute force). Adding one character to the length yields 26^9, or 5.4 trillion combinations: 20 minutes or so.
Let's go back to our 8-char string, but add a character to the alphabet: the space. Now we have 27^8, which is 282 billion... FAR LESS than adding an additional character of length!
The proper solution, of course, is to do both: for instance, 27^9 is 7.6 trillion combinations, or about half an hour of cracking. An 8-character password using upper case, lower case, numbers, special symbols, and the space character would take around 20 days to crack....still not nearly strong enough. Add another character, and it's 5 years.
As a reference, I usually make my passwords upwards of 16 characters, and they have at least one Cap, one space, one number, and one special character. Such a password at 16 characters would take several (hundred) trillion years to brute force.

scale 14 bit word to an 8 bit word

I'm working on a project where I sample a signal with an ADC that represents values as 14-bit words. I need to scale the values to 8-bit words. What's a good way to go about this in general? By the way, I'm using an FPGA, so I'd like to do it in "hardware" rather than as a software solution. Also, in case you're wondering, the chain of events will be: sample an analog signal, represent the sample value with a 14-bit word, scale the 14-bit word to an 8-bit word, transmit the 8-bit word over UART to the PC's COM1.
I've never done this before. I was assuming you use quantization levels, but I'm not sure what an efficient circuit for this operation would be. Any help would be appreciated.
Thanks
You just need an add and a shift:
val_8 = (val_14 + 32) >> 6;
(The + 32 is necessary to get correct rounding - you can omit it, but you will get more truncation noise in your signal if you do.)
I think you just drop the six lowest resolution bits and call it good, right? But I might not fully understand the problem statement.
Paul's algorithm is correct, but you'll need some bounds checking.
assign val_8 = (&val_14[13:5]) ? // Make sure your sum won't overflow
               8'hFF :           // Assign all 1's if it will
               val_14[13:6] + val_14[5];
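For reference, the same round-and-saturate behavior written as a C sketch (my own illustration; the function name is arbitrary):

#include <stdio.h>
#include <stdint.h>

/* Scale a 14-bit sample to 8 bits: add half an output step (32) for
 * rounding, shift right by 6, then saturate so 0x3FFF maps to 0xFF
 * instead of wrapping to 0x100. */
static uint8_t scale_14_to_8(uint16_t val_14)
{
    uint16_t rounded = (uint16_t)((val_14 & 0x3FFF) + 32) >> 6;
    return rounded > 0xFF ? 0xFF : (uint8_t)rounded;
}

int main(void)
{
    printf("0x0000 -> 0x%02X\n", scale_14_to_8(0x0000)); /* 0x00 */
    printf("0x1FFF -> 0x%02X\n", scale_14_to_8(0x1FFF)); /* 0x80 */
    printf("0x3FFF -> 0x%02X\n", scale_14_to_8(0x3FFF)); /* 0xFF (saturated) */
    return 0;
}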

Multiplying 23 bit datatypes in a system with no long long

I am trying to implement floating point operations in a microcontroller and so far I have had ample success.
The problem lies in the multiplication. This is the way I do it on my computer, where it works fine:
unsigned long long gig,mm1,mm2;
unsigned long m,m1,m2;
mm1 = f1.float_parts.mantissa;
mm2 = f2.float_parts.mantissa;
m1 = f1.float_parts.mantissa;
m2 = f2.float_parts.mantissa;
gig = mm1*mm2; // this works fine: I get all the bits I need since the operands are long long, but it won't work on the MCU
gig = m1*m2;   // this does not work; to be precise, it gives only the 32 least significant bits, but it works on the MCU
So you can see that my problem is that the microcontroller will throw an undefined reference to __muldi3 if I try gig = mm1*mm2 there.
And if I try with the smaller data types, it only keeps the least significant bits, which I don't want. I need the 23 most significant bits of the product.
Does anyone have any ideas as to how I can do this?
Apologies for the short answer; I hope that someone else will take the time to write a fuller explanation, but basically you do exactly what you do when you multiply two big numbers by hand on paper! It's just that instead of working in base 10, you work in base 256. That is, treat your numbers as byte vectors, and do with each byte what you do with a digit when you "hand multiply".
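Here is that idea sketched in C, using base 2^16 instead of base 256 so there are only four partial products (my own illustration; pick the digit size your MCU multiplies natively):

#include <stdio.h>
#include <stdint.h>

/* Schoolbook multiplication in base 2^16: split each 32-bit operand
 * into two 16-bit "digits", form the four partial products, and
 * accumulate them with carries, using nothing wider than 32 bits.
 * The full 64-bit product comes back as a high and a low word. */
static void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;

    uint32_t p0 = a_lo * b_lo;   /* contributes to bits  0..31 */
    uint32_t p1 = a_lo * b_hi;   /* contributes to bits 16..47 */
    uint32_t p2 = a_hi * b_lo;   /* contributes to bits 16..47 */
    uint32_t p3 = a_hi * b_hi;   /* contributes to bits 32..63 */

    uint32_t mid = (p0 >> 16) + (p1 & 0xFFFF) + (p2 & 0xFFFF);

    *lo = (mid << 16) | (p0 & 0xFFFF);
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}

int main(void)
{
    uint32_t hi, lo;
    mul32x32(0x00700000u, 0x00500000u, &hi, &lo);  /* two 23-bit mantissas */
    printf("product = 0x%08X%08X\n", hi, lo);      /* 0x0000230000000000 */
    return 0;
}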
The comments in the FreeBSD implementation of __muldi3() have a good explanation of the required procedure, see muldi3.c. If you want to go straight to the source (always a good idea!), according to the comments this code was based on an algorithm described in Knuth's The Art of Computer Programming vol. 2 (2nd ed), section 4.3.3, p. 278. (N.B. the link is for the 3rd edition.)
Back on the Intel 8088 (the original PC CPU, and the last CPU I wrote assembly code for), when you multiplied two 16-bit numbers (32 bits? whoa), the CPU would return the product as two 16-bit numbers in two different registers - one with the 16 most significant bits and one with the least significant bits.
You should check the hardware capabilities of your microcontroller; maybe it has a similar setup (obviously you'll need to code this in assembly if it does).
Otherwise you'll have to implement multiplication on your own.
