Partial Homomorphic Encryption with Haskell - haskell

I am working with a fairly simple Paillier partial homomorphic encryption library in Haskell. The API for the library is here - https://hackage.haskell.org/package/Paillier-0.1.0.3/docs/Crypto-Paillier.html
This library unfortunately does not handle negative numbers or floating point numbers. It only operates on positive integers. So, an operation like
decrypt prvKey pubKey (encrypt pubKey (-10)) /= -10
My naive approach to handling negative and floating-point numbers was to multiply the values by my desired precision (say 10^6) before encrypting and divide it back out after decrypting. But internally, some of the modular arithmetic then fails for homomorphic multiplication (which is implemented as exponentiation of the ciphertext).
The problem could somewhat be boiled down to finding a good encoding of negative numbers as well as floating point numbers into an Integer type (the arbitrary precision integer type in Haskell). Is there any good strategy to do the encoding?
Another strategy could be modifying the modular exponentiation function in the cryptonite package to handle modular arithmetic for negative numbers (https://hackage.haskell.org/package/cryptonite-0.30/docs/Crypto-Number-ModArithmetic.html#v:expSafe).
Can anyone suggest a good strategy, or point out something I am missing here?
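One workable encoding, sketched below without using the Paillier package itself: scale by 10^6 to get fixed-point values, then store negatives as residues in the upper half of the range mod n (so x becomes x `mod` n, and anything above n/2 decodes back as negative). The modulus n here is a demo stand-in for the real Paillier public modulus, and `encode`/`decode` are hypothetical helpers, not part of the library:

```haskell
-- Sketch of encoding signed fixed-point values into the non-negative
-- residues a Paillier library expects. `n` stands in for the public modulus.

scale :: Integer
scale = 10 ^ 6  -- fixed-point precision: 6 decimal places

encode :: Integer -> Double -> Integer
encode n x = round (x * fromIntegral scale) `mod` n

decode :: Integer -> Integer -> Double
decode n y = fromIntegral signed / fromIntegral scale
  where
    signed = if y > n `div` 2 then y - n else y  -- upper half decodes as negative

main :: IO ()
main = do
  let n = 2 ^ 64 + 13  -- demo value, not a real Paillier modulus
  print (decode n (encode n (-10.5)))                            -- -10.5
  -- homomorphic addition corresponds to addition mod n of the encodings:
  print (decode n ((encode n 3.25 + encode n (-1.25)) `mod` n))  -- 2.0
```

Two caveats: this only round-trips while the encoded magnitudes stay below n/2, and homomorphic multiplication by an encoded constant multiplies the two scale factors, so the result carries a factor of 10^12 that must be divided out after decryption.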

Related

Unclear why functions from Data.Ratio are not exposed and how to work around

I am implementing an algorithm using Data.Ratio (convergents of continued fractions).
However, I encounter two obstacles:
The algorithm starts with the fraction 1%0 - but this throws a zero denominator exception.
I would like to pattern match on the constructor a :% b.
I was exploring on Hackage, and in particular the source seems to use exactly these features (e.g. defining infinity = 1 :% 0, or pattern matching to extract the numerator).
As a beginner, I am also confused about where it is determined that (%), numerator and such are exposed to me, but not infinity and (:%).
I have already made a dirty workaround using a tuple of integers, but it seems silly to reinvent the wheel for something so trivial.
It would also be nice to learn how to read off from the source which functions are exposed.
They aren't exported precisely to prevent people from doing stuff like this. See, the type
data Ratio a = a :% a
contains too many values. In particular, e.g. 2/6 and 3/9 are actually the same number in ℚ, and both are represented by 1:%3. Thus 2:%6 is in fact an illegal value – and so, sure enough, is 1:%0. (Or perhaps such values are legal but every function knows how to treat them, so that 2:%6 is for all observable purposes equal to 1:%3 – I don't in fact know which of these options GHC chooses, but at any rate it's an implementation detail that could change in a future release without notice.)
If the library authors themselves use such values for e.g. optimisation tricks that's one thing – they have after all full control over any algorithmic details and any undefined behaviour that could arise. But if users got to construct such values, it would result in brittle code.
So – if you find yourself starting an algorithm with 1/0, then you should indeed not use Ratio at all there but simply store numerator and denominator in a plain tuple, which has no such issues, and only make the final result a Ratio with %.
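The tuple approach can be sketched like this: a hypothetical `convergents` helper using the standard recurrence, with the seeds 1/0 and 0/1 held as plain Integer pairs and % only applied to the well-formed final results:

```haskell
import Data.Ratio ((%))

-- Convergents of a continued fraction [a0; a1, a2, ...], computed with
-- plain Integer pairs; the "infinite" seed 1/0 is just the tuple (1, 0).
convergents :: [Integer] -> [Rational]
convergents as = [ h % k | (h, k) <- go (1, 0) (0, 1) as ]
  where
    go _ _ [] = []
    go (h1, k1) (h2, k2) (a : rest) =
      let c = (a * h1 + h2, a * k1 + k2)  -- h_k = a_k*h_{k-1} + h_{k-2}, same for k
      in  c : go c (h1, k1) rest

main :: IO ()
main = print (convergents [3, 7, 15, 1])  -- convergents of pi: 3, 22/7, 333/106, 355/113
```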

How convert string to double keeping the exact same number represented in the string

The code below results in moneyDouble = 365.24567874299998, and I need it to be exactly 365.245678743.
I wouldn't mind having to set a precision and getting some extra zeros to the right.
This number is used to calculate money transactions, so it needs to be exact.
std::string money ("365.245678743");
std::string::size_type sz; // alias of size_t
double moneyDouble = std::stod (money,&sz);
Floating-point numbers and exact precision don't mix, period [link]. For this reason, monetary calculations should never be done in floating-point [link]. Use a fixed-point arithmetic library, or just use integer arithmetic and interpret it as whatever fractional part you need. Since your precision requirements seem to be very high (lots of decimals), a big number library might be necessary.
While library recommendations are off-topic on Stack Overflow, this old question seems to offer a good set of links to libraries you might find useful.
Your erroneous output for moneyDouble occurs because moneyDouble is a floating-point number, and floating-point numbers cannot represent tenths, hundredths, thousandths, etc. exactly.
Furthermore, floating-point numbers are expressed in binary form, meaning that only (some) numbers with a finite binary expansion can be represented exactly. On top of that they have finite precision, so they can store only a limited number of significant digits (including those after the decimal point).
Your best bet is to use fixed-point arithmetic, integer arithmetic, or implement a rational-number class, and you might need number libraries since you may have to deal with very big numbers in very high precision.
See Is floating point math broken? for more information about the unexpected results of floating-point accuracy.
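The integer-arithmetic suggestion above can be sketched in Haskell (the language used elsewhere on this page). `parseFixed` is a hypothetical helper that reads a non-negative decimal string into an exact Integer count of 10^-p units, never touching Double:

```haskell
-- Parse a decimal money string into an exact Integer count of 10^-p units.
-- A sketch: assumes well-formed, non-negative input (no sign handling).
parseFixed :: Int -> String -> Integer
parseFixed p s =
  case break (== '.') s of
    (whole, [])         -> read whole * 10 ^ p
    (whole, '.' : frac) ->
      let frac' = take p (frac ++ repeat '0')  -- pad/truncate to p digits
      in  read whole * 10 ^ p + read frac'

main :: IO ()
main = print (parseFixed 12 "365.245678743")  -- 365245678743000, exactly
```

All subsequent arithmetic then happens on exact Integers; only formatting for display needs to reinsert the decimal point.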

Why does System.Numerics.Complex use doubles instead of decimals?

I've been working with System.Numerics.Complex recently, and I've started to notice the typical floating-point "drift" where the value stored gets calculated a tenth of a millionth off or something like that, which is well-known and common with the float type and even the double type. I looked into the Complex struct, and sure enough, it used double variables. Why does it use double values to store its data and not decimal values, which are designed to prevent this? How do I work around this?
To answer your question:
doubles are several orders of magnitude faster, as operations are done at the hardware level
base-2 floats can actually be more accurate for long computations, as there is less "wobble" in the relative rounding error when values are scaled up and down past powers of the base: 1 bit of precision varies by less than 1 decimal digit does. Moreover, base 2 can use an implicit leading bit, which squeezes out one extra bit of precision compared with other bases.
complex numbers are typically used for scientific/engineering applications, where small relative errors of approx 10^-16 are outweighed by other sources of error (e.g. due to measurement or the model).
decimals on the other hand are typically used for "accounting" type operations, where round-off error is typically negligible (i.e. addition of small numbers, multiplication by integers, etc.)
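The drift the question describes is visible in any binary floating-point type, not just C#'s double. A quick illustration (in Haskell, for consistency with the rest of this page) of why: 0.1 has no finite base-2 expansion, while exact rational arithmetic has no such issue:

```haskell
import Data.Ratio ((%))

-- Base-2 doubles "drift" because decimal fractions like 0.1 have no
-- finite binary expansion; exact rationals do not round at all.
main :: IO ()
main = do
  print (0.1 + 0.2 == (0.3 :: Double))             -- False: binary round-off
  print (1 % 10 + 2 % 10 == (3 % 10 :: Rational))  -- True: exact arithmetic
```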

High precision floating point numbers in Haskell?

I know Haskell has native data types which allow you to have really big integers so things like
>> let x = 131242358045284502395482305
>> x
131242358045284502395482305
work as expected. I was wondering if there was a similar "large precision float" native structure I could be using, so things like
>> let x = 5.0000000000000000000000001
>> x
5.0000000000000000000000001
could be possible. If I enter this in Haskell, it gets rounded to 5 once I go beyond about 15 decimal places (the limit of double precision).
Depending on exactly what you are looking for:
Float and Double - pretty much what you know and "love" from Floats and Doubles in all other languages.
Rational which is a Ratio of Integers
FixedPoint - This package provides arbitrary sized fixed point values. For example, if you want a number that is represented by 64 integral bits and 64 fractional bits you can use FixedPoint6464. If you want a number that is 1024 integral bits and 8 fractional bits then use $(mkFixedPoint 1024 8) to generate type FixedPoint1024_8.
EDIT: And yes, I just learned about the numbers package mentioned in another answer - very cool.
Haskell does not have high-precision floating-point numbers natively.
For a package/module/library for this purpose, I'd refer to this answer to another post, which also links an example showing how to use the package in question, numbers.
If you need high-precision *fast* floating-point calculations, you may need to use the FFI and long doubles, as the native Haskell type is not implemented yet (see https://ghc.haskell.org/trac/ghc/ticket/3353).
I believe the standard package for arbitrary precision floating point numbers is now https://hackage.haskell.org/package/scientific
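As a quick illustration of the Rational option mentioned above: Haskell elaborates fractional literals via fromRational, so the very literal from the question can be kept exactly by giving it type Rational instead of letting it default to Double. A small sketch:

```haskell
import Data.Ratio (numerator, denominator)

-- As a Double this literal silently rounds to 5.0; as a Rational every
-- digit is preserved exactly.
x :: Rational
x = 5.0000000000000000000000001

main :: IO ()
main = do
  print (fromRational x :: Double)   -- 5.0: the Double loses the tail
  print (numerator x, denominator x) -- the exact fraction survives
```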

Quadratic sieve and nth powers

I implemented the quadratic sieve in Haskell according to the basic algorithm specified on the Wikipedia page. It works great on most integers, however it fails to find a factorization on numbers N that are nth powers. For even powers (squares), the algorithm loops, and for odd powers I find several smooth numbers that are squares mod N (I have tested and confirmed this), yet every single derived congruence of squares (also tested and confirmed) leads only to a trivial factor.
I am reasonably sure that I implemented the Wikipedia algorithm to the letter. Is there a problem with that version of the algorithm that prevents it from handling nth powers, or is there a bug in my algorithm?
For some reason stackoverflow is having an issue formatting my code, so here you go: http://pastebin.com/miUxHKCh
The quadratic sieve, as I understand it, is not designed to factor every number with certainty. Rather, it is designed to factor typical numbers with high probability.
The Wikipedia entry, for example, at least as of today, describes what it presents as a "standard quadratic sieve without logarithm optimizations or prime powers". So it explicitly does not take prime powers into account.
Furthermore, as I understand it, factorization of numbers close to prime powers also doesn't work well in more efficient variations of the algorithm.
So the fault is not in your code but in the way the algorithm is usually presented, which glosses over issues such as whether it always works or just typically works :-)
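A common workaround, independent of the sieve implementation itself, is to test N for being a perfect power before sieving and peel that case off. `integerRoot` and `perfectPower` below are illustrative sketches (not library functions); `perfectPower` returns the base for the smallest exponent k >= 2 with base^k == N:

```haskell
-- Binary search for the largest r with r^k <= n (assumes n >= 1, k >= 1).
integerRoot :: Integer -> Integer -> Integer
integerRoot k n = go 1 n
  where
    go lo hi
      | lo >= hi  = lo
      | otherwise =
          let mid = (lo + hi + 1) `div` 2
          in  if mid ^ k <= n then go mid hi else go lo (mid - 1)

-- If n = r^k for some k >= 2, return (r, k) with the smallest such k.
perfectPower :: Integer -> Maybe (Integer, Integer)
perfectPower n =
  case [ (r, k) | k <- [2 .. bitLen], let r = integerRoot k n, r ^ k == n ] of
    (rk : _) -> Just rk
    []       -> Nothing
  where
    bitLen = ceiling (logBase 2 (fromIntegral n) :: Double)

main :: IO ()
main = do
  print (perfectPower 97)          -- Nothing: not a perfect power, sieve it
  print (perfectPower (10007 ^ 4)) -- Just (100140049,2): recurse on the base
```

If `perfectPower` hits, you factor the base (recursively, with the sieve or otherwise) instead of sieving N directly, which sidesteps exactly the looping and trivial-factor cases described in the question.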
