Modular multiplication [duplicate] - rust

This question already has answers here:
How can I avoid overflow in modular multiplication?
(2 answers)
Closed 7 months ago.
Given a, b and m, all of type u64 (or u128), I want to calculate (a * b) % m. However, a and b may be large, so an overflow may occur.
While I could cast them to BigUInt, multiply them and then cast the result back, this seems a bit inelegant to me. (Since the casting to BigUInt is theoretically unnecessary.)
I also found the modular arithmetic crate, but it seems to be unmaintained, undocumented and the multiplication is very slow in case of an overflow. Also, there is the modular crate, but there an overflow can occur.
So, is there a more elegant way to do modular multiplication in Rust?

If m is small enough that m * m does not overflow, you can use (a % m) * (b % m) % m to keep your multiplicands smaller, and it produces the same result as a * b % m because of neat modular properties.

So, is there a more elegant way to do modular multiplication in Rust?
Unroll the multiplication to a sequence of modular additions?
Even if the additions can overflow (e.g. 190u8 + 170u8 mod 200u8), you can handle that case with overflowing_add and adjust afterwards, as sketched below.
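For concreteness, here is a sketch of that idea (not taken from the linked duplicate): double-and-add ("Russian peasant") multiplication, where every intermediate value is kept below m so that only the additions can wrap, and each wrap is repaired immediately. The names mul_mod and add_mod are made up for illustration.

fn add_mod(a: u64, b: u64, m: u64) -> u64 {
    // a and b are both < m; if the addition wraps, the true sum is >= 2^64 > m,
    // so subtracting m once (with wrapping) lands back in 0..m
    let (sum, wrapped) = a.overflowing_add(b);
    if wrapped || sum >= m { sum.wrapping_sub(m) } else { sum }
}

fn mul_mod(mut a: u64, mut b: u64, m: u64) -> u64 {
    assert!(m > 0);
    a %= m;
    b %= m;
    let mut acc = 0u64;
    while b > 0 {
        if b & 1 == 1 {
            acc = add_mod(acc, a, m); // add the current power-of-two multiple of a
        }
        a = add_mod(a, a, m); // double a modulo m
        b >>= 1;
    }
    acc
}

This costs O(log b) additions per multiplication; for u64 operands, widening the product to u128 is usually simpler and faster, so the double-and-add version is mainly useful when no wider type exists (e.g. for u128 operands).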

Related

Type constraints on dimensionality of vectors in F# and Haskell (Dependent Types)

I'm new to F# and Haskell and am implementing a project in order to determine which language I would prefer to devote more time to.
I have numerous situations where I expect a given numerical type to have given dimensions based on parameters given to a top-level function (i.e., at runtime). For example, in this F# snippet, I have
type DataStreamItem = LinearAlgebra.Vector<float32>
type Ball =
    {R : float32;
     X : DataStreamItem}
and I expect all instances of type DataStreamItem to have D dimensions.
My question is in the interests of algorithm development and debugging, since such shape-mismatch bugs can be a headache to pin down but should be a non-issue once the algorithm is up and running:
Is there a way, in either F# or Haskell, to constrain DataStreamItem and / or Ball to have dimensions of D? Or do I need to resort to pattern matching on every calculation?
If the latter is the case, are there any good, light-weight paradigms to catch such constraint violations as soon as they occur (and that can be removed when performance is critical)?
Edit:
To clarify the sense in which D is constrained:
D is defined such that if you expressed the algorithm of the function main(DataStream) as a computation graph, all of the intermediate calculations would depend on the dimension of D for the execution of main(DataStream). The simplest example I can think of would be a dot-product of M with DataStreamItem: the dimension of DataStream would determine the creation of dimension parameters of M
Another Edit:
A week later, I found the following blog post outlining precisely what I was looking for in dependent types in Haskell:
https://blog.jle.im/entry/practical-dependent-types-in-haskell-1.html
And Another:
This Reddit thread contains some discussion on Dependent Types in Haskell and links to the quite interesting dissertation proposal of R. Eisenberg.
Neither Haskell's nor F#'s type system is rich enough to (directly) express statements of the sort "N nested instances of a recursive type T, where N is between 2 and 6" or "a string of characters exactly 6 long". Not in those exact terms, at least.
I mean, sure, you can always express such a 6-long string type as type String6 = String6 of char*char*char*char*char*char or some variant of the sort (which technically should be enough for your particular example with vectors, unless you're not telling us the whole example), but you can't say something like type String6 = s:string{s.Length=6} and, more importantly, you can't define functions of the form concat: String<n> -> String<m> -> String<n+m>, where n and m represent string lengths.
But you're not the first person asking this question. This research direction does exist, and is called "dependent types", and I can express the gist of it most generally as "having higher-order, more powerful operations on types" (as opposed to just union and intersection, as we have in ML languages) - notice how in the example above I parametrize the type String with a number, not another type, and then do arithmetic on that number.
The most prominent language prototypes (that I know of) in this direction are Agda, Idris, F*, and Coq (not really the full deal AFAIK). Check them out, but beware: this is kind of the edge of tomorrow, and I wouldn't advise starting a big project based on those languages.
(edit: apparently you can do certain tricks in Haskell to simulate dependent types, but it's not very convenient, and you have to enable UndecidableInstances)
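(For the curious, here is a minimal sketch of what such a trick can look like with GHC.TypeLits type-level naturals; the names StringN and appendN are made up. The point is that the length lives in the type and the arithmetic n + m is done by the compiler:)

{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import GHC.TypeLits

-- a length-tagged string; real code would hide the constructor behind smart constructors
newtype StringN (n :: Nat) = StringN String

-- the concat : String<n> -> String<m> -> String<n+m> from above
appendN :: StringN n -> StringN m -> StringN (n + m)
appendN (StringN a) (StringN b) = StringN (a ++ b)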
Alternatively, you could go with a weaker solution of doing the checks at runtime. The general gist is: wrap your vector types in a plain wrapper, don't allow direct construction of it, but provide constructor functions instead, and make those constructor functions ensure the desired property (i.e. length). Something like:
type Stream4 = private Stream4 of DataStreamItem
    with static member create (item: DataStreamItem) =
             if item.Length = 4 then Some (Stream4 item)
             else None

             // Alternatively:
             if item.Length <> 4 then failwith "Expected a 4-long vector."
             Stream4 item
Here is a fuller explanation of the approach from Scott Wlaschin: constrained strings.
So if I understood correctly, you're actually not doing any type-level arithmetic, you just have a “length tag” that's shared in a chain of function calls.
This has long been possible to do in Haskell; one way that I consider quite elegant is to annotate your arrays with a standard fixed-length type of the desired length:
import qualified Data.Vector.Unboxed as VU

newtype FixVect v s = FixVect { getFixVect :: VU.Vector s }
To ensure the correct length, you only provide (polymorphic) smart constructors that construct from the fixed-length type – perfectly safe, though the actual dimension number is nowhere mentioned!
class VectorSpace v => FiniteDimensional v where
  asFixVect :: v -> FixVect v (Scalar v)

instance FiniteDimensional Float where
  asFixVect s = FixVect $ VU.singleton s

instance (FiniteDimensional a, FiniteDimensional b, Scalar a ~ Scalar b) => FiniteDimensional (a,b) where
  asFixVect (a,b) = case (asFixVect a, asFixVect b) of
    (FixVect av, FixVect bv) -> FixVect $ av <> bv
This construction from unboxed tuples is really inefficient; however, this doesn't mean you can't write efficient programs with this paradigm – if the dimension always stays constant, you only need to wrap and unwrap once and can do all the critical operations through safe yet runtime-unchecked zips, folds and LA combinations.
Regardless, this approach isn't really widely used. Perhaps the single constant dimension is in fact too limiting for most relevant operations, and if you need to unwrap to tuples often it's way too inefficient. Another approach that is taking off these days is to actually tag the vectors with type-level numbers. Such numbers have become available in a usable form with the introduction of data kinds in GHC-7.4. Up until now, they're still rather unwieldy and not fit for proper arithmetic, but the upcoming 8.0 will greatly improve many aspects of this dependently-typed programming in Haskell.
A library that offers efficient length-indexed arrays is linear.
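To make the "tag with type-level numbers" idea concrete, here is a minimal sketch of a length-indexed wrapper over GHC.TypeLits naturals (this is not linear's actual API; Vec, mkVec and dot are made-up names):

{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)
import qualified Data.Vector.Unboxed as VU

newtype Vec (n :: Nat) a = Vec (VU.Vector a) -- length n, by construction

-- smart constructor: the only way to build a Vec, so the tag can be trusted
mkVec :: forall n a. (KnownNat n, VU.Unbox a) => VU.Vector a -> Maybe (Vec n a)
mkVec v
  | VU.length v == fromIntegral (natVal (Proxy :: Proxy n)) = Just (Vec v)
  | otherwise = Nothing

-- two Vecs with the same n are statically known to have equal length,
-- so the zip needs no runtime check
dot :: (VU.Unbox a, Num a) => Vec n a -> Vec n a -> a
dot (Vec u) (Vec v) = VU.sum (VU.zipWith (*) u v)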

Actually generic length function in Haskell

length is generic on the input in that it accepts any Foldable.
genericLength is generic on the output in that it outputs any Integral.
Is there an actually fully generic function with type:
(Integral b, Foldable t) => t a -> b
Or even:
(Num b, Foldable t) => t a -> b
Since length should only ever produce integer values, and integer values can also be represented as floats, producing any number works just fine.
Currently I am just using:
foldl' (\n _ -> n + 1) 0
I know one can be made fairly easily, but I also know that often the base functions are optimized in some way or another, so I was hoping there was an existing function for generically calculating length. If not my followup question is why? Specifically why doesn't genericLength accept a Foldable?
I mean this was pretty much answered in the comments. But just so this question is marked as answered the reason is basically that genericLength has terrible performance due to it supporting lazy natural numbers, and thus should pretty much be avoided. This performance issue is partially avoided via rewrite rules, but even with that it is still not a good function to use and thus it isn't worth it to attempt to generalize it further.
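That said, if you do want the fully generic (Foldable t, Num b) => t a -> b signature from the question, a monoid-based fold gets you there in one line; a minimal sketch (genericLength' is a made-up name):

import Data.Monoid (Sum (..))

-- count every element as Sum 1 and add them up; works for any Foldable and any Num
genericLength' :: (Foldable t, Num b) => t a -> b
genericLength' = getSum . foldMap (const (Sum 1))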

Convert String to Int checking for overflow

When I tried to convert a very long integer to Int, I was surprised that no error was thrown:
Prelude> read "123456789012345678901234567890" :: Int
-4362896299872285998
readMaybe from Text.Read module gives the same result.
Two questions:
Which function should I call to perform a safe conversion?
How can the most type safe language on Earth allow such unsafe things?
Update 1:
This is my attempt to write a version of read that checks bounds:
{-# LANGUAGE ScopedTypeVariables #-}

parseIntegral :: forall a . (Integral a, Bounded a) => String -> Maybe a
parseIntegral s = integerToIntegral (read s :: Integer) where
  integerToIntegral n | n < fromIntegral (minBound :: a) = Nothing
  integerToIntegral n | n > fromIntegral (maxBound :: a) = Nothing
  integerToIntegral n = Just $ fromInteger n
Is it the best I can do?
Background: why unchecked overflow is actually wonderful
Haskell 98 leaves overflow behavior explicitly unspecified, which is good for implementers and bad for everyone else. Haskell 2010 discusses it in two sections—in the section inherited from Haskell 98, it's left explicitly unspecified, whereas in the sections on Data.Int and Data.Word, it is specified. This inconsistency will hopefully be resolved eventually.
GHC is kind enough to specify it explicitly:
All arithmetic is performed modulo 2^n, where n is the number of bits in the type.
This is an extremely useful specification. In particular, it guarantees that Int, Word, Int64, Word32, etc., form rings, and even principal ideal rings, under addition and multiplication. This means that arithmetic will always work right—you can transform expressions and equations in lots of different ways without breaking things. Throwing exceptions on overflow would break all these properties, making it much more difficult to write and reason about programs. The only times you really need to be careful are when you use comparison operators like < and compare—fixed width integers do not form ordered groups, so these operators are a bit touchy.
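For instance, cancellation still holds even when an intermediate result wraps, because everything stays consistent modulo 2^n:
Prelude> ((maxBound :: Int) + 1) - 1 == maxBound
True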
Why it makes sense not to check reads
Reading an integer involves many multiplications and additions. It also needs to be fast. Checking to make sure the read is "valid" is not so easy to do quickly. In particular, while it's easy to find out whether an addition has overflowed, it is not easy to find out whether a multiplication has. The only sensible ways I can think of to perform a checked read for Int are
Read as an Integer, check, then convert. Integer arithmetic is significantly more expensive than Int arithmetic. For smaller things, like Int16, the read can be done with Int, checking for Int16 overflow, then narrowed. This is cheaper, but still not free.
Compare the number in decimal to maxBound (or, for a negative number, minBound) while reading. This seems more likely to be reasonably efficient, but there will still be some cost. As the first section of this answer explains, there is nothing inherently wrong with overflow, so it's not clear that throwing an error is actually better than giving an answer modulo 2^n.
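A minimal sketch of the first option, going through readMaybe and then narrowing with Data.Bits.toIntegralSized (in base since 4.8), which performs exactly this minBound/maxBound check (readChecked is a made-up name):

import Data.Bits (toIntegralSized)
import Text.Read (readMaybe)

readChecked :: String -> Maybe Int
readChecked s = toIntegralSized =<< (readMaybe s :: Maybe Integer)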
It isn't "unsafe", in that the behaviour of the program isn't undefined. (It's perfectly defined, just probably not what you wanted.) For example, unsafeWrite is unsafe, in that if you make a mistake with it, it writes data into arbitrary memory locations, either causing your application to segfault, or merely corrupting its own memory, causing it to behave in arbitrary undefined ways.
So much for splitting hairs. If you want to deal with such huge numbers, Integer is the only way to go. But you probably knew that already.
As for why there's no overflow check... Sometimes you actually want a number to overflow. (E.g., you might convert to Word8 without explicitly ANDing out the bottom 8 bits.) At any rate, every possible arithmetic operation can potentially overflow (e.g., maxBound + 1 = minBound, and that's just normal addition.) Do you really want every single arithmetic operation to have an overflow check, slowing your program down at every step?
You get the exact same behaviour in C, C++ or C#. I guess the difference is, in C# we have the checked keyword, which allows you to automatically check for overflow. Maybe somebody has a Haskell package for doing checked arithmetic… For now, it's probably simpler to just implement this one check yourself.
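Failing that, rolling such a check yourself is only a few lines; here is a sketch for addition (addChecked is a made-up name), comparing the wrapped Int result against the exact Integer result:

addChecked :: Int -> Int -> Maybe Int
addChecked a b
  | toInteger a + toInteger b == toInteger s = Just s
  | otherwise = Nothing
  where s = a + b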

Primitive types as expanded classes in Haskell?

In Eiffel one is allowed to use an expanded class, which doesn't allocate from the heap. From a developer's perspective one rarely has to think about conversion from Int to Float as it is automatic. My question is this: Why did Haskell not choose a similar approach to modeling Num? Specifically, let's consider the Int instance. Here is the rationale for my question:
[1..3] = [1,2,3]
[1..3.5] = [1.0,2.0,3.0,4.0] -- rounds up
The second list was something that I was not expecting, because there are by definition infinitely many floating point numbers between any two integers. Of course, once we test the sequence it is clear that it returns whole numbers, with the upper bound rounded up. One of the reasons these conversions are needed is to allow us to compute the mean of a set of Integers, for example.
In Eiffel the number type hierarchy is a bit more programmer friendly and the conversion happens as needed: for example creating a sequence can still be a set of Ints that result in a floating point mean. This has a readability advantage.
Is there a reason that expanded classes were not implemented in Haskell? Any references would help greatly.
#ony: on the point about parallel strategies: won't we face the same issue when using primitives? The manual does discourage using primitives, and that makes sense to me: in general, wherever we could use primitives, we should probably use the abstract type instead. The issue I faced when trying to take the mean of some numbers is the missing Fractional Int instance, and why 5/3 does not promote to a floating point value instead of my having to create a floating point array to achieve the same result. There must be a reason why a Fractional instance for Int and Integer is not defined; that could help me understand the rationale better.
#leftroundabout: the question is not about expanded classes per se but about the convenience that such a feature can offer, although that feature alone is not sufficient to handle the type promotion from an Int to a Float, for example, as mentioned in my response to #ony. Let's take the classic example of a mean and try to define it as
-- what I would like to write:
mean :: [Int] -> Double
mean xs = sum xs / length xs -- not valid Haskell code
-- what I have to write instead:
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)
I would have liked it if I did not have to call fromIntegral to get the mean function to work, and that ties in to the missing Fractional Int instance. Although the explanation seems to make sense (it has to), what I don't understand is: if I am clear that I expect a Double, and I state it in my type signature, is that not sufficient to do the appropriate conversion?
[a..b] is shorthand for enumFromTo a b, a method of the Enum typeclass. It begins at a and succs until the first time b is exceeded.
[a,b..c] is shorthand for enumFromThenTo a b c, which is similar to enumFromTo except that instead of succing it adds the difference b-a each time. By default this difference is computed by round-tripping through Int, so fractional differences may or may not be respected. That said, Double works as you'd expect:
Prelude> [0.0, 0.5.. 10]
[0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0,5.5,6.0,6.5,7.0,7.5,8.0,8.5,9.0,9.5,10.0]
[a..] is shorthand for enumFrom a which just succs forever.
[a,b..] is shorthand for enumFromThen a b which just adds (b-a) forever.
As for the behaviour, #J.Abrahamson already replied: that's the definition of enumFromThenTo.
As for design...
Actually, GHC has Float#, which is an unboxed type (it can be allocated anywhere, but its value is strict).
Since Haskell is a lazy language, it assumes that most values are not required initially, until they are actually consumed by a primitive with strict arguments.
Consider length [2..10]. In this case, without optimization, Haskell may even avoid generating the numbers and simply build up the list (without its values). A probably more useful example is takeWhile (<100) [x*(x-1) | x <- [2..]].
But you shouldn't assume this always carries overhead, since you are writing in a language that abstracts all of that away (except for strictness annotations). The Haskell compiler has to take care of it itself: when the compiler can tell that all elements of a list will be referenced (transformed to normal form), and it decides to process the list within one chain of returns, it can allocate it on the stack.
Also, with such an approach you can get more out of your code by using multiple CPU cores. Imagine that, using Strategies, your list is processed on different cores; then they have to share the common data on the heap (not on the stack).
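For example, with the parallel package, spreading a list over cores is a one-liner with a Strategy; a minimal sketch (expensive stands in for real per-element work):

import Control.Parallel.Strategies (parList, rdeepseq, using)

main :: IO ()
main = print (sum work)
  where
    -- evaluate the list elements in parallel across cores, then sum them
    work = map expensive [1 .. 10000 :: Integer] `using` parList rdeepseq
    expensive x = x * (x - 1) -- stand-in for real work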

Type algebra and Knuth's up arrow notation

Reading through this question and this blog post got me thinking more about type algebra and specifically how to abuse it.
Basically,
1) We can think of the Either A B type as addition: A+B
2) We can think of the ordered pair (A,B) as multiplication: A*B
3) We can think of the function A -> B as exponentiation: B^A
There's an obvious pattern going on here: Multiplication is repeated addition, and exponentiation is repeated multiplication. This led Knuth to define the up arrow ↑ as exponentiation, ↑↑ as repeated exponentiation, ↑↑↑ as repeated ↑↑, and so on. Thus, 10↑↑↑↑10 is a HUGE number.
My question is: how can the function ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ be represented in algebraic data types? It seems like ↑ should be a function with an infinite number of arguments, but that doesn't make much sense. Would A↑B simply be [A] -> B and thus A↑↑↑↑B be [[[[A]]]]->B?
Bonus points if you can explain what the Ackermann function would look like, or any of the other hypergrowth functions.
At the most obvious level, you could identify a↑↑b with
((...(a -> a) -> ...) -> a) -- iterated b times
and a↑↑↑b is just
(a↑↑(a↑↑(...(a↑↑(a↑↑a))...))) -- iterated b times
so everything can be expressed in terms of some long function type (hence as some immensely long tuple type ...). But I don't think there's a convenient expression for an arbitrary up-arrow symbol in terms of (the cardinality of) familiar Haskell types (beyond the ones written above with ... or ↑), since I can't think of any common mathematical objects that have larger-than-exponential combinatorial dependencies on the size of the underlying sets (without going to recursive datatypes, which are too big) ... maybe there are some such objects in combinatorial set theory? (Your question seems [to me] more about the sizes of sets than anything specific to types.)
(The Wikipedia page you linked already connects these objects to the Ackermann function.)
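If you're willing to fix the iteration count as a type-level natural, the a↑↑b construction above can at least be written down mechanically as a closed type family over GHC.TypeLits naturals; a sketch (Tetrate is a made-up name):

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}
import Data.Kind (Type)
import GHC.TypeLits

-- Tetrate a n plays the role of a↑↑n:
-- a↑↑1 = a, and a↑↑n = a^(a↑↑(n-1)), which as a type is (a↑↑(n-1)) -> a
type family Tetrate (a :: Type) (n :: Nat) :: Type where
  Tetrate a 1 = a
  Tetrate a n = Tetrate a (n - 1) -> a

-- e.g. Tetrate Bool 3 reduces to (Bool -> Bool) -> Bool, which has
-- 2^(2^2) = 16 inhabitants (up to extensionality), matching 2↑↑3.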

Resources