I made a Hedgehog generator that generates arbitrary 256-bit values in the following way:
genWord256 :: Gen Word256
genWord256 = do
  bytes <- Gen.integral (Range.linear 0 31)
  let lo = 2 ^ (8 * bytes)
      hi = 2 ^ (8 * (bytes + 1))
  pred <$> Gen.integral (Range.constant lo hi)
Making the size parameter determine the number of bytes in the number makes sense, I think, for my application. However, having evaluated how this generator shrinks (applying ceiling . logBase 2 to each shrink to count its bits), my question is this:
Why does Hedgehog decide to emphasise the vicinity of its initial outcome? Have I somehow misunderstood what is meant by "a range which is unaffected by the size parameter" (Range.constant)? I would have thought that any shrink here must have a smaller number of bits.
λ> Gen.print genWord256
=== Outcome ===
68126922926972638
=== Shrinks ===
112 -- 7 bits
4035711763 -- 32 bits
106639875637011 -- 47 bits
281474976710655 -- 48 bits
34204198951841647 -- 55 bits
51165560939407143 -- 56 bits
59646241933189891 -- ...
67994412286444783 -- ...
... 50 shrinks omitted ...
68126922926972637 -- 56 bits
The output that you showed makes perfect sense to me.
First and foremost, just to make sure that we are on the same page: what Gen.print shows is not a sequence of consecutive shrinks assuming repeated failure; it is just the first level of the shrink tree.
So, in your example, it generated 68126922926972638, which is a 7-byte value. Assuming that this fails, it will try 112, a 1-byte value. This is as small as possible, given that it corresponds to the value 0 at your first generator. If this test fails too, it will move to the second level of the shrink tree, which we don’t see, but it would be reasonable to assume that it will focus on 1-byte values and try to shrink them towards lo.
However, if the test at 112 succeeds, it will move to the second value on your list, 4035711763, which is a 4-byte value. And note that 4 bytes correspond to 3 in your first generator, which is right in the middle between 0 and 6, because it is doing binary search on the size of your inputs. If the test succeeds again, it will continue moving closer to the original outcome, that is “emphasise on the vicinity of its initial outcome”, and that is what we see in your output.
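That binary search on the byte count can be sketched in Python. This is a simplified model in the spirit of Hedgehog's Shrink.towards, not its actual implementation; the function name and exact candidate list are my own:

```python
def towards(dest, x):
    # Candidate shrinks for x: start at dest, then halve the
    # remaining gap each step (binary search toward x).
    if dest == x:
        return []
    out, diff = [], x - dest
    while diff != 0:
        out.append(x - diff)
        diff //= 2
    return out

# Shrinking the byte count 6 (the 7-byte outcome above) toward 0:
print(towards(0, 6))  # [0, 3, 5]
```

The first candidate, 0 bytes, is the 1-byte value 112; the second, 3, is the 4-byte value 4035711763; later candidates creep back toward the original 7-byte outcome.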
I am trying to understand exactly how programming in Lua can change the state of I/Os with a Modbus I/O module. I have read the Modbus protocol and understand the registers, coils, and how a read/write string should look. But right now I am trying to grasp how I can manipulate the read/write bit(s) and how functions can perform these actions. I know I may be very vague right now, but hopefully the following functions, along with some questions throughout them, will help me better convey where I am having the disconnect. It has been a very long time since I first learned about bit/byte manipulation.
local funcCodes = { --[[I understand this part]]
  readCoil = 1,
  readInput = 2,
  readHoldingReg = 3,
  readInputReg = 4,
  writeCoil = 5,
  presetSingleReg = 6,
  writeMultipleCoils = 15,
  presetMultipleReg = 16
}
local function toTwoByte(value)
  return string.char(value / 255, value % 255) --[[why do both of these to the same value??]]
end
local function readInputs(s)
  local s = mperia.net.connect(host, port)
  s:set_timeout(0.1)
  local req = string.char(0,0,0,0,0,6,unitId,2,0,0,0,6)
  local req = toTwoByte(0) .. toTwoByte(0) .. toTwoByte(6) ..
    string.char(unitId, funcCodes.readInput) .. toTwoByte(0) .. toTwoByte(8)
  s:write(req)
  local res = s:read(10)
  s:close()
  if res:byte(10) then
    local out = {}
    for i = 1,8 do
      local statusBit = bit32.rshift(res:byte(10), i - 1) --[[What is bit32.rshift actually doing to the string? And the same is true for the next line with bit32.band.]]
      out[#out + 1] = bit32.band(statusBit, 1)
    end
    for i = 1,5 do
      tDT.value["return_low"] = tostring(out[1])
      tDT.value["return_high"] = tostring(out[2])
      tDT.value["sensor1_on"] = tostring(out[3])
      tDT.value["sensor2_on"] = tostring(out[4])
      tDT.value["sensor3_on"] = tostring(out[5])
      tDT.value["sensor4_on"] = tostring(out[6])
      tDT.value["sensor5_on"] = tostring(out[7])
      tDT.value[""] = tostring(out[8])
    end
  end
  return tDT
end
If I need to be a more specific with my questions, I'll certainly try. But right now I'm having a hard time connecting the dots with what is actually going on to the bit/byte manipulation here. I've read both books on the bit32 library and sources online, but still don't know what these are really doing. I hope that with these examples, I can get some clarification.
Cheers!
--[[why do both of these to the same value??]]
There are two different values here: value / 255 and value % 255. The "/" operator represents division, and the "%" operator represents (basically) taking the remainder of a division.
Before proceeding, I'm going to point out that 255 here should almost certainly be 256, so let's make that correction now. The reason for it should become clear soon.
Let's look at an example.
value = 1000
print(value / 256) -- 3.90625
print(value % 256) -- 232
Whoops! There was another problem. string.char wants integers (in the range 0 to 255 -- which is 256 distinct values, counting 0), and we may be giving it a non-integer. Let's fix that problem:
value = 1000
print(math.floor(value / 256)) -- 3
-- in Lua 5.3, you could also use value // 256 to mean the same thing
print(value % 256) -- 232
What have we done here? Let's look at 1000 in binary. Since we are working with two-byte values, and each byte is 8 bits, I'll include 16 bits: 0b0000001111101000. (0b is a prefix that is sometimes used to indicate that the following number should be interpreted as binary.) If we split this into the first 8 bits and the second 8 bits, we get 0b00000011 and 0b11101000. What are these numbers?
print(tonumber("00000011",2)) -- 3
print(tonumber("11101000",2)) -- 232
So what we have done is split a 2-byte number into two 1-byte numbers. So why does this work? Let's go back to base 10 for a moment. Suppose we have a four-digit number, say 1234, and we want to split it into two two-digit numbers. Well, the quotient 1234 / 100 is 12, and the remainder of that divison is 34. In Lua, that's:
print(math.floor(1234 / 100)) -- 12
print(1234 % 100) -- 34
Hopefully, you can understand what's happening in base 10 pretty well. (More math here is outside the scope of this answer.) Well, what about 256? 256 is 2 to the power of 8, and there are 8 bits in a byte. In binary, 256 is 0b100000000 -- a 1 followed by a bunch of zeros. That gives it a similar ability to split binary numbers apart, just as 100 did in base 10.
Another thing to note here is the concept of endianness. Which should come first, the 3 or the 232? It turns out that different computers (and different protocols) have different answers for this question. I don't know what is correct in your case, you'll have to refer to your documentation. The way you are currently set up is called "big endian" because the big part of the number comes first.
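Putting the pieces together, here is the corrected split in Python (the function name is mine; it uses 256 rather than 255, and keeps the big-endian order discussed above):

```python
def to_two_bytes(value):
    # Split a 16-bit value into its high and low bytes,
    # big-endian: the "big" part of the number comes first.
    hi = value // 256  # quotient: the high byte
    lo = value % 256   # remainder: the low byte
    return hi, lo

print(to_two_bytes(1000))  # (3, 232)
```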
--[[What is bit32.rshift actually doing to the string? and the same is true for the next line with bit32.band.]]
Let's look at this whole loop:
local out = {}
for i = 1,8 do
  local statusBit = bit32.rshift(res:byte(10), i - 1)
  out[#out + 1] = bit32.band(statusBit, 1)
end
And let's pick a concrete number for the sake of example, say, 0b01100111. First let's look at the band (which is short for "bitwise and"). What does this mean? It means line up the two numbers and see where two 1's occur in the same place.
01100111
band 00000001
-------------
00000001
Notice first that I've put a bunch of 0's in front of the one. Leading zeros don't change the value of the number, but I want all 8 bits of both numbers so that I can check each digit (bit) of the first number against each digit of the second number. In each place where both numbers had a 1 (the top number had a 1 "and" the bottom number had a 1), I put a 1 in the result; otherwise I put 0. That's bitwise and.
When we bitwise and with 0b00000001 as we did here, you should be able to see that we will only get a 1 (0b00000001) or a 0 (0b00000000) as the result. Which we get depends on the last bit of the other number. We have basically separated out the last bit of that number from the rest (which is often called "masking") and stored it in our out array.
Now what about the rshift ("right shift")? To shift right by one, we discard the rightmost digit and move everything else over one space to the right. (At the left, we usually add a 0 so we still have 8 bits ... as usual, adding a zero in front of a number doesn't change it.)
right shift 01100111
\\\\\\\\
0110011 ... 1 <-- discarded
(Forgive my horrible ASCII art.) So shifting right by 1 changes our 0b01100111 to 0b00110011. (You can also think of this as chopping off the last bit.)
Now what does it mean to shift right by a different number? Well, to shift by zero does not change the number. To shift by more than one, we just repeat this operation however many times we are shifting by. (To shift by two, shift by one twice, etc.) (If you prefer to think in terms of chopping, a right shift by x chops off the last x bits.)
So on the first iteration through the loop, the number will not be shifted, and we will store the rightmost bit.
On the second iteration through the loop, the number will be shifted by 1, and the new rightmost bit will be what was previously the second from the right, so the bitwise and will mask out that bit and we will store it.
On the next iteration, we will shift by 2, so the rightmost bit will be the one that was originally third from the right, so the bitwise and will mask out that bit and store it.
On each iteration, we store the next bit.
Since we are working with a byte, there are only 8 bits, so after 8 iterations through the loop, we will have stored the value of each bit into our table. This is what the table should look like in our example:
out = {1,1,1,0,0,1,1,0}
Notice that the bits are reversed from how we wrote them 0b01100111 because we started looking from the right side of the binary number, but things are added to the table starting on the left.
In your case, it looks like each bit has a distinct meaning. For example, a 1 in the third bit could mean that sensor1 was on and a 0 in the third bit could mean that sensor1 was off. Eight different pieces of information like this were packed together to make it more efficient to transmit them over some channel. The loop separates them again into a form that is easy for you to use.
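The whole loop can be modelled in Python, where >> plays the role of bit32.rshift and & the role of bit32.band (the function name is mine):

```python
def unpack_bits(byte):
    # Pull the 8 bits out of a byte, least significant bit first,
    # mirroring the Lua loop above.
    out = []
    for i in range(8):
        status_bit = byte >> i       # move bit i down to position 0
        out.append(status_bit & 1)   # mask off everything above it
    return out

print(unpack_bits(0b01100111))  # [1, 1, 1, 0, 0, 1, 1, 0]
```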
As I understand it, Haskell only garbage collects when something goes out of scope, so a top level binding will only be evaluated once and never go out of scope. So if I run this code in GHCI, the first 50 elements will be evaluated and saved.
let xs = map f [0..]
take 50 xs
My question is what happens when I execute the following snippet: xs !! 99. What does the garbage collector save? Does it
Keep results for indexes 0 - 49, thunk for indexes 50 - 98, result for index 99, thunk for indexes 100+
Keep results for indexes 0 - 49, thunk for indexes 50+
Keep results for indexes 0 - 99, thunk for indexes 100+
Haskell lists are linked lists made up of (:) ("cons") cells and terminated by a [] ("nil") value. I'll draw such cells like this
[x] -> (tail - remainder of list)
|
v
(head - a value)
So when thinking about what is evaluated, there are two pieces to consider. First is the spine, that is the structure of cons cells, and second is the values the list contains. Instead of 50 and 99, let's use 2 and 4, respectively.
ghci> take 2 xs
[0,1]
Printing this list forces the evaluation of the first two cons cells as well as the values within them. So your list looks like this:
[x] -> [x] -> (thunk)
| |
v v
0 1
Now, when we
ghci> xs !! 4
4
we haven't demanded the 2nd or 3rd values, but we need to evaluate those cons cells to get to the 4th element. So we have forced the spine all the way up to the 4th element, but we have only evaluated the 4th value, so the list now looks like this:
[x] -> [x] -> [x] -> [x] -> [x] -> (thunk)
| | | | |
v v v v v
0 1 (thunk) (thunk) 4
Nothing in this picture will be garbage-collected. However, sometimes those thunks can take up a lot of space or reference something large, and evaluating them to a plain value will allow those resources to be freed. See this answer for a small discussion of those subtleties.
Let's ask the profiler.
We will compile the following example program that should do approximately what your GHCI session did. It's important that we print the results, like GHCI did, as this forces the computations.
f x = (-x)
xs = map f [0..]

main = do
  print (take 50 xs)
  print (xs !! 99)
I saved mine as example.hs. We'll compile it with options to enable profiling
ghc -prof -fprof-auto -rtsopts example.hs
Time profile
We can find out how many times f was applied with a time profile.
example +RTS -p
This produces an output file named example.prof, the following is the interesting portion:
COST CENTRE MODULE no. entries
...
f Main 78 51
We can see that f was evaluated 51 times: 50 times for print (take 50 xs) and once for print (xs !! 99). Therefore we can rule out your third possibility; since f was only evaluated 51 times, there can't be results for all of the indices 0-99:
Keep results for indexes 0 - 99, thunk for indexes 100+
Heap profile of results
Profiling the memory on the heap is a bit trickier. The heap profiler takes samples, by default once every .1 seconds. Our program will run so fast that the heap profiler won't take any samples while it's running. We'll add a spin to our program so that the heap profiler will get a chance to take a sample. The following will spin for a number of seconds.
import Data.Time.Clock

spin :: Real a => a -> IO ()
spin seconds = do
  startTime <- getCurrentTime
  let endTime = addUTCTime (fromRational (toRational seconds)) startTime
  let go = do
        now <- getCurrentTime
        if now < endTime then go else return ()
  go
We don't want the garbage collector to collect the data while the program is running, so we'll add another usage of xs after the spin.
main = do
  print (take 50 xs)
  print (xs !! 99)
  spin 1
  print (xs !! 0)
We'll run this with the default heap profiling option, which groups memory usage by cost center.
example +RTS -h
This produces the file example.hp. We'll pull a sample out of the middle of the file where the numbers are stable (while it was in a spin).
BEGIN_SAMPLE 0.57
(42)PINNED 32720
(48)Data.Time.Clock.POSIX.CAF 48
(85)spin.endTime/spin/mai... 56
(84)spin.go/spin/main/Mai... 64
(81)xs/Main.CAF 4848
(82)f/xs/Main.CAF 816
(80)main/Main.CAF 160
(64)GHC.IO.Encoding.CAF 72
(68)GHC.IO.Encoding.CodeP... 136
(57)GHC.IO.Handle.FD.CAF 712
(47)Main.CAF 96
END_SAMPLE 0.57
We can see that f has produced 816 bytes of memory. For "small" Integers, an Integer consumes 2 words of memory. On my system, a word is 8 bytes, so a "small" Integer takes 16 bytes. Therefore 816/16 = 51 of the Integers produced by f are probably still in memory.
We can check that all of this memory is actually being used for "small" Integers by asking for a profile by closure description with -hd. We can't group memory usage by both closure description and cost-centre, but we can restrict the profiling to a single cost-centre with -hc; in this case, we are interested in the f cost-centre:
example +RTS -hd -hcf
This says that all 816 bytes due to f are being used by S#, the constructor for "small" Integers:
BEGIN_SAMPLE 0.57
S# 816
END_SAMPLE 0.57
We can now also rule out the following, since 51 Integer results are being kept while it expects only 50 Integers to be kept:
Keep results for indexes 0 - 49, thunk for indexes 50+
Heap profile of structure and thunks
This leaves us with the remaining option:
Keep results for indexes 0 - 49, thunks for indexes 50 - 98, result for index 99, thunk for indexes 100+
Let's guess how much memory this situation would be consuming.
In general, Haskell data types require 1 word of memory for the constructor, and 1 word for each field. The [] type's : constructor has two fields, so it should take 3 words of memory, or 24 bytes. 100 :s would then take 2400 bytes of memory. We will see that this is exactly correct when we ask about the closure description for xs.
It's difficult to reason about the size of thunks, but we'll give it a try. There would be 49 thunks for the values at indices [50, 98]. Each of these thunks is probably holding the Integer from the generator [0..]. It also needs to hold the structure of a thunk, which unfortunately changes when profiling. There will also be a thunk for the remainder of the list, which needs the Integer from which the rest of the list is generated, plus the structure of a thunk.
Asking for a memory breakdown by closure description for the xs cost-centre
example +RTS -hd -hcxs
Gives us the following sample
BEGIN_SAMPLE 0.60
<base:GHC.Enum.sat_s34b> 32
<base:GHC.Base.sat_s1be> 32
<main:Main.sat_s1w0> 16
S# 800
<base:GHC.Base.sat_s1bd> 1568
: 2400
END_SAMPLE 0.60
We were exactly correct about there being 100 :s requiring 2400 bytes of memory. There are 49+1 = 50 "small" Integers S# occupying 800 bytes: one for each of the thunks for the 49 uncalculated values, and one for the thunk for the remainder of the list. There are 1568 bytes which are probably the 49 thunks for the uncalculated values, each of which would then be 32 bytes, or 4 words. That leaves another 80 bytes we can't exactly explain, left over for the thunk for the remainder of the list.
Both memory and time profiles are consistent with our belief that the program will
Keep results for indexes 0 - 49, thunks for indexes 50 - 98, result for index 99, thunk for indexes 100+
Found this code on http://www.docjar.com/html/api/java/util/HashMap.java.html after searching for a HashMap implementation.
static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
Can someone shed some light on this? The comment tells us why this code is here but I would like to understand how this improves a bad hash value and how it guarantees that the positions have bounded number of collisions. What do these magic numbers mean?
In order for it to make any sense it has to be combined with an understanding of how HashMap allocates things in to buckets. This is the trivial function by which a bucket index is chosen:
static int indexFor(int h, int length) {
    return h & (length-1);
}
So you can see, that with a default table size of 16, only the 4 least significant bits of the hash actually matter for allocating buckets! (16 - 1 = 15, which masks the hash by 1111b)
This could clearly be bad news if your hashCode function returned:
10101100110101010101111010111111
01111100010111011001111010111111
11000000010100000001111010111111
//etc etc etc
Such a hash function would not likely be "bad" in any way that is visible to its author. But if you combine it with the way the map allocates buckets, boom, MapFail(tm).
If you keep in mind that h is a 32 bit number, those are not magic numbers at all. It is systematically xoring the most significant bits of the number rightward into the least significant bits. The purpose is so that "differences" in the number that occur anywhere "across" it when viewed in binary become visible down in the least significant bits.
Collisions become bounded because the number of different numbers that have the same relevant LSBs is now significantly bounded because any differences that occur anywhere in the binary representation are compressed into the bits that matter for bucket-ing.
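A quick Python transcription makes the effect visible on the three hashes above (the helper names are mine; the & 0xFFFFFFFF masks emulate Java's 32-bit int, and Python's >> matches Java's >>> once the value is masked non-negative):

```python
def spread(h):
    # Transcription of the HashMap supplemental hash shown above.
    h &= 0xFFFFFFFF
    h ^= (h >> 20) ^ (h >> 12)
    return (h ^ (h >> 7) ^ (h >> 4)) & 0xFFFFFFFF

def index_for(h, length):
    # Bucket selection: only the low bits survive the mask.
    return h & (length - 1)

hashes = [0b10101100110101010101111010111111,
          0b01111100010111011001111010111111,
          0b11000000010100000001111010111111]

# Without spreading, all three collide in bucket 15:
print([index_for(h, 16) for h in hashes])  # [15, 15, 15]
# After spreading, the high-bit differences reach the low bits,
# and the three hashes land in different buckets:
print([index_for(spread(h), 16) for h in hashes])
```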
Based on the code I found in linux/include/linux/jiffies.h:
#define time_after(a,b) \
    (typecheck(unsigned long, a) && \
     typecheck(unsigned long, b) && \
     ((long)(b) - (long)(a) < 0))
It seems to me that there is no kind of wrap around monitoring involved. So, if the jiffies(a) were to wrap around and get back fairly close to the timeout(b), then the result would be "false" when it is actually "true".
Let's use some fairly small numbers for this example. Say, time_after(110,150) where 110 is jiffies and 150 is the timeout. The result will clearly be false - whether jiffies wrapped around or not: 150-110 is always > 0.
So, I just wanted to confirm that I didn't miss something and that is indeed how it is.
Just to be clear, in your example, because 110 is not after 150, time_after(110,150) should (and does) return false. From the comment:
time_after(a,b) returns true if the time a is after time b.
Also, note that the code does indeed handle wrapping around to 0. To make the following a bit easier to understand, I'll use unsigned and signed one-byte values, i.e. 8-bit 2's complement. But the argument is general.
Suppose b is 253, and five ticks later jiffies has wrapped around to 2. We would therefore expect time_after(2,253) to return true. And it does (using int8_t to denote a signed 8-bit value):
(int8_t) 253 - (int8_t) 2 == -3 - 2 == -5 < 0
You can try other values, too. This one is trickier, for time_after(128, 127), which should be true as well:
(int8_t) 127 - (int8_t) 128 == 127 - (-128) == 255 == -1 (for 8-bit 2's complement) < 0
In reality the type of the expression (int8_t) 127 - (int8_t) 128 would be an int, and the value really would be 255. But using longs the expression type would be long and the equivalent example would be, for time_after( 2147483648, 2147483647):
(long) 2147483647 - (long) 2147483648 == 2147483647 - (-2147483648) == 4294967295 == -1 < 0
Eventually, after wrapping around, the "after" jiffies value a will begin to catch up with the before value b, and time_after(a,b) will report false. For N-bit 2's complement, this happens when a is 2^(N-1) ticks later than b. For N=8, that happens when a is 128 ticks after b. For N=32, that's 2147483648 ticks, or (with 1 ms ticks) about 25 days.
For the mathematically inclined, I believe that in general time_after(a,b) returns true iff the least residue (modulo 2^N) of (a-b) is > 0 and < 2^(N-1).
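An 8-bit model of the macro in Python (my own sketch: the & 0xFF mask and sign adjustment emulate 8-bit two's-complement wraparound) confirms these cases:

```python
def to_int8(x):
    # Reinterpret the low 8 bits of x as a signed two's-complement byte.
    x &= 0xFF
    return x - 256 if x >= 128 else x

def time_after_8(a, b):
    # 8-bit model of time_after(a, b): the difference b - a, wrapped
    # to 8 bits, is negative exactly when a is "after" b.
    return to_int8(b - a) < 0

print(time_after_8(2, 253))    # True: 2 is 5 ticks after a wrapped 253
print(time_after_8(110, 150))  # False: 110 is not after 150
print(time_after_8(128, 127))  # True: edge of the 2^(N-1) window
```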
From nearby in the same file:
/*
* Have the 32 bit jiffies value wrap 5 minutes after boot
* so jiffies wrap bugs show up earlier.
*/
#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ))
One would hope this means it's pretty well tested.
Say I have a Float. I want the first 32 bits of the fractional part of this Float? Specifically, I am looking at getting this part of the sha256 pseudo-code working (from the wikipedia)
# Note 1: All variables are unsigned 32 bits and wrap modulo 232 when calculating
# Note 2: All constants in this pseudo code are in big endian
# Initialize variables
# (first 32 bits of the fractional parts of the square roots of the first 8 primes 2..19):
h[0..7] := 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a, 0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19
I naively tried doing floor (((sqrt 2) - 1) * 2^32), and then converting the returned integer to a Word32. This does not at all appear to be the right answer. I figured that by multiplying by 2^32, I was effectively left shifting it 32 places (after the floor). Obviously, not the case. Anyway, long and the short of it is, how do I generate h[0..7]?
The best way to get h[0..7] is to copy the hex constants from the Wikipedia page. That way you know you will have the correct ones.
But if you really want to compute them:
import Text.Printf (printf)

scaledFrac :: Integer -> Integer
scaledFrac x =
  let s = sqrt (fromIntegral x) :: Double
  in  floor ((s - fromInteger (floor s)) * 2^32)

main :: IO ()
main = mapM_ (printf "%08x\n" . scaledFrac) [2, 3, 5, 7, 11, 13, 17, 19]