I am going through LDD by Rubini to learn driver programming. Currently, I am working through the 3rd chapter on writing the character driver "scull". However, in the example code provided by the authors, I am not able to understand the following lines in the scull_read() and scull_write() methods:
item = (long)*f_pos / itemsize;
rest = (long)*f_pos % itemsize;
s_pos = rest / quantum;
q_pos = rest % quantum;
I have spent quite some time on it in vain (and am still working on it). Can someone please help me understand the functionality of the above code snippet?
Regards,
Roy
Suppose you have set the quantum size to 4000 bytes in the scull driver and the qset array size to 10. In that case, the value of itemsize would be 40000. f_pos is the position from which the read/write should start; it comes in as a parameter to the read/write function. Suppose a read request has come in and f_pos is 50000.
Now,
item = (long)*f_pos / itemsize; so item would be 50000/40000 = 1
rest = (long)*f_pos % itemsize; so rest would be 50000%40000 = 10000
s_pos = rest / quantum; so s_pos would be 10000/4000 = 2
q_pos = rest % quantum; so q_pos would be 10000%4000 = 2000
If you have read the description of the scull driver in chapter 3 carefully, you know that each scull device is a linked list of scull_qset nodes, and in our case each scull_qset points to an array of pointers, each of which points to a quantum area of 4000 bytes (since we set the quantum size to 4000 bytes and the array size to 10). So each scull_qset holds an array of 10 pointers, and each pointer points to 4000 bytes; one scull_qset therefore has a capacity of 40000 bytes.
In our read request, f_pos is 50000, so obviously this position is not in the first scull_qset, which is exactly what the calculation of item shows. As item is 1, it points to the second scull_qset (item would be 0 for the first scull_qset; for more information see the scull_follow function definition).
The value of rest helps to find out at which position within the second scull_qset the read should start. As each quantum area is 4000 bytes, s_pos tells which of the 10 pointers of the second scull_qset should be used, and q_pos tells at which offset inside the quantum area pointed to by that pointer the read should start.
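As a quick sanity check, the same decomposition can be written out in a few lines of Python (quantum = 4000, qset = 10 and f_pos = 50000 are just the example values from above):
quantum, qset = 4000, 10
itemsize = quantum * qset    # 40000 bytes held by one scull_qset node

f_pos = 50000
item  = f_pos // itemsize    # 1     -> second scull_qset node in the list
rest  = f_pos %  itemsize    # 10000
s_pos = rest // quantum      # 2     -> third pointer in that node's array
q_pos = rest %  quantum      # 2000  -> byte offset inside that quantum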
I have an ADC device which is connected via USB to my Google Coral Dev Board. The device sends 2080 bytes of 16-bit sensor data over a serial port to my Dev Board every millisecond. There I read that data with pyserial and convert it to integers with the code below. The line commented with "Conversion" is the line which converts the byte data to integers. I don't exactly understand this line, but it was provided by the company who gave me the device.
The problem is that my Dev Board is working too slowly on the conversion from bytes to int. It takes around 2.3 seconds to convert 1 second of data, and because the system should work in real time (the data is later processed by an AI model), this obviously cannot work like this.
I measured the processing time of portions of that code, and the bottleneck is the bitwise operations, which take around 2 ms for 1 ms of data.
Is there a way to speed up the process? Actually, I am not even sure if there is a problem with my Dev Board or if it is normal that it is this "slow".
def fill_buffer(self):
    total_data_per_ms_int = np.zeros(shape=[1040])
    for i in range(0, self.buffer_time):  # One iteration means 1 ms of data, 2080 bytes per iteration -> ae: 1000 bytes, vib_xyz: 10 bytes each, temp: 10 bytes
        total_data_per_ms_bytes = self.ser.read(2080)
        z = 0
        for j in range(0, 2080, 2):
            value_int = total_data_per_ms_bytes[j+1] << 8 | total_data_per_ms_bytes[j]  # Conversion from byte to int
            total_data_per_ms_int[z] = value_int
            z = z + 1
        # This part divides the data, which has a specific layout, into 5 different arrays
        # (not important for understanding the problem)
        self.sensor1[0+i*1000:1000+i*1000] = total_data_per_ms_int[0:1000]
        self.sensor2[0+i*10:10+i*10] = total_data_per_ms_int[1000:1040:4]
        self.sensor3[0+i*10:10+i*10] = total_data_per_ms_int[1001:1040:4]
        self.sensor4[0+i*10:10+i*10] = total_data_per_ms_int[1002:1040:4]
        self.sensor5[0+i*10:10+i*10] = total_data_per_ms_int[1003:1040:4]
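In case it helps with experimenting, here is a minimal sketch of a vectorized version of that conversion, assuming (as the loop above implies) that the samples are little-endian unsigned 16-bit values; the placeholder buffer just stands in for one self.ser.read(2080) result:
import numpy as np

raw = bytes(2080)  # placeholder: one millisecond of data as returned by self.ser.read(2080)
# '<u2' means little-endian unsigned 16-bit; this reproduces data[j+1] << 8 | data[j]
# for all 1040 byte pairs in a single vectorized call instead of a Python-level loop.
samples = np.frombuffer(raw, dtype='<u2')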
I am trying to calculate the space required by a dataset using the formula below, but I am going wrong somewhere when I cross-check it against an existing dataset in the system. Please help me.
1st Dataset:
Record format . . . : VB
Record length . . . : 445
Block size . . . . : 32760
Number of records....: 51560
Using the formula below to calculate:
optimal block length (OBL) = 32760/record length = 32760/449 = 73
As there are two blocks on the track, hence (TOBL) = 2 * OBL = 73*2 = 146
Find number of physical records (PR) = Number of records/TOBL = 51560/146 = 354
Number of tracks = PR/2 = 354/2 = 177
But I can see the following in the dataset information:
Current Allocation
Allocated tracks . : 100
Allocated extents . : 1
Current Utilization
Used tracks . . . . : 100
Used extents . . . : 1
2nd Dataset :
Record format . . . : VB
Record length . . . : 445
Block size . . . . : 27998
Number of Records....: 127,252
Using the formula below to calculate:
optimal block length (OBL) = 27998/record length = 27998/449 = 63
As there are two blocks on the track, hence (TOBL) = 2 * OBL = 63*2 = 126
Find number of physical records (PR) = Number of records/TOBL = 127252/126 = 1010
Number of tracks = PR/2 = 1010/2 = 505
Number of Cylinders = 505/15 = 34
But I can see the following in the dataset information:
Current Allocation
Allocated cylinders : 69
Allocated extents . : 1
Current Utilization
Used cylinders . . : 69
Used extents . . . : 1
A few observations on your approach.
First, since you're dealing with variable-length records, it would be helpful to know the "average" record length, as that would help to formulate a more accurate prediction of storage. Your approach assumes a worst-case scenario of all records being at maximum length, which is fine for planning purposes, but in reality you'll likely see that the actual allocation is lower if the average record length is lower than the maximum.
The approach you are taking is reasonable but consider that you can inform z/OS of the space requirements in blocks, records, DASD geometry or let DFSMS perform the calculation on your behalf. Refer to this article to get some additional information on options.
Back to your calculations:
Your Optimum Block Length (OBL) is really a records-per-block (RPB) number. Block size divided by the maximum record length yields the number of records at full length that can be stored in a block. If your average record length is less, then you can store more records per block.
The assumption of two blocks per track may be true for your situation, but it depends on the actual device type that will be used for the underlying allocation. Here is a link to the geometries of the supported DASD devices.
Your assumption of two blocks per track is not correct for 3390s: you would need about 64k to fit two 32,760-byte blocks on a track, but a 3390 track maxes out at about 56k, so you would only get one block per track on that device.
Also, it looks like you did factor in the RDW by adding 4 bytes, but someone looking at the question might be confused if they are not familiar with V records on z/OS. In the case of your calculation that would be 62 records per block at 27998 (which is the "optimal block length", so two blocks can fit comfortably on a track).
I'll use the following values:
MaximumRecordLength = RecordLength + 4 for RDW
TotalRecords = Total Records at Maximum Length (worst case)
BlockSize = modeled blocksize
RecordsPerBlock = number of records that can fit in a block (worst case)
BlocksNeeded = number of blocks needed to contain estimated records (worst case)
BlocksPerTrack = from IBM device geometry information
TracksNeeded = TotalRecords / RecordsPerBlock / BlocksPerTrack
Cylinders = TracksNeeded / TracksPerCylinder (15 tracks per cylinder for most devices)
Example 1:
Total Records = 51,560
BlockSize = 32,760
BlocksPerTrack = 1 (from device table)
RecordsPerBlock: 32,760 / 449 = 72.96 (72)
Total Blocks = 51,560 / 72 = 716.11 (717)
Total Tracks = 717 * 1 = 717
Cylinders = 717 / 15 = 47.8 (48)
Example 2:
Total Records = 127,252
BlockSize = 27,998
BlocksPerTrack = 2 (from device table)
RecordsPerBlock: 27,998 / 449 = 62.35 (62)
Total Blocks = 127,252 / 62 = 2052.45 (2,053)
Total Tracks = 2,053 / 2 = 1,026.5 (1,027)
Cylinders = 1027 / 15 = 68.5 (69)
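For reference, a minimal sketch of the same worst-case arithmetic in Python, using the device geometry values assumed above (blocks per track from the device table, 15 tracks per cylinder):
import math

def estimate_space(total_records, block_size, blocks_per_track,
                   record_length=445, rdw=4, tracks_per_cylinder=15):
    # Worst-case estimate: every record at maximum length plus its RDW.
    max_record_length = record_length + rdw
    records_per_block = block_size // max_record_length
    blocks_needed = math.ceil(total_records / records_per_block)
    tracks_needed = math.ceil(blocks_needed / blocks_per_track)
    cylinders = math.ceil(tracks_needed / tracks_per_cylinder)
    return tracks_needed, cylinders

print(estimate_space(51560, 32760, 1))    # Example 1: (717, 48)
print(estimate_space(127252, 27998, 2))   # Example 2: (1027, 69)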
Now, as to the actual allocation: it depends on how you allocated the space and on the size of the records. Assuming it was in JCL, you could use the RLSE subparameter of SPACE= to release space when the dataset is created and closed. This should release unused resources.
Given that the records are Variable the estimates are worst case and you would need to know more about the average record lengths to understand the actual allocation in terms of actual space used.
Final thought, all of the work you're doing can be overridden by your storage administrator through ACS routines. I believe that most people today would specify a BLKSIZE=0 and let DFSMS do all of the hard work because that component has more information about where a file will go, what the underlying devices are and the most efficient way of doing the allocation. The days of disk geometry and allocation are more of a campfire story unless your environment has not been administered to do these things for you.
Instead of trying to calculate tracks or cylinders, go for MBs, or KBs. z/OS (DFSMS) will calculate for you, how many tracks or cylinders are needed.
In JCL it is not straight forward but also not too complicated, once you got it.
There is a DD statement parameter called AVGREC=, which is the trigger. Let me do an example for your first case above:
//anydd DD DISP=(NEW,CATLG),
// DSN=your.new.data.set.name,
// RECFM=VB,LRECL=445,
// SPACE=(445,(51560,1000)),AVGREC=U
//*        |     |    |            |
//*        V     V    V            V
//*       (1)   (2)  (3)          (4)
Parameter AVGREC=U (4) tells the system three things:
Firstly, the first subparameter in SPACE= (1) shall be interpreted as an average record length. (Note that this value is completely independent of the value specified in LRECL=.)
Secondly, it tells the system that the second (2) and third (3) SPACE= subparameters are the number of records of average length (1) that the data set shall be able to store (primary and secondary allocation, respectively).
Thirdly, it tells the system that the numbers (2) and (3) are in units of single records (AVGREC=U). Alternatives are thousands (AVGREC=K) and millions (AVGREC=M).
So, this DD statement will allocate enough space to hold the estimated number of records. You don't have to care for track capacity, block capacity, device geometry, etc.
Given the number of records you expect and the (average) record length, you can easily calculate the number of kilobytes or megabytes you need. Unfortunately, you cannot directly specify KB, or MB in JCL, but there is a way using AVGREC= as follows.
Your first data set will get 51560 records of (maximum) length 445, i.e. 22'944'200 bytes, or ~22'945 KB, or ~23 MB. The JCL for an allocation in KB looks like this:
//anydd DD DISP=(NEW,CATLG),
// DSN=your.new.data.set.name,
// RECFM=VB,LRECL=445,
// SPACE=(1,(22945,10000)),AVGREC=K
//*       |    |     |            |
//*       V    V     V            V
//*      (1)  (2)   (3)          (4)
You want the system to allocate primary space for 22945 (2) thousands (4) records of length 1 byte (1), which is 22945 KB, and secondary space for 10'000 (3) thousands (4) records of length 1 byte (1), i.e. 10'000 KB.
Now the same allocation specifying MB:
//anydd DD DISP=(NEW,CATLG),
// DSN=your.new.data.set.name,
// RECFM=VB,LRECL=445,
// SPACE=(1,(23,10)),AVGREC=M
//*       |  |  |           |
//*       V  V  V           V
//*      (1)(2)(3)         (4)
You want the system to allocate primary space for 23 (2) millions (4) records of length 1 byte (1), which is 23 MB, and secondary space for 10 (3) millions (4) records of length 1 byte (1), i.e. 10 MB.
I rarely use anything other than the latter.
In ISPF, it is even easier: Data Set Allocation (3.2) allows KB, and MB as space units (amongst all the old ones).
A useful and usually simpler alternative to using SPACE and AVGREC etc. is to simply use a DATACLAS for space, if your site has appropriately sized ones defined. If you look at ISMF Option 4 you can list the available DATACLASes and see what space values etc. they provide. You'd expect to see a number of ranges in size, and some with or without Extended Format and/or Compression. Even if a DATACLAS overallocates a bit, it is likely the overallocated space will be released by the MGMTCLAS assigned to the dataset at close or during space management. You also have the option to code DATACLAS and SPACE together, in which case any coded SPACE (or other) value will override the DATACLAS, which helps with exceptions. It still depends on how your Storage Admins have coded the ACS routines, but generally users are allowed to specify a DATACLAS and it will be honored by the ACS routines.
For a basic dataset size calculation I just use LRECL times the expected maximum record count, divided by 1000 a couple of times, to get a rough MB figure. Obviously variable records/blocks add 4 bytes each for the RDW and/or BDW, but unless the number of records is massive or DASD space is extremely tight, it shouldn't be significant enough to matter.
e.g.
=(51560*445)/1000/1000 shows as ~23MB
Also, don't expect your allocation to be exactly what you requested, because the minimum allocation on z/OS is 1 track, or ~56k. The BLKSIZE also comes into effect by adding inter-block gaps of ~32 bytes per block. With SDB (System Determined Blocksize), invoked by omitting BLKSIZE or coding BLKSIZE=0, the system will always try to provide half-track blocking as close to 28k as possible, so two blocks per track, which is the most space efficient. That does matter: a BLKSIZE of 80 bytes wastes ~80% of a track with inter-block gaps. The BLKSIZE is also the unit of transfer when doing reads/writes to disk, so generally the larger the better, with some exceptions such as a KSDS being randomly accessed by key, which might result in more data transfer than desired in an OLTP transaction.
I am trying to understand exactly how programming in Lua can change the state of I/Os with a Modbus I/O module. I have read the Modbus protocol and understand the registers, coils, and how a read/write string should look. But right now, I am trying to grasp how I can manipulate the read/write bit(s) and how functions can perform these actions. I know I may be very vague right now, but hopefully the following functions, along with some questions throughout them, will help me better convey where I am having the disconnect. It has been a very long time since I first learned about bit/byte manipulation.
local funcCodes = { --[[I understand this part]]
    readCoil = 1,
    readInput = 2,
    readHoldingReg = 3,
    readInputReg = 4,
    writeCoil = 5,
    presetSingleReg = 6,
    writeMultipleCoils = 15,
    presetMultipleReg = 16
}

local function toTwoByte(value)
    return string.char(value / 255, value % 255) --[[why do both of these to the same value??]]
end
local function readInputs(s)
    local s = mperia.net.connect(host, port)
    s:set_timeout(0.1)
    local req = string.char(0,0,0,0,0,6,unitId,2,0,0,0,6)
    local req = toTwoByte(0) .. toTwoByte(0) .. toTwoByte(6) ..
        string.char(unitId, funcCodes.readInput) .. toTwoByte(0) .. toTwoByte(8)
    s:write(req)
    local res = s:read(10)
    s:close()
    if res:byte(10) then
        local out = {}
        for i = 1,8 do
            local statusBit = bit32.rshift(res:byte(10), i - 1) --[[What is bit32.rshift actually doing to the string? and the same is true for the next line with bit32.band.]]
            out[#out + 1] = bit32.band(statusBit, 1)
        end
        for i = 1,5 do
            tDT.value["return_low"] = tostring(out[1])
            tDT.value["return_high"] = tostring(out[2])
            tDT.value["sensor1_on"] = tostring(out[3])
            tDT.value["sensor2_on"] = tostring(out[4])
            tDT.value["sensor3_on"] = tostring(out[5])
            tDT.value["sensor4_on"] = tostring(out[6])
            tDT.value["sensor5_on"] = tostring(out[7])
            tDT.value[""] = tostring(out[8])
        end
    end
    return tDT
end
If I need to be more specific with my questions, I'll certainly try. But right now I'm having a hard time connecting the dots with what is actually going on in the bit/byte manipulation here. I've read about the bit32 library in books and sources online, but still don't know what these calls are really doing. I hope that with these examples, I can get some clarification.
Cheers!
--[[why do both of these to the same value??]]
There are two different values here: value / 255 and value % 255. The "/" operator represents division, and the "%" operator represents (basically) taking the remainder of division.
Before proceeding, I'm going to point out that 255 here should almost certainly be 256, so let's make that correction before proceeding. The reason for this correction should become clear soon.
Let's look at an example.
value = 1000
print(value / 256) -- 3.90625
print(value % 256) -- 232
Whoops! There was another problem. string.char wants integers (in the range of 0 to 255 -- which has 256 distinct values counting 0), and we may be giving it a non-integer. Let's fix that problem:
value = 1000
print(math.floor(value / 256)) -- 3
-- in Lua 5.3, you could also use value // 256 to mean the same thing
print(value % 256) -- 232
What have we done here? Let's look at 1000 in binary. Since we are working with two-byte values, and each byte is 8 bits, I'll include 16 bits: 0b0000001111101000. (0b is a prefix that is sometimes used to indicate that the following number should be interpreted as binary.) If we split this into the first 8 bits and the second 8 bits, we get: 0b00000011 and 0b11101000. What are these numbers?
print(tonumber("00000011",2)) -- 3
print(tonumber("11101000",2)) -- 232
So what we have done is split a 2-byte number into two 1-byte numbers. So why does this work? Let's go back to base 10 for a moment. Suppose we have a four-digit number, say 1234, and we want to split it into two two-digit numbers. Well, the quotient 1234 / 100 is 12, and the remainder of that divison is 34. In Lua, that's:
print(math.floor(1234 / 100)) -- 12
print(1234 % 100) -- 34
Hopefully, you can understand what's happening in base 10 pretty well. (More math here is outside the scope of this answer.) Well, what about 256? 256 is 2 to the power of 8. And there are 8 bits in a byte. In binary, 256 is 0b100000000 -- it's a 1 followed by a bunch of zeros. That means it has a similar ability to split binary numbers apart as 100 did in base 10.
Another thing to note here is the concept of endianness. Which should come first, the 3 or the 232? It turns out that different computers (and different protocols) have different answers for this question. I don't know what is correct in your case, you'll have to refer to your documentation. The way you are currently set up is called "big endian" because the big part of the number comes first.
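To make the two orderings concrete, here is a small illustration (in Python, simply because its struct module spells the byte order out; the value 1000 is the example from above):
import struct

# Big-endian: most significant byte first (3, then 232), which is what toTwoByte above produces.
print(struct.pack('>H', 1000))  # b'\x03\xe8'
# Little-endian: least significant byte first (232, then 3).
print(struct.pack('<H', 1000))  # b'\xe8\x03'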
--[[What is bit32.rshift actually doing to the string? and the same is true for the next line with bit32.band.]]
Let's look at this whole loop:
local out = {}
for i = 1,8 do
    local statusBit = bit32.rshift(res:byte(10), i - 1)
    out[#out + 1] = bit32.band(statusBit, 1)
end
And let's pick a concrete number for the sake of example, say, 0b01100111. First let's look at the band (which is short for "bitwise and"). What does this mean? It means line up the two numbers and see where two 1's occur in the same place.
     01100111
band 00000001
-------------
     00000001
Notice first that I've put a bunch of 0's in front of the one. Preceding zeros don't change the value of the number, but I want all 8 bits for both numbers so that I can check each digit (bit) of the first number against each digit of the second number. In each place where both numbers had a 1 (the top number had a 1 "and" the bottom number had a 1), I put a 1 for the result, otherwise I put 0. That's bitwise and.
When we bitwise and with 0b00000001 as we did here, you should be able to see that we will only get a 1 (0b00000001) or a 0 (0b00000000) as the result. Which we get depends on the last bit of the other number. We have basically separated out the last bit of that number from the rest (which is often called "masking") and stored it in our out array.
Now what about the rshift ("right shift")? To shift right by one, we discard the rightmost digit, and move everything else over one space to the right. (At the left, we usually add a 0 so we still have 8 bits ... as usual, adding a bit in front of a number doesn't change it.)
right shift   01100111
               \\\\\\\\
              0 0110011 ... 1 <-- discarded
(Forgive my horrible ASCII art.) So shifting right by 1 changes our 0b01100111 to 0b00110011. (You can also think of this as chopping off the last bit.)
Now what does it mean to shift right by a different number? Well, to shift by zero does not change the number. To shift by more than one, we just repeat this operation however many times we are shifting by. (To shift by two, shift by one twice, etc.) (If you prefer to think in terms of chopping, a right shift by x is chopping off the last x bits.)
So on the first iteration through the loop, the number will not be shifted, and we will store the rightmost bit.
On the second iteration through the loop, the number will be shifted by 1, and the new rightmost bit will be what was previously the second from the right, so the bitwise and will mask out that bit and we will store it.
On the next iteration, we will shift by 2, so the rightmost bit will be the one that was originally third from the right, so the bitwise and will mask out that bit and store it.
On each iteration, we store the next bit.
Since we are working with a byte, there are only 8 bits, so after 8 iterations through the loop, we will have stored the value of each bit into our table. This is what the table should look like in our example:
out = {1,1,1,0,0,1,1,0}
Notice that the bits are reversed from how we wrote them 0b01100111 because we started looking from the right side of the binary number, but things are added to the table starting on the left.
In your case, it looks like each bit has a distinct meaning. For example, a 1 in the third bit could mean that sensor1 was on and a 0 in the third bit could mean that sensor1 was off. Eight different pieces of information like this were packed together to make it more efficient to transmit them over some channel. The loop separates them again into a form that is easy for you to use.
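If it helps to experiment with this outside of the Lua environment, the same unpacking can be reproduced with a few lines of Python (0b01100111 is just the example byte from above):
byte = 0b01100111
out = []
for i in range(8):
    # Shift the bit of interest into the last position, then mask it out with & 1.
    out.append((byte >> i) & 1)
print(out)  # [1, 1, 1, 0, 0, 1, 1, 0]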
I have been checking out the integer-gmp source code to understand how foreign primops can be implemented in terms of Cmm, as documented on the GHC Primops page. I am aware of techniques to implement them using the llvm hack or fvia-C/gcc - this is more of a learning experience for me to understand the third approach that the integer-gmp library uses.
So, I looked up the CMM tutorial on the MSFT page (pdf link), went through the GHC CMM page, and still there are some unanswered questions (it is hard to keep all those concepts in my head without digging into Cmm, which is what I am doing now). There is this code fragment from the integer-gmp cmm file:
integer_cmm_int2Integerzh (W_ val)
{
    W_ s, p; /* to avoid aliasing */

    ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val);

    p = Hp - SIZEOF_StgArrWords;
    SET_HDR(p, stg_ARR_WORDS_info, CCCS);
    StgArrWords_bytes(p) = SIZEOF_W;

    /* mpz_set_si is inlined here, makes things simpler */
    if (%lt(val,0)) {
        s = -1;
        Hp(0) = -val;
    } else {
        if (%gt(val,0)) {
            s = 1;
            Hp(0) = val;
        } else {
            s = 0;
        }
    }

    /* returns (# size :: Int#,
                 data :: ByteArray#
               #)
    */
    return (s,p);
}
As defined in ghc cmm header:
W_ is alias for word.
ALLOC_PRIM_N is a function for allocating memory on the heap for primitive object.
Sp(n) and Hp(n) are defined as below (comments are mine):
#define WDS(n) ((n)*SIZEOF_W)   // WDS(n) calculates n*sizeof(Word)
#define Sp(n) W_[Sp + WDS(n)]   // Sp(n) points to Stack pointer + n word offset?
#define Hp(n) W_[Hp + WDS(n)]   // Hp(n) points to Heap pointer + n word offset?
I don't understand lines 5-9 (line 1 is the start in case you have 1/0 confusion). More specifically:
Why is the function call format of ALLOC_PRIM_N (bytes,fun,arg) that way?
Why is p manipulated that way?
The function as I understand it (from looking at function signature in Prim.hs) takes an int, and returns a (int, byte array) (stored in s,p respectively in the code).
For anyone who is wondering about the inlined call in the if block, it is a cmm implementation of the gmp mpz_set_si function. My guess is that if you call a function defined in an object file through ccall, it can't be inlined (which makes sense since it is object code, not intermediate code - the LLVM approach seems more suitable for inlining through LLVM IR). So, the optimization was to define a cmm representation of the function to be inlined. Please correct me if this guess is wrong.
Explanation of lines 5-9 will be very much appreciated. I have more questions about other macros defined in integer-gmp file, but it might be too much to ask in one post. If you can answer the question with a Haskell wiki page or a blog (you can post the link as answer), that would be much appreciated (and if you do, I would also appreciate step-by-step walk-through of an integer-gmp cmm macro such as GMP_TAKE2_RET1).
Those lines allocate a new ByteArray# on the Haskell heap, so to understand them you first need to know a bit about how GHC's heap is managed.
Each capability (= OS thread that executes Haskell code) has its own dedicated nursery, an area of the heap into which it makes normal, small allocations like this one. Objects are simply allocated sequentially into this area from low addresses to high addresses until the capability tries to make an allocation which exceeds the remaining space in the nursery, which triggers the garbage collector.
All heap objects are aligned to a multiple of the word size, i.e., 4 bytes on 32-bit systems and 8 bytes on 64-bit systems.
The Cmm-level register Hp points to (the beginning of) the last word which has been allocated in the nursery. HpLim points to the last word which can be allocated in the nursery. (HpLim can also be set to 0 by another thread to stop the world for GC, or to send an asynchronous exception.)
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects has information on the layout of individual heap objects. Notably each heap object begins with an info pointer, which (among other things) identifies what sort of heap object it is.
The Haskell type ByteArray# is implemented with the heap object type ARR_WORDS. An ARR_WORDS object just consists of (an info pointer followed by) a size (in bytes) followed by arbitrary data (the payload). The payload is not interpreted by the GC, so it can't store pointers to Haskell heap objects, but it can store anything else. SIZEOF_StgArrWords is the size of the header common to all ARR_WORDS heap objects, and in this case the payload is just a single word, so SIZEOF_StgArrWords + WDS(1) is the amount of space we need to allocate.
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val) expands to something like
Hp = Hp + (SIZEOF_StgArrWords + WDS(1));
if (Hp > HpLim) {
    HpAlloc = SIZEOF_StgArrWords + WDS(1);
    goto stg_gc_prim_n(integer_cmm_int2Integerzh, val);
}
First line increases Hp by the amount to be allocated. Second line checks for heap overflow. Third line records the amount that we tried to allocate, so the GC can undo it. The fourth line calls the GC.
The fourth line is the most interesting. The arguments tell the GC how to restart the thread once garbage collection is done: it should reinvoke integer_cmm_int2Integerzh with argument val. The "_n" in stg_gc_prim_n (and the "_N" in ALLOC_PRIM_N) means that val is a non-pointer argument (in this case an Int#). If val were a pointer to a Haskell heap object, the GC needs to know that it is live (so it doesn't get collected) and to reinvoke our function with the new address of the object. In that case we'd use the _p variant. There are also variants like _pp for multiple pointer arguments, _d for Double# arguments, etc.
After line 5, we've successfully allocated a block of SIZEOF_StgArrWords + WDS(1) bytes and, remember, Hp points to its last word. So, p = Hp - SIZEOF_StgArrWords sets p to the beginning of this block. Line 8 fills in the info pointer of p, identifying the newly-created heap object as ARR_WORDS. CCCS is the current cost-center stack, used only for profiling. When profiling is enabled each heap object contains an extra field that basically identifies who is responsible for its allocation. In non-profiling builds, there is no CCCS and SET_HDR just sets the info pointer. Finally, line 9 fills in the size field of the ByteArray#. The rest of the function fills in the payload and returns the sign value and the ByteArray# object pointer.
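To make the pointer arithmetic concrete, here is a toy Python model of that expansion (purely illustrative, not GHC's real code; it assumes a 64-bit, non-profiling build where the ARR_WORDS header is two words: an info pointer plus the byte count):
SIZEOF_W = 8                        # word size on a 64-bit machine
SIZEOF_StgArrWords = 2 * SIZEOF_W   # info pointer + byte-count field

hp = 0        # Hp: address of the last word already allocated in the nursery
hp_lim = 512  # HpLim: last word that may be allocated

def alloc_prim_n(nbytes):
    # Bump Hp and check for heap overflow, like the expansion of ALLOC_PRIM_N above.
    global hp
    hp = hp + nbytes
    if hp > hp_lim:
        # The real code records HpAlloc and jumps to stg_gc_prim_n instead.
        raise MemoryError("nursery exhausted; the garbage collector would run here")

alloc_prim_n(SIZEOF_StgArrWords + 1 * SIZEOF_W)  # SIZEOF_StgArrWords + WDS(1)
p = hp - SIZEOF_StgArrWords   # start of the new ARR_WORDS object, as in line 7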
So, this ended up being more about the GHC heap than about the Cmm language, but I hope it helps.
Required knowledge
In order to do arithmetic and logical operations, computers have a digital circuit called an ALU (Arithmetic Logic Unit) in their CPU (Central Processing Unit). An ALU loads data from input registers. A processor register is a small amount of very fast storage (data requests are served within about 3 CPU clock ticks) implemented in SRAM (Static Random-Access Memory) located on the CPU chip. A processor often contains several kinds of registers, usually differentiated by the number of bits they can hold.
Numbers expressed in a fixed number of bits can hold only a finite number of values. Typically, numbers come in the following primitive sizes exposed by the programming language (here, Haskell):
8 bit numbers = 256 unique representable values
16 bit numbers = 65 536 unique representable values
32 bit numbers = 4 294 967 296 unique representable values
64 bit numbers = 18 446 744 073 709 551 616 unique representable values
Fixed-precision arithmetic for those types has been implemented in hardware. Word size refers to the number of bits that can be processed by a computer's CPU in one go. For the x86 architecture this is 32 bits, and for x64 it is 64 bits.
IEEE 754 defines the floating-point standard for {16, 32, 64, 128}-bit numbers. For example, a 32-bit floating-point number (with 4 294 967 296 unique bit patterns) can hold approximate values in the range [-3.402823e38, 3.402823e38] with an accuracy of about 7 decimal digits.
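A tiny illustration of the difference between fixed-precision and arbitrary-precision arithmetic (sketched in Python, whose built-in int is arbitrary precision, much like the integers GMP provides):
MASK64 = 2**64 - 1        # keep only the low 64 bits, like a hardware register

a = 2**64 - 1             # largest 64-bit unsigned value
print((a + 1) & MASK64)   # 0: fixed 64-bit arithmetic wraps around on overflow
print(a + 1)              # 18446744073709551616: arbitrary precision just keeps growing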
In addition
The acronym GMP stands for the GNU Multiple Precision Arithmetic Library, which adds support for software-emulated arbitrary-precision arithmetic. The Glasgow Haskell Compiler's Integer implementation uses it.
GMP aims to be faster than any other bignum library for all operand sizes. Some important factors in doing this are:
Using full words as the basic arithmetic type.
Using different algorithms for different operand sizes; algorithms that are faster for very big numbers are usually slower for small numbers.
Highly optimized assembly language code for the most important inner loops, specialized for different processors.
Answer
For some, Haskell might have slightly hard-to-comprehend syntax, so here is a JavaScript version:
var integer_cmm_int2Integerzh = function(word) {
    return WORDSIZE == 32
        ? goog.math.Integer.fromInt(word)
        : goog.math.Integer.fromBits([word.getLowBits(), word.getHighBits()]);
};
Where goog is the Google Closure library; the class used is goog.math.Integer. The called functions:
goog.math.Integer.fromInt = function(value) {
    if (-128 <= value && value < 128) {
        var cachedObj = goog.math.Integer.IntCache_[value];
        if (cachedObj) {
            return cachedObj;
        }
    }
    var obj = new goog.math.Integer([value | 0], value < 0 ? -1 : 0);
    if (-128 <= value && value < 128) {
        goog.math.Integer.IntCache_[value] = obj;
    }
    return obj;
};

goog.math.Integer.fromBits = function(bits) {
    var high = bits[bits.length - 1];
    return new goog.math.Integer(bits, high & (1 << 31) ? -1 : 0);
};
That is not totally correct, as the return should be return (s,p); where
s is the sign/size
p is the pointer to the data (the ByteArray#)
In order to fix this, a GMP wrapper should be created. This has been done in the Haskell-to-JavaScript compiler project (source link).
Lines 5-9
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val);
p = Hp - SIZEOF_StgArrWords;
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrWords_bytes(p) = SIZEOF_W;
Are as follows:
ALLOC_PRIM_N allocates space on the heap for the new object (the ARR_WORDS header plus one word of payload)
p is set to point to the start of that newly allocated object
SET_HDR fills in the object's header (the info pointer identifying it as ARR_WORDS, plus the cost-centre stack when profiling)
StgArrWords_bytes sets the object's size field to one word (SIZEOF_W bytes)
How can I determine if the following memory access is coalesced or not:
// Thread-ID
int idx = blockIdx.x * blockDim.x + threadIdx.x;
// Offset:
int offset = gridDim.x * blockDim.x;
while ( idx < NUMELEMENTS )
{
    // Do Something
    // ....
    // Write to Array which contains results of calculations
    results[ idx ] = df2;
    // Next Element
    idx += offset;
}
NUMELEMENTS is the total number of single data elements to process. The array results is passed as a pointer to the kernel function and is allocated beforehand in global memory.
My Question: Is the write access in the line results[ idx ] = df2; coalesced?
I believe it is, as each thread processes consecutively indexed items, but I'm not completely sure about it and I don't know how to tell.
Thanks!
It depends on whether the length of the lines of your matrix is a multiple of half the warp size for devices of compute capability 1.x, or a multiple of the warp size for devices of compute capability 2.x. If it is not, you can use padding to make it fully coalesced. The function cudaMallocPitch can be used for this purpose.
edit:
Sorry for the confusion. You write 'offset' elements at a time which I interpreted as lines of a matrix.
What I mean is: after each iteration of your cycle you increase idx by offset. If offset is a multiple of half the warp size for devices of compute capability 1.x, or a multiple of the warp size for devices of compute capability 2.x, then it is coalesced; if not, then you need padding to make it so.
Probably it is already coalesced because you should choose the number of threads per block and thus the blockDim as a multiple of the warp size.