Page Replacement Algorithms - worst case reference string

I have a question regarding page replacement algorithms and their worst-case reference strings. I want to find a single reference string, 20 elements long, that exhibits worst-case performance under all of these replacement policies.
The replacement policies: FIFO, LRU, CLOCK, and OPT (each with a cache size of 4).
example reference string: 0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4
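A quick way to evaluate a candidate string is simply to simulate the policies. Here is a minimal Python sketch (FIFO and LRU only; CLOCK and OPT follow the same shape) showing that the example string above faults on every one of its 20 references with a cache of size 4. Note that it is not worst-case for OPT, which still gets hits on this pattern:

```python
from collections import OrderedDict

def fifo_faults(refs, size):
    # FIFO: evict the page that has been resident the longest.
    cache, faults = [], 0
    for p in refs:
        if p not in cache:
            faults += 1
            if len(cache) == size:
                cache.pop(0)      # oldest page is at the front
            cache.append(p)
    return faults

def lru_faults(refs, size):
    # LRU: evict the least recently used page.
    cache, faults = OrderedDict(), 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)  # mark as most recently used
        else:
            faults += 1
            if len(cache) == size:
                cache.popitem(last=False)  # drop least recently used
            cache[p] = None
    return faults

refs = [0, 1, 2, 3, 4] * 4   # the 20-element example string
print(fifo_faults(refs, 4))  # 20: every reference misses
print(lru_faults(refs, 4))   # 20: every reference misses
```

Cycling through 5 distinct pages with a cache of 4 is the classic adversarial pattern for FIFO and LRU: the page about to be referenced is always the one just evicted.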

Related

what is the max size of a string type in terraform

I'm trying to locate a definitive answer to: what is the maximum size of a Terraform value of type string?
I've been searching and googling and can't seem to find it defined anywhere. Does anyone have a reference they could point me to?
Tia
Bill W
The length of strings in Terraform is constrained in two main ways:
Terraform internally tracks the length of the string using an integer type, which has a limited range.
Strings need to exist in system memory as a consecutive sequence of bytes.
The first of these is directly answerable: Terraform tracks the length of a string using an integer type large enough to represent all pointers on the host platform. From a practical standpoint then, that means a 64-bit integer when you're using a 64-bit build, and a 32-bit number when you're using a 32-bit build.
That means that there's a hard upper limit imposed by the maximum value of that integer. Terraform is internally tracking the length of the UTF-8 representation of the string in bytes, and so this upper limit is measured in bytes rather than in characters:
32-bit systems: 4,294,967,295 bytes
64-bit systems: 18,446,744,073,709,551,615 bytes
Terraform stores strings in memory using Unicode NFC normal form, UTF-8 encoded, and so the number of characters will vary depending on how many bytes each character takes up in the UTF-8 encoding form. ASCII characters take only one byte, but other characters can require up to four bytes.
A string of the maximum representable length would take up the entire address space of the Terraform process, which is impossible (there needs to be room for the Terraform application code, libraries, and kernel space too!), and so in practice the available memory on your system is the more relevant limit. That limit varies based on various characteristics of the system where you're running Terraform, and so isn't answerable in a general sense.
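Since the limit is measured in UTF-8 bytes rather than characters, the byte count of a string depends on its contents. A small Python sketch of the byte-vs-character distinction (illustrative only, not Terraform code; the NFC normalization mirrors what the answer above describes):

```python
import unicodedata

def utf8_len(s):
    # Normalize to NFC form, then count UTF-8 bytes.
    return len(unicodedata.normalize("NFC", s).encode("utf-8"))

print(utf8_len("hello"))        # 5: ASCII is one byte per character
print(utf8_len("héllo"))        # 6: 'é' takes two bytes in UTF-8
print(utf8_len("h\U0001F600llo"))  # 8: the emoji takes four bytes
```

So a string near the limit could hold anywhere from a quarter of that many characters (all four-byte sequences) up to the full count (all ASCII).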

Why is string manipulation more expensive?

I've heard this so many times that I've taken it for granted. But thinking back on it, can someone help me understand why string manipulation, say comparison, is more expensive than the equivalent operation on an integer or some other primitive?
8-bit example:
1 bit can be 1 or 0. With 2 bits you can represent 0, 1, 2, and 3. And so on.
With a byte you have 2^8 possibilities, from 0 to 255.
In a string a single letter is stored in a byte, so "Hello world" is 11 bytes.
If I want to do 100 + 100, each 100 is stored in 1 byte of memory, so I need only two bytes to sum the two numbers. The result again fits in 1 byte.
Now let's try it with strings: "100" + "100" is 3 bytes plus 3 bytes, and the result, "100100", needs 6 bytes to be stored.
This is over-simplified, but more or less it works in this way.
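The over-simplified example above is easy to make concrete; in Python, for instance, adding numbers produces a fixed-size arithmetic result, while "adding" strings concatenates and grows the storage:

```python
a, b = 100, 100
print(a + b)       # 200: arithmetic addition, result still fits in one machine word

s, t = "100", "100"
print(s + t)       # 100100: concatenation, not addition
print(len(s + t))  # 6: the result needs six bytes of character storage
```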
The int data type in C# was carefully selected to be a good match with processor design, which can store an int in a CPU register, a storage location that's easily a factor of 3 faster than memory. Comparing two values of type int takes a single CPU instruction; CMP runs in less than a single CPU cycle, a fraction of a nanosecond.
That doesn't work nearly as well for a string: it is a variable-length data type, and every single char in the string must be compared to test for equality, so comparison is automatically slower in proportion to the size of the string. Furthermore, string comparison is afflicted by culture-dependent comparison rules, the kind that make "ss" and "ß" equal in German and "Aa" and "Å" equal in Danish. Nothing subtle to deal with there; it's taken care of by highly optimized table-driven code inside the CLR, but it can't beat CMP.
I've always thought it was because of the immutability of strings. That is, every time you make a change to the string, it requires allocating memory for a whole new string (rather than modifying the original in place).
Probably a woefully naive understanding but perhaps someone else can expound further.
There are several things to consider when looking at the "cost" of manipulating strings.
There is the cost in terms of memory usage, there is the cost in terms of CPU cycles used, and there is a cost associated with the complexity of the code involved.
Integer manipulation (add, subtract, multiply, divide, compare) is most often done by the CPU at the hardware level, in a few (or even one) instructions. When the manipulation is done, the answer fits back into the same size chunk of memory.
Strings are stored in blocks of memory, which have to be manipulated a byte or word at a time. Comparing two 100 character long strings may require 100 separate comparison operations.
Any manipulation that makes a string longer will require, either moving the string to a bigger block of memory, or moving other stuff around in memory to allow growing the existing block.
Any manipulation that leaves the string the same, or smaller, could be done in place, if the language allows for it. If not, then again, a new block of memory has to be allocated and contents moved.
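The byte-at-a-time comparison described above can be sketched naively (real runtimes compare a word at a time and use other tricks, but the per-element loop is the essential cost):

```python
def str_equal(a, b):
    # Variable-length data: first check the lengths...
    if len(a) != len(b):
        return False
    # ...then compare element by element; cost grows with length.
    for x, y in zip(a, b):
        if x != y:
            return False  # early exit on first mismatch
    return True

print(str_equal("a" * 100, "a" * 100))       # True, after 100 comparisons
print(str_equal("a" * 100, "a" * 99 + "b"))  # False, found at the last character
```

Compare that with an integer equality test, which is one hardware instruction regardless of the value.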

Atomic reads and writes of properties

This question has been asked before, but I still don't understand it fully, so here it goes.
If I have a class with a property (a non-nullable double or int), can I read and write the property from multiple threads?
I have read somewhere that since doubles are 64 bits, it is possible to read a double property on one thread while it is being written on a different one. This will result in the reading thread returning a value that is neither the original value nor the newly written value.
When could this happen? Is it possible with ints as well? Does it happen in both 64-bit and 32-bit applications?
I haven't been able to replicate this situation in a console app.
If I have a class with a property (a non-nullable double or int), can I read and write the property with multiple threads?
I assume you mean "without any synchronization".
double and long are both 64 bits (8 bytes) in size, and are not guaranteed to be written atomically. So if you were moving from a value with byte pattern ABCD EFGH to a value with byte pattern MNOP QRST, you could potentially end up seeing (from a different thread) ABCD QRST or MNOP EFGH.
With properly aligned values of size 32 bits or lower, atomicity is guaranteed. (I don't remember seeing any guarantees that values will be properly aligned, but I believe they are by default unless you force a particular layout via attributes.) The C# 4 spec doesn't even mention alignment in section 5.5 which deals with atomicity:
Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. In addition, reads and writes of enum types with an underlying type in the previous list are also atomic. Reads and writes of other types, including long, ulong, double, and decimal, as well as user-defined types, are not guaranteed to be atomic. Aside from the library functions designed for that purpose, there is no guarantee of atomic read-modify-write, such as in the case of increment or decrement.
Additionally, atomicity isn't the same as volatility - so without any extra care being taken, a read from one thread may not "see" a write from a different thread.
These operations are not atomic, that's why the Interlocked class exists in the first place, with methods like Increment(Int32) and Increment(Int64).
To ensure thread safety, you should use at least this class, if not more complex locking (with ReaderWriterLockSlim, for example, in case you want to synchronize access to groups of properties).
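The "torn" value described above can be illustrated deterministically by mixing the two 32-bit halves of a 64-bit value by hand (a Python sketch of the bit-level effect, not an actual data race):

```python
old = 0x1111111122222222  # value being overwritten
new = 0x3333333344444444  # value being written

HIGH = 0xFFFFFFFF00000000
LOW = 0x00000000FFFFFFFF

# If the two 32-bit halves are written separately, a concurrent
# reader can observe either half-updated combination:
torn1 = (new & HIGH) | (old & LOW)  # high half new, low half old
torn2 = (old & HIGH) | (new & LOW)  # high half old, low half new

print(hex(torn1))  # 0x3333333322222222 - neither old nor new
print(hex(torn2))  # 0x1111111144444444 - neither old nor new
```

Reproducing this with real threads is unreliable by nature, which is why the question's console experiment didn't show it; the guarantee is simply absent, not that tearing happens often.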

What value for length field for Freescale PowerPC Security Engine 2.0, when using Link Tables?

I am working on the code to use the security engine of my MPC83XX with Openssl.
I can already encrypt/decrypt AES up to 64KByte of data.
The problem comes with data greater than 64KByte since the maximum value of the length-bits is 65535.
I can assume the data is always in one piece on the Ram.
So now I am collecting all the data in a Link Table and use the pointer to the table instead of the pointer to the data and set the J bit to 1.
Now I am not sure what value I should use for the length bits, since 0 would mean the Dword will be ignored.
The real length of the data is also too big for 16 bits.
http://cache.freescale.com/files/32bit/doc/app_note/AN2755.pdf?fpsp=1
Relevant information can be found in Chapter 8.
You set LENGTH to the length of the data. See Page 19:
For any sequence of data parcels accessed by a link table or chain of link tables, the combined lengths of the parcels (the sum of their LENGTH and/or EXTENT fields) must equal the combined lengths of the link table memory segments (SEGLEN fields). Otherwise the channel sets the appropriate error bit in the Channel Pointer Status Register...
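In other words, the link table's segments must account for every byte the descriptor claims. A hypothetical sketch of that bookkeeping invariant (field names taken from the quoted app note; this is not driver code):

```python
def link_table_consistent(total_length, seglens):
    # The combined lengths of the link table memory segments
    # (the SEGLEN fields) must equal the combined length of the
    # data parcels, or the channel sets an error bit.
    return sum(seglens) == total_length

total = 200 * 1024                                   # 200 KiB payload, > 64 KiB
segments = [64 * 1024, 64 * 1024, 64 * 1024, 8 * 1024]
print(link_table_consistent(total, segments))        # True: segments cover the data
print(link_table_consistent(total, segments[:-1]))   # False: 8 KiB unaccounted for
```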
I'm not sure what mode you're using (and the documentation seems unnecessarily confusing!) but for the usual cipher modes (CBC/CTR/CFB/OFB) the usual method is simply to chain AES invocations, reusing the same context. You might be able to do this by simply setting "Pointer Dword1" and "Pointer Dword5" to the same thing. There's very little documentation, though; I can't work out where it gets the IV from.

How should one use Disruptor (Disruptor Pattern) to build real-world message systems?

As the RingBuffer up-front allocates objects of a given type, how can you use a single ring buffer to process messages of various different types?
You can't create new object instances to insert into the ringBuffer and that would defeat the purpose of up-front allocation.
So you could have 3 messages in an async messaging pattern:
NewOrderRequest
NewOrderCreated
NewOrderRejected
So my question is: how are you meant to use the Disruptor pattern for real-world messaging systems?
Thanks
Links:
http://code.google.com/p/disruptor-net/wiki/CodeExamples
http://code.google.com/p/disruptor-net
http://code.google.com/p/disruptor
One approach (our most common pattern) is to store the message in its marshalled form, i.e. as a byte array. For incoming requests, e.g. FIX messages, the binary messages are quickly pulled off the network and placed in the ring buffer. The unmarshalling and dispatch of different types of messages are handled by EventProcessors (consumers) on that ring buffer. For outbound requests, the message is serialised into the preallocated byte array that forms the entry in the ring buffer.
If you are using some fixed-size byte array as the preallocated entry, some additional logic is required to handle overflow for larger messages, i.e. pick a reasonable default size and, if it is exceeded, allocate a temporary array that is bigger. Then discard it when the entry is reused or consumed (depending on your use case), reverting back to the original preallocated byte array.
If you have different consumers for different message types you could quickly identify if your consumer is interested in the specific message either by knowing an offset into the byte array that carries the type information or by passing a discriminator value through on the entry.
Also there is no rule against creating object instances and passing references (we do this in a couple of places too). You do lose the benefits of object preallocation, however one of the design goals of the disruptor was to allow the user the choice of the most appropriate form of storage.
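The preallocated-entry-with-overflow pattern described above might be sketched like this (names and sizes are illustrative, not the actual Disruptor API):

```python
DEFAULT_ENTRY_SIZE = 64  # assumed "reasonable default size"

class Entry:
    def __init__(self):
        # Preallocated storage, reused for the life of the ring.
        self.buf = bytearray(DEFAULT_ENTRY_SIZE)
        self.overflow = None  # temporary buffer for oversized messages
        self.length = 0

    def write(self, payload: bytes):
        self.length = len(payload)
        if self.length <= len(self.buf):
            self.buf[:self.length] = payload
            self.overflow = None
        else:
            # Oversized: fall back to a temporary allocation.
            self.overflow = bytearray(payload)

    def read(self) -> bytes:
        src = self.overflow if self.overflow is not None else self.buf
        return bytes(src[:self.length])

    def reset(self):
        # On reuse, discard any overflow and revert to the
        # original preallocated buffer.
        self.overflow = None
        self.length = 0

ring = [Entry() for _ in range(8)]  # all entries allocated up front
ring[0].write(b"small message")
print(ring[0].read())  # b'small message'
```

Small messages never allocate; only the occasional oversized one pays for a temporary buffer, which is dropped on reset.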
There is a library called Javolution (http://javolution.org/) that lets you define objects as structs with fixed-length fields like string[40] etc. that rely on byte buffers internally instead of variable-size objects... that allows the ring buffer to be initialized with fixed-size objects and thus (hopefully) contiguous blocks of memory that let the cache work more efficiently.
We are using that for passing events / messages and use standard strings etc. for our business-logic.
Back to object pools.
The following is a hypothesis.
If you have 3 types of messages (A, B, C), you can make 3 pre-allocated arrays, one per type. That creates 3 memory zones: A, B, C.
It's not like there is only one cache line, there are many and they don't have to be contiguous. Some cache lines will refer to something in zone A, other B and other C.
So the ring buffer entry can have 1 reference to a common ancestor or interface of A & B & C.
The problem is selecting the instance from the pools; the simplest approach is to make each array the same length as the ring buffer. This implies a lot of wasted pooled objects, since only one of the 3 is ever used at any entry; e.g. ring buffer entry 1234 might be using message B[1234] while A[1234] and C[1234] are unused and unusable by anyone.
You could also make a super-entry with all 3 A+B+C instances inlined and indicate the type with some byte or enum. Just as wasteful in memory size, and it looks a bit worse because of the fatness of the entry; for example, a reader only working on C messages will have less cache locality.
I hope I'm not too wrong with this hypothesis.
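The per-type pools in this hypothesis could be sketched as follows (one pool per message type, each the same length as the ring, with the slot index selecting the instance; all names are illustrative). The waste is visible: at every sequence, two of the three pooled slots sit idle:

```python
RING_SIZE = 8

# One preallocated pool per message type, same length as the ring.
pools = {
    tag: [{"type": tag, "payload": None} for _ in range(RING_SIZE)]
    for tag in ("A", "B", "C")
}

# The ring entry holds only a type tag; the pool supplies the instance.
ring = [{"tag": None} for _ in range(RING_SIZE)]

def publish(seq, tag, payload):
    slot = seq % RING_SIZE
    ring[slot]["tag"] = tag
    # Only one of the three pools is touched; the other two
    # slots at this index are wasted, as the hypothesis notes.
    pools[tag][slot]["payload"] = payload

def consume(seq):
    slot = seq % RING_SIZE
    tag = ring[slot]["tag"]
    return pools[tag][slot]

publish(1234, "B", "order-created")
print(consume(1234)["payload"])  # order-created
```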