I want to create an empty list of lists:
shape = (70000,70000)
corr = np.empty(shape).tolist()
How can I know how much RAM I need to hold this list on a 64-bit Windows operating system?
This will create a list of lists of floats. Each reference costs 8 bytes, and each float is a full Python object of 24 bytes (an 8-byte double plus 16 bytes of object header; sys.getsizeof(0.0) reports 24 on 64-bit CPython), not just 8 bytes. That makes 70000 * 70000 * (8 + 24) bytes, roughly 157 GB: about a quarter of it for the references and the rest for the float objects themselves.
In memory, a list is an array of references; each reference points to a separately allocated float object.
The 70001 list objects themselves also have overhead (each maintains a pointer to its storage array and its own length), but this is negligible in comparison (probably ~4 MB).
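If you want to sanity-check those per-element figures on your own interpreter, you can build a much smaller version of the same structure and extrapolate; a quick sketch (the 1000 x 1000 sample size is arbitrary):

import sys
import numpy as np

# Build a much smaller version of the same structure and extrapolate.
small = np.empty((1000, 1000)).tolist()

ref_bytes = sys.getsizeof(small[0]) / len(small[0])   # ~8 bytes per reference (plus a sliver of list overhead)
float_bytes = sys.getsizeof(small[0][0])              # 24 bytes per Python float object on 64-bit CPython

per_element = ref_bytes + float_bytes                 # ~32 bytes per element
print(per_element)
print(per_element * 70000 * 70000 / 1e9)              # rough total in GB for the 70000 x 70000 case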
Also note that Python lists over-allocate space by an implementation-dependent factor, so consider these numbers a lower bound. Memory is over-allocated so that there are always some free slots available, which makes appends and inserts faster; when a list fills up, the allocation grows by about 12.5%.
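To see the over-allocation in action, you can watch the allocated size of a growing list jump ahead of its length (this is CPython-specific behaviour and the exact numbers vary between versions):

import sys

lst = []
prev = sys.getsizeof(lst)
for i in range(200):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:    # the allocation grew: print the new capacity
        print(f"len={len(lst):4d}  allocated={size} bytes")
        prev = size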
If I don't know the size of the cache and have a threaded program, I can measure GFLOPS by running the program with an increasing number of threads. How can I estimate the size of the cache?
You can allocate a byte buffer of size N, perform a lot of random reads on it (say 20*N reads) and measure the total time of those reads. The performance of the random reads depends directly on N and the cache size: when the data fit in the cache, the reads are much faster because the data can be retrieved locally. Divide the time by the number of reads to get the approximate amortized latency per read.
Because most processors have multiple levels of cache, the resulting curve of amortized read latency versus buffer size looks like a staircase.
The steps show the impact of the different levels of the memory hierarchy (L1, L2, L3 and then RAM). The performance gaps tell you the cache sizes and the number of cache levels; for example, a jump just past 32 KB indicates an L1 cache of that size. If you want to find the steps automatically, you can locate the local maxima of the derivative of the curve; if you want something more accurate, you could use some basic machine-learning methods.
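Here is a rough sketch of that measurement in Python/NumPy. A C implementation gives much cleaner steps because there is no interpreter overhead, and for practicality the sketch caps the number of reads rather than always doing 20*N of them, but the shape of the curve is usually still recognisable; the buffer sizes are arbitrary:

import time
import numpy as np

def ns_per_random_read(n_bytes, max_reads=2_000_000):
    buf = np.ones(n_bytes, dtype=np.uint8)              # byte buffer of size N (ones, so the pages really get allocated)
    n_reads = min(20 * n_bytes, max_reads)              # capped instead of always 20*N, for practicality
    idx = np.random.randint(0, n_bytes, size=n_reads)   # random read positions
    start = time.perf_counter()
    buf[idx].sum()                                      # perform the random reads
    elapsed = time.perf_counter() - start
    return elapsed / n_reads * 1e9                      # amortized time per read, in nanoseconds

for size in [2 ** k for k in range(12, 27)]:            # 4 KB ... 64 MB
    print(f"{size // 1024:8d} KB  {ns_per_random_read(size):8.2f} ns/read")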
I know that reading a file with a chunk size that is a multiple of the filesystem block size is better.
1) Why is that the case? I mean, let's say the block size is 8 KB and I read 9 KB. That means it has to go and fetch 16 KB and then throw away the extra 7 KB.
Yes, it does some extra work, but does that make much of a difference unless your block size is really huge?
I mean, yes, if I am reading a 1 TB file then this definitely makes a difference.
The other reason I can think of is that the block size refers to a group of sectors on the hard disk (please correct me if I'm wrong). So a block could map to 8, 16, 32, or just one sector, and the hard disk would essentially have to do more work if the block covers more sectors. Am I right?
2) So let's say the block size is 8 KB. Do I now read 16 KB at a time? 1 MB? 1 GB? What should I use as a chunk size?
I know available memory is a limitation, but apart from that, what other factors affect my choice?
Thanks a lot in advance for all the answers.
Theoretically, the fastest I/O occurs when the buffer is page-aligned and its size is a multiple of the system block size.
If the file were stored contiguously on the hard disk, the fastest I/O throughput would be attained by reading cylinder by cylinder. (There might not even be any rotational latency then, since when you read a whole track you don't need to start at the beginning; you can start in the middle and wrap around.) Unfortunately, nowadays it would be nearly impossible to do that, since the hard disk firmware hides the physical layout of the sectors and may use replacement sectors that require seeks even while reading a single track. The OS file system may also try to spread the file blocks all over the disk (or at least all over a cylinder group) to avoid having to do long seeks over big files when accessing small files.
So instead of considering physical tracks, you may try to take the hard disk buffer size into account. Most hard disks have a buffer of 8 MB, some of 16 MB. Reading the file in chunks of up to 1 MB or 2 MB should let the hard disk firmware optimize the throughput without stalling its buffer.
But then, if there are a lot of layers above, e.g. a RAID, all bets are off.
Really, the best you can do is to benchmark your particular circumstances.
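As a starting point for such a benchmark, here is a minimal Python sketch that reads the same file with several chunk sizes and reports throughput. The file name is a placeholder, and you should use a file considerably larger than RAM (or drop the OS page cache between runs), otherwise later runs just measure cached reads:

import time

def throughput_mb_s(path, chunk_size):
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:    # unbuffered, so chunk_size is what actually hits the OS
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

for size in (4 * 1024, 64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
    print(f"{size // 1024:6d} KB chunks: {throughput_mb_s('testfile.bin', size):8.1f} MB/s")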
Is there a way to modify or increase the size value that is passed to the readdir/readdirplus functions?
My implementation uses the low-level API.
With directories that are rather complex, deeply nested, or contain a large number of sub-directories, I experience a performance impact that seems to be due to the number of recurring calls to readdir/readdirplus. It seems a buffer larger than 4096 bytes (which is what is passed in now) would help tremendously.
I've modified the max_read, max_readahead, and max_write values but have not seen this have any effect.
Thank you in advance.
From the book Understanding the Linux Kernel, 3rd edition, chapter 8.2.10, Slab coloring:
We know from Chapter 2 that the same hardware cache line maps many different blocks of RAM. In this chapter, we have also seen that objects of the same size end up being stored at the same offset within a cache. Objects that have the same offset within different slabs will, with a relatively high probability, end up mapped in the same cache line. The cache hardware might therefore waste memory cycles transferring two objects from the same cache line back and forth to different RAM locations, while other cache lines go underutilized. The slab allocator tries to reduce this unpleasant cache behavior by a policy called slab coloring: different arbitrary values called colors are assigned to the slabs.
(1) I am unable to understand the issue that slab coloring tries to solve. When a normal process accesses data that is not in the cache, a cache miss occurs and the data is fetched into the cache together with data from the surrounding addresses to boost performance. How can a situation occur in which the same specific cache lines keep getting swapped? The probability that a process keeps accessing two different addresses that sit at the same offset within two different memory areas is very low. And even if it does happen, cache replacement policies usually choose which lines to evict according to some scheme such as LRU, random, etc. No policy exists that chooses to evict lines according to a match in the least significant bits of the addresses being accessed.
(2) I am unable to understand how slab coloring, which moves free bytes from the end of the slab to the beginning so that different slabs place their first object at different offsets, solves the cache-swapping issue.
[SOLVED] After a small investigation I believe I found an answer to my question. The answer has been posted below.
After more studying and thinking, I have arrived at an explanation that seems more reasonable, not just one based on specific address examples.
First, you need some basic background on caches: tags, sets, and line allocation.
It is clear from the Linux kernel code that colour_off is measured in units of cache_line_size(): colour_off is the basic offset unit, and colour is the number of such offsets available, both kept in struct kmem_cache.
int __kmem_cache_create(struct kmem_cache *cachep, unsigned long flags)
{
        ...
        cachep->align = ralign;
        cachep->colour_off = cache_line_size();   /* colour_off's unit is cache_line_size */
        /* Offset must be a multiple of the alignment. */
        if (cachep->colour_off < cachep->align)
                cachep->colour_off = cachep->align;
        ...
        err = setup_cpu_cache(cachep, gfp);
        ...
}
https://elixir.bootlin.com/linux/v4.6/source/mm/slab.c#L2056
So we can analyse it in two cases.
The first case is cache size > slab size.
Here slab 1, slab 2, slab 3, ... mostly have no possibility of colliding with each other, simply because the cache is big enough; only slabs far enough apart to wrap around the cache size, such as slab 1 vs. slab 5, can collide. So in this case it is not so clear that the colouring mechanism improves performance. I will not spell out the slab 1 vs. slab 5 collision here; I am sure you will work it out after reading the second case below.
The second case is slab size > cache size.
Think of the slab as divided into cache-line-sized slots, one colour_off each (the blank lines in the original answer's diagram). Because slab 2 starts one colour_off later than slab 1, slab 1 and slab 2 no longer have any possibility of colliding on the lines that the offset shifts apart (the ticked lines in the diagram), and the same holds for slab 2 vs. slab 3.
So the colouring mechanism de-conflicts a couple of lines between two adjacent slabs, and even more between slab 1 and slab 3, where the offsets add up: 2 + 2 = 4 lines, as you can count in the diagram.
To summarize, the colouring mechanism improves cache performance (strictly speaking, it only de-conflicts a few colour_off-sized lines at the beginning and end of each slab; the other lines can still collide) by making use of memory that would otherwise sit idle.
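To make the offsets concrete, here is a small Python illustration (not kernel code; the 64-byte line size and 5 colours are made-up numbers) of how successive slabs get the offset of their first object, roughly the way colour_next cycles in mm/slab.c:

CACHE_LINE = 64      # assumed cache_line_size(), i.e. one colour_off
NUM_COLOURS = 5      # assumed number of available colours

def first_object_offsets(num_slabs):
    """Offset of the first object in each successive slab."""
    offsets = []
    colour_next = 0
    for _ in range(num_slabs):
        offsets.append(colour_next * CACHE_LINE)        # offset = colour * colour_off
        colour_next = (colour_next + 1) % NUM_COLOURS   # cycle through the available colours
    return offsets

print(first_object_offsets(8))   # [0, 64, 128, 192, 256, 0, 64, 128]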
I think I got it; the answer is related to associativity.
A cache is divided into a number of sets, and each set can only hold memory blocks whose addresses map to it. For example, with eight sets, set 0 would hold blocks whose block address is a multiple of 8, set 1 would hold blocks whose block address leaves a remainder of 1 when divided by 8, and so on. The reason for this is to boost cache performance: instead of searching the whole cache for every address, only one set has to be searched.
Now, from the link Understanding CPU Caching and performance
From page 377 of Hennessy and Patterson, the cache placement formula is as follows:
(Block address) MOD (Number of sets in cache)
Let's take memory block address 0x10000008 (from slab X with colour C) and memory block address 0x20000009 (from slab Y with colour Z). For most values of N (the number of sets in the cache), <address> MOD <N> yields a different value for the two addresses, hence a different set caches the data. If the addresses had the same least-significant bits (for example 0x10000008 and 0x20000008), then for most values of N the calculation would yield the same value, and the blocks would collide in the same cache set.
So, by keeping a different offset (colour) for the objects in different slabs, objects from different slabs will tend to reach different sets in the cache rather than colliding in the same set, and overall cache performance is increased.
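Here is a small numeric illustration of that placement formula. The slab base addresses, the 64-byte line size and the 1024 sets are made-up values, and the colour offset is a whole cache line, since colour_off is a multiple of cache_line_size():

LINE = 64            # assumed cache line / block size in bytes
NUM_SETS = 1024      # assumed number of sets in the cache

def cache_set(addr):
    """(Block address) MOD (Number of sets in cache)."""
    return (addr // LINE) % NUM_SETS

slab_x_base = 0x10000000      # made-up slab base addresses
slab_y_base = 0x20000000
obj = 5 * 32                  # the same object index (32-byte objects) in both slabs

# Without colouring: the same offset within each slab lands in the same set -> collision.
print(cache_set(slab_x_base + obj), cache_set(slab_y_base + obj))           # 2 2

# With colouring: slab Y starts one cache line later -> a different set.
print(cache_set(slab_x_base + obj), cache_set(slab_y_base + LINE + obj))    # 2 3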
EDIT: Furthermore, if the cache is a direct-mapped one, then according to the Wikipedia article CPU Cache there is no cache replacement policy at all, and the modulo calculation itself yields the cache block in which the memory block will be stored:
Direct-mapped cache
In this cache organization, each location in main memory can go in only one entry in the cache. Therefore, a direct-mapped cache can also be called a "one-way set associative" cache. It does not have a replacement policy as such, since there is no choice of which cache entry's contents to evict. This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable. Let x be block number in cache, y be block number of memory, and n be the number of blocks in cache, then mapping is done with the help of the equation x = y mod n.
Say you have a 256 KB cache and it uses a super-simple algorithm where it does cache line = (real address AND 0x3FFFF).
Now if you have slabs starting on each megabyte boundary then item 20 in Slab 1 will kick Item 20 of Slab 2 out of cache because they use the same cache line tag.
By offsetting the slabs it becomes less likely that different slabs will share the same cache line tag. If Slab 1 and Slab 2 both hold 32 byte objects and Slab 2 is offset 8 bytes, its cache tags will never be exactly equal to Slab 1's.
I'm sure I have some details wrong, but take it for what it's worth.
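Following that simplified model (the 256 KB mask, the megabyte-aligned slab bases, the 32-byte objects and the 8-byte offset are all just the numbers from the example above), the collision and the effect of the offset look like this in code:

MASK = 0x3FFFF                    # 256 KB direct-mapped cache: "cache line = real address AND 0x3FFFF"

def cache_tag(addr):
    return addr & MASK

slab1_base = 1 * 1024 * 1024      # slabs starting on megabyte boundaries
slab2_base = 2 * 1024 * 1024
item = 20 * 32                    # item 20, with 32-byte objects

# Without colouring: item 20 of both slabs maps to the same cache line tag.
print(hex(cache_tag(slab1_base + item)), hex(cache_tag(slab2_base + item)))       # 0x280 0x280

# With slab 2 offset by 8 bytes, the tags are never exactly equal to slab 1's.
print(hex(cache_tag(slab1_base + item)), hex(cache_tag(slab2_base + 8 + item)))   # 0x280 0x288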