A page replacement algorithm should minimize the number of page faults

I'm currently reading about page replacement algorithms and have come across a question that I find difficult. The question is:
Page replacement algorithm should minimize the number of page faults.
Description:
We can achieve this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames. We can associate with each page frame a counter of the number of pages associated with that frame. Then, to replace a page, we can search for the page frame with the smallest counter.
b) How many page faults occur for your algorithm for the following reference string with four page frames?
1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2
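One way to work out part (b) is to simulate the counter-based policy directly. The sketch below follows my reading of the description: each frame keeps a counter of how many pages have ever been loaded into it, a fault fills a free frame first, and otherwise the frame with the smallest counter is replaced. The tie-breaking rule (lowest frame index) is my assumption; the exercise doesn't state one, and a different tie-break could give a different count.

```python
def count_faults(refs, n_frames):
    """Simulate the counter-based replacement policy described above.

    Each frame tracks how many pages have ever been loaded into it; on a
    fault with no free frame, the frame with the smallest counter is
    replaced (ties broken by lowest frame index -- an assumption).
    """
    frames = [None] * n_frames   # page currently held by each frame
    counters = [0] * n_frames    # pages ever loaded into each frame
    faults = 0
    for page in refs:
        if page in frames:       # hit: nothing changes
            continue
        faults += 1
        if None in frames:       # a free frame is still available
            victim = frames.index(None)
        else:                    # evict the frame with the smallest counter
            victim = counters.index(min(counters))
        frames[victim] = page
        counters[victim] += 1
    return faults

refs = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2]
print(count_faults(refs, 4))  # → 13 with this tie-breaking rule
```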

Related

OpenAI Gym save_video getting memory errors

I'm running the LunarLander-v2 gym environment and have successfully trained a policy using PPO. I saw in the gym API that there is a function to save videos to a file. I need this because I am running my code on a server without a GUI. I've also looked into the source code of save_video and noticed that it saves videos in a "cubic" fashion: videos of episodes 0, 1, 8, 27, 64, etc. will be saved. I followed the example in the API but excluded step_starting_index, as I don't think I need it. My code is below:
frames = env.render()
save_video(
    frames,
    "videos",
    fps=240,
    episode_index=n,
)
Episodes 0 and 1 get saved successfully, but after that (episode 2, 3, or 4) the Python process gets Killed, which suggests a memory error. If I comment out save_video and let it run, the evaluations up to episode 10 and beyond all succeed. So the problem seems to lie with save_video, but I can't identify it.
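For reference, the "cubic" schedule mentioned above is gym's default episode trigger. A pure-Python sketch of it, based on my reading of gym's capped_cubic_video_schedule (treat the exact details as an assumption):

```python
def capped_cubic_video_schedule(episode_id, cap=1000):
    """Save an episode if its index is a perfect cube (0, 1, 8, 27, 64, ...);
    past the cap, save every `cap`-th episode instead."""
    if episode_id < cap:
        return int(round(episode_id ** (1.0 / 3))) ** 3 == episode_id
    return episode_id % cap == 0

print([e for e in range(100) if capped_cubic_video_schedule(e)])
# → [0, 1, 8, 27, 64]
```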

Why free GPU is not utilisable?

I run a deep learning program in PyTorch using nn.DataParallel. Since I have eight GPUs available, I am passing device_ids=[0, 1, 2, 3, 4, 5, 6, 7].
My program runs on the first seven GPUs ([0, 1, 2, 3, 4, 5, 6]) but not on the last GPU, whose index is 7.
I have no idea why. What could be the reason a GPU goes unused even though it is free?
I found the solution: I am using a batch size of 7, so only 7 GPUs are used. If I change it to eight, all the GPUs are used.
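That behaviour follows from how nn.DataParallel scatters its input: the batch is chunked along dimension 0, so a batch of 7 yields only 7 chunks and the eighth GPU gets no work. Below is a small sketch of the chunking arithmetic, mimicking torch.chunk (my assumption about the split rule, not PyTorch's actual code):

```python
def chunk_sizes(batch_size, n_devices):
    """Mimic torch.chunk along dim 0: chunks of ceil(batch/n), which gives
    fewer than n_devices chunks whenever batch_size < n_devices."""
    per_chunk = -(-batch_size // n_devices)  # ceiling division
    sizes = []
    remaining = batch_size
    while remaining > 0:
        sizes.append(min(per_chunk, remaining))
        remaining -= per_chunk
    return sizes

print(chunk_sizes(7, 8))  # → [1, 1, 1, 1, 1, 1, 1]  (only 7 GPUs get work)
print(chunk_sizes(8, 8))  # → [1, 1, 1, 1, 1, 1, 1, 1]
```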

How to configure a list of CPUs using chcpu?

There is a command chcpu. I know how to use it with a single CPU number. How do I use it with a list or set of CPU numbers?
chcpu -e cpu-list
How do I write this cpu-list?
From man chcpu
Some options have a cpu-list argument. Use this argument to specify a comma-separated list of CPUs. The list can contain individual CPU addresses or ranges of addresses. For example, 0,5,7,9-11 makes the command applicable to the CPUs with the addresses 0, 5, 7, 9, 10, and 11.
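To make the format concrete, here is a small parser for the cpu-list syntax described in the man page (an illustration only; chcpu does this parsing itself):

```python
def parse_cpu_list(spec):
    """Expand a chcpu-style cpu-list such as '0,5,7,9-11' into
    the individual CPU addresses it denotes."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

print(parse_cpu_list("0,5,7,9-11"))  # → [0, 5, 7, 9, 10, 11]
```

So chcpu -e 0,5,7,9-11 enables CPUs 0, 5, 7, 9, 10, and 11 in one invocation.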

What's an alternative to os.loadavg() for Windows

Since os.loadavg() returns [0, 0, 0] on Windows, is there any way of getting the CPU's average load over 1-, 5-, and 15-minute intervals on Windows-based systems without having to sample every few seconds and save the results yourself?
Digging up this old topic:
Use loadavg-windows from npm to enable os.loadavg() on Windows.
Wait a few seconds to see the first result different from 0.
You can use the windows-cpu npm package to get similar results.
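If you do end up sampling yourself, note that Unix load averages are exponentially damped moving averages, which are cheap to maintain from periodic samples. A sketch of one update step (the decay formula follows the standard kernel approach; applying it to Windows samples is my assumption about what you would want):

```python
import math

def update_loadavg(avg, current_load, interval_s, period_s):
    """One EWMA step of a Unix-style load average: decay the old value
    by exp(-interval/period), then mix in the new sample."""
    alpha = math.exp(-interval_s / period_s)
    return avg * alpha + current_load * (1.0 - alpha)

# Feed a constant load of 1.0 into a 1-minute average, sampling every 5 s:
avg = 0.0
for _ in range(60):                      # 300 s worth of samples
    avg = update_loadavg(avg, 1.0, 5, 60)
print(round(avg, 3))  # → 0.993 (converging toward 1.0)
```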

YSlow: Incorrect number of HTTP requests?

A page I'm looking at optimising has around 83-87 HTTP requests as measured by Chrome dev tools and WebPageTest (the exact number is slightly variable depending on affiliate libraries).
However, the YSlow Chrome extension claims there are only 51 requests. Likewise, YSlow run from ShowSlow is showing 60 requests.
Differences between the YSlow measurements aside, it does look like YSlow is counting HTTP requests incorrectly, which undermines my confidence in its recommendations and grade.
The page in question does load some components post-onload (which YSlow doesn't measure), but only 10 components are loaded post-load, which doesn't account for the 20-30 request discrepancy with the other tools.
Does anyone know why this might be happening, or can anyone suggest how to debug or diagnose it?
I took a look at the link you suggested (bally.co.uk) to compare YSlow with WebPageTest. YSlow reported 56 components and WebPageTest 76. Here's the breakout:
Doc/html: yslow 1, wpt 3, diff: 2 0-byte files
Javascript: yslow 37, wpt 39, diff: 2 0-byte files
CSS: yslow 5, wpt 5
Images: yslow 12, wpt 19, diff: 7 1x1 beacon gifs
Favicon: yslow 1, wpt 1
JSON: yslow 0, wpt 7, diff: 7 dynamically loaded
Font: yslow 0, wpt 2, diff: 2 dynamically loaded
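As a quick arithmetic check, the per-type counts above do sum to each tool's reported total:

```python
# Per-type request counts transcribed from the breakout above
yslow = {"doc": 1, "js": 37, "css": 5, "img": 12, "favicon": 1, "json": 0, "font": 0}
wpt   = {"doc": 3, "js": 39, "css": 5, "img": 19, "favicon": 1, "json": 7, "font": 2}

print(sum(yslow.values()), sum(wpt.values()))  # → 56 76
```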
My conclusion goes back to the link you provided to the YSlow FAQ. The differences all seem to be dynamic requests that are either 0-byte or very small (like the 1x1 gifs). I think it's due to the combined DOM and network sniffing approach that YSlow takes.
Also if I compare the total size loaded for the first view, they are very close to each other:
YSlow: 1,683 KB
WebPageTest: 1,711 KB
