What's an alternative to os.loadavg() for Windows - node.js

Since os.loadavg() returns [0, 0, 0], is there any way of getting the CPU's average load on Windows-based systems for 1-, 5-, and 15-minute intervals without having to check every few seconds and save the result yourself?

Digging up this old topic.
Use loadavg-windows from npm to enable os.loadavg() on Windows.
Wait a few seconds to see the first result different from 0.

You can use the windows-cpu npm package to get similar results.

Related

OpenAI Gym save_video getting memory errors

I'm running the LunarLander-v2 gym environment and have successfully trained a policy using PPO. I saw in the gym API that there is a function to save videos to a file. I need this because I am running my code on a server and there isn't a GUI. I've also looked into the source code of save_video and noticed that it saves videos on a "cubic" schedule: videos of episode numbers [0, 1, 8, 27, 64, etc.] will be saved. I followed the example in the API but excluded step_starting_index, as I don't think I need it. My code is below:
frames = env.render()
save_video(
    frames,
    "videos",
    fps=240,
    episode_index=n
)
Episodes 0 and 1 are successfully saved, but after that (episode 2, 3, or 4) the Python process gets Killed, which suggests a memory error. If I comment out save_video and let it run, the evaluations up to episode 10 and beyond all succeed. This suggests that there is a problem with save_video, but I can't identify it.
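Not from the original post, just a minimal sketch of the per-episode pattern shown in the gym documentation (not a fix for the crash), assuming gym >= 0.26 or gymnasium, where gym.utils.save_video.save_video exists and render_mode="rgb_array_list" makes env.render() return the frames collected since the last reset; the random-action policy is only a placeholder:

import gym
from gym.utils.save_video import save_video

env = gym.make("LunarLander-v2", render_mode="rgb_array_list")

env.reset()
for episode_index in range(10):
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # placeholder for the trained PPO policy
        _, _, terminated, truncated, _ = env.step(action)
    # save_video only writes the episodes matched by its default (capped cubic) trigger.
    save_video(
        env.render(),
        "videos",
        fps=env.metadata.get("render_fps", 30),
        episode_index=episode_index,
    )
    env.reset()
env.close()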

Why is a free GPU not utilised?

I run a deep learning program in PyTorch using nn.DataParallel. Since I have eight GPUs available, I am passing device_ids=[0, 1, 2, 3, 4, 5, 6, 7].
My program runs on the first seven GPUs [0, 1, 2, 3, 4, 5, 6] but not on the last GPU, whose index is 7.
I have no clue why. What could be the reason for the non-utilization of a GPU even though it is free to use?
I found the solution.
I was using a batch size of 7, so only seven GPUs were used; if I change it to eight, all eight GPUs are used.
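A hedged illustration of why the batch size matters: nn.DataParallel scatters each input batch along dimension 0 across the listed devices, so a batch with fewer samples than devices leaves some GPUs idle. The tiny linear model below is purely hypothetical:

import torch
import torch.nn as nn

# Toy model, only for illustration.
model = nn.DataParallel(nn.Linear(16, 4), device_ids=[0, 1, 2, 3, 4, 5, 6, 7]).cuda()

# A batch of 7 samples is split across at most 7 of the 8 GPUs,
# so device 7 receives nothing and stays idle.
model(torch.randn(7, 16).cuda())

# A batch of 8 (or more) samples gives every GPU at least one sample.
model(torch.randn(8, 16).cuda())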

Mediainfo.dll duration parameter doesn't return seconds

I am working on a Visual C++ project, and I need to get the duration of a movie from a chosen file. I use Mediainfo.dll to retrieve this information (movieFile->General->DurationString;). The problem is that when the duration is more than one hour, I don't get the seconds, i.e. the seconds are always displayed as 00. When the duration is less than one hour, everything is fine. I have also tried movieFile->General->DurationMillis;, which returns the duration in milliseconds, but I still get 00 seconds. Does anyone know what the problem might be?
I don't know which intermediate layer you use, but from MediaInfo, MediaInfo::Get(Stream_General, 0, "Duration") returns a value in milliseconds for sure.
MediaInfo::Get(Stream_General, 0, "Duration/String3") will return duration in "HH:MM:SS.mmm" format.
Jérôme, developer of MediaInfo
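As a rough cross-check in Python rather than C++ (an assumption on my part, using the pymediainfo wrapper around the same library; the file name is hypothetical), the millisecond value can be formatted into hours, minutes and seconds manually:

from pymediainfo import MediaInfo

media_info = MediaInfo.parse("movie.mkv")  # hypothetical input file
for track in media_info.tracks:
    if track.track_type == "General":
        total_ms = float(track.duration)  # "Duration" is reported in milliseconds
        hours, rest = divmod(total_ms / 1000, 3600)
        minutes, seconds = divmod(rest, 60)
        print(f"{int(hours):02d}:{int(minutes):02d}:{seconds:06.3f}")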

Joblib parallel increases time by n jobs

While trying to get multiprocessing to work (and understand it) in Python 3.3, I quickly reverted to joblib to make my life easier. But I am experiencing something very strange (from my point of view). When running this code (just to test if it works):
Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(200000))
It takes about 9 seconds, but increasing n_jobs actually makes it take longer: for n_jobs=2 it takes 25 seconds and for n_jobs=4 it takes 27 seconds.
Correct me if I'm wrong, but shouldn't it be much faster as n_jobs increases? I have an Intel i7-3770K, so I guess my CPU is not the problem.
Perhaps giving my original problem will increase the chance of an answer or solution.
I have a list of 30k+ strings, data, and I need to do something with each string (independently of the other strings), which takes about 14 seconds. This is only a test case to see if my code works; in real applications it will probably be 100k+ entries, so multiprocessing is needed, since this is only a small part of the entire calculation.
This is what needs to be done in this part of the calculation:
data_syno = []
for entry in data:
    w = wordnet.synsets(entry)
    if len(w) > 0:
        data_syno.append(w[0].lemma_names[0])
    else:
        data_syno.append(entry)
The n_jobs parameter is counter-intuitive: the maximum number of cores is used at -1, while 1 uses only one core. At -2 it uses max-1 cores, at -3 it uses max-2 cores, and so on. That's how I read it:
from the docs:
n_jobs: int :
The number of jobs to use for the computation. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one are used.
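Not part of the original answer, just a hedged sketch of how the wordnet loop above could be handed to joblib, assuming NLTK's wordnet corpus is installed and that data is the question's list of strings (newer NLTK versions expose lemma_names() as a method rather than an attribute):

from joblib import Parallel, delayed
from nltk.corpus import wordnet


def first_lemma(entry):
    # Same logic as the loop above: first lemma of the first synset,
    # or the original string when wordnet has no synsets for it.
    w = wordnet.synsets(entry)
    return w[0].lemma_names()[0] if w else entry


# Placeholder for the real 30k+ strings from the question.
data = ["dog", "house", "qwertyuiop"]

# n_jobs=-1 uses all cores. For many tiny tasks the per-task overhead
# (spawning workers, pickling arguments and results) can outweigh the work
# itself, which is why a trivial payload like sqrt(i**2) gets slower, not
# faster, as n_jobs grows; batching heavier work per call helps.
data_syno = Parallel(n_jobs=-1)(delayed(first_lemma)(entry) for entry in data)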

Watir Measure Page Performance

I've found this gem: http://watirwebdriver.com/page-performance/
But I can't seem to understand what this measures:
browser.performance.summary[:response_time]/1000
Does it start measuring from the second I open the browser?
Watir::Browser.new :chrome
Or from the last watir-webdriver command written?
And how can I set when it starts the timer?
I've tried several scripts but I keep getting 0 seconds, which is why I'm not sure.
From what I have read (I have not actually used it on a project), the response_time is the time from starting navigation to the end of the page loading - see Tim's (the gem's author) answer to a previous question. The graphical image on Tim's blog helps in understanding the different values - http://90kts.com/2011/04/19/watir-webdriver-performance-gem-released/.
The gem is for getting performance results of a single response, rather than the overall usage of a browser during a script. So there is no need to start/stop a timer.
If you are getting 0 seconds, it likely means that the response_time is less than 1000 milliseconds (i.e. in Ruby, the integer division 999/1000 gives 0). To make sure you get something non-zero, try:
browser.performance.summary[:response_time]/1000.0
Dividing by 1000.0 ensures that you get decimal values (e.g. 0.013).
