How to get the CPU's CurrentClockSpeed, MinClockSpeed, MaxClockSpeed - c#-4.0

Can I get the CPU's CurrentClockSpeed, MinClockSpeed, and MaxClockSpeed?
I have used WMI to do this, but I think the result is not good.
There is no MinClockSpeed, and CurrentClockSpeed and MaxClockSpeed have the same value;
CurrentClockSpeed never changes. My PC's CPU is an i5-2300, which supports Turbo Boost.
Any help will be appreciated.

Update: I cannot get a MinClockSpeed value at all, and CurrentClockSpeed and MaxClockSpeed have the same value on a desktop PC.

Related

Does recursion in Excel give a wrong result on AMD processors?

I tried to use recursion in Excel, and having just switched my laptop from Intel to AMD I noticed a very strange issue.
Enable iterative calculation with a maximum of 100 iterations.
Try to compute, for example, the formula =A1+1 in cell A1.
On my AMD laptop I get 700 (!) instead of the 100 I get on Intel (tried on several PCs). I couldn't try this on another AMD PC - there is none nearby. I tried turning off multithreaded calculation (only 1 core) - same result, 700.
This issue appears only when the formula contains constants. If we operate with references, we get the same result as on Intel.
What could this be?

Linux. Can packets pass libpcap by?

I am writing a Linux program that monitors internet traffic - in other words, how many bytes I have used over some period of time. I use Pcap4J for Java (a libpcap wrapper) and I have a question about it. What happens if my program hasn't processed a packet yet when a new one arrives?
1. Does it slow down the download (upload) rate for the whole OS?
2. Does it skip the new one, so my program will never know that it passed by?
In other words, if I've downloaded 1 GB of data on my computer, how many bytes does my program see: 100%, or can data pass my program by and still reach its destination?
And let me know if it is a bad idea to write a traffic-monitoring app using this library!
Your application loses packets. In your words, they pass it by.
However, if your goal is a metric of how much traffic went in and out of your system in a given time, there are definitely better ways to achieve it.
On Linux you can just write a script that does something like this:
DEVICE=eth0

# interface byte counters as seen by the kernel
RX0=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
TX0=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)

while : ; do
    sleep 5
    RX1=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
    TX1=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
    echo "RX bytes: $(($RX1-$RX0))"
    echo "TX bytes: $(($TX1-$TX0))"
    RX0=$RX1
    TX0=$TX1
done
You can adjust the interval or take the device as a parameter; I think you'll get the idea.

How to measure GPU usage per process in Windows using python?

I would like to measure the GPU usage per process as done in Windows taskmgr.exe, but I have encountered several problems when attempting to use the pyNVML library. As a result, I have a few questions.
First, is it currently possible to measure the exact GPU usage per process in Windows using Python? I have already tried the nvidia-smi query, but this doesn't seem to show memory used and utilization percent for each process.
Second, if it is possible to measure GPU usage in this way using Python, I would like to measure and show it in a similar fashion as done in the Windows taskmgr.exe of Windows 10.
Here is my code so far:
from pynvml import *   # NVML Python bindings

nvmlInit()
deviceCount = nvmlDeviceGetCount()
for device_id in range(deviceCount):
    hd = nvmlDeviceGetHandleByIndex(device_id)
    # processes that currently have a graphics context on this GPU
    cps = nvmlDeviceGetGraphicsRunningProcesses(hd)
    for ps in cps:
        pp = ps.pid
        try:
            # executable path reported by the driver, e.g. ...\chrome.exe
            name = str(nvmlSystemGetProcessName(ps.pid))
            n = name.split("\\")
            process_name = n[len(n) - 1][:-1]
            if process_name == 'chrome.exe':
                print(process_name, pp, ps.usedGpuMemory)
        except:
            # the process may have exited in the meantime
            pass
and my result:
chrome.exe 16688 None
As you can see, this does not reveal the GPU memory usage per process, but I need the information shown in taskmgr's GPU section. (I have no need of visualization.)
My computer specs are Windows 10 Pro, GTX 950, i5-6600.
If this is impossible in Python at the moment, do you have any other recommendations for automatically collecting GPU usage per process?
Thank you.
Check the answer of Jonathan DEKHTIAR here, which explains the reason why it doesn't work.
As a workaround, you can try fetching the values from PowerShell:
Get-Counter -Counter "\GPU Engine(*)\Utilization Percentage"
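If you want to read that counter from code rather than an interactive shell, here is a rough sketch of the idea (shown in C with _popen; in Python, subprocess would play the same role, and the counter path is simply the one quoted above):

#include <stdio.h>

int main(void)
{
    /* shell out to PowerShell and dump the counter samples it prints */
    FILE *p = _popen(
        "powershell -NoProfile -Command "
        "\"Get-Counter -Counter '\\GPU Engine(*)\\Utilization Percentage'\"",
        "r");
    char line[512];

    if (p == NULL)
        return 1;

    /* Get-Counter reports one reading per engine instance; a real tool
     * would parse the instance name (it contains the PID) and the value. */
    while (fgets(line, sizeof line, p))
        fputs(line, stdout);

    return _pclose(p);
}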

Bit Bang with SPI (fwrite, write performance)

I have bit-bang code that lets me send about 4 MB of data over SPI lines. It's embedded code for custom hardware running a Linux kernel.
The problem is that it takes a VERY long time (4 hours), most likely because the kernel is doing other stuff as well. Basically my code is something like this (approx.):
unsigned char data = 0xFF;
int i;

BB_SPI_Init();
SPI_start();                      /* activates chip select (enable) */
for (i = 0; i < 8; i++) {
    if (data & 0x80) {
        gpio_set_value(SPI_MOSI, 1);
    } else {
        gpio_set_value(SPI_MOSI, 0);
    }
    /* send clock pulse */
    gpio_set_value(SPI_CLK, 0);
    gpio_set_value(SPI_CLK, 1);
    data <<= 1;
}
SPI_stop();                       /* deactivates chip select (disable) */
So it's a very simple bit bang, but I noticed that if I use write() to send data to the Linux GPIO interface /sys/class/gpio/gpioXX/value (where XX is any GPIO number), it takes 4 hours.
But if I use fwrite() to send to the same device, it takes 3 hours.
BUT, if I use write() only for the enables (SPI_stop() and SPI_start()) and fwrite() for writing to MOSI and CLK, it takes only 1 hour and 30 minutes.
So, with that as a base, could someone explain to me how that happens? My guess is that it has to do with how the threads are handled, and that every software cycle resolves two threads (fwrite() and write()) instead of only one of the functions being used, but I'm still investigating. Can someone give me any kind of information? Is there a better way to handle this?
FYI
I can't use the kernel SPI driver because the hardware is connected to GPIOs and using bit bang is a mandatory requirement, but I accept any suggestion.
Thanks in advance
EDIT
Hey guys, thanks for your comments. It turns out I had a (very dumb) problem: I was creating the file descriptor every time I was about to send data to /sys/class/gpio/gpioXX/value, which is why it was so slow. I also turned off some other programs, and the transfer skyrocketed to 3 minutes instead of 1 hour 30 minutes (with write()). Thanks, and sorry about it.
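For reference, the fix boils down to opening the value files once and reusing the descriptors, roughly like this sketch (paths and pin numbers are just examples):

#include <fcntl.h>
#include <unistd.h>

static int fd_mosi, fd_clk;

/* open the exported GPIO value files once, before the transfer starts */
static int bb_open(void)
{
    fd_mosi = open("/sys/class/gpio/gpio23/value", O_WRONLY);  /* example pin */
    fd_clk  = open("/sys/class/gpio/gpio24/value", O_WRONLY);  /* example pin */
    return (fd_mosi < 0 || fd_clk < 0) ? -1 : 0;
}

/* still one write() per edge, but the descriptor stays open for all 4 MB */
static ssize_t gpio_write(int fd, int value)
{
    return write(fd, value ? "1" : "0", 1);
}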
I think the spi-bitbang driver is the best solution if you are looking for performance. Doing the bit bang from user space is painful because you need at least 3 system calls for each bit of data, and a system call is an expensive operation.
FYI Can't use kernel driver spi because the hardware was connected to gpios and it is a mandatory requirement to use bit bang but i accept any suggestion
That's why the spi-bitbang driver exists: you can easily configure it to work with your GPIOs.
Then, once you have the spi-bitbang driver in place, you can write a char device that accepts your entire block of data as input and transfers it in kernel space. With this solution you will get the maximum performance a bit-banged interface can offer.
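For illustration, on an older kernel (where spi_gpio_platform_data still carries the pin numbers; newer kernels describe the same thing with a compatible = "spi-gpio" device-tree node) the board-side wiring could look roughly like this - the GPIO and bus numbers here are made up:

#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/spi/spi_gpio.h>

#define BB_SPI_BUS 2                          /* example bus id */

static struct spi_gpio_platform_data bb_spi_pdata = {
    .sck            = 23,                     /* example GPIO numbers */
    .mosi           = 24,
    .miso           = 25,
    .num_chipselect = 1,
};

static struct platform_device bb_spi_device = {
    .name = "spi_gpio",                       /* binds to the spi-gpio driver */
    .id   = BB_SPI_BUS,
    .dev  = { .platform_data = &bb_spi_pdata },
};

/* after this, a client registered on bus BB_SPI_BUS (e.g. spidev) can push
 * a whole buffer with a single request instead of several syscalls per bit */
static int __init bb_spi_register(void)
{
    return platform_device_register(&bb_spi_device);
}
device_initcall(bb_spi_register);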

Difference between enable_irq_wake and enable_irq

My driver uses an IRQ which can wake up the device. Is enable_irq_wake enough, or do I need to call enable_irq first and then enable_irq_wake?
I looked into the definitions of these functions but wasn't able to understand much.
I tried both combinations and neither seems to have any effect - that is, using only enable_irq_wake in one case, and enable_irq followed by enable_irq_wake in the other.
Thank you
If you check here, you'll see that enable_irq_wake invokes set_irq_wake_real, which does not enable the IRQ.
Furthermore, take this driver for example: it calls enable_irq/disable_irq at open/close, while it calls enable_irq_wake/disable_irq_wake at suspend/resume.
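As a rough sketch of that pattern (names here are placeholders, not taken from that driver):

#include <linux/device.h>
#include <linux/interrupt.h>

struct my_chip {
    int irq;
};

/* normal operation: the interrupt only fires while the device is open */
static int my_open(struct my_chip *chip)
{
    enable_irq(chip->irq);
    return 0;
}

static int my_close(struct my_chip *chip)
{
    disable_irq(chip->irq);
    return 0;
}

/* system sleep: mark the already-enabled irq as a wakeup source */
static int my_suspend(struct device *dev)
{
    struct my_chip *chip = dev_get_drvdata(dev);

    enable_irq_wake(chip->irq);
    return 0;
}

static int my_resume(struct device *dev)
{
    struct my_chip *chip = dev_get_drvdata(dev);

    disable_irq_wake(chip->irq);
    return 0;
}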
