I want to use Krita on Ubuntu 20.04 to open a very large image (26 GB). During the opening process, at times only one core of my CPU (Xeon W-3323 @ 3.5 GHz) was at 100%.
How can I configure it for the best performance across all cores (multi-core)? Or is it not important to do that?
I'm trying to emulate a processor which consists of cores with different maximum frequencies per core, like ARM processors or newer Intel processors which have a couple of Performance cores and Efficiency cores.
I tried it with QEMU, but I didn't get far. The only thing I found was qemu-system-aarch64, where you can configure cores per die and the die count using NUMA, but I didn't find a possibility to change the frequency or core architecture for a specific die. Is it even possible with QEMU, or is there an alternative? Preferably the emulation should be able to run Linux.
For clarification, I'm trying to show that on a heterogeneous system, i.e. a processor with different core speeds, a certain framework works better than another one.
Thanks to Nate I found Intel Simics which is able to simulate heterogeneous systems.
I have been facing an intriguing problem lately.
I am working on a project with a pretty heavy front end in AngularJS with about a hundred Jest tests. I have 16 GB of RAM, but the project is so heavy that sometimes it fills up the RAM completely, and often the computer cannot handle running the project plus a yarn test at the same time (which takes up 3 to 4 GB of RAM) or a Cypress workflow test without big latency problems.
To avoid big freezes (up to several minutes) and crashes, I increased the swap to 16 GB.
That being said, for various reasons I had to work on the project on Windows 10 and faced none of these problems.
Everything runs smoothly and the graphical interface doesn't lag, even with screen sharing, even though the RAM is also completely filled up and the CPU is at 100%.
I am even able to run 20 yarn test processes at the same time without much lag, which seems completely impossible on Linux even with the increased swap.
I've seen that Windows uses RAM compression by default and Linux doesn't, but I only saw up to 549 MB of compressed RAM during my comparisons.
I first thought that it could be a problem with GNOME, which is known to be heavy and sometimes buggy, but I also tested it with KDE and got the same results.
I also heard that Windows allocates special resources to the graphical environment whereas Linux may treat it like any other process, but that alone cannot explain all the problems, because the whole computer freezes on Linux and not on Windows.
So I'm starting to wonder if there is something about memory or process management that Windows does significantly better than Linux.
My config:
Computer model: Dell XPS 15 7590
Processor: Intel Core i7-9750H, 2.6 GHz base, 4.5 GHz max turbo (6 cores, 12 threads)
RAM: 16 GB
Graphics card: GTX 1650M
Screen: 4K 16:9
SSD: NVMe 512 GB
I was facing the same issue on Ubuntu 22.04 with 16 GB of RAM and an Intel i5-12400 processor.
My solution was to limit the maximum number of workers in the Jest config:
"maxWorkers": 4
I have a dual-core board running Linux on which I installed PJSIP (VoIP software). I want to add an echo/noise cancellation algorithm, but I don't want it to run on the same core as PJSIP.
How can I split the use of the cores between the two applications?
It is called CPU affinity. You can set it from the command line using taskset(1), or from your application using sched_setaffinity(2) and sched_getaffinity(2).
The term you are looking for is affinity. See http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html or http://www.glennklockwood.com/hpc-howtos/process-affinity.html. That being said, if you are using a "slow" CPU, you are probably going to look at a real-time scheduler (SCHED_FIFO, SCHED_RR or SCHED_DEADLINE), and if you are using a "fast" CPU you probably don't need to worry about affinity. The probability of you being in the "middle", where affinity will matter but the scheduler won't, is pretty low.
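As a rough illustration of the sched_setaffinity(2) route, here is a minimal sketch, assuming Linux with glibc; pinning the canceler to core 1 and leaving core 0 for PJSIP is just an example layout, not something required by PJSIP itself:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);    /* start from an empty CPU mask */
    CPU_SET(1, &set);  /* allow only core 1            */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* ... run the echo/noise canceler from here, restricted to core 1 ... */
    return EXIT_SUCCESS;
}

The same effect without touching any code would be to launch each binary under taskset, e.g. taskset -c 0 for PJSIP and taskset -c 1 for the canceler.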
Here is my situation:
My company needs to run tests on tons of test samples, but if we start a single process on a Windows PC, the test could last for hours, even days. So we try to split the test set and start one process to test each slice on a multi-core Linux server.
We expect a linear performance improvement for the server solution, but the truth is we could only observe a 2~3 times improvement when the test task is handled by 10~20 processes.
I tried several things to locate the problem:
disable hyper-threading;
use the max-performance power policy;
use taskset to pin each process to a different core;
but no luck, the problem remains.
Why does this happen? What is the root cause: our code, the OS, or the hardware?
Here is the info for my PC and the server:
PC: OS: Win10; CPU: i5-4570, 2 physical cores; RAM: 16 GB
Server: OS: Red Hat 6.5; CPU: E5-2630 v3, 2 physical cores; RAM: 32 GB
Edit:
About the CPU: the server has 2 processors, and each of them has 8 physical cores. Check this link for more information.
About my test: it's handwriting-recognition related (that's why it's a CPU-sensitive task).
About IO: the performance check points do not involve much IO, if logging doesn't count.
We expect a linear performance improvement for the server solution, but the truth is we could only observe a 2~3 times improvement when the test task is handled by 10~20 processes.
This seems very logical considering there are only 2 cores on the system. Starting 10-20 processes will only add some overhead due to task switching.
Also, I/O could be a bottleneck here too, if multiple processes are reading from disk at the same time.
Ideally, the number of running threads should not exceed 2 x the number of cores.
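If it helps to see what that limit works out to on a given machine, here is a minimal sketch, assuming Linux with glibc; whether 2x is the right multiplier depends on how I/O-bound the workload actually is:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* number of CPU cores currently online */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores < 1)
        cores = 1;  /* defensive fallback */

    long max_workers = 2 * cores;  /* rule of thumb from above */
    printf("online cores: %ld, suggested max workers: %ld\n",
           cores, max_workers);
    return 0;
}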
A program I'm working on needs to process certain objects in real time as they arrive from the network. The throughput is good, but I have occasional drops in the input queue due to unexpected delays.
My analysis shows that the source of the delay is most probably outside my program: something like another process being scheduled on my process's CPU core (I set the affinity of the process to a certain core), or a hardware interrupt arriving (perhaps a network interrupt).
My problem is that I don't know the source of the delay for sure. Is there a tool or a method to find out exactly how a CPU core was used during a certain period of time? (For example, telling me that core 0 was used by process 19494 99.1 percent of the time, process 20001 0.8 percent of the time, and process 8110 0.1 percent of the time.)
I use Ubuntu 14.04 Server Edition on an HP server with a Xeon CPU.
It could be the CPU, disk speed, network speed, or memory.
Memory and CPU usage are easy to spot using htop (use the sort option, F6).
Disk speed could be an issue, for example if you use low-energy disks (they slow down when not in use). Do you have a database running on the same system?
Use iotop; it might give a clue.