Will updating the Ubuntu 22.04 kernel on an Alder Lake processor resolve parallelism problems without breaking my Nvidia drivers?

I recently bought a Dell XPS 15 with an i9-12900HK and installed Ubuntu 22.04 LTS as the OS.
I wrote a simple OpenMP program that should have linear speedup in the number of threads (the code is correct, because it behaves as expected when I run it on a cluster), but on my laptop the speedup stops at 6 threads, even though my processor has 20 threads. I did some research and read that kernel 5.15 is not optimised for the latest Intel processors because it makes poor use of the P and E cores.
But I also read that it may be risky to update the kernel to a newer version like 5.17 or 5.18, because my RTX 3050 Ti drivers may not be compatible with it.
Can I update the kernel safely? Will it resolve my parallelism problem? What method should I use to update my kernel?
I tried looking at forums and docs, but much of the available documentation is from third parties and I don't know whether I can trust it.
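For what it's worth, a minimal OpenMP scaling check along the lines of what is described above (a hypothetical stand-in, not the original program) looks like this; compile with gcc -O2 -fopenmp and watch where the speedup flattens:

#include <omp.h>
#include <stdio.h>

int main(void) {
    const long N = 200000000L;               /* purely CPU-bound work */
    int max_threads = omp_get_max_threads();
    for (int t = 1; t <= max_threads; t++) {
        double start = omp_get_wtime();
        double sum = 0.0;
        #pragma omp parallel for num_threads(t) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += (double)i * 1e-9;
        double elapsed = omp_get_wtime() - start;
        printf("%2d threads: %6.3f s (checksum %.1f)\n", t, elapsed, sum);
    }
    return 0;
}

If the time per run stops shrinking after 6 threads on 5.15 but keeps improving on a newer kernel, that points at P/E-core scheduling rather than at the program itself.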

Related

Emulating a heterogenous system, like an ARM Processor with P and E Cores

I'm trying to emulate a processor that consists of cores with different maximum frequencies per core, like ARM processors or newer Intel processors that have a couple of performance cores and efficiency cores.
I tried it with QEMU, but I didn't get far. The only thing I found was qemu-system-aarch64, where you can configure cores per die and die count using NUMA options, but I didn't find a possibility to change the frequency or core architecture for a specific die. Is it even possible with QEMU, or is there an alternative? Preferably the emulation should be able to run Linux.
For clarification, I'm trying to show that on a heterogeneous system, i.e. a processor with different core speeds, a certain framework works better than another one.
Thanks to Nate, I found Intel Simics, which is able to simulate heterogeneous systems.

How to utilize the High Performance cores on Apple Silicon

I have developed a macOS app which relies heavily on multithreading (a call center simulator). It runs fine on my iMac 2019 and fills up all cores nicely. In my test scenario it simulates approximately 1.4 million telephone calls in total across 100 iterations, each iteration submitted as a dispatch item on a parallel dispatch queue.
Now I have bought a new Mac mini with the M1 Apple Silicon chip and I was eager to see how the performance develops on that test machine. Well, it's not bad, but not as good as I expected:
System                                                       Duration
iMac 2019, Intel 6-core i5, 3.0 GHz, macOS Catalina 10.15.7  19.95 s
Mac mini, M1 8-core, macOS Big Sur 11.2, Rosetta 2           26.85 s
Mac mini, M1 8-core, macOS Big Sur 11.2, native ARM          17.07 s
Investigating a little further, I noticed that at the start of the simulation all 8 cores of the M1 Mac are filled up properly, but after a few seconds only the 4 high-efficiency cores are still being used.
I have read the Apple docs „Optimize for Apple Silicon with performance and efficiency cores" and double-checked that the dispatch queue for the iterations is set up properly:
let simQueue = DispatchQueue.global(qos: .userInitiated)
But no success. After a few seconds of running, the high-performance cores are obviously no longer utilized. I even tried to set up the queue with the qos set to .userInteractive, but that didn't help either. I also flagged the dispatch items with the proper qos, but that didn't change anything. It looks to me like other apps (e.g. Xcode) do utilize the high-performance cores even for a longer time.
Does anybody know how to force an M1 Mac to utilize the high-performance cores?
"M1 8 core" is really "M1 4 performance + 4 power saving cores". I expect it to have be a bit more performance than an Intel 6 core, but not much. Exactly has you see, 15% faster than six Intel cores or about as fast as 7 Intel cores would be. The current M1 chips are low end processors. "A bit better than Intel six cores" is quite good.
Your code must be running on the performance cores, otherwise there would be no chance at all to come close to the Intel performance. In that graph, nothing tells you which cores are used.
What happens most likely is that all cores start running, each trying to do one eighth of the work, and after about 8 seconds the performance cores have their work done. Then the power saving cores move their work to the performance cores. And you are just misinterpreting the image as only low performance cores doing the work.
I would guess that Apple has put a preference on using efficiency cores over performance for many reasons. Battery life being one, and most likely thermal reasons as well. This is the big question mark with a SoC that originally was designed for smartphones and tablets. MacOS is a much heavier OS then IOS or iPad OS. Apple most likely felt that performance cores were only needed in the cases where maximum throughput was needed. No doubt, I think some including myself with a M1 Mac Mini would like a way to adjust this balance between efficiency and performance cores. Personally overall, I would prefer all cores be capable of switching between efficiency and performance such as in Intel's Speed shift technology. This may come along with the M1's advancements in terms of Mac Pro models and other Pro models.

Slower performance of kernel-level memcpy in 4.4.x kernel on PowerPC architecture

I have a board with a PowerPC processor and some DSP cores.
I was running a Wind River release based on the 2.6.34 Linux kernel on it. Recently I built an OS for my board based on the 4.4.106 kernel and booted it up. With this I have a problem booting up my DSP cores.
On debugging, I figured out that memcpy() (the kernel-space memcpy, not the user-space libc memcpy) in the 4.4.106 kernel takes slightly more time than in 2.6.34.
Further, I replaced the memcpy implementation in arch/powerpc/lib/memcpy_64.S with the older one from 2.6.x (which is a bad idea, I know), but I still did not see any performance improvement.
Can anyone let me know what other factors could be influencing the performance of memcpy in kernel space?
I can think of preemption, which I am testing right now, and I have disabled all configurations related to kernel stats. Is there anything else I need to look into to get the needed performance improvement?
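One way to compare the two kernels independently of the DSP boot path is a tiny test module that times memcpy() in kernel space on each of them. This is only a sketch (the buffer size and iteration count are arbitrary assumptions); what matters is the relative number between 2.6.34 and 4.4.106:

/* memcpy_bench.c - hypothetical test module; build against each kernel tree. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/ktime.h>
#include <linux/string.h>

#define BUF_SIZE   (1 * 1024 * 1024)   /* 1 MiB; adjust towards the DSP image size */
#define ITERATIONS 100

static int __init memcpy_bench_init(void)
{
    void *src, *dst;
    ktime_t t0, t1;
    int i;

    src = kmalloc(BUF_SIZE, GFP_KERNEL);
    dst = kmalloc(BUF_SIZE, GFP_KERNEL);
    if (!src || !dst) {
        kfree(src);
        kfree(dst);
        return -ENOMEM;
    }
    memset(src, 0xA5, BUF_SIZE);

    t0 = ktime_get();
    for (i = 0; i < ITERATIONS; i++)
        memcpy(dst, src, BUF_SIZE);
    t1 = ktime_get();

    pr_info("memcpy_bench: %d copies of %d bytes took %lld ns\n",
            ITERATIONS, BUF_SIZE, ktime_to_ns(ktime_sub(t1, t0)));

    kfree(src);
    kfree(dst);
    return 0;
}

static void __exit memcpy_bench_exit(void) { }

module_init(memcpy_bench_init);
module_exit(memcpy_bench_exit);
MODULE_LICENSE("GPL");

Loading the module (insmod memcpy_bench.ko) on both kernels and comparing the dmesg output gives a like-for-like measurement of the kernel-space copy itself.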

Performance check between shared cluster and laptop with Intel(R) Core™ i7

I am not really familiar with shared clusters, but I am assuming performance should not differ much in terms of completing a single task when compared with a laptop processor. I have a C++ code which I ran on my laptop with an Intel(R) Core™ i7-4558U 2.80 GHz CPU and 16.0 GB RAM, running 64-bit Windows 10. On the other hand, I have results for the same code from a publication, from tests conducted on a shared cluster with an Intel Xeon 2.3 GHz CPU, a 4 GB memory limit, and a Linux operating system. The program uses CPLEX as the solver: my laptop has IBM CPLEX 12.7 whereas the previous runs used IBM CPLEX 12.4 (CPLEX, 2012). My runs seem to take 300 times longer than the reported results of the previous run. Does this much difference make sense? If so, what could be the driver behind it?
This could be attributed to performance variability (see, for example, section 5 of the MIPLIB 2010 paper here). In a nutshell, minor differences in problem formulation (e.g., order of constraints, input format, etc.), or running on different platforms, can have a great effect on the time to solve. With CPLEX 12.7, you can use the interactive optimizer to help you evaluate variability.
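One way to get a feel for that variability is to solve the same model several times with different random seeds. Below is only a rough sketch against the CPLEX callable (C) library: the model file name is a placeholder and the parameter name CPX_PARAM_RANDOMSEED should be checked against the headers of your 12.7 installation.

#include <stdio.h>
#include <ilcplex/cplex.h>

int main(void)
{
    int status;
    int seed;
    for (seed = 1; seed <= 5; seed++) {
        CPXENVptr env = CPXopenCPLEX(&status);
        CPXLPptr  lp  = CPXcreateprob(env, &status, "variability");
        CPXreadcopyprob(env, lp, "model.lp", NULL);      /* placeholder model file */
        CPXsetintparam(env, CPX_PARAM_RANDOMSEED, seed); /* vary only the seed */

        double t0, t1;
        CPXgettime(env, &t0);
        CPXmipopt(env, lp);
        CPXgettime(env, &t1);
        printf("seed %d: %.2f s\n", seed, t1 - t0);

        CPXfreeprob(env, &lp);
        CPXcloseCPLEX(&env);
    }
    return 0;
}

If the solve times already spread widely across seeds on one machine, a large gap between two different machines, operating systems and CPLEX versions is much less surprising.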

CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN not supported in the Intel/AMD OpenCL CPU runtime

Conditions:
I installed AMD OpenCL version AMD-APP-SDK-v2.8-lnx64 and Intel OpenCL version intel_sdk_for_ocl_applications_xe_2013_r2_sdk_3.1.1.11385_x64 (version identification couldn't be more complex) according to the description, on an HPC server with a dual-socket Xeon E5-2650, a Xeon Phi coprocessor, 64 GB host memory and Red Hat Enterprise Server 6.4.
Problem description:
I would like to do device fission with OpenCL to get around the NUMA issue. Unfortunately the device (Intel CPU), or maybe the Linux kernel, doesn't seem to support CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN. I tried both Intel OpenCL and AMD OpenCL. Although the AMD OpenCL device query says that it supports the affinity-domain option, it actually doesn't: when I try to run code with CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN, clCreateSubDevices() returns error code -30. According to a forum post, I guess this is a bug in the current Intel OpenCL driver.
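For context, the failing call looks roughly like the sketch below (error handling trimmed, and it assumes the first platform exposes the CPU device); -30 from clCreateSubDevices() is CL_INVALID_VALUE, and querying CL_DEVICE_PARTITION_PROPERTIES first shows what the runtime actually claims to support:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id   cpu;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);

    /* What partition types does the runtime report? */
    cl_device_partition_property supported[8] = {0};
    clGetDeviceInfo(cpu, CL_DEVICE_PARTITION_PROPERTIES,
                    sizeof(supported), supported, NULL);

    /* Ask for one sub-device per NUMA domain. */
    const cl_device_partition_property by_numa[] = {
        CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN,
        CL_DEVICE_AFFINITY_DOMAIN_NUMA,
        0
    };
    cl_device_id subdevices[2];
    cl_uint      count = 0;
    cl_int err = clCreateSubDevices(cpu, by_numa, 2, subdevices, &count);
    printf("clCreateSubDevices: err=%d, sub-devices=%u\n", err, (unsigned)count);
    return 0;
}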
Potential solution:
I thought that if I could select the first 16 parallel compute cores (8 cores + 8 hyper-threads) out of the total 32, those would map to the first socket. Unfortunately, Intel OpenCL randomly distributes the 16 parallel compute cores across the 32 cores. AMD OpenCL, on the other hand, selects the first 16 parallel compute cores, but its OpenCL compiler does a poor job on the kernel I'm running. So the no-free-lunch theorem applies here as well.
Questions:
Is there any way to specify which parallel compute cores the OpenCL should use for the computations?
Is there any way to overcome this NUMA issue with OpenCL?
Any comments regarding experiences with NUMA affinity are welcome.
Thank you!
UPDATE
A partial workaround, only applicable to single-socket testing:
(In Linux) Disable all cores of one NUMA node, so the OpenCL ICD can only choose hardware threads from the other NUMA node. E.g. on a 2-socket system with 32 hardware threads:
sudo sh -c "echo 0 > /sys/devices/system/cpu/cpu31/online"
....
sudo sh -c "echo 0 > /sys/devices/system/cpu/cpu16/online"
I'm not sure whether this hack has side effects, but so far it seems to work (for testing at least).
