I'm attempting my first Electron app. Does anyone know why I get the error AVDCreateGPUAccelerator: Error loading GPU renderer in the terminal when I run npm start? It doesn't seem to affect anything yet. I've gotten it right from the beginning of my install, following the quick start guide. Will it affect me down the road?
Mac Pro (Late 2013)
2.7 GHz 12-Core Intel Xeon E5
128 GB 800 MHz DDR3 ECC
AMD FirePro D700 6144 MB
Most likely the AVDCreateGPUAccelerator error occurs because the Xeon CPUs used in the Mac Pro do not have a hardware decoder. Core i3/i5/i7 CPUs (in the Mac mini/iMac/MacBook Pro) should work fine.
For a solution to this, please refer to the following answer:
https://stackoverflow.com/a/57405820/10868666
I recently bought a Dell XPS 15 with an i9-12900HK and installed Ubuntu 22.04 LTS as the OS.
I wrote a simple OpenMP program that should show linear speedup in the number of threads (the code is correct: it behaves as expected when I run it on a cluster), but on my laptop the speedup stops at 6 threads, even though my processor has 20 threads. I did some research and read that kernel 5.15 is not optimised for the latest Intel processors because it makes poor use of the P and E cores.
But I also read that it may be risky to update the kernel to a newer version like 5.17 or 5.18, because my RTX 3050 Ti drivers may not be compatible with those kernels.
Can I update the kernel safely? Will it resolve my parallelism problem? What method should I use to update my kernel?
I tried looking at forums and docs, but a lot of the available documentation is from third parties and I don't know whether I can trust it.
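Before touching the kernel, it may be worth checking where the threads actually land: the i9-12900HK has 6 P-cores and 8 E-cores, so a plateau at 6 threads is consistent with extra threads being scheduled onto E-cores or onto the P-cores' sibling hyper-threads rather than a kernel bug. Below is a minimal scaling check along those lines (the workload is illustrative, not your actual code); run it with OMP_PLACES/OMP_PROC_BIND to control placement and compare timings:

    // scaling_check.c - build: gcc -O2 -fopenmp scaling_check.c -o scaling_check
    // Example run: OMP_PLACES=cores OMP_PROC_BIND=close OMP_NUM_THREADS=6 ./scaling_check
    #define _GNU_SOURCE
    #include <omp.h>
    #include <sched.h>   /* sched_getcpu() (glibc extension) */
    #include <stdio.h>

    int main(void) {
        const long n = 200000000;       /* embarrassingly parallel dummy workload */
        double sum = 0.0;
        double t0 = omp_get_wtime();

        #pragma omp parallel
        {
            /* Report which logical CPU each OpenMP thread ended up on. */
            #pragma omp critical
            printf("thread %d on logical CPU %d\n", omp_get_thread_num(), sched_getcpu());

            #pragma omp for reduction(+:sum) schedule(static)
            for (long i = 0; i < n; i++)
                sum += 1.0 / (double)(i + 1);
        }

        printf("threads=%d  time=%.3f s  sum=%f\n",
               omp_get_max_threads(), omp_get_wtime() - t0, sum);
        return 0;
    }

If the timings scale normally once the threads are pinned to distinct physical cores, the limitation is thread placement rather than something a kernel upgrade would fix.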
I am looking to upgrade an older machine we have at our lab to use for deep learning (PyTorch) in addition to my personal workstation. It's an older Dell workstation, but the relevant specs are as follows:
PSU: 950W
RAM: 64 GB DDR4 ECC
CPU: Xeon Bronze 3104 @ 1.7 GHz
Through the university we can acquire an RTX A4000 (I know, not the best price-to-performance), which is basically a 3070 Ti with more VRAM. The machine even has an older NVIDIA GPU I can use for display output when the A4000 is fully loaded, like I currently do on my personal setup.
I am concerned that the low clock speeds may cause a bottleneck. Does anyone have experience with a similar configuration?
Thank you for the help!
I have been facing an intriguing problem lately.
I am working on a project with a pretty heavy AngularJS front end and around a hundred Jest tests. I have 16 GB of RAM, but the project is so heavy that it sometimes fills the RAM completely, and the computer often cannot handle running the project plus a yarn test (which takes up 3 to 4 GB of RAM) or a Cypress workflow test at the same time without big latency problems.
To avoid big freezes (up to several minutes) and crashes, I increased the swap to 16 GB.
That being said, for various reasons I had to work on the project on Windows 10 and faced none of these problems.
Everything runs smoothly: the graphical interface doesn't lag, even with screen sharing, even though the RAM is likewise completely filled up and the CPU is at 100%.
I am even able to run 20 yarn test processes at the same time without much lag, which seems completely impossible on Linux, even with the increased swap.
I've seen that Windows uses RAM compression by default and Linux does not, but I only saw up to 549 MB of compressed RAM during my comparisons.
I first thought it could be a problem with GNOME, which is known to be heavy and sometimes buggy, but I tested with KDE as well and got the same results.
I also heard that Windows allocates special resources to the graphical environment whereas Linux may treat it like any other process, but that alone cannot explain all the problems, because the whole computer freezes on Linux and not on Windows.
So I'm starting to wonder whether there is something about memory or process management that Windows does significantly better than Linux.
My config:
Computer model: Dell XPS 15 7590
Processor: Intel Core i7-9750H, 2.6 GHz base, 4.5 GHz max turbo (6 cores, 12 threads)
RAM: 16 GB
Graphics card: GTX 1650 (mobile)
Screen: 4K 16:9
SSD: 512 GB NVMe
I was facing the same issue on Ubuntu 22.04 with 16 GB of RAM and an Intel i5-12400 processor.
My solution was to limit the maximum number of workers in the Jest config:
"maxWorkers": 4
I have a Linux server with an Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz processor, which can boost up to 3.00 GHz thanks to Intel Turbo Boost.
However, when I check cpuinfo it says that all cores are running at 2095.078 MHz, all the time.
There are no ups and downs, no matter what (heavy processes on the server, etc.).
(I'm checking it by running cat /proc/cpuinfo | grep "MHz".)
For comparison, my laptop shows a different MHz value on every run of that command.
There is also no scaling_governor setting (I wanted to set the performance governor).
Running cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor shows that the cpufreq directory doesn't exist at all.
I'm just curious whether my processor is actually hitting the 3.0 GHz turbo, because it doesn't feel like it does.
I feel like I'm lacking some kind of driver for frequency scaling.
It seems likely that this is your answer. If you can get the cpufreq driver loaded, you can consult the kernel docs to experiment and get the performance you want: kernel.org cpufreq doc.
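Separate from getting a cpufreq driver loaded, you can check directly whether turbo engages by sampling the APERF/MPERF MSRs, which is what tools like turbostat do. A rough sketch (assumptions: the msr kernel module is loaded via modprobe msr, you run as root, and the 2.10 GHz base clock from your CPU model):

    // freq_check.c - estimate the effective frequency of CPU 0 from APERF/MPERF.
    // Build: gcc -O2 freq_check.c -o freq_check
    // Run as root after "modprobe msr":  ./freq_check
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_MPERF 0xE7   /* counts at the invariant base frequency */
    #define IA32_APERF 0xE8   /* counts at the actual delivered frequency */

    static uint64_t read_msr(int fd, uint32_t reg) {
        uint64_t val = 0;
        pread(fd, &val, sizeof(val), reg);   /* /dev/cpu/N/msr is addressed by MSR number */
        return val;
    }

    int main(void) {
        const double base_ghz = 2.10;        /* nominal clock of the Xeon Silver 4110 */
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr (is the msr module loaded?)"); return 1; }

        uint64_t m0 = read_msr(fd, IA32_MPERF), a0 = read_msr(fd, IA32_APERF);
        sleep(1);                            /* sample over one second */
        uint64_t m1 = read_msr(fd, IA32_MPERF), a1 = read_msr(fd, IA32_APERF);
        close(fd);

        double ratio = (double)(a1 - a0) / (double)(m1 - m0);
        printf("effective frequency of CPU 0: ~%.2f GHz (APERF/MPERF ratio %.3f)\n",
               base_ghz * ratio, ratio);
        return 0;
    }

Run it while the server is busy on CPU 0 (the counters only advance while the core is non-idle); a ratio well above 1.0 means Turbo Boost is engaging even though /proc/cpuinfo keeps printing the base frequency.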
Conditions:
I installed the AMD OpenCL SDK AMD-APP-SDK-v2.8-lnx64 and the Intel OpenCL SDK intel_sdk_for_ocl_applications_xe_2013_r2_sdk_3.1.1.11385_x64 (version identification couldn't be more complex) according to the instructions, on an HPC server with a dual-socket Xeon E5-2650, a Xeon Phi coprocessor, 64 GB of host memory and Red Hat Enterprise Linux Server 6.4.
Problem description:
I would like to do device fission with OpenCL to get around the NUMA issue. Unfortunately the device (an Intel CPU), or maybe the Linux kernel, doesn't seem to support CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN. I tried both Intel OpenCL and AMD OpenCL. Although the AMD OpenCL device query says that it supports the affinity-domain option, it actually doesn't: when I try to run code using CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN, the clCreateSubDevices() function returns error code -30 (CL_INVALID_VALUE). I guess this is a bug in the current Intel OpenCL driver, according to a forum post.
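For reference, the partitioning attempt looks roughly like this (a sketch that assumes the first platform exposes the CPU device and skips most error handling; the clCreateSubDevices() call here is the one that returns -30, i.e. CL_INVALID_VALUE, on the setups above):

    // numa_fission.c - try to split the CPU device along NUMA boundaries (OpenCL 1.2).
    // Build: gcc -O2 numa_fission.c -o numa_fission -lOpenCL
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id cpu;
        clGetPlatformIDs(1, &platform, NULL);                        /* first platform assumed */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);

        /* Ask the runtime to partition the CPU device per NUMA node. */
        const cl_device_partition_property props[] = {
            CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN,
            CL_DEVICE_AFFINITY_DOMAIN_NUMA,
            0
        };

        cl_device_id sub[2];
        cl_uint n_sub = 0;
        cl_int err = clCreateSubDevices(cpu, props, 2, sub, &n_sub);
        if (err != CL_SUCCESS) {                 /* -30 is CL_INVALID_VALUE */
            fprintf(stderr, "clCreateSubDevices failed: %d\n", err);
            return 1;
        }
        printf("created %u sub-devices\n", n_sub);
        return 0;
    }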
Potential solution:
I thought that if I could select the first 16 parallel compute cores (8 cores + 8 hyper-threads) out of the total 32, those would map to the first socket. Unfortunately, Intel OpenCL randomly distributes the 16 parallel compute cores across the 32 hardware threads. AMD OpenCL, on the other hand, selects the first 16 parallel compute cores, but its OpenCL compiler does a poor job on the kernel I'm running. So the no-free-lunch theorem applies here as well.
Questions:
Is there any way to specify which parallel compute cores OpenCL should use for the computation?
Is there any way to overcome this NUMA issue with OpenCL?
Any comments regarding experiences with NUMA affinity are welcome.
Thank you!
UPDATE
A partial workaround, only applicable to single-socket testing:
(On Linux) Disable all cores of one NUMA node, so the OpenCL ICD can only choose hardware threads from the other NUMA node. E.g. on a 2-socket system with 32 hardware threads:
sudo sh -c "echo 0 > /sys/devices/system/cpu/cpu31/online"
....
sudo sh -c "echo 0 > /sys/devices/system/cpu/cpu16/online"
I'm not sure whether this hack has any side effects, but so far it seems to work (for testing, at least).