The compression speed of libjpeg-turbo is no different from libjpeg in my program

My program runs on an Android device with an ARM CPU that supports NEON.
At first I used libjpeg to compress an RGB image (800*480) to JPEG. It took about 70 ms per image, which was too slow for me. Later I found libjpeg-turbo, which is supposed to improve compression speed by using NEON on ARM.
But after compiling and testing, I found their compression speeds to be almost the same, and changing the quality and the flags passed to tjCompress2 also had no effect. I have no idea whether something is wrong or missing in my program. Code below:
tjhandle _jpegCompressor = tjInitCompress();
tjCompress2(_jpegCompressor, (unsigned char *)in, PARAM_WIDTH,
            PARAM_WIDTH * PERSIZE, PARAM_HEIGHT, PERSIZE,
            (unsigned char **)&out, (long unsigned int *)outlen, TJSAMP_444, 100,
            TJFLAG_FASTDCT);
tjDestroy(_jpegCompressor);
I allocate and free the JPEG buffer (out) myself.
The version of libjpeg-turbo I use is 1.4.2.
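(For comparison, here is a minimal standalone timing harness around tjCompress2; the FRAME_* constants, the dummy input buffer, and the quality/subsampling settings below are illustrative placeholders, not taken from the program above.)
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <turbojpeg.h>

#define FRAME_WIDTH  800
#define FRAME_HEIGHT 480

int main(void)
{
    /* Dummy RGB frame; replace with real image data. */
    unsigned char *rgb = calloc(FRAME_WIDTH * FRAME_HEIGHT * 3, 1);
    unsigned char *jpeg = NULL;   /* NULL => tjCompress2() allocates the buffer */
    unsigned long jpegSize = 0;
    tjhandle tj = tjInitCompress();

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (tjCompress2(tj, rgb, FRAME_WIDTH, 0 /* pitch 0 = width * 3 for RGB */,
                    FRAME_HEIGHT, TJPF_RGB, &jpeg, &jpegSize,
                    TJSAMP_420, 75, TJFLAG_FASTDCT) != 0) {
        fprintf(stderr, "tjCompress2 failed: %s\n", tjGetErrorStr());
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("compressed %lu bytes in %.2f ms\n", jpegSize, ms);

    tjFree(jpeg);   /* buffers allocated by tjCompress2 must be freed with tjFree() */
    tjDestroy(tj);
    free(rgb);
    return 0;
}
If the library was configured or cross-compiled without its NEON assembly, it silently falls back to the plain C code paths and performs about like stock libjpeg, which would match the symptom described above.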

Much of the SIMD acceleration in libjpeg-turbo was only added in 2.1 (currently the newest release). On my MacBook M1 (ARM with NEON), libjpeg-turbo 2.1.0 is significantly faster than libjpeg 9e at both compression and decompression.
On the libjpeg-turbo official site you can find a table of SIMD coverage for the components of JPEG compression, broken down by architecture, along with the version in which each accelerated routine was added.
Based on an experiment I carried out recently, the compression and decompression outputs of libjpeg-turbo are identical all the way back to libjpeg 6b. Good job, libjpeg-turbo developers!

As far as I know, libjpeg-turbo has SSE2 and MMX SIMD instructions for x86 processors. I've looked at some of the assembly code and I didn't see any code for other CPU architectures.
I'm surprised it even worked. I think the library preserves the original (plain C) code paths, which would explain why it was able to run at all.
If you're looking for optimizations, you may want to look at the optimizations you can do with libjpeg itself. There are several documentation files, one of which has instructions specifically for optimizing on ARM processors. You can also tweak the memory manager. You'll find a lot more information there than I can type here.

Related

Which Linux OS supports AVX-512 VNNI (Vector Neural Network Instructions)?

I need to deploy an EC2 instance that supports VNNI (Vector Neural Network Instructions). There are some EC2 instance types that support it.
From AWS:
Intel Deep Learning Boost (Intel DL Boost): A new set of built-in processor technologies designed to accelerate AI deep learning use cases. The 2nd Gen Intel Xeon Scalable processors extend Intel AVX-512 with a new Vector Neural Network Instruction (VNNI/INT8) that significantly increases deep learning inference performance over previous generation Intel Xeon Scalable processors (with FP32), for image recognition/segmentation, object detection, speech recognition, language translation, recommendation systems, reinforcement learning and others. VNNI may not be compatible with all Linux distributions. Please check documentation before using.
It is mentioned that VNNI may not be compatible with all Linux distributions. So which Linux distributions support VNNI? I am also not sure which documentation this statement refers to.
No kernel support is needed beyond what AVX-512 already requires (i.e. context-switch handling of the new AVX-512 zmm and k registers). AVX-512 VNNI instructions just operate on those registers, so there's no new architectural state to save/restore on a context switch. https://en.wikichip.org/wiki/x86/avx512_vnni / https://en.wikipedia.org/wiki/AVX-512#VNNI
(Unlike AMX (Advanced Matrix Extensions), new in Sapphire Rapids: that does introduce large new "2D tile" registers, 8x 1 KiB, which context switches need to handle; see footnote 1.)
The other thing that matters for distros is compiler versions, like GCC or clang. https://godbolt.org/z/668rvhWPx shows GCC 8.1 and clang 7.0 (both released in 2018) compiling the AVX-512 VNNI intrinsic _mm512_dpbusd_epi32 with -march=icelake-server or -march=icelake-client. Versions before that fail, so those are the minimum versions. (Or clang 6.0 with -mavx512vnni, but that doesn't enable the other things an Ice Lake CPU supports, or set tuning options.)
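As a concrete illustration (a minimal sketch, not part of the original answer), this is roughly what using that intrinsic looks like; build with, e.g., gcc -O2 -march=icelake-server vnni.c:
#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    /* VPDPBUSD: multiply unsigned bytes of a by signed bytes of b and
       accumulate each group of four products into a 32-bit lane of acc. */
    __m512i acc = _mm512_setzero_si512();
    __m512i a   = _mm512_set1_epi8(2);   /* treated as unsigned 8-bit */
    __m512i b   = _mm512_set1_epi8(3);   /* treated as signed 8-bit   */
    acc = _mm512_dpbusd_epi32(acc, a, b);

    int out[16];
    _mm512_storeu_si512(out, acc);
    printf("%d\n", out[0]);              /* 4 * (2*3) = 24 per 32-bit lane */
    return 0;
}
This compiles with the compiler versions mentioned above, but actually running it still requires a CPU with AVX-512 VNNI (e.g. 2nd Gen Xeon Scalable / Ice Lake), which is what the instance-type answer below addresses.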
So if you want to use the latest hotness, you need a compiler that's at least somewhat up to date. It's generally a good idea to use a compiler newer than the CPU you're using, so compiler devs have had a chance to tweak tuning settings for it. And code-gen from intrinsics, especially for newish instruction sets like AVX-512, has generally improved across compiler versions, so if you care about the performance of the generated code, you typically want a newer compiler. (Regressions happen in some releases for some loops/functions, and thus for some programs, but on average newer compilers make faster code than old ones. That's a big part of what compiler devs spend their time improving.)
You can install a new compiler on an old distro via backport packages or manually. Or you can just use a distro release that isn't old and crusty.
Footnote 1: See also a Phoronix article about non-empty AMX register state keeping the CPU from entering deep sleep. Normally CPUs fully power down the core in deeper sleep states, stashing registers somewhere that stays powered. I'm guessing they didn't provide space there for the AMX tiles, so having state in them prevents deep sleep. So if you're using AMX, you'll want a Linux kernel of at least 5.19.
In AWS, the instance type and OS combination that worked for me:
EC2 instance type: m5n.large (the m5n instance family supports AVX-512 VNNI)
OS: Amazon Linux 2 (other Linux distributions should work as well, as explained by @Basile Starynkevitch and @Peter Cordes).
For curious minds: What Linux distribution is the Amazon Linux AMI based on?

Executable gives different FPS values in Yocto and Raspbian (everything looks the same in terms of configuration)

I built my project with the Yocto Project; the same project also runs on Raspbian OS. When I run the Yocto-built executable, I get half the FPS compared to the executable running on Raspbian OS.
The libraries I use:
OpenCV
TensorFlow Lite, FlatBuffers, libedgetpu
I use libedgetpu1-std with TensorFlow Lite 2.4.0 on Raspbian, and libedgetpu 2.5.0 with TensorFlow Lite 2.5.0 on Yocto.
Thinking that the problem was that the versions or configurations of the libraries were not the same, I followed these steps:
I ran the executable that I built on Raspbian directly in the Yocto runtime. (I set the required library versions to the same versions available on Raspbian so that it would work at runtime.)
But I still got low FPS. Here is how I calculate that I get half the FPS:
I use TFLite's interpreter Invoke function. I start a timer when entering the function and stop it when leaving, and I calculate FPS from that time. For example:
Timer_Begin();
m_tf_interpreter->Invoke();
Timer_End();
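(The timer pair is essentially a monotonic-clock stopwatch around Invoke; a simplified sketch of such helpers, not the exact implementation from my project, could look like this:)
#include <stdio.h>
#include <time.h>

static struct timespec g_t0;

void Timer_Begin(void)
{
    clock_gettime(CLOCK_MONOTONIC, &g_t0);
}

void Timer_End(void)
{
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - g_t0.tv_sec) * 1e3 +
                (t1.tv_nsec - g_t0.tv_nsec) / 1e6;
    printf("Invoke took %.2f ms (%.1f FPS)\n", ms, 1000.0 / ms);
}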
Somehow I think the interpreter's Invoke function is running slower on the Yocto side. I checked the kernel versions, CPU speeds, /boot/config.txt contents, and USB power consumption on both Raspbian and Yocto, but I couldn't catch anything anywhere.
Note: using an RPi 4 and a Coral TPU (plugged into USB 2.0).
We spoke with @Paulo Neves. He recommended perf profiling, and I did it. In the perf profile I noticed that the CPU was running slowly, even though the frequencies were the same.
When I checked the scaling_governor, I saw that it was in "powersave" mode. The problem was solved when I switched the governor from "powersave" to "performance" via the kernel's sysfs interface.
In addition, if you want to make the governor change permanent, you need to create a kernel config fragment.
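For example (a sketch of the usual Yocto approach; the file name is made up and the exact kernel recipe may differ), a fragment that selects the performance governor as the default could look like:
# performance-governor.cfg -- kernel config fragment
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
The fragment is then added to the kernel recipe through a .bbappend (e.g. SRC_URI += "file://performance-governor.cfg"), assuming the recipe handles .cfg fragments the way linux-yocto does.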

Can we convert an ELF binary from one CPU architecture to another, in Linux? [duplicate]

How can I run x86 binaries (for example an .exe file) on ARM? As I see on Wikipedia, I need to convert binary data for the emulated platform into binary data suitable for execution on the targeted platform. But the question is: how can I do it? Do I need to open the file in a hex editor and change it? Or something else?
To successfully do this, you'd have to do two things: one relatively easy, one very hard. Neither of them is something you want to do by hand in a hex editor.
Convert the machine code from x86 to ARM. This is the easy one, because you should be able to map each x86 opcode to one or more ARM opcodes. There are different ways to do this, some more efficient than others, but it can be done with a pretty straightforward mapping.
Remap function calls (and other jumps). This one is hard, because monkeying with the opcodes is going to change all the offsets for the jump and return points. If you have dynamically linked libraries (.so), and we assume that all the libraries are available at exactly the same version in both places (a sketchy assumption at best), you'd have to remap the loads.
It's essentially a machine->machine compiler and linker.
So, can you do it? Sure.
Is it easy? No.
There may be a commercial tool out there, but I'm not aware of it.
You can not do this with a binary (see note 1); here, binary means an object with no symbol information, as opposed to an ELF file. Even with an ELF file, this is difficult to impossible. The issue is telling code apart from data. If you could resolve that issue, you could build decompilers and all sorts of other tools.
Even if you have an ELF file, a compiler will place constants used by the code in the text segment. You have to look at many opcodes and do reverse basic-block analysis to figure out where a function starts and ends.
A better mechanism is to emulate the x86 on the ARM. Here you can use JIT technology to do the translation as it is encountered, but you approximately double the code space. Also, the code will execute horribly. The ARM has 16 registers, while the x86 is register starved (although it usually has hidden registers). A big job of a compiler is to allocate these registers. QEMU is one technology that does this. I am unsure whether it goes in the x86-to-ARM direction; and as noted, it will have a tough job.
Note 1: The x86 has a variable-length instruction encoding. In order to recognize a function prologue and epilogue, you would have to scan an image multiple times. To do this, I think the problem would be something like O(n!) where n is the number of bytes in the image, and then you might still have trouble with inline assembler and library routines coded in assembly. It may be possible, but it is extremely hard.
To run an ARM executable on an x86 machine all you need is qemu-user.
Example:
You have busybox compiled for the AARCH64 architecture (ARM64) and you want to run it on an x86_64 Linux system.
Assuming a static compile, this runs ARM64 code on an x86 system:
$ qemu-aarch64-static ./busybox
And this runs x86 code on an ARM system:
$ qemu-x86_64-static ./busybox
What I am curious about is whether there is a way to embed both in a single program.
Read the x86 binary file as UTF-8, then copy from the ELF header to the last character. Then go to the ARM binary and delete as you copy with the x86. Then copy the x86 from the clipboard to the head. I tried it and it's working.

Linux kernel re-compilation too slow

I am compiling the Linux kernel in a VM (VirtualBox) with 2 of 4 GB of RAM and 4 of 8 CPUs allocated. My initial compilation took around 8-9 hours, and that was already with make -j4. Now I have added a simple system call to the kernel and simply ran make -j4 again, and it has been compiling for the past 3 hours. I thought that after the initial compilation, make would only rebuild the small changes, but it seems to be compiling everything (mostly the drivers). Is there any way I can speed up this compilation process?
For example, is there any way to disable some of the drivers that I don't really need? If I just want to implement a simple system call, I don't really need all the networking drivers, and maybe dropping them would speed things up; i.e. I just want the bare minimum functionality for my kernel so I can test my system calls.
Compiling the kernel will always take a long time; unfortunately there's no way around that besides having a really good processor with a lot of hardware threads. However, in huge projects like this, ccache helps compilation times tremendously. It's not perfect, but it's far better than recompiling every object from scratch.
You won't see the difference on the initial compilation, but it will speed up recompilation by using the cache it has generated instead of compiling most of what has already been compiled before.
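A typical way to use it for a kernel build (a sketch, assuming ccache is installed and on the PATH) is to route the compiler through ccache:
$ make CC="ccache gcc" -j4     # first build fills the cache, rebuilds hit it
$ ccache -s                    # show cache hit/miss statistics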

Detecting CPU feature support (e.g. sse2, fma4, etc.)

I have some code that depends on CPU and OS support for various CPU features.
In particular I need to check for various SIMD instruction set support.
Namely sse2, avx, avx2, fma4, and neon.
(NEON being the ARM SIMD feature; I'm less interested in that, given fewer ARM end users.)
What I am doing right now is:
function cpu_flags()
    if is_linux()
        cpuinfo = readstring(`cat /proc/cpuinfo`)
        cpu_flag_string = match(r"flags\t\t: (.*)", cpuinfo).captures[1]
    elseif is_apple()
        sysinfo = readstring(`sysctl -a`)
        cpu_flag_string = match(r"machdep.cpu.features: (.*)", sysinfo).captures[1]
    else
        @assert is_windows()
        warn("CPU feature detection does not work on Windows.")
        cpu_flag_string = ""
    end
    split(lowercase(cpu_flag_string))
end
This has two downsides:
It doesn't work on Windows.
I'm just not sure it is correct; is it? Or does it screw up if, for example, the OS has a feature disabled but the CPU physically supports it?
So my questions are:
How can I make this work on Windows?
Is this correct, or even an OK way to go about getting this information?
This is part of a build script (with BinDeps.jl); so I need a solution that doesn't involve opening a GUI.
And ideally one that doesn't add a 3rd party dependency.
Extracting the information from GCC somehow would work, since I already require GCC to compile some shared libraries. (Choosing which libraries to build is what this instruction-set detection code is for.)
I'm just not sure it is correct; is it? Or does it screw up if, for example, the OS has a feature disabled but the CPU physically supports it?
I don't think the OS has any say in disabling vector instructions; I've seen the BIOS being able to disable things (in particular, the virtualization extensions), but in that case you won't find them even in /proc/cpuinfo - that's kind of the point :-).
Extracting the information from GCC somehow would work, since I already require GCC to compile some shared libraries
If you always have gcc (MinGW on Windows) you can use __builtin_cpu_supports:
#include <stdio.h>

int main()
{
    if (__builtin_cpu_supports("mmx")) {
        printf("\nI got MMX !\n");
    } else {
        printf("\nWhat ? MMX ? What is that ?\n");
    }
    return 0;
}
and apparently these built-in functions work under mingw-w64 too.
AFAIK it uses the CPUID instruction to extract the relevant information (so it should reflect quite well the environment your code will actually run in).
(from https://stackoverflow.com/a/17759098/214671)
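Extending that idea to the specific features asked about (a small sketch, assuming GCC or MinGW-w64; the strings are feature names documented for __builtin_cpu_supports):
#include <stdio.h>

int main(void)
{
    /* Each check reads CPUID-derived data that GCC gathers at startup. */
    printf("sse2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    printf("avx:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    printf("avx2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    printf("fma4: %s\n", __builtin_cpu_supports("fma4") ? "yes" : "no");
    return 0;
}
NEON can't be detected this way on an x86 host, of course; on ARM Linux the usual route is to check getauxval(AT_HWCAP) instead.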
