creating a .so file for linux/aio_abi.h - linux

I am trying to build a Linux uImage and device tree for the Zynq, but my computer can't find the aio_abi.h file. I have moved the Linux folder which contains the file into the shared library folder my system is searching, but the file is still not being found. I think this is because there is no .so file for the folder containing aio_abi.h. I installed libaio-dev using the Ubuntu software installer tool, because apt-get isn't working on my machine due to a proxy.
Is there a way to directly create a .so file for the folder I need to include, or is there something wrong with the libaio-dev install? I have an older libaio1.so file, but that older version doesn't contain the aio_abi.h file I need.
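For reference, this is roughly the kind of compile step involved; the cross-compiler name and header path below are placeholders rather than my exact commands, and the source file pulls the header in with #include <linux/aio_abi.h>:

${CROSS_COMPILE}gcc -I/path/to/kernel-headers/include -c some_file.c   # header is found via the -I include path, not via a .so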
Thanks

Related

How to properly install Ghostscript under Linux as shared library

The link on this page (https://www.ghostscript.com/download.html) for Linux x64 gets you a .tgz with an executable binary.
However, when I try to use this binary as an .so (after renaming it to libgs.so and putting it into the appropriate place) via Ghost4J, I invariably get errors like the following:
java.lang.UnsatisfiedLinkError: /tmp/jna-100923095/jna3513656669313044092.tmp: cannot dynamically load executable
Once I install Ghostscript via apt-get install ghostscript, the same code runs fine (it now loads the .so from /usr/lib/x86_64-linux-gnu/libgs.so.9.22).
Question: what minimal set of files should I put into some folder so that I can link to the Ghostscript dynamic library (.so) successfully, without Ghostscript being installed on the machine/container?
UPD: under Windows this seems to be possible: the /bin folder of the installation contains both DLL and EXE files; if I put that .dll file into a win32-x86-64 folder under resources, it is picked up by JNA (via Ghost4J) and the Ghostscript instance works fine, even once I remove the "official" installation. I would like to have the same behaviour (i.e. a self-sufficient, self-contained JAR file) for Linux as well.
Well, I ended up building the shared object myself, using Ubuntu 18.04 installed as a WSL 1 distribution, following the guidelines from here: https://www.ghostscript.com/doc/current/Make.htm#Shared_object
These were the exact commands:
./configure --without-luratech --with-system-libtiff --with-drivers=PCLXL
make so
and then, in the sobin folder, you have libgs.so, which works as expected. It is a pity, though, that it cannot simply be downloaded from the official site.
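In principle the freshly built libgs.so can then be bundled inside the JAR the same way as the Windows DLL described in the question, under JNA's Linux resource prefix. This is only a sketch; the Maven-style resource layout is an assumption about your project:

mkdir -p src/main/resources/linux-x86-64
cp sobin/libgs.so src/main/resources/linux-x86-64/libgs.so
# JNA (and therefore Ghost4J) can extract a library shipped on the classpath under
# linux-x86-64/ at runtime, mirroring the win32-x86-64 trick that works on Windows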

Buildroot tools - adding user libs from an .RPM

I have a task to build a bootable Linux image with my own package. This package (named starlet) is a set of .C modules plus a Makefile. I created the package/starlet directory, added Config.in and starlet.mk, and selected my package in the Buildroot configuration so it is included in the target image.
So far it works fine...
Now I need to build starlet's image with an additional library from the zztop-dev package.
The zztop-dev package is an .RPM with a set of .H and .C files used to build the target zztop.a (.so) libraries.
What do I need to do to install zztop-dev.RPM before building the starlet image?
Having the source code for a package stored in a .rpm file is quite uncommon. Buildroot has built-in rules to extract all the most common formats. Using an uncommon format requires you to write extraction rules on your own.
So the first question is whether you can use a more common format that Buildroot has rules for. You probably can access the source code from its original location in a source code repository (git, Subversion, whatever) or a tarball.
If you really need to extract the sources from an .rpm file, then you need to write your own custom extract commands. Look for LIBFOO_EXTRACT_CMDS in the Buildroot user manual.
But if your extract commands call the rpm command to do the extraction then you'll need the rpm tool either installed on your host machine, or packaged as a host package in Buildroot and listed as a dependency of zztop-dev. The former approach is way simpler, but it will force you to have rpm installed on every host machine where you run the build.
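For illustration only, such extract commands in the package's .mk file could shell out to rpm2cpio roughly like this; the ZZTOP_DEV_* variable names assume a package called zztop-dev and a reasonably recent Buildroot:

define ZZTOP_DEV_EXTRACT_CMDS
	cd $(@D) && rpm2cpio $(ZZTOP_DEV_DL_DIR)/$(ZZTOP_DEV_SOURCE) | cpio -idm
endef

This is exactly where the host-side rpm/rpm2cpio requirement mentioned above comes into play.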

debian packaging and package.rules files

I am working on changing machines from the RHEL world over to the Debian/Ubuntu world, and I am struggling a bit with a packaging problem. I am trying to build a package for Ubuntu 16.04.
I've got a very old pre-compiled application that can only listen through xinetd. I am creating a binary-only package, similar to what this person was doing: I need my Debian rules file to simply copy files to it's target. I simply need to copy pre-compiled files into directories.
I have no problem getting files into /opt and /var/log; however, I have not been able to get dpkg to copy the needed setup file into /etc/xinetd.d/.
So I have a debian/package.install file that looks something like this:
opt/oldapplication-3.10/* opt/oldapplication-3.10/
var/log/* var/log/
etc/xinetd.d/oldapplication /etc/xinetd.d
The xinetd setup file never makes it into /etc/xinetd.d, and looking at the dpkg install with debugging enabled doesn't give me any hints. The file is definitely in the tarball; it simply never gets moved.
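One check I can run is to list what actually ended up inside the built package; the .deb file name below is a placeholder for my real one:

dpkg-deb -c ../oldapplication_3.10-1_amd64.deb | grep xinetd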
Looking through the different dh helper applications, I can't see anything that fits, and Google does nothing to illuminate the problem.
Do I have to simply move the file over in a postinst script (a rough sketch of what I mean is below)? Is that the only way to solve this, or is there a more "Debian" way to do it by creating a file in the package's debian directory? Is there a more generic setup I should be using to put files into /etc?
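Purely as a sketch of that fallback, and assuming the pre-compiled tree ships the config somewhere under /opt (the exact source path is a placeholder):

#!/bin/sh
set -e
# hypothetical postinst fallback: copy the shipped config into xinetd's directory
if [ "$1" = "configure" ]; then
    cp /opt/oldapplication-3.10/xinetd.d/oldapplication /etc/xinetd.d/oldapplication
fi
#DEBHELPER#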
Thanks.

Procedure to compile the Linux kernel for an embedded device - installing header files in the system

I am compiling a Linux kernel (4.9.15) on my host machine and installing it on an embedded device. All works fine, but I have a question about the proper way to install all the include files on the system (/usr/include/linux with an updated version.h).
This is how I proceed:
I compile the sources on my host machine.
My compilation script generates boot.tar.gz, modules-4.9.15.tar.gz, linux-4.9.15.tar.gz and linux-headers-4.9.15.tar.gz and copies them to the embedded system.
On the embedded system I extract boot.tar.gz (containing System.map-4.9.15, config-4.9.15, initrd.img-4.9.15 and vmlinuz-4.9.15) to the /boot folder, modules-4.9.15.tar.gz to /lib/modules, and the source and header files to /usr/src/.
I update the /lib/modules/4.9.15 "source" and "build" links to point to the /usr/src/4.9.15 folder.
In the /usr/src/linux-4.9.15 folder I run make headers_install.
I update grub with update-grub2 and reboot
My question is how to update the system's /usr/include/linux/ folder. I thought that executing make headers_install would do it, but it just installs the include files into the source folder. Should I manually copy the generated /usr/src/linux-4.9.15/usr/include/linux folder to the system's /usr/include/linux? Is that the proper way to do it?
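For reference, a sketch of what I am considering; the ARCH value and the staging path are placeholders for my setup:

make ARCH=arm headers_install INSTALL_HDR_PATH=/tmp/headers-staging
# headers_install then populates /tmp/headers-staging/include, which could be copied
# onto the target's /usr/include - whether that is the "proper" way is my question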
Any suggestions for doing the process in a better way?
Thanks!

create *.gcda files in a desired location using GCOV_PREFIX in linux?

I have a problem using the GCOV_PREFIX environment variable.
Compiler version that I am using on the build machine: gcc version 3.4.6 20060404 (Red Hat 3.4.6-3).
I am compiling my source files (*.c) with "-fprofile-arcs -ftest-coverage" on the build machine, which produces an executable, object files (one per source file) and .gcno files (one per source file) in the following location:
/a/b/c/d
/a/b is mounted onto a test machine at the following directory: "/tmp/test/a/b". On the test machine, when I run the executable, the same directory structure /a/b/c/d is expected for creating the *.gcda files.
Since that path does not exist, I get an error of this kind: "profiling:/a/b/c/d/xyz.gcda:Cannot open".
But I do not want to recreate the same directory structure on my test machine.
Instead, I want to create a directory (/tmp/gcovfiles) on my test machine and have the .gcda files created there.
I tried using the environment variables GCOV_PREFIX and GCOV_PREFIX_STRIP, but they did not have any effect. Maybe I am not using them properly.
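This is how I understood the variables are supposed to be used on the test machine before launching the instrumented binary; the executable name is a placeholder, and GCOV_PREFIX_STRIP=4 assumes the compiled-in path is exactly /a/b/c/d:

export GCOV_PREFIX=/tmp/gcovfiles
export GCOV_PREFIX_STRIP=4      # strip the 4 leading components a/b/c/d
./my_instrumented_binary        # should then write /tmp/gcovfiles/xyz.gcda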
Could you please help?
