Build libpng without PNG_READ_eXIf_SUPPORTED for Linux

I need to build libpng, but without #define PNG_READ_eXIf_SUPPORTED in pnglibconf.h.
I've read the comments in pnglibconf.dfa, which describe some ways of disabling features; however, I didn't manage to achieve what I want using them.
The problem is that the build process is performed on a build server, so I can't change any files inside the libpng submodule. Here is how the server works:
1. Clone the sources from git.
2. Generate the makefiles by running cmake ..
3. Run the make command.
Thus I get libpng, but with the PNG_READ_eXIf_SUPPORTED option included.
Libpng is a submodule of my project, so it is checked out by the build server automatically, which is why I can't change pnglibconf.h manually.
As the pnglibconf.dfa file says:
There are three ways of disabling features, in no particular order:
1) Create 'pngusr.h', enter the required private build information
detailed below and #define PNG_NO_<option> for each option you
don't want in that file. You can also turn on options
using PNG_<option>_SUPPORTED. When you have finished rerun
configure and rebuild pnglibconf.h file with -DPNG_USER_CONFIG:
make clean
CPPFLAGS='-DPNG_USER_CONFIG' ./configure
make pnglibconf.h
pngusr.h is only used during the creation of pnglibconf.h, but it
is safer to ensure that -DPNG_USER_CONFIG is specified throughout
the build by changing the CPPFLAGS passed to the initial ./configure
I tried to do what is written there. I ran cmake .. -DCMAKE_C_FLAGS="-DPNG_USER_CONFIG -I/home/me/dev/include", where /home/me/dev/include is the path to the pngusr.h file.
Then I ran the make command. However, PNG_READ_eXIf_SUPPORTED is still present in the pnglibconf.h file generated by make.
So my main question is: how do I build libpng without the PNG_READ_eXIf_SUPPORTED option?

It remains unclear to me whether and to what extent the specific customization mechanism you are trying to use works in the version of libpng you are trying to use. But it looks like there's a simpler way. Just below the excerpt you posted, in the same file, is the second (of three) alternatives:
2) Add definitions of the settings you want to change to CPPFLAGS;
for example:
-DPNG_DEFAULT_READ_MACROS=0
(lightly formatted). I'm not in a good position to test that on the CMake-based build system, but it seems to work like a charm in the Autotools build system. From examining and comparing the two, I think it will work for CMake, too. In particular, you would want to run
cmake .. -DCMAKE_C_FLAGS="-DPNG_NO_READ_eXIf"
for your particular case. (CMake has no CMAKE_CPP_FLAGS variable; preprocessor definitions are passed via the C compiler flags.)
Note, by the way, that in the Autotools build the CPP (i.e. preprocessor) flags are the right place for an option such as the one you are specifying (and for -DPNG_USER_CONFIG in your original attempt, too). In practice, though, such definitions work just as well in the C compiler flags, which is what CMake uses.
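For reference, here is what the two routes might look like end to end; the commands are assembled from the pnglibconf.dfa excerpt and the answer above, and the final grep is just a quick sanity check of the generated header:

```
# Autotools route (alternative 2 from pnglibconf.dfa), from the source tree:
make clean
./configure CPPFLAGS="-DPNG_NO_READ_eXIf"
make pnglibconf.h

# CMake route, from the out-of-tree build directory:
cmake .. -DCMAKE_C_FLAGS="-DPNG_NO_READ_eXIf"
make

# Either way, the option should be gone from the generated header:
grep PNG_READ_eXIf_SUPPORTED pnglibconf.h || echo "eXIf read support disabled"
```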

how to add creating protobuf python files [duplicate]

I'm trying to use add_custom_command to generate a file during the build. The command never seemed to be run, so I made this test file.
cmake_minimum_required( VERSION 2.6 )
add_custom_command(
OUTPUT hello.txt
COMMAND touch hello.txt
DEPENDS hello.txt
)
I tried running:
cmake .
make
And hello.txt was not generated. What have I done wrong?
The add_custom_target(run ALL ...) solution will work for simple cases when you only have one target you're building, but breaks down when you have multiple top-level targets, e.g. app and tests.
I ran into this same problem when I was trying to package up some test data files into an object file so my unit tests wouldn't depend on anything external. I solved it using add_custom_command and some additional dependency magic with set_property.
add_custom_command(
OUTPUT testData.cpp
COMMAND reswrap
ARGS testData.src > testData.cpp
DEPENDS testData.src
)
set_property(SOURCE unit-tests.cpp APPEND PROPERTY OBJECT_DEPENDS testData.cpp)
add_executable(app main.cpp)
add_executable(tests unit-tests.cpp)
So now testData.cpp will be generated before unit-tests.cpp is compiled, and regenerated any time testData.src changes. If the command you're calling is really slow, you get the added bonus that when you build just the app target, you won't have to wait for that command (which only the tests executable needs) to finish.
It's not shown above, but careful application of ${PROJECT_BINARY_DIR}, ${PROJECT_SOURCE_DIR} and include_directories() will keep your source tree clean of generated files.
Add the following:
add_custom_target(run ALL
DEPENDS hello.txt)
If you're familiar with makefiles, this means:
all: run
run: hello.txt
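Applied to the file in the question, the fixed version might look like this (note that the DEPENDS hello.txt line inside the original add_custom_command is circular, since the output cannot depend on itself, and should be dropped):

```
cmake_minimum_required( VERSION 2.6 )

# Rule that produces hello.txt
add_custom_command(
    OUTPUT hello.txt
    COMMAND touch hello.txt
)

# Target built on every `make`, forcing hello.txt to be generated
add_custom_target(run ALL
    DEPENDS hello.txt)
```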
The problem with the two existing answers is that they either make the dependency global (add_custom_target(name ALL ...)), or they assign it to a specific, single file (set_property(...)), which gets obnoxious if you have many files that need it as a dependency. Instead, what we want is a target that we can make a dependency of another target.
The way to do this is to use add_custom_command to define the rule, and then add_custom_target to define a new target based on that rule. Then you can add that target as a dependency of another target via add_dependencies.
# this defines the build rule for some_file
add_custom_command(
OUTPUT some_file
COMMAND ...
)
# create a target that includes some_file, this gives us a name that we can use later
add_custom_target(
some_target
DEPENDS some_file
)
# then let's suppose we're creating a library
add_library(some_library some_other_file.c)
# we can add the target as a dependency, and it will affect only this library
add_dependencies(some_library some_target)
The advantages of this approach:
some_target is not a dependency for ALL, which means you only build it when it's required by a specific target. (Whereas add_custom_target(name ALL ...) would build it unconditionally for all targets.)
Because some_target is a dependency for the library as a whole, it will get built before all of the files in that library. That means that if there are many files in the library, we don't have to do set_property on every single one of them.
If we add DEPENDS to add_custom_command then it will only get rebuilt when its inputs change. (Compare this to the approach that uses add_custom_target(name ALL ...) where the command gets run on every build regardless of whether it needs to or not.)
For more information on why things work this way, see this blog post: https://samthursfield.wordpress.com/2015/11/21/cmake-dependencies-between-targets-and-files-and-custom-commands/
This question is pretty old, but even when I follow the suggested recommendations, it does not work for me (at least not every time).
I am using Android Studio and I need to call CMake to build a C++ library. It works fine until I add the code to run my custom script (in fact, at the moment I am just trying to run touch, as in the example above).
First off,
add_custom_command
does not work at all.
I tried
execute_process (
COMMAND touch hello.txt
)
It works, but not every time!
I tried cleaning the project and removing the created file(s) manually; same thing.
CMake versions tried:
3.10.2
3.18.1
3.22.1
When they do work, they produce different results depending on the CMake version: one file or several. That would not matter much as long as they worked, but that's the issue.
Can somebody shed light on this mystery?

How to tell if configure and make support out-of-tree builds?

I often need to build common link libraries like zlib, libpng, jpeglib, freetype, etc. for many different architectures. I prefer to do out-of-tree builds then, like so:
mkdir build_linux_x64
cd build_linux_x64
../configure
make
This usually works fine, but now I have read that this will only work if the following condition is met: "The project must be enabled for out-of-tree builds, typically with the use of VPATH if using make" (Source)
This leads me to the question: How can I tell if a project is enabled for out-of-tree builds? Will configure or make just fail if the project isn't enabled for out-of-tree builds or how should I tell?
If the out-of-tree build works, then you know it works :). When they say "enabled" they don't mean there's some switch or configuration option that the project has to turn on. They mean that the author of the package needs to have written their Makefile.am (or Makefile.in if they don't use automake) files to work correctly when run out-of-tree. There's no way to know whether these files are written correctly except by trying it out.
If you try it out and it doesn't work you should file a bug with the package.
Note that the standard method of creating source distribution packages with autotools forces the use of out-of-tree builds, so if they're creating their source distribution using the standard methods then it will definitely build out-of-tree correctly.
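If you are the one preparing a package and want to verify this property yourself, the same standard machinery is exposed directly; from an already-configured Autotools source tree:

```
make distcheck   # builds the dist tarball, then test-builds it out-of-tree
```

If the Makefile.am files are not VPATH-safe, this is where the failure shows up.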

Making os independent configure file which checks for curl dependency

I am making a configure.ac file which checks for library dependency.
The complete code is,
AC_CONFIG_AUX_DIR([build-aux])
AC_INIT([myprogram], [0.1], [])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CHECK_LIB([curl], [curl_easy_setopt], [echo "libcurl library is present" > /dev/tty], [echo "libcurl library is not present" > /dev/tty] )
AC_CHECK_LIB([sqlite3], [sqlite3_open], [echo "sqlite3 library is present" > /dev/tty], [echo "sqlite library is not present" > /dev/tty] )
AC_CHECK_LIB([pthread], [pthread_create], [echo "pthread library is present" > /dev/tty], [echo "pthread library is not present" > /dev/tty] )
AC_CHECK_LIB([crypto], [SHA256], [echo "crypto library is present" > /dev/tty], [echo "crypto library is not present" > /dev/tty] )
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
"myprogram" is a program which needs to be installed on numerous user PCs. So the dependency check needs to be done at the beginning, to find out whether those four libraries are installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, it gives the message "libcurl library is present" when I run the configure script. But on systems where only /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present. If I create a soft link to libcurl.so, then it correctly reports that libcurl is present:
ln -s /usr/lib/i386-linux-gnu/libcurl.so.1.0.0 /usr/lib/i386-linux-gnu/libcurl.so
The same holds for the other libraries as well.
Actually, I want to automate this process. Is there a way to do this without manually making a soft link? I mean, by making changes in the configure.ac file itself, so that configure will run on any machine without the need to create soft links.
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But that is not always true; sometimes the linker name is not created. That is why these complications are happening: the check looks for the linker name and concludes that the library is not installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, it gives the message "libcurl library is present" when I run the configure script. But on systems where only /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present.
Right, this is the behavior I would expect. What's going on here is that AC_CHECK_LIB emits a program with the symbol you gave it to try and link (in this case curl_easy_setopt), does a compilation step and a link step to make sure the linker can link. On a typical Linux distro you'll want to make sure that some package called libcurl-dev (or something like that) is installed, so you'll have the header files and the libcurl.so symlink installed.
But I want to automate this process. Is there a way to do this, without manually making a soft link?
Installation of the libcurl-dev package can be easily automated. It can be accomplished several ways, depending on how you want to do it. Linux packaging systems (e.g. rpmbuild, debhelper, etc.) have ways of pulling in build dependencies before building if they aren't installed. Configuration management tools that you use to set up the build machine (e.g. ansible, SaltStack, etc.) could install it. The dependency should be listed in the release documentation at a minimum, so that if someone who has no access to these tools (or doesn't care to use them) can figure it out and build.
I wouldn't create a symlink in configure.ac -- it would likely break any future install of libcurl-dev. Furthermore you would have to run configure with elevated privileges (e.g. sudo) to create the link.
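If you want configure to stop with a clear message instead of just printing a note, one common pattern is to make the failure branch of AC_CHECK_LIB fatal; the package-name hint below is distro-specific and only an example:

```
AC_CHECK_LIB([curl], [curl_easy_setopt], [],
    [AC_MSG_ERROR([libcurl development files not found (try installing libcurl-dev)])])
```

With an empty success branch, AC_CHECK_LIB also prepends -lcurl to LIBS, which is usually what you want for the build itself.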
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But it is not always true.
Actually, I don't ever remember seeing anything like this. Typically when a DSO gets installed to the ldconfig "trusted directories" (e.g. /usr/lib, etc.) ldconfig gets run so the real library (e.g. libcurl.so.1.0.0) gets a symlink (libcurl.so.1) in the same directory, but not the development symlink (libcurl.so).
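The layout being described can be sketched in a scratch directory (the paths are illustrative; on a real system these files live under /usr/lib/i386-linux-gnu or similar):

```shell
d=$(mktemp -d)                            # scratch stand-in for /usr/lib/...
touch "$d/libcurl.so.1.0.0"               # the real library file
ln -s libcurl.so.1.0.0 "$d/libcurl.so.1"  # soname link: created by ldconfig
ln -s libcurl.so.1 "$d/libcurl.so"        # dev link: shipped by libcurl-dev
ls -l "$d"
```

AC_CHECK_LIB fails on a system that has the first two entries but not the third, because the linker looks for libcurl.so.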
EDIT: Adding responses to comments
But why does ./configure also expect the development symlinks (libcurl.so, libcrypto.so, etc.)?
Because configure can be told to run the linker, as you discovered with AC_CHECK_LIB, and if those symlinks aren't there, the link will fail.
configure checks whether the binary can run on the system, and not whether a program which uses these libraries can be built.
configure also has runtime tests as well as compile- and link-time tests, so it can do some limited testing of whether the output of compilation can run. configure's primary role is to ensure that prerequisites are installed/configured so make will work, so testing that tools, headers, and libraries are installed and work in some fashion is what configure mostly does. The runtime tests will not work in some environments (cross-compilation), so lots of packages don't use them.
If I am not wrong, ./configure cannot be used for checking whether a binary can run on a system, as it is only used when building a program.
configure can do some runtime testing of things configure has built as mentioned in the link above (e.g. AC_RUN_IFELSE).
If ./configure succeeds, then the binary can run on the machine.
But the reverse is not true. That is, even if ./configure fails, the binary may run, as it does not depend on the development symlink (e.g. libcurl.so). Am I right?
Which binary are you referring to? The test created as part of AC_RUN_IFELSE, or the output of make? If configure succeeds, the output of make still might not work. That's what make check is for. If configure fails, it's likely make won't work, and you won't get to the part where you can test the output of make.
If the scenario is a missing libcurl.so, and configure fails to link the AC_TRY_LINK test, how is that same link step going to work for your executable? It also depends on that file (just for the link step), because you may have multiple libcurl.so.x libraries installed.
By binary... I mean a program that has been successfully built on some other system with all the dependencies installed. What I was saying is that such a binary will run on a machine even if the development symlink (libcurl.so) is not there.
Sure, it's already gone past the link step and is linked to say libcurl.so.x and whatever other dependencies it may have.

Linux configure/make, --prefix?

Bear with me, this one's not very easy to explain...
I'm trying to configure, make and make install Xfce into my buildroot build directory. When configuring I'm using
--prefix=/home/me/somefolder/mybuild/output/target
so that it installs to the right folder; however, when the result is compressed and run on the target, I get errors from various config files that look for files in
/home/me/somefolder/mybuild/output/target
(which of course doesn't exist.)
How do I set what folder to build into, yet set a different root directory for the config files to use?
Run ./configure --help and see what other options are available.
It is very common to provide different options to override different locations. By convention, --prefix determines all of them by default, so you need to override the configuration location after specifying the prefix. This course of action usually works for every Automake-based project.
The worst-case scenario is when you need to modify the configure script or, even worse, the generated makefiles and config.h headers. But yeah, for Xfce you can try something like this:
./configure --prefix=/home/me/somefolder/mybuild/output/target --sysconfdir=/etc
I believe that should do it.
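Other locations can be overridden the same way if more of them leak into the build; the extra directory values below are illustrative:

```
./configure --prefix=/home/me/somefolder/mybuild/output/target \
            --sysconfdir=/etc \
            --localstatedir=/var
```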
In my situation, --prefix= failed to update the path correctly after some warnings or failures. Please see the link below for the answer.
https://stackoverflow.com/a/50208379/1283198

What should Linux/Unix 'make install' consist of?

I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this:
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
Installation path
The INSTALL_PATH is an input to the build system and usually defaults to /usr/local. This gives your users the flexibility to install under their $HOME without needing elevated permissions.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell.
Deinstallation
You also occasionally see make install steps that build a manifest to help with deinstallation. The manifest can even be written as a script that does the removal work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
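Put together, install and uninstall rules following the conventions above might look like this in the asker's makefile (the man page ayane.1 is hypothetical; include only the pieces the project actually ships):

```
INSTALL_PATH ?= /usr/local

install: ayane
	mkdir -p $(INSTALL_PATH)/bin
	cp ayane $(INSTALL_PATH)/bin/
	mkdir -p $(INSTALL_PATH)/share/man/man1
	cp ayane.1 $(INSTALL_PATH)/share/man/man1/

uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane
	rm -f $(INSTALL_PATH)/share/man/man1/ayane.1
```

A user can then install without root via make install INSTALL_PATH=$HOME/.local.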
In the simplest case you just copy the newly created executable into the /usr/local/bin path. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin
Not all Makefiles have a make install
