Building Python 3 C modules on macOS - setup using odd compiler options

I'm trying to build a C module on macOS. The build fails during the link step with the error:
ld: warning: object file (build/temp.macosx-11.1-x86_64-3.7/pparser.o) was built for newer macOS version (11.1) than being linked (11.0)
I realize this is just a warning, but the resulting .so file fails to import into a Python script. I'm doing the build with the following setup.py:
from distutils.core import setup, Extension
setup(name='pparser', version='1.0',
      ext_modules=[Extension('pparser',
                             ['pparser.cpp'],
                             extra_compile_args=["-Wno-nullability-completeness",
                                                 "-Wno-undef-prefix",
                                                 "-I/usr/local/opt/flex/include",
                                                 "-I/usr/local/opt/bison/include",
                                                 "-std=gnu++14"])])
What's odd is that I can capture the clang invocation and run it myself at the command line with the verbose option
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -bundle -undefined dynamic_lookup -L/Users/john/.pyenv/versions/3.7.9/lib -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk -std=gnu++14 -L/usr/local/opt/readline/lib -L/Users/john/.pyenv/versions/3.7.9/lib -L/usr/local/opt/tcl-tk/lib build/temp.macosx-11.1-x86_64-3.7/pparser.o -o build/lib.macosx-11.1-x86_64-3.7/pparser.cpython-37m-darwin.so --verbose
and I see the following:
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
"/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld" -demangle -lto_library /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/libLTO.dylib -dynamic -arch x86_64 -bundle -platform_version macos 11.0.0 11.3 -syslibroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk -undefined dynamic_lookup -undefined dynamic_lookup -o build/lib.macosx-11.1-x86_64-3.7/pparser.cpython-37m-darwin.so -L/Users/john/.pyenv/versions/3.7.9/lib -L/usr/local/opt/readline/lib -L/Users/john/.pyenv/versions/3.7.9/lib -L/usr/local/opt/tcl-tk/lib build/temp.macosx-11.1-x86_64-3.7/pparser.o -lc++ -lSystem /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/lib/darwin/libclang_rt.osx.a
I suspect that the problem is the ld switch -platform_version macos 11.0.0 11.3, but where does that come from? It's not in the setup file or in the command line that I typed.
My setup is brew-installed Python 3.7.9 in a pyenv virtual environment.
Any help understanding where the Python build is pulling these command-line options from would be much appreciated.

After more digging, I found the answer here:
Customizing the compiler and linker used by setuptools
It seems that CPython records, at build time, sysconfig data containing all the compiler and linker variables used to compile it. These are then reused by distutils when a setup.py is built.
You can review these variables from Python:
import distutils.sysconfig
distutils.sysconfig.get_config_vars()
The author goes on to describe how to override any variable desired.
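For example (a minimal sketch, not taken from the linked article): MACOSX_DEPLOYMENT_TARGET, LDSHARED and LDFLAGS are among the variables distutils will also read from the environment, so you can inspect the baked-in values and then override one for a single build. Note that distutils may reject a deployment target older than the one CPython itself was configured with.
# inspect the values CPython was built with
python -c "import distutils.sysconfig as sc; print(sc.get_config_var('MACOSX_DEPLOYMENT_TARGET'))"
python -c "import distutils.sysconfig as sc; print(sc.get_config_var('LDSHARED'))"
# override for one build; distutils picks this up from the environment
MACOSX_DEPLOYMENT_TARGET=11.1 python setup.py build_ext --inplace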

Related

nvm: install fails on BSD while building from source

System Info: FreeBSD 11.3-RELEASE-p3, amd64
I have tried using nvm to install node v12.16.2, v10.20.1, and v10.15.3, but it fails when building from source (no binary is available to pull on BSD) with the same error on all three:
/usr/bin/ld:/usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/tools/v8_gypfiles/libv8_libbase.a: file format not recognized; treating as linker script
/usr/bin/ld:/usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/tools/v8_gypfiles/libv8_libbase.a:1: syntax error
Configure completes successfully:
$>./configure --prefix=/home/ifiht/.nvm/versions/node/v12.16.2 <
INFO: configure completed successfully
gmake -C out BUILDTYPE=Release V=0
and the last command before failure is:
/usr/bin/clang++ -o /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/bytecode_builtins_list_generator -pthread -rdynamic -m64 -Wl,--export-dynamic -Wl,--start-group /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/bytecode_builtins_list_generator/deps/v8/src/builtins/generate-bytecodes-builtins-list.o /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/bytecode_builtins_list_generator/deps/v8/src/interpreter/bytecode-operands.o /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/bytecode_builtins_list_generator/deps/v8/src/interpreter/bytecodes.o /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/tools/v8_gypfiles/libv8_libbase.a -L/usr/local/lib -lexecinfo -Wl,--end-group
I'm out of troubleshooting ideas. If anyone knows how to enable verbosity for nvm install, that would also help; I'm also not sure why the linker is trying to read the .a (static archive) files as linker scripts.
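As a first check (a sketch using the paths from the log above): "file format not recognized" often means the archive is empty, truncated, or built for a different target, so it may help to see what that .a actually contains:
file /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/tools/v8_gypfiles/libv8_libbase.a
ar t /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/tools/v8_gypfiles/libv8_libbase.a | head
# compare with an object file that the same link line consumed without complaint
file /usr/home/ifiht/.nvm/.cache/src/node-v12.16.2/files/out/Release/obj.target/bytecode_builtins_list_generator/deps/v8/src/interpreter/bytecodes.o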

Installing Vim 8.1 on CentOS 7 for a user with Python 3.7 support fails

I am trying to install Vim 8.1 as a local user on CentOS 7, following the procedure found here: https://github.com/Valloric/YouCompleteMe/wiki/Building-Vim-from-source, but with some changes.
I run into a problem with Python:
fatal error: Python.h: No such file or directory
#include <Python.h>
After cloning vim I do the following, without error:
./configure --with-features=huge --enable-multibyte --enable-rubyinterp=yes --enable-python3interp=yes --with-python3-config-dir=/usr/local/lib/python3.7/config-3.7m-x86_64-linux-gnu --enable-perlinterp=yes --enable-luainterp=yes --enable-gui=gtk2 --enable-cscope --prefix=/home/myuser
I am pointing --with-python3-config-dir at Python 3.7 (which is the Python version I am using), but it seems to find Python 3.4:
cc -std=gnu99 -c -I. -I/home/myuser/env/env3/include/python3.4m -pthread -fPIE -Iproto -DHAVE_CONFIG_H -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -o objects/if_python3.o if_python3.c
This is strange to me, but more problematic is that it cannot find Python.h. This happens even when I add /usr/include/python3.4m (where Python.h is located) to the PATH:
PATH=/usr/include/python3.4m:$PATH
echo $PATH shows that it is there.
Can anyone help me with this? I imagine that keeping version 3.4 in the Vim installation would not be a problem for me.
Well, I happened to need Vim in the same configuration as yours.
I started with an official CentOS 7 Docker image, and over a few days I put together this Vim installation script:
https://gist.github.com/niloct/af20c98e983c60cdd26eaa4745d3e99e
What happened is that I decided to compile Python 3.7.3 from source, since no packages were available for that OS version (I searched here: https://pkgs.org/), and then managed to configure the Vim build to work with it.
Of everything that's set there, the Python 2.7 config dir setting (--with-python-config-dir=/usr/lib64/python2.7/config/) may be different from yours; tweak it and hopefully you can compile everything with this.
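As a side note (a sketch, assuming python3.7 and its development headers are installed): the compiler's include search does not use PATH, so a quicker check is to ask the interpreter itself where Python.h and the config directory live, and feed that to Vim's configure:
python3.7 -c "import sysconfig; print(sysconfig.get_paths()['include'])"    # directory that should contain Python.h
python3.7 -c "import sysconfig; print(sysconfig.get_config_var('LIBPL'))"   # candidate for --with-python3-config-dir
If the first directory has no Python.h in it, the interpreter was installed without its development headers (on CentOS, typically a *-devel package, or a from-source build as above).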

clang++ as drop-in g++ replacement

I'm trying to use clang++ as a drop-in replacement for g++. I'm compiling for AArch64, but for linking, clang seems to invoke the native (x86) /usr/bin/ld instead of the one from the AArch64 GCC suite. The clang command line looks like:
clang++ -target aarch64-linux-gnu -v \
-gcc-toolchain /path/to/aarch64/gcc \
--sysroot=/path/to/aarch64/gcc/aarch64-linux-gnu/libc \
<some other options> <obj files>
And from the verbose output, I get:
Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
Target: aarch64--linux-gnu
Thread model: posix
Found candidate GCC installation: /path/to/aarch64/gcc/lib/gcc/aarch64-linux-gnu/4.9.3
Selected GCC installation: /path/to/aarch64/gcc/lib/gcc/aarch64-linux-gnu/4.9.3
"/usr/bin/ld" --sysroot=/path/to/aarch64/gcc/aarch64-linux-gnu/libc ...
I don't get why clang ended up choosing the native linker. The link fails for the obvious reason that the object files are AArch64 ELF. The compilation lines are similar to the above, but they go through fine.
Any thoughts?
PS: I'm a novice clang user
I managed to find a solution: GCC accepts the -B option to point at the search path where it tries to locate the binutils. It turns out that clang also accepts this option (although it was not documented at the time). For me, pointing -B at the AArch64 binutils solved the problem. Another suggestion was to add the AArch64 binutils directory to $PATH.
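A minimal sketch of what that looks like (the bin directory below is an assumption based on the toolchain layout above):
clang++ -target aarch64-linux-gnu -v \
    --sysroot=/path/to/aarch64/gcc/aarch64-linux-gnu/libc \
    -B/path/to/aarch64/gcc/aarch64-linux-gnu/bin \
    <some other options> <obj files>
With that, the -v output should show .../aarch64-linux-gnu/bin/ld being invoked instead of /usr/bin/ld.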

GCC keeps saying -mfpu=neon is an unrecognized command line option

I am compiling code to run on an ARM CPU with NEON, and the makefiles include the following options on the command line:
-mcpu=cortex-a9 -march=armv7 -mfpu=neon -DARM_NEON
The details of GCC version are as follows:
gcc (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4
However when I try to compile, gcc keeps throwing the following error:
gcc: warning: '-mcpu=' is deprecated; use '-mtune=' or '-march=' instead
gcc: error: unrecognized command line option '-mfpu=neon'
I am pretty sure that the code compiled previously, though a long time ago. Could it be due to changes in the GCC version, or does it have to do with 32-bit vs. 64-bit compilers?
It turned out I was trying to cross-compile for an ARM processor on my Intel x86_64 Ubuntu machine. I needed to add the host configuration to the makefiles and use arm-linux-gnueabihf-gcc instead of gcc.
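Roughly (example.c is a made-up file name; on Ubuntu the cross compiler comes from the gcc-arm-linux-gnueabihf package):
# the native x86_64 gcc does not know the ARM-only -mfpu= option:
gcc -mcpu=cortex-a9 -march=armv7 -mfpu=neon -DARM_NEON -c example.c          # fails as shown above
# the ARM cross compiler accepts it:
arm-linux-gnueabihf-gcc -mcpu=cortex-a9 -mfpu=neon -DARM_NEON -c example.c -o example.o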

configure test with static lib

I am trying to cross-compile libpng with zlib for the Raspberry Pi on Ubuntu 14.04 (x86_64),
but configure fails with:
configure:11400: arm-linux-gnueabihf-gcc -o conftest -g -O2 -I/home/user/RPI_DEV/lib/include conftest.c -lz -lm >&5
/home/user/RPI_DEV/xtools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/../lib/gcc/arm-linux-gnueabihf/4.8.3/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
configure:11400: $? = 1
configure: failed program was:
....
Because I am using the ARM toolchain, the ARM ld can't find zlib.
Is there any option to make configure link against the static lib instead of the shared lib (e.g. -static -lz)?
The command is:
./configure --enable-static=true --enable-shared=false --with-zlib-include="/home/user/RPI_DEV/lib/include" --with-zlib-lib="/home/user/RPI_DEV/lib/lib" LDFLGS="-L/home/user/RPI_DEV/lib/lib" CPPFLAGS="-I/home/user/RPI_DEV/lib/include" -enable-static --host=arm-linux-gnueabihf --prefix=/home/user/RPI_DEV/lib --exec-prefix=/home/user/RPI_DEV/lib
You need to cross build and install zlib into your toolchain before trying to use it in another project.
What you are doing might work, but only if you spell LDFLAGS correctly. You wrote:
LDFLGS="-L/home/user/RPI_DEV/lib/lib"
Note the missing 'A'. I don't know why your second attempt worked, given you had the same misspelling; possibly you had a correct LDFLAGS in your environment?
Anyway, there should be an Ubuntu cross-development guide somewhere that explains how to do this. It's slightly off topic, but for Gentoo you use 'crossdev' to install the toolchain and then a crossdev-specific version of the normal package installation mechanism ([host]-emerge) to install zlib into the toolchain.
Also, the arguments --with-zlib-include and --with-zlib-lib are not supported by any current version of libpng I can find. If you are cross-compiling libpng for an RPi (or, indeed, any ARM system) you should be using the latest version of 1.6 that you can find.
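For reference, a rough sketch of cross-building zlib into the same local prefix first (the zlib version and paths are assumptions; zlib's configure honours CC, --static and --prefix):
cd zlib-1.2.11
CC=arm-linux-gnueabihf-gcc ./configure --static --prefix=/home/user/RPI_DEV/lib
make
make install
After that, the -I/-L paths already passed to libpng's configure (with LDFLAGS spelled correctly) should let its zlib check succeed.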
Unless someone solves this the RIGHT way, this is the hack I've done.
Open the configure.ac file.
Find and comment out the line:
AC_CHECK_LIB(z, zlibVersion, , AC_ERROR([zlib not installed]))
configure will then pass without the zlib check; add zlib by hand:
LDFLGS="-L/home/user/RPI_DEV/lib/lib -L/home/user/RPI_DEV/lib/lib/libz.a"
Run autoconf
Run ./configure ...
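If editing configure.ac is undesirable, a possibly simpler variant (an untested sketch; autoconf-generated configure scripts accept CPPFLAGS, LDFLAGS and LIBS as arguments and use them in their link tests) is to hand the static archive straight to configure:
./configure --host=arm-linux-gnueabihf --prefix=/home/user/RPI_DEV/lib \
    --enable-static=true --enable-shared=false \
    CPPFLAGS="-I/home/user/RPI_DEV/lib/include" \
    LDFLAGS="-L/home/user/RPI_DEV/lib/lib" \
    LIBS="/home/user/RPI_DEV/lib/lib/libz.a"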
