qtwebengine: undefined reference to std::basic_streambuf - linux

I am compiling qtwebengine 5.15.2 using Yocto on Ubuntu 18.04.
I am getting the below error:
[18991/20786] STAMP v8_snapshot/obj/v8/run_gen-regexp-special-case.stamp
[18992/20786] LINK v8_snapshot/torque
FAILED: v8_snapshot/torque
/home/aws-mjamal/test/build-am437x-evm-test/tmp/hosttools/g++ -pie -Wl,--fatal-warnings -Wl,--build-id=sha1 -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,-z,defs -Wl,--as-needed -m32 -pie -Wl,--disable-new-dtags -Wl,-O2 -Wl,--gc-sections -o "v8_snapshot/torque" -Wl,--start-group @"v8_snapshot/torque.rsp" -Wl,--end-group -ldl -lpthread -lrt
v8_snapshot/obj/v8/torque_base/torque_base_jumbo_3.o:(.data.rel.ro._ZTVN2v88internal6torque13NullStreambufE[_ZTVN2v88internal6torque13NullStreambufE]+0x18): undefined reference to `std::basic_streambuf<char, std::char_traits<char> >::seekoff(long, std::_Ios_Seekdir, std::_Ios_Openmode)'
collect2: error: ld returned 1 exit status
[18993/20786] CXX v8_snapshot/obj/v8/third_party/inspector_protocol/crdtp/crdtp_jumbo_1.o
[18994/20786] CXX v8_snapshot/obj/v8/src/inspector/inspector/inspector_jumbo_1.o
[18995/20786] CXX v8_snapshot/obj/v8/src/inspector/inspector/inspector_jumbo_3.o
[18996/20786] CXX v8_snapshot/obj/v8/src/inspector/inspector/inspector_jumbo_2.o
[18997/20786] CXX v8_snapshot/obj/v8/src/inspector/inspector/inspector_jumbo_4.o
ninja: build stopped: subcommand failed.

Well, to start with, seekoff is a virtual protected method.
https://en.cppreference.com/w/cpp/io/basic_streambuf
https://en.cppreference.com/w/cpp/io/basic_streambuf/pubseekoff
Calls seekoff(off, dir, which) of the most derived class.
The base class version of this function has no effect. The derived classes may override this function to allow relative positioning of the position indicator.
So, your chosen configure options opted not to build (or link) the code where seekoff is actually implemented for that object.
Qt supports X11 with Ubuntu 18.04.
https://doc.qt.io/qt-5/linux.html
Others have had issues trying to build qtwebengine on Ubuntu 18.04 because the native libopus on that platform is very old.
https://askubuntu.com/questions/1355519/opus-1-3-1-on-ubuntu-18-04-w-preinstalled-opus-0-5-2
You didn't list your brand of board, but here are the instructions from one vendor.
https://developer.toradex.com/knowledge-base/how-to-set-up-qt-creator-to-cross-compile-for-embedded-linux#boot-to-qt-for-embedded-linux
Is this the only error you got?
Web engine is a beast. Everybody deletes it from embedded systems projects because of this, and the error you are focusing on might have nothing to do with the real problem. Embedded systems builds of Qt notoriously run out of memory building WebEngine, especially if a person doesn't use -j1 to limit the build to a single job; it can consume 8GB per process. If you didn't limit the number of jobs to 1 and have a 4-core machine, then it tried to use 32GB, and you probably didn't have that. Somewhere further back in the log you got an out-of-memory error and it didn't fatal out right there.
Do you actually need qtwebengine or are you just building it to build everything?
Almost every embedded system does what this one did.
https://community.toradex.com/t/apalis-imx8qm-b2qt-qtwebengine-build-error/11441/3
It's a resource pig. Lots of embedded systems don't have enough under the hood to even think about running it.
So, one of the following should work for you:
If you don't need WebEngine, start a fresh build in a clean directory tree and remove it from the configuration.
If you really do need it, start a fresh build in a clean directory tree and use -j1 on the make/ninja command to limit the build to one CPU/thread so you don't exhaust memory; a Yocto-flavored sketch of that follows below.
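Since you are building under Yocto, here is a minimal sketch of how to throttle the build, assuming the standard bitbake knobs (the values are conservative guesses for a RAM-constrained host):
# Append to conf/local.conf in your Yocto build directory.
cat >> conf/local.conf <<'EOF'
# Limit make/ninja inside each task to a single job so the
# qtwebengine compile/link steps don't exhaust memory.
PARALLEL_MAKE = "-j 1"
# Optionally also run only one bitbake task at a time.
BB_NUMBER_THREADS = "1"
EOF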
Odds are you won't be lucky enough to have torque_base_jumbo_3 directly use seekoff(). Qt likes to bury stuff like that in hidden private classes. You will have to identify which objects (if you can) are derived from, or contain something derived from, std::basic_streambuf. With templates using templates, sometimes that can be impossible. Use grep (or a grep-like tool) to go down the entire source tree identifying which source files made use of seekoff(). That should be a small number. Next, search your build log to see what happened to each of those files. Did they all build clean, or are some of them not getting built at all?
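Something like this is the search I mean; the source path is illustrative, so adjust it to wherever Yocto unpacked the qtwebengine work directory:
# Find every V8 source file that mentions seekoff, then check the
# build log for what happened to each one.
grep -rn --include='*.cc' --include='*.h' 'seekoff' \
    tmp/work/*/qtwebengine/*/git/src/3rdparty/chromium/v8/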
https://blog.kitware.com/cmake-building-with-all-your-cores/
ninja does not require a -j flag like GNU make to perform a parallel build. It defaults to building cores + 2 jobs at once.
The only "quick fixes" for your problem is to take webengine out if you don't need it and limiting ninja to 1.
Oh! Also check the X11 link I posted for the minimum compiler version for Ubuntu 18.04. You could be getting bitten by that.

Related

How to build a .so for export to another machine? [duplicate]

I'm very new to Yesod and I'm having trouble building Yesod statically
so I can deploy to Heroku.
I have changed the default .cabal file to reflect static compilation
if flag(production)
    cpp-options: -DPRODUCTION
    ghc-options: -Wall -threaded -O2 -static -optl-static
else
    ghc-options: -Wall -threaded -O0
And it no longer builds. I get a whole bunch of warnings and then a
slew of undefined references like this:
Linking dist/build/personal-website/personal-website ...
/usr/lib/ghc-7.0.3/libHSrts_thr.a(Linker.thr_o): In function
`internal_dlopen':
Linker.c:(.text+0x407): warning: Using 'dlopen' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwent':
HsUnix.c:(.text+0xa1): warning: Using 'getpwent' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/unix-2.4.2.0/libHSunix-2.4.2.0.a(HsUnix.o): In
function `__hsunix_getpwnam_r':
HsUnix.c:(.text+0xb1): warning: Using 'getpwnam_r' in statically
linked applications requires at runtime the shared libraries from the
glibc version used for linking
/usr/lib/libpq.a(thread.o): In function `pqGetpwuid':
(.text+0x15): warning: Using 'getpwuid_r' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(ip.o): In function `pg_getaddrinfo_all':
(.text+0x31): warning: Using 'getaddrinfo' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__63.o): In function `sD3z_info':
(.text+0xe4): warning: Using 'gethostbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__164.o): In function `sFKc_info':
(.text+0x12d): warning: Using 'getprotobyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/ghc-7.0.3/site-local/network-2.3.0.2/
libHSnetwork-2.3.0.2.a(BSD__155.o): In function `sFDs_info':
(.text+0x4c): warning: Using 'getservbyname' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
/usr/lib/libpq.a(fe-misc.o): In function `pqSocketCheck':
(.text+0xa2d): undefined reference to `SSL_pending'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x31): undefined reference to `ERR_get_error'
/usr/lib/libpq.a(fe-secure.o): In function `SSLerrmessage':
(.text+0x41): undefined reference to `ERR_reason_error_string'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x2f8): undefined reference to `SSL_check_private_key'
/usr/lib/libpq.a(fe-secure.o): In function `initialize_SSL':
(.text+0x3c0): undefined reference to `SSL_CTX_load_verify_locations'
(... snip ...)
If I compile with just -static and without -optl-static, everything builds fine but the application crashes when it tries to start on Heroku.
2011-12-28T01:20:51+00:00 heroku[web.1]: Starting process with command
`./dist/build/personal-website/personal-website -p 41083`
2011-12-28T01:20:51+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: error while loading shared libraries: libgmp.so.10:
cannot open shared object file: No such file or directory
2011-12-28T01:20:52+00:00 heroku[web.1]: State changed from starting
to crashed
I tried adding libgmp.so.10 to the LD_LIBRARY_PATH as suggested here
and then got the following error:
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by ./dist/build/personal-website/personal-website)
2011-12-28T01:31:23+00:00 app[web.1]: ./dist/build/personal-website/
personal-website: /lib/libc.so.6: version `GLIBC_2.14' not found
(required by /app/dist/build/personal-website/libgmp.so.10)
2011-12-28T01:31:25+00:00 heroku[web.1]: State changed from starting
to crashed
2011-12-28T01:31:25+00:00 heroku[web.1]: Process exited
It seems that the version of libc that I'm compiling against is different. I also tried adding libc to the batch of libraries the same way I did for libgmp, but this results in a segmentation fault when the application starts on the Heroku side.
Everything works fine on my PC. I'm running 64bit archlinux with ghc
7.0.3. The blog post on the official Yesod blog looked pretty easy
but I'm stumped at this point. Anyone have any ideas? If there's a way to get this thing working without building statically I'm open to that too.
EDIT
Per Employed Russian's answer I did the following to fix this.
First created a new directory lib under the project directory and copied the missing shared libraries into it. You can get this information by running ldd path/to/executable and heroku run ldd path/to/executable and comparing the output.
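A minimal sketch of that comparison, using the binary path from my logs above:
ldd dist/build/personal-website/personal-website > local-libs.txt
heroku run ldd dist/build/personal-website/personal-website > heroku-libs.txt
# Anything resolved locally but "not found" on Heroku belongs in ./lib
diff local-libs.txt heroku-libs.txt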
I then did heroku config:add LD_LIBRARY_PATH=./lib so when the application is started the dynamic linker will look for libraries in the new lib directory.
Finally I created an ubuntu 11.10 virtual machine and built and deployed to Heroku from there, this has an old enough glibc that it works on the Heroku host.
Edit:
I've since written a tutorial on the Yesod wiki
I have no idea what Yesod is, but I know exactly what each of your other errors means.
First, you should not try to link statically. The warning you get is exactly right: if you link statically, and use one of the routines for which you are getting the warning, then you must arrange to run on a system with exactly the same version of libc.so.6 as the one you used at build time.
Contrary to popular belief, static linking produces less, not more, portable executables on Linux.
Your other (static) link errors are caused by missing static OpenSSL libraries (libssl.a and libcrypto.a) at link time.
But let's assume that you are going to go the "sane" route, and use dynamic linking.
For dynamic linking, Linux (and most other UNIXes) support backward compatibility: an old binary continues to work on newer systems. But they don't support forward compatibility (a binary built on a newer system will generally not run on an older one).
But that's what you are trying to do: you built on a system with glibc-2.14 (or newer), and you are running on a system with glibc-2.13 (or older).
The other thing you need to know is that glibc is composed of some 200+ binaries that must all match exactly. Two key binaries are /lib/ld-linux.so and /lib/libc.so.6 (but there are many more: libpthread.so.0, libnsl.so.1, etc. etc). If some of these binaries came from different versions of glibc, you usually get a crash. And that is exactly what you got, when you tried to place your glibc-2.14 libc.so.6 on the LD_LIBRARY_PATH -- it no longer matches the system /lib/ld-linux.
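A quick way to see the mismatch for yourself (run the second command on the target host):
# Which glibc symbol versions does the binary demand?
objdump -T dist/build/personal-website/personal-website | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# What does the target actually provide? (run this on Heroku)
ldd --version | head -n1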
So what are the solutions? There are several possibilities (in increasing difficulty):
You could copy ld-2.14.so (the target of /lib/ld-linux symlink) to the target system, and invoke it explicitly:
/path/to/ld-2.14.so --library-path <whatever> /path/to/your/executable
This generally works, but can confuse an application that looks at argv[0], and breaks for applications that re-exec themselves.
You could build on an older system.
You could use appgcc (this option has disappeared, see this for description of what it used to be).
You could set up a chroot environment matching the target system, and build inside that chroot.
You could build yourself a Linux-to-older-Linux cross-compiler.
You have several issues.
You should not build production binaries on bleeding edge distributions. The libraries on the production system will not be forward compatible.
You should not link glibc statically - even then it will try to load additional libraries at runtime, for example the NSS modules behind the user and host lookups. That is what your first warnings are about.
The last linker errors look like they are related to a missing openssl library on the command line.
But all in all - downgrade your distribution.
I had similar problems launching to Heroku (which uses glibc-2.11) where I had an application that required glibc-2.14, but I did not have access to the source and could not re-build it. I tried many things and nothing worked.
My workaround was to launch the service on Amazon Elastic Beanstalk and just provide an API interface.
I found the information provided useful as well, but I think the various descriptions miss a critical issue I also ran into while forcing an updated version of Vagrant to start working again: the dependency references internal to complicated installs (like Yesod to Heroku). Those internal references need to be preserved.
This is the script I wrote to make problems go away (at least, hopefully, for a little while):
#!/bin/bash
cd $HOME/
GLIBC_VERSION="2.17"
GLIBC_PREFIX="/usr/glibc/"
VAGRANT_VERSION="2.2.19"
# Install the basic build system utilities.
yum groupinstall -y "Development tools"
yum install -y curl patchelf
# Grab the tarball with the GNU libc source code.
curl -Lfo glibc-${GLIBC_VERSION}.tar.gz "https://ftp.gnu.org/gnu/glibc/glibc-${GLIBC_VERSION}.tar.gz"
echo "a3b2086d5414e602b4b3d5a8792213feb3be664ffc1efe783a829818d3fca37a glibc-${GLIBC_VERSION}.tar.gz" | sha256sum -c || exit 1
# Extract the secrets and get ready to rumble.
tar xzvf glibc-${GLIBC_VERSION}.tar.gz
# The configure script requires an independent build directory.
mkdir -p glibc-build && cd glibc-build
# Configure glibc with a GLIBC_PREFIX so it doesn't conflict with distro libc files.
../glibc-${GLIBC_VERSION}/configure --prefix="${GLIBC_PREFIX}" --libdir="${GLIBC_PREFIX}/lib" \
--libexecdir="${GLIBC_PREFIX}/lib" --enable-multi-arch
# Compile and then install GNU libc.
make -j8 && make install
# Download and install Vagrant.
curl -Lfo vagrant_${VAGRANT_VERSION}_x86_64.rpm "https://releases.hashicorp.com/vagrant/${VAGRANT_VERSION}/vagrant_${VAGRANT_VERSION}_x86_64.rpm"
echo "990e8d2159032915f21c0f1ccdcbca1a394f7937e06e43dc1dabe605d208dc20 vagrant_${VAGRANT_VERSION}_x86_64.rpm" | sha256sum -c || exit 1
yum install -y vagrant_${VAGRANT_VERSION}_x86_64.rpm
# Patch the binaries and shared libraries inside the Vagrant directory, so they use the new version of GNU libc.
(find /opt/vagrant/ -type f -exec file {} \; )| grep "dynamically linked" | awk -F':' '{print $1}' | while read FILE ; do
patchelf --set-rpath /opt/vagrant/embedded/lib:/opt/vagrant/embedded/lib64:/usr/glibc/lib:/usr/lib64:/lib64:/lib --set-interpreter /usr/glibc/lib/ld-linux-x86-64.so.2 "${FILE}"
done
The script should be pretty easy to understand, and easy to adapt to whatever MacGuffin you want to make work, provided you understand it.
The only tricky part is the rpath you pass to patchelf. You need to make sure you preserve the search paths and the precedence your software requires, or you end up fixing one problem only to create another equally frustrating roadblock.
P.S. Don't forget to update the hashes for any file you download. In particular, if you compile/install a different version of GNU libc, you will need to update that hash to match the version you want to use.
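A quick sanity check after the script runs; the path is illustrative, so point it at one of the binaries the find loop actually patched:
patchelf --print-interpreter /opt/vagrant/embedded/bin/ruby
patchelf --print-rpath /opt/vagrant/embedded/bin/ruby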

How to build a gcc compiler on Linux that builds both 32-bit and 64-bit code

I followed the directions in the following URL to build a gcc compiler for Linux:
https://solarianprogrammer.com/2016/10/07/building-gcc-ubuntu-linux/
The resulting compiler builds 64-bit code with no problems.
However, when I try to build 32-bit code (by specifying the -m32 compiler option), I get errors.
Here are the errors that I get:
cannot find -lstdc++
cannot find -lgcc_s
skipping incompatible libgcc.a when searching for -lgcc
cannot find -lgcc
Obviously, when I built the compiler, I did something wrong - can anyone tell me what I did wrong and how I can rebuild the compiler to build both 32-bit and 64-bit code?
You at least need to configure with --with-multilib-list=m32,m64 on the configure command line.[1] You definitely need to not configure with --disable-multilib. You may also need to build & install additional versions of other libraries.
In general, searching the documentation for 'multilib' will show you all the places where it talks about building or using gcc with multiple target ABIs.
[1] This is the default on at least some versions of gcc. You could also add mx32 if you want to experiment with that.
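A hedged sketch of what that configure invocation might look like; the prefix, source path, and language list are illustrative, not a verified recipe (on Ubuntu you will also need the 32-bit glibc development files, e.g. libc6-dev-i386):
mkdir gcc-build && cd gcc-build
# Configure a multilib-capable gcc; make sure --disable-multilib
# does not appear anywhere on this line.
../gcc-source/configure --prefix="$HOME/opt/gcc" \
    --enable-languages=c,c++ \
    --with-multilib-list=m32,m64
make -j"$(nproc)" && make install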

Linker Issues with boost::thread under linux using Eclipse and CMake

I'm in the process of attempting to port some code across from PC to Ubuntu, and am having some issues due to limited experience developing under linux.
We use CMake to generate all our build stuff. Under windows I'm making VS2010 projects, and under Linux I'm making Eclipse projects. I've managed to get my OpenCV stuff ported across successfully, but am having major headaches trying to port my threaded boost apps.
Just so we're clear, these are the steps I have followed so far on a clean Ubuntu 12 installation (I've done 2 clean re-installs to try and fix potential library cock-ups; now I'm just giving up and asking):
Install Eclipse and Eclipse CDT using my package manager
Install CMake and CMake Gui using my package manager
Install libboost-all-dev using my package manager
So-far that's all I've done. I can create the eclipse project using CMake with no errors, so CMake is successfully finding my boost install. When I try and build through eclipse is when I get issues; The app I'm attempting to build uses boost::asio for some UDP I/O and boost::thread to create worker threads for the asio I/O services. I can successfully compile each module, but when I come to link I get spammed with errors such as:
/usr/bin/c++ CMakeFiles/RE05DevelopmentDemo.dir/main.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/RE05FusionListener/RE05FusionListener.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/NewEye/NewEye.cpp.o -o RE05DevelopmentDemo -rdynamic -Wl,-Bstatic -lboost_system-mt -lboost_date_time-mt -lboost_regex-mt -lboost_thread-mt -Wl,-Bdynamic
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)()) [clone .constprop.98]':
make[2]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo'
(.text+0xc8): undefined reference to `pthread_key_create'
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::interruption_enabled()':
(.text+0x540): undefined reference to `pthread_getspecific'
make[1]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo'
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()':
(.text+0x570): undefined reference to `pthread_getspecific'
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()':
(.text+0x59f): undefined reference to `pthread_getspecific'
Some Gotchas that I have collected from other StackOverflow posts and have already checked:
The boost libs are all present at /usr/lib
I am not getting any compile errors for inability to find the boost headers, so they must be getting found.
I am trying to link statically, but I believe eclipse should be passing the correct arguments to make that happen since my CMakeLists.txt includes SET(Boost_USE_STATIC_LIBS ON)
I'm officially out of ideas here, I have tried doing local builds of boost and a bunch of other stuff with no more success. I even re-installed Ubuntu to ensure I haven't completely fracked the libs directories and links with multiple weird versions or anything else. Any help would be muchly appreciated.
The correct mechanism is to use the Threads package:
find_package(Threads)
#...
target_link_libraries(my_app ${CMAKE_THREAD_LIBS_INIT} ...)
See also cmake and libpthread
When you are building your targets, add -lpthread and it will compile.
See this other thread.
OK, so I found the solution.
It was to do with the absence of the -lpthread flag in the link command. In order to get CMake to link appropriately, the TARGET_LINK_LIBRARIES line needs to be edited. My edit is:
Original:
TARGET_LINK_LIBRARIES( RE05DevelopmentDemo ${Boost_LIBRARIES} )
Modified and working:
IF(WIN32)
    TARGET_LINK_LIBRARIES( RE05DevelopmentDemo ${Boost_LIBRARIES} )
ELSE(WIN32)
    TARGET_LINK_LIBRARIES( RE05DevelopmentDemo ${Boost_LIBRARIES} pthread )
ENDIF(WIN32)
I'm guessing that I should probably change the ELSE(WIN32) to an elseif, or use the CMake command FindThreads to link in pthread only where needed, but I'm not sure how to do that at the moment and have more important things on my plate given the time I've lost. Interestingly enough, I noticed my link command now has two -lpthread flags appended at the end, one after another, but everything is still compiling quite happily.

Statically Linking NCurses Gives Error, for use in BusyBox environment

I wrote a very simple ncurses program to be run in a BusyBox environment. However, it seems that I cannot get my program to compile with everything included. I used:
g++ menu.cpp -ohello -lncurses --> Works fine
g++ -static menu.cpp -ohello -lncurses --> Undefined reference to SP (many times)
I found this question but it ignores linking to ncurses. I need a single, self-contained executable. My targeted environment is fixed, so portability is not a concern.
You should paste the exact compiler calls and the exact error messages that you are getting.
Do you have a static version of the ncurses library?
More importantly, do you have a static version of the ncurses library compiled for your target environment? For example your target environment may be using ulibc instead of glibc or it could even be a whole different platform (hint: tell us what your target platform is).
Are you certain that you are compiling with the right flags? The compiler flags that you are showing seem more suited to compiling an application for use in the build host environment...
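If it does come down to flags, something along these lines sometimes resolves undefined SP references when a static ncurses is split from its terminfo half - a guess, not a verified fix for your environment:
# Let pkg-config expand the full static closure of ncurses, if available:
g++ -static menu.cpp -o hello $(pkg-config --static --libs ncurses)
# Or link libtinfo explicitly, which is where the low-level symbols often live:
g++ -static menu.cpp -o hello -lncurses -ltinfo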

Using software floating point on x86 linux

Is it (easily) possible to use software floating point on i386 Linux without incurring the expense of trapping into the kernel on each call? I've tried -msoft-float, but it seems the normal (Ubuntu) C libraries don't have an FP library included:
$ gcc -m32 -msoft-float -lm -o test test.c
/tmp/cc8RXn8F.o: In function `main':
test.c:(.text+0x39): undefined reference to `__muldf3'
collect2: ld returned 1 exit status
It is surprising that gcc doesn't support this natively as the code is clearly available in the source within a directory called soft-fp. It's possible to compile that library manually:
$ svn co svn://gcc.gnu.org/svn/gcc/trunk/libgcc/ libgcc
$ cd libgcc/soft-fp/
$ gcc -c -O2 -msoft-float -m32 -I../config/arm/ -I.. *.c
$ ar -crv libsoft-fp.a *.o
There are a few C files which don't compile due to errors, but the majority do compile. After copying libsoft-fp.a into the directory with our source files they now compile fine with -msoft-float:
$ gcc -g -m32 -msoft-float test.c -lsoft-fp -L.
A quick inspection using
$ objdump -D --disassembler-options=intel a.out | less
shows that as expected no x87 floating point instructions are called and the code runs considerably slower as well, by a factor of 8 in my example which uses lots of division.
Note: I would've preferred to compile the soft-float library with
$ gcc -c -O2 -msoft-float -m32 -I../config/i386/ -I.. *.c
but that results in loads of error messages like
adddf3.c: In function '__adddf3':
adddf3.c:46: error: unknown register name 'st(1)' in 'asm'
Seems like the i386 version is not well maintained, as st(1) refers to one of the x87 registers, which are obviously not available when using -msoft-float.
Strangely, or luckily, the ARM version compiles fine on an i386 and seems to work just fine.
Unless you want to bootstrap your entire toolchain by hand, you could start with the uclibc toolchain (the i386 version, I imagine) -- soft float is (AFAIK) not directly supported for "native" compilation on Debian and derivatives, but it can be used via the "embedded" approach of the uclibc toolchain.
GCC does not support this without some extra libraries. From the 386 documentation:
-msoft-float
Generate output containing library calls for floating point. Warning: the requisite libraries are not part of GCC. Normally the facilities of the machine's usual C compiler are used, but this can't be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation.
On machines where a function returns floating point results in the 80387 register stack, some floating point opcodes may be emitted even if -msoft-float is used.
Also, you cannot set the -mfpmath unit to "none"; it has to be sse, 387 or both.
However, according to this gnu wiki page, there is fp-soft and ieee. There is also SoftFloat.
(For ARM there is -mfloat-abi=softfp, but it does not seem like something similar is available for 386 SX).
It does not seem like tcc supports software floating point numbers either.
Good luck finding a library that works for you.
G'day,
Unless you're targeting a platform that doesn't have inbuilt FP support, I can't think of a reason why you'd want to emulate FP support.
Doesn't your x386 platform have external FPU support? Pity it's not an x486 with the FPU built in!
In my experience, any soft emulation is bound to be much slower than its hardware equivalent.
That's why I finished up writing a package in Ada to target the onboard 68k FPU instead of using the soft emulation provided by the compiler manufacturer at the time. They finished up bundling it in their compiler, as a matter of fact.
Edit: Just seen your comment below. Hmmm, if you don't need a full suite of FP support, is it possible to roll your own for the few math functions you do need? That's how the Ada package I mentioned got started.
HTH
cheers,
