Should I use libc++ or libstdc++? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am developing command-line executables for both OS X and Linux using C/C++. The project will link against OpenCV. Should I use libc++ or libstdc++?

I would use the native library for each OS, i.e. libstdc++ on GNU/Linux and libc++ on Mac OS X.
libc++ is not 100% complete on GNU/Linux, and there's no real advantage to using it when libstdc++ is more complete. Also, if you want to link to any other libraries written in C++ they will almost certainly have been built with libstdc++ so you'll need to link with that too to use them.
The libc++ project documents its completeness status on the various platforms it supports.

Major Linux distributions do not ship LLVM's libc++, because:
Unlike for Apple and FreeBSD, the GPLv3 is not an issue, so there is no need to maintain a second C++ standard library stack.
Linux components have been developed around GNU libstdc++ for ages. Some of them do not build against anything else.
While libc++ is strong on new features, it has some problems with legacy code.
If libc++ eventually becomes part of distributions, it will be as an optional component, and linking against it will probably require extra options.
As Jonathan said, you should use whatever library the platform includes by default. Clang is safe to use on Linux since it is configured as a drop-in GCC replacement, so in that respect you don't have to worry about two compilers. Also, since you are targeting two platforms, you should take a look at CMake.

Related

Build system on Linux that doesn't rely on make [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
On GNU/Linux the use of GNU make and Makefiles is very common but not entirely satisfying. I am aware of tools like autotools and CMake, but ultimately they still generate a Makefile (in the case of CMake, at least on Linux); they just automate the process of writing it.
I am wondering what build systems exist on Linux that do not require one to execute GNU make, or even have GNU make installed, and what advantages/disadvantages they have compared to GNU make.
Similar information about POSIX make, or about non-GNU Linux or Unix in general, is also welcome. It would also be nice to include historical perspectives.
I don't get your point about CMake. There is Ninja, which is commonly used with CMake. CMake has multiple generators, Make just being the most commonly used; see cmake-generators for more.
There is even a Wikipedia page, List_of_build_automation_software, with a list of Make-incompatible build systems, most of which work under Linux. I've seen projects with:
Maven
Ant
waf
SCons
Qt Build System
Rake
Ninja
Bazel
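As an illustration of the generator point above, here is a minimal CMakeLists.txt (project and file names are hypothetical); the same project file can be driven by Make or Ninja, chosen at configure time rather than in the project itself:

```cmake
# CMakeLists.txt — hypothetical minimal project
cmake_minimum_required(VERSION 3.10)
project(hello CXX)
add_executable(hello main.cpp)

# The build tool is selected when configuring, not in this file:
#   cmake -G "Unix Makefiles" .   # generates a Makefile (the Linux default)
#   cmake -G Ninja .              # generates build.ninja; GNU make not needed
```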

Difference between arm-eabi arm-gnueabi and gnueabi-hf compilers [closed]

Closed 3 years ago.
The community reviewed whether to reopen this question 12 months ago and left it closed:
Original close reason(s) were not resolved
What is the difference between arm-eabi, gnueabi and gnueabi-hf cross compilers?
I am kind of finding it difficult to choose the executable that is correct for my target platform.
Is there a native compiler for arm?
I'm not completely sure, but as I understand it:
eabi means the code is compiled to run on a bare-metal ARM core;
gnueabi means the code is compiled for Linux.
For the gnueabi/gnueabi-hf part, I found the following answer:
gcc-arm-linux-gnueabi is the cross-toolchain package for the armel architecture. This toolchain implies the EABI generated by gcc's -mfloat-abi=soft or -mfloat-abi=softfp options.
gcc-arm-linux-gnueabihf is the cross-toolchain package for the armhf architecture. This toolchain implies the EABI generated by the gcc -mfloat-abi=hard option.
'hf' means hard-float, which indicates that the compiler and its underlying libraries use hardware floating-point instructions rather than a software implementation of floating point (such as fixed-point software emulation).
The 'eabi' refers to what the underlying binary is going to look like.
It can be argued that the same results can be achieved with flags to gcc, but the issue is pre-compiled bare-metal libraries. Unless you are recompiling everything from source, it may not be feasible to use gcc with flags alone; even then, you might have to carefully configure each package or library with the appropriate compile options.

Which UUID library to use [closed]

Closed 3 years ago.
I'm writing a GUI application using GNOME technologies, and I need to define UUIDs for resources in RDF files.
I'm writing in C++, but I don't mind using a C library and wrapping it in my own C++ wrapper. I also prefer using existing common libraries to adding dependencies on third-party ones.
I found two libraries which seem to be standard: libuuid (which ships as part of util-linux) and the OSSP uuid library, which has a C++ binding.
No program on my system uses the OSSP uuid library, but my whole desktop depends on the libuuid package.
The question is, which one should I use? Is there a difference or I can just choose randomly? I don't know why there are different implementations, but I'd like to choose one and stick with it.
If you are on Linux anyway, probably your best option is using libuuid. I mean, everyone is using it and it's a really nice library.
You'll have to depend on the chosen library and, most likely, libuuid will already be present on your user's system. You noted that no program on your system uses OSSP; the same is true for all my systems. So why bother with some, let's call it, third-party library when you already have a popular library used by everyone else and known to work very well? (I don't mean that OSSP works worse; it's also quite good.)
I'm not aware of any reason to prefer OSSP uuid over libuuid.
Well, I should probably note that you can simply read UUIDs from /proc/sys/kernel/random/uuid, but that's not as much fun as using a C library, right?
Go for libuuid: it has wider use, and it's easier to get feedback and find documentation in case of problems.

Giving R under Linux access to a DLL [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have an R script which was developed in Windows, and which requires a particular DLL to be in the path because it uses some functions contained therein (via the dyn.load function).
Is it possible to make the script work under Linux? Perhaps using wine?
Assuming you have the source code of the non-R code, I think your best bet is to compile the code under Linux (e.g. using gcc), create the shared library (.so file), and load that into R. If you put your code (R code and the other source code) in an R package, you can integrate them so that everything installs in one go, with the source compiled on the fly.
The fact that you don't have the source code makes things quite a bit more complex. This SO post:
Using Windows DLL from Linux
suggests to me that what you want is not trivial. One option would be to run the DLL in a Windows virtual machine and communicate with it using e.g. TCP/IP. Depending on how far you are willing to go, this might be a solution. The answers to the post above also suggest Wine will not provide a satisfactory solution, but the post is quite old, so Wine might have improved in the meantime.
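For the case where you do have the source, here is a sketch of what such a shared library looks like (function and file names are hypothetical; R's `.C` interface passes pointers, so the function takes `double*`):

```cpp
// add.cpp — build with: g++ -shared -fPIC -o add.so add.cpp
// (or let R drive the build: R CMD SHLIB add.cpp)
//
// extern "C" prevents C++ name mangling so that R's dyn.load()/.C()
// can find the symbol by its plain name.
extern "C" void add_one(double *x) {
    *x += 1.0;
}
```

In R, `dyn.load("add.so"); .C("add_one", x = as.double(2))$x` should then return 3.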

Run .pkg files in Linux [closed]

Closed 10 years ago.
Is it possible to run .pkg files in Ubuntu or Fedora? If so, how do I start?
What's a .pkg file? It's an installer-package format used on Mac systems (Apple Inc.).
Are there any alternatives which can run .pkg files on Linux distros? (Specifically Ubuntu or Fedora; I'm using the latest version of both.)
You can unpack the Xar format using the xar archiver; perhaps the Ark archive front-end can also handle it, as it links against libarchive12, which provides read-only support for Xar.
OS X uses the Mach-O executable format, while Linux uses ELF. (Okay, Linux can also read some archaic a.out formatted files too, but this format is effectively dead on modern Linux systems.) There is an experimental Mach-O loader for Linux, but it sure sounds like a toy at this point. (You'd also need the libraries that applications use in order to actually run programs -- that'd be another complication.)
So: Yes, you can unpack them. No, you cannot simply run OS X applications on Linux.
