Building libharu from scratch - visual-c++

Recently I've been trying to build and use the libharu library in order to create PDFs from bitmaps.
I've done some research through its site: http://libharu.org/.
There are instructions showing how to build it, but it doesn't build for me because it has dependencies on two other libraries - zlib and libpng - which I don't understand how to integrate into the building process.
I can't clearly understand the entire process, so my last hope is that someone who has built it from scratch could explain it to me or provide some details about the building process.

LibHaru was forked after 2.0.8. The later versions use a build system whose code seems to have changed; the first of the new variants was 2.10.0. The old version is on SourceForge.
I couldn't get a later version to compile, but 2.0.8 (dated 2006) worked. In the past I have seen comments suggesting I am not alone. You are correct that there are no instructions about the dependencies. If you can, you should use the pre-built version, which is mentioned on the site.
From your message I assume you have little software-building experience. Outlining it in a few words is not feasible, but here is a little. Dependent libraries have to be available, either as source for compiling or, occasionally, as libraries pre-built specifically for the compiler/OS you are using; you have to go and get them. Then the compiler system you are using to build libharu has to be able to "see" the dependent libraries, in this case their *.h files. After compiling, the whole lot has to be linked together. None of this is rocket science, but it is a major source of frustration: everything has to be just right, usually with nothing to tell you what is wrong.
And that is why some people favor using a third-party "build" tool. If it works.
libharu has two major dependencies, zlib and libpng, both widely used libraries which usually compile easily. (I think there are ways to omit them, at the cost of some functionality; they handle the import of bitmaps.)
So you have three sets of sources and essentially three libraries, which as a final step are linked together from the libharu source code.
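For a rough idea of the order of operations with the Visual C++ command-line tools, here is a minimal sketch. It assumes the three source trees are unpacked side by side; the zlib and libpng makefile names below are the ones those projects ship, but the libharu makefile name and its path variables differ between versions, so check its script directory and adjust:
cd zlib
nmake -f win32\Makefile.msc
cd ..\libpng
rem first edit the zlib path variables near the top of this makefile
nmake -f scripts\makefile.vcwin32
cd ..\libharu
rem point the makefile's include/lib paths at ..\zlib and ..\libpng, then:
nmake -f script\Makefile.msvc
The final step is the one that has to "see" everything: if any header or .lib path is wrong, that is where it fails.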
Alternatively you could find a pre-built version.

Related

GNSS-SDR on Windows?

I know the answer might be negative, but is there any way to run GNSS-SDR on Windows instead of Linux/macOS?
I already use it on Linux, but I have just wondered if it can be done.
Only related answers, please.
It's possible; I'm doing exactly this at the moment. The problem is that some code fragments are written for Linux, and the build system and library-search methods assume it as well. To start with, I had to cut out the TCP data transfer and heavily correct some CMake files. I build it with the help of MSYS2 under MinGW. The biggest problem is linking the files. At this stage I can build most of the individual components. It was also necessary to build all the libraries manually. With my limited experience in porting programs from system to system, it was hard.
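For what it's worth, the MSYS2 side of this looks roughly like the sketch below (package names are for the 64-bit MinGW toolchain; the GNSS-SDR dependencies themselves, which also have to be built or installed, are not shown):
pacman -S mingw-w64-x86_64-toolchain mingw-w64-x86_64-cmake make
mkdir build && cd build
cmake -G "MSYS Makefiles" ..
make
(Use the "MinGW Makefiles" generator with mingw32-make instead if you build from a plain Windows console rather than the MSYS2 shell.)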

Location of libtensorflow.so and headers after building tensorflow r1.12 with bazel on Linux

After having a lot of trouble building earlier versions of tensorflow using cmake, I decided to give bazel a go, since it supposedly is able to create a shared library. As per the official recommendation I downloaded and built bazel 0.15 and then used
bazel build //tensorflow:libtensorflow.so
in the hope of being able to build a shared library. After almost two hours bazel claimed it had built libtensorflow.so; however, I cannot find it anywhere. This is especially strange since the whole directory is only about 650 MB. Earlier I built tensorflow r1.10 using cmake, which generated a libtensorflow.so (one that does not work in my test project, for unrelated reasons), and that file alone was over 800 MB; the whole cmake directory was over 11 GB.
Furthermore, my test project (which actually works under Windows with an earlier version of tensorflow) requires some headers like
tensorflow/core/protobuf/meta_graph.pb.h
but it seems that this file hasn't been generated either, because I cannot find it.
Can someone please tell me the correct way to get a shared library and the necessary headers, or where to find them after a supposedly successful bazel build?
Cheers
Alright, so I have now found out that the find command doesn't follow symlinks by default, and so I was able to find libtensorflow.so (albeit a much smaller one, with a size of about 100 MB) and some headers in one of the symlinked directories that bazel creates in your working path, i.e. bazel-bin, bazel-out etc.
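For anyone hitting the same thing, telling find to follow symlinks with -L locates the library without knowing which bazel-* directory it landed in (run from the tensorflow source root):
find -L . -name "libtensorflow.so"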
However, I am now stuck with another problem. As I mentioned above, there were some headers, but not all of them. For instance, I cannot find
google/protobuf/stubs/common.h
Does anyone know how I can get all the rest of the headers, like the one mentioned above, Eigen, Tensor and so on? What bazel target do I need to specify, or how do I get them otherwise?

Handling autoconf with Android after NDK16

I'm trying to update an existing configuration we have in which we are cross-compiling for a number of targets - the question here is specifically about Android. More specifically, we are building code using cmake and the hunter package manager; however, we are building ICU using a build step that uses autoconf/configure, called from cmake. I am not sure that is specifically important, except that we have less control over the use of configure than is generally the case.
OK: we have a version that builds against an old NDK, but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK16 and later, the value of the sysroot parameter needs to differ between compilation and linking. As it stands, the configure script tries to build a small program, conftest.c, and the program fails to link. Manually I can compile the code in two stages using -c and then linking the resulting .o, but that is not what configure is trying to do.
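Concretely, the manual two-stage build that works splits what configure does in one step, with a different sysroot each time, along the lines the UnifiedHeaders document describes (the API level, architecture and target triple here are only examples):
# compile: headers come from the unified sysroot
$CC --sysroot "$NDK/sysroot" -isystem "$NDK/sysroot/usr/include/arm-linux-androideabi" -D__ANDROID_API__=21 -c conftest.c -o conftest.o
# link: libraries come from the per-API platform directory
$CC --sysroot "$NDK/platforms/android-21/arch-arm" conftest.o -o conftest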
Now, the reality is that when I build this code I don't actually need to link it - I am generating a library which is used elsewhere. However, that is not currently the way configure sees it.
I may look at redoing the configuration script to just check that the code can be compiled when cross-compiling. However, I am curious to know whether anybody has managed to handle this sort of thing while keeping the existing config files and just changing the parameters with which the scripts are called.
When r19 is released to stable this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since it's still in beta that's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+; it has always been the case and was just asymptomatic previously), autoconf builds should be done using a standalone toolchain.
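A standalone toolchain bakes one consistent sysroot into the compiler itself, which is why configure's single-step compile-and-link test works again. Roughly (arch, API level and install path are examples):
$NDK/build/tools/make_standalone_toolchain.py --arch arm --api 21 --install-dir /tmp/android-arm
export PATH=/tmp/android-arm/bin:$PATH
./configure --host=arm-linux-androideabi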
You should not, however, use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to stay on r15 until r19 is available.
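For the CMake side, the NDK's own toolchain file is the supported route; whether hunter lets you pass these options through is a separate question (the ABI and platform values are examples):
cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=armeabi-v7a -DANDROID_PLATFORM=android-21 ..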

Different versions of compilers + libgcc encountered on Windows

I have a third-party library which depends on libgcc_s_sjlj-1.dll.
My own program is compiled under MSYS2 (mingw-w64) and it depends on libgcc_s_dw2-1.dll.
Please note that the third-party library is pure binaries (no source). Please also note that both libgcc_s_sjlj-1.dll and libgcc_s_dw2-1.dll are 32-bit, so I don't think it's an issue related to architecture.
The outcome is predictable: programs compiled against libgcc_s_dw2-1.dll can't work with third-party libraries based on libgcc_s_sjlj-1.dll. What I get is a missing entry point, __gxx_personality_sj0.
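(A quick way to confirm which flavor each binary expects is to list its DLL imports; the file name here is a placeholder:
objdump -p thirdparty.dll | grep "DLL Name"
and look for libgcc_s_sjlj-1.dll versus libgcc_s_dw2-1.dll in the output.)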
I can definitely try to adapt my toolchain to align with the third party's libgcc_s_sjlj-1.dll, but I do not know how much effort that would take. I can find no variant of the libgcc DLL under MSYS2 that uses this setjmp/longjmp version. I am even afraid that I would need to replace the entire toolchain, because all the binaries I have under MSYS2 sit atop this libgcc_s_dw2-1.dll module.
My goal is straightforward: I would like to find a solution so that my code sits on top of libgcc_s_sjlj-1.dll instead of libgcc_s_dw2-1.dll. But I don't know if I am asking a stupid question, simply because this may just not be possible.
The terms dw2 and sjlj refer to two different exception-handling models that GCC can use on Windows. I don't know the details, but I wouldn't try to link binaries that use different models. Since MSYS2 does not provide an sjlj toolchain, you'll have to find one somewhere else. I would recommend downloading one from the "MingW-W64-builds" project, which you can find listed on this page:
https://mingw-w64.org/doku.php/download
You could still use MSYS2 as a Bash shell, but you probably cannot link against any of its libraries in your program; you would need to recompile all the libraries yourself (except for the closed-source third-party one).
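One way to check whether a candidate toolchain really is an sjlj build before committing to it: the configure line printed by gcc -v usually records the exception model, so something like
gcc -v 2>&1 | grep -iE "sjlj|dwarf|seh"
should show it; alternatively, just look for libgcc_s_sjlj-1.dll in the toolchain's bin directory.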

Differences between SCons and Shake

I'm working on a Python/Haskell project and I'm looking for alternatives to Makefiles. The obvious choices are SCons (Python) and Shake (Haskell). Since I have no experience with either of them, I'd like to ask if there is any comparison of their drawbacks and advantages.
Update: The project has somewhat complex requirements for building:
Let the user configure the build - like options to enable/disable, paths to tools etc.
There are both Haskell and Python files generated at compile time. Their dependencies should work properly.
There are multiple Haskell programs that share most of the source files. I'd like it so that:
it's possible to build each one individually, not building the sources that aren't needed;
source files aren't built multiple times when compiling multiple programs;
yet achieve parallelism during compilation, if possible.
Check for several installed programs on target systems and their paths (like python, flock, etc.).
Check for dependencies on target systems, both Python and Haskell.
Parametrize the build according to the dependencies - if the dependencies for testing are missing, it should still be possible to build the project, skipping the tests (and informing the user about it).
There is a Why Shake? document that gives reasons to choose Shake over other build systems, but it does not focus on a comparison with SCons.
Update: All of your requirements seem easy enough to express in Shake (ask on StackOverflow if you get stuck with any of them). As to Shake vs SCons:
Shake is particularly good at dealing with generated files with dependencies that cannot be statically predicted, particularly if you are generating the files from programs you compile.
Building the Haskell parts of your project is likely to be harder than building the Python (since Haskell has a richer structure and more complex compiler). Using Shake makes it easier to tap into existing examples of compiling Haskell and use libraries for parsing Haskell if you need it.
There is a SCons wiki page that compares it to other build tools, unfortunately there is no comparison there with Haskell/Shake.
Also, this question may help.
SCons really shines compared to other tools (especially make and cmake) through its Python syntax and its implicit dependency system, which is very accurate and easy to use.
