What is the target of sphinxbase?

I downloaded the AndroidPocketSphinx package along with Pocketsphinx and Sphinxbase and built it per the instructions here.
It builds and runs fine, but now I am trying to understand how all the components fit together to produce this impressive system.
I examined the Sphinxbase directory tree and could not find any binary target that Pocketsphinx and/or AndroidPocketSphinx could reference. That is, I was expecting a .so, .a, .dll, bin or something like that, but all I could find were source files in various languages.
I am wondering: how can the system build and run so wonderfully if the very base package doesn't produce any library or similar binary target?
What am I missing?

Related

How to pack files into one executable file for Linux and Windows?

I'm creating a desktop app in Go with Muon UI (using Ultralight instead of Chromium) and cross-building the app for Linux and Windows. For now the app works fine, but it requires the Ultralight libraries (*.dll for Windows and *.so for Linux). I want to distribute my app as a single executable file. How can I create two executable files: one for Linux, bundling the Linux executable with only the *.so libraries, and one for Windows, bundling the Windows executable with only the *.dll libraries?
Are there any CLI utils for this (for use in GitLab CI inside Docker, for example)? Or can I do this in Go itself, for example with the embed package: can I embed the libraries into the exe file so that it still runs?
Or can I use cgo to link the dynamic libs statically into the binary?
The honest answer would be: "With great difficulty, lots of pain, blood and tears."
The somewhat longer answer is that a precompiled DLL/.so may contain slightly more than a mere static library. Is it possible to "convert" a DLL/.so into a static library? Somewhat. It boils down to dumping its contents into object files, reverting all the relocation entries, and possibly dealing with versioned symbols and weak symbols. No, there are no kitchen-sink utilities out there doing all of that for you at the executable-binary level.
If you can limit yourself to Linux, you may want to look into Flatpak. What this does is wrap everything up into a sort of "self-extracting archive", which upon launch will transparently and invisibly unpack itself into an in-situ temporary mount point (which you won't see from the rest of the system).
Now, one option would be to build all the dependencies of your program yourself, and arranging for those builds to be created as static libraries. In that case you're no longer dealing with DLLs. However some libraries do not want to be built for static linking, so your mileage may vary there.
Truth be told: why is distributing multiple files any issue at all? On Linux/*BSD you must ship separate icon and .desktop files anyway, so that your app shows up in the desktop application menus. Yes, it'd be nice if, instead of dealing with XDG desktop entry files, we had the option to place all of that information into a special – let's call it .xdgdata – read-only section, with some well-known symbol names, so that we could have truly single-file distributable executables.
My honest suggestion: Don't sweat about it. Just ship the whole bunch of files and don't worry too much about "how this looks".
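That said, to the embed idea from the question: one workable pattern is a small Go launcher that embeds the real binary and its libraries, unpacks them to a temporary directory at startup, and hands control over. Here is a minimal sketch, assuming a payload/ directory next to the launcher containing the real app binary (assumed to be named "app") and its .so or .dll files; all names are illustrative:

// launcher.go -- hedged sketch of a self-extracting launcher
package main

import (
	"embed"
	"os"
	"os/exec"
	"path/filepath"
)

//go:embed payload/*
var payload embed.FS // payload/ holds the real app binary plus its libraries

func main() {
	dir, err := os.MkdirTemp("", "app")
	if err != nil {
		panic(err)
	}
	// unpack every embedded file next to the real binary
	entries, err := payload.ReadDir("payload")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		data, err := payload.ReadFile("payload/" + e.Name())
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile(filepath.Join(dir, e.Name()), data, 0o755); err != nil {
			panic(err)
		}
	}
	app := exec.Command(filepath.Join(dir, "app"), os.Args[1:]...) // "app" is an assumed name
	// on Linux, point the loader at the unpacked .so files; on Windows the
	// DLL search path already includes the directory the exe resides in
	app.Env = append(os.Environ(), "LD_LIBRARY_PATH="+dir)
	app.Stdin, app.Stdout, app.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := app.Run(); err != nil {
		os.Exit(1)
	}
}

You would then build this launcher twice, once with GOOS=linux and a payload/ containing only the *.so files, and once with GOOS=windows and a payload/ containing only the *.dll files, which gives exactly the two single-file executables the question asks for.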

Location of libtensorflow.so and headers after building tensorflow r1.12 with bazel on Linux

After having a lot of trouble building earlier versions of tensorflow using cmake, I decided to give bazel a go, since it supposedly is able to create a shared library. As per the official recommendation I downloaded and built bazel 0.15 and then used
bazel build //tensorflow:libtensorflow.so
in the hope of being able to build a shared library. After almost two hours bazel claimed that it had built libtensorflow.so; however, I cannot find it anywhere. That is especially strange since the whole directory is only about 650 MB in size. Earlier I built tensorflow r1.10 using cmake, which generated a libtensorflow.so (one that does not work in my test project, for other reasons), and that alone was over 800 MB; the whole cmake directory was over 11 GB in size.
Furthermore my test project (that actually works under Windows with an earlier version of tensorflow) requires some headers like
tensorflow/core/protobuf/meta_graph.pb.h
but it seems that this file hasn't been generated either because I cannot find it.
Can someone please tell me the correct way of getting a shared library and the necessary headers, or where to find them after the supposedly successful bazel build?
Cheers
Alright, so I found out that the find command doesn't follow symlinks by default, and so I was able to find libtensorflow.so (albeit a much smaller one, about 100 MB in size) and some headers in one of the symlinked directories that bazel creates in your working path, i.e. bazel-bin, bazel-out, etc.
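For reference, something like this locates the artifact (the path shown is the usual output location for that target):

# bazel-bin & co. are symlinks, which find only follows with -L
find -L . -maxdepth 4 -name libtensorflow.so
# typically resolves to: ./bazel-bin/tensorflow/libtensorflow.so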
However, I am now stuck with another problem. As I mentioned above, there were some headers, but not all. For instance, I cannot find
google/protobuf/stubs/common.h
Does anyone know how I can get all the rest of the headers, like the one mentioned above, Eigen, Tensor and whatnot? What bazel target do I need to specify, or how do I get them otherwise?
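Some hedged pointers for the remaining headers: the external-repository names changed across tensorflow versions, so treat these paths as assumptions to verify in your own tree rather than guaranteed locations:

# generated headers such as meta_graph.pb.h land under bazel-genfiles
ls bazel-genfiles/tensorflow/core/protobuf/meta_graph.pb.h
# headers of external dependencies (protobuf, Eigen, ...) live under the
# bazel-<workspace> symlink's external/ tree; repo names vary by TF version
ls bazel-tensorflow/external/protobuf_archive/src/google/protobuf/stubs/common.h
ls -d bazel-tensorflow/external/eigen_archive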

Which libraries should go to a pkg-config file as a dependencies?

I'm writing a shared library that itself depends on boost and pcl libraries.
When generating .pc file for my library should I add all these libraries also to the .pc file as dependencies?
It's been a long time since I last studied these things, and I'm a bit confused about how this works on Linux. When my test app links against my lib, I have to add all these pcl and boost libs to the build again, even though the lib has already been linked against them.
But when I look at the deps of libQtGui.so, for example, it links against tens of all kinds of libs, yet I don't need to make my app link to those libs... only -lQtGui is enough.
I have just used CMake and link_libraries to add the boost and pcl libs.
When generating .pc file for my library should I add all these libraries also to the .pc file as dependencies?
It depends on API of your library:
if the public (i.e. installable) headers of your lib use boost/pcl (i.e. have #include <boost/...>) -- in other words, you used the PUBLIC (or INTERFACE) keywords when linking your library against boost/pcl with CMake's target_link_libraries -- then yes, you need to add them;
otherwise, it depends on what exactly you have at the end -- i.e. whether your DSO has DT_NEEDED entries for the boost/pcl libs (most likely) or not (you can check with ldd <your-lib>.so). If those entries are there, you also need to add the dependencies to the *.pc file.
Also, in the case of a binary dependency on boost/pcl (I don't know whether the latter ships any DSOs), please make sure you specify the exact location of the linked libs -- because a user may have multiple (co-existing, potentially incompatible) boost installations, or may later upgrade to another (binary-incompatible) version (and you can't really do anything about that)… It is important that users link against the same (or at least a binary-compatible, which is rather hard to guarantee for boost) library as the one you built with…
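To make the two cases concrete, here is a hedged sketch of a .pc file for a hypothetical library (names and versions are illustrative; boost ships no .pc files of its own, so it can only appear as raw linker flags):

# mylib.pc (illustrative)
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: mylib
Description: Example library built on top of boost and pcl
Version: 1.0.0
# case 1: pcl types appear in mylib's installed headers -> public dependency
Requires: pcl_common-1.9
# case 2: boost is only a DT_NEEDED / static-link dependency -> private flags
Libs.private: -lboost_system -lboost_filesystem
Libs: -L${libdir} -lmylib
Cflags: -I${includedir}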
I have just used CMake and link_libraries to add boost and pcl libs.
Please read something about "Modern CMake" and stop using link_libraries :-) -- use target_link_libraries instead…
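A minimal sketch of what that looks like (the target and component names are illustrative):

# CMakeLists.txt fragment (illustrative) -- modern CMake links per target
find_package(Boost REQUIRED COMPONENTS filesystem)
add_library(mylib SHARED src/mylib.cpp)
# PUBLIC because mylib's installed headers include boost headers;
# use PRIVATE instead if boost is purely an implementation detail
target_link_libraries(mylib PUBLIC Boost::filesystem)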

Building libharu from scratch

Recently I'm trying to build and use libharu library in order to create PDFs from bitmaps.
I've done some research through its site: http://libharu.org/.
There are instructions showing how to build it, but it doesn't build for me because it has dependencies on two other libraries (which I don't understand how to integrate into the build process) - zlib and libpng.
I can't clearly understand the entire process, so my last hope is that someone who has built it from scratch could explain it to me or provide some details of the building process.
LibHaru was forked after 2.0.8. The later versions use a make system whose code seems to have changed; the first of the new variant was 2.10.0. The old version is on SourceForge.
I couldn't get a later version to compile, but 2.0.8 (dated 2006) worked. In the past I have seen comments suggesting I am not alone. You are correct that there are no instructions about the dependencies. If you can, you should use the pre-built version, which is mentioned.
From your message I assume you have little software-building experience. Outlining it in a few words is not feasible, but here is a little. Dependent libraries have to be available, either as source for compiling or occasionally as pre-built libraries specifically for the compiler/OS you are using; you have to go and get them. Then the compiler system you use to build libharu has to be able to "see" the dependent libraries, in this case their *.h files. After compiling, the whole lot has to be linked together. None of this is rocket science, but it is a major source of frustration: everything has to be just right, usually with nothing to tell you what is wrong.
And that is why some people favor using a third party "build" tool. If it works.
libharu has two major dependencies: zlib and libpng, both widely used libraries which usually compile easily. I think there are ways to omit them at the cost of some functionality; they are about handling the import of bitmaps.
So you have three sets of sources and essentially three libraries, which as a final step are linked together from the libharu source code.
Alternatively you could find a pre-built version.
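As a hedged sketch of the order of operations (directory names and the install prefix are illustrative, and the 2.0.8-era autotools build is assumed; the -I/-L environment variables are the generic way to let each configure script "see" the previous step, though ./configure --help may offer dedicated options):

# 1. build and install zlib into a private prefix
cd zlib && ./configure --prefix=$HOME/local && make && make install
# 2. build libpng against that zlib
cd ../libpng && CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib \
    ./configure --prefix=$HOME/local && make && make install
# 3. build libharu so it can find both dependencies
cd ../libharu && CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib \
    ./configure --prefix=$HOME/local && make && make install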

Generate package config file automagically using Scons, bjam, and/or cmake

Hey Stackoverflowers: one comment and one question.
Comment: You guys/girls are great, thanks for taking a look.
Question:
Can Bjam, Scons, or Cmake easily install a .pc file for library projects?
I find it really annoying that I have to maintain the same library dependency list in my scons/bjam/make file, the .pc file (for libraries), and the rpm/deb package config files.
It would be nice if a build tool could manage the build and installation meta-data.
Thoughts?
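For the CMake part of the question at least, the usual pattern is to keep a .pc template and let configure_file fill in the build's variables, so the dependency list lives in one place; a hedged sketch with illustrative names:

# CMakeLists.txt fragment (illustrative)
include(GNUInstallDirs)
set(PKG_CONFIG_REQUIRES "libfoo >= 1.0")  # dependency list kept in one place
configure_file(mylib.pc.in ${CMAKE_CURRENT_BINARY_DIR}/mylib.pc @ONLY)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/mylib.pc
        DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)

with mylib.pc.in containing (@ONLY means only the @VAR@ placeholders are substituted, so the ${prefix} pkg-config syntax survives):

prefix=@CMAKE_INSTALL_PREFIX@
Name: mylib
Description: Example library
Version: @PROJECT_VERSION@
Requires: @PKG_CONFIG_REQUIRES@
Libs: -L${prefix}/lib -lmylib
Cflags: -I${prefix}/include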
Because SCons is such a flexible environment, yes, you can in fact use it to manage the entire process from building to deliverable package.
Our build goes through several phases with SCons:
Build - resulting .o, .os, generated files, etc. under ./build
Assembly - resulting exe, so/dll, binaries, etc. under ./delivery
Packing & configuration - a set of deb/rpm/msi + configuration, etc. under ./package
It isn't all out of the box; it requires you to write some Python code, find tools, etc., but it works pretty well for us.
Our project is C, C++, Java, & Python, building dozens of binary targets for a distributed system with multiple delivery targets for different machine installs on Windows, Ubuntu and Red Hat Linux.
Again, be prepared to customize your scripts and write custom builders to wrap the different processes.
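A trimmed sketch of how those three phases can look in an SConstruct (file and package names are illustrative, and SCons' packaging tool has to be enabled; treat this as a starting point, not a drop-in file):

# SConstruct (illustrative) -- build / assembly / packaging phases
env = Environment(tools=['default', 'packaging'])

# Build phase: compile out of ./src into ./build
env.VariantDir('build', 'src', duplicate=0)
lib = env.SharedLibrary('build/mylib', ['build/mylib.c'])

# Assembly phase: gather deliverables under ./delivery
delivered = env.Install('delivery', lib)

# Packing phase: wrap the deliverables up (PACKAGETYPE could be 'rpm', 'msi', ...)
pkg = env.Package(NAME='mylib', VERSION='1.0',
                  PACKAGETYPE='targz', source=delivered)
env.Alias('package', pkg)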
