How do you go about getting fixes put into Linux packages?

I need group4 decode in the Python Imaging Library, but in order to build it, I need to get some changes put into the distros' libtiff-dev packages.
Having never done this kind of thing before, I'm curious about where to start. The changes I need in libtiff concern the placement of the header files once libtiff is installed. Right now, libtiff drops its header files into /usr/include, but it only drops in:
/usr/include/tiffconf.h
/usr/include/tiff.h
/usr/include/tiffio.h
/usr/include/tiffio.hxx
/usr/include/tiffvers.h
I need to add:
/usr/include/tif_config.h
/usr/include/tif_dir.h
/usr/include/tiffiop.h
The patch in PIL I had to use to get all this going is from 2006 and was made against PIL 1.1.6 (PIL is now at 1.1.7), but I'm pretty sure I can't get these patches for PIL into the PyPI distribution if it won't build on the distros.
So, how do you get changes into the distros? I don't need to change anything in libtiff itself, just the way it gets delivered: I need those three files added to /usr/include.
After that's done, I can push to get the fix into PIL.

There are two routes to getting fixes into Linux distributions. If the issue is distribution-specific, then the best place to start is the bug tracker for that distribution. You mentioned missing files, which is likely to be a distribution issue. (It's not quite clear from what you wrote why those files would be missing everywhere; are you sure they're not deprecated or something?)
Red Hat Bugzilla
Debian bug tracker
If it's not distribution-specific you could still go via the bug tracker for the distribution you use, but you could also go directly to the original author. Author details are normally available somewhere within each distribution.
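If you want to prototype the packaging change yourself before filing the bug report, a rough sketch of rebuilding a Debian-style package with the extra headers might look like this (package and file names are illustrative; the real layout lives under debian/ in the source package):
apt-get source tiff                   # fetch the libtiff source package
cd tiff-*/
# assumption: the -dev package lists its files in a debhelper .install file;
# if upstream's install step doesn't copy tiffiop.h, debian/rules needs a cp instead
echo usr/include/tiffiop.h >> debian/libtiff-dev.install
dch -l local1 "Ship private libtiff headers needed by PIL group4 decode"
debuild -us -uc                       # build unsigned packages for local testing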

Related

Location of libtensorflow.so and headers after building tensorflow r1.12 with bazel on Linux

After having a lot of trouble building earlier versions of tensorflow using cmake, I decided to give bazel a go, since it supposedly is able to create a shared library. As per the official recommendation I downloaded and built bazel 0.15 and then ran
bazel build //tensorflow:libtensorflow.so
in the hopes of being able to build a shared library. After almost two hours bazel claimed that it was able to build libtensorflow.so; however, I cannot find it anywhere. That is especially strange since the whole directory is only about 650 MB. Earlier I built tensorflow r1.10 using cmake, which generated a libtensorflow.so (one that does not work in my test project for other reasons), and that alone was over 800 MB; the whole cmake directory was over 11 GB in size.
Furthermore, my test project (which actually works under Windows with an earlier version of tensorflow) requires some headers, like
tensorflow/core/protobuf/meta_graph.pb.h
but it seems that this file hasn't been generated either because I cannot find it.
Can someone please tell me the correct way of getting a shared library and the necessary headers, or where to find them after the supposedly successful bazel build?
Cheers
Alright, so I have now found out that the find command doesn't follow symlinks by default, and with that I was able to find libtensorflow.so (albeit a much smaller one, about 100 MB) and some headers in one of the symlinked directories that bazel creates in your working path, i.e. bazel-bin, bazel-out, etc.
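For reference, something like this makes find traverse those symlinks explicitly (a sketch; adjust the names to whatever you built):
# -L makes find follow the bazel-* convenience symlinks
find -L bazel-bin bazel-genfiles \( -name 'libtensorflow.so*' -o -name 'meta_graph.pb.h' \)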
However, I am now stuck with another problem. As I mentioned above, there were some headers, but not all. For instance, I cannot find
google/protobuf/stubs/common.h
Does anyone know how I can get all the rest of the headers, like the one mentioned above, Eigen, Tensor and whatnot? What bazel target do I need to specify, or how do I get them otherwise?
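One avenue that might help (an assumption on my part, since the external repository names vary between TensorFlow versions): bazel vendors third-party code such as protobuf and Eigen under an external/ directory in its output tree, reachable through the bazel-<workspace> symlink, so the missing headers can often be located there:
# external repo names (protobuf_archive, eigen_archive, ...) vary by TF version
find -L bazel-tensorflow/external -name common.h -path '*google/protobuf*'
find -L bazel-tensorflow/external -type d -name 'eigen*'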

Building libharu from scratch

Recently I have been trying to build and use the libharu library in order to create PDFs from bitmaps.
I've done some research through its site: http://libharu.org/.
There are instructions showing how to build it, but it doesn't build because it depends on two other libraries (which I don't understand how to integrate into the build process): zlib and libpng.
I can't clearly understand the entire process, so my last hope is that someone who has built it from scratch could explain it to me or provide some details about the build process.
LibHaru was forked after 2.0.8. The later versions use a build system whose code seems to have changed; the first of the new variant was 2.10.0. The old version is on SourceForge.
I couldn't get the later version to compile, but 2.0.8 (dated 2006) worked. In the past I have seen comments suggesting I am not alone. You are correct that there are no instructions about the dependencies. If you can, you should use the pre-built version that is mentioned.
From your message I assume you have little software-building experience. Outlining it in a few words is not feasible, but here is a little. Dependent libraries have to be available, either as source for compiling or, occasionally, as pre-built libraries for the specific compiler/OS you are using. You have to go and get them. Then the compiler system you are using to build libharu has to be able to "see" the dependent libraries, in this case the *.h files. After compiling, the whole lot has to be linked together. None of this is rocket science, but it is a major source of frustration: everything has to be just right, usually with nothing to tell you what is wrong.
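As a concrete illustration, the "seeing" step usually comes down to pointing the compiler and linker at wherever the dependencies were installed. This is a sketch only, with illustrative version numbers and paths, assuming all three packages use autotools-style configure scripts:
PREFIX=$HOME/local
(cd zlib-1.2.8   && ./configure --prefix=$PREFIX && make && make install)
(cd libpng-1.6.0 && ./configure --prefix=$PREFIX && make && make install)
cd libharu-2.0.8
# tell configure where the dependencies' headers and libraries live
./configure --prefix=$PREFIX CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib
make && make install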
And that is why some people favor using a third-party "build" tool. If it works.
libharu has two major dependencies: zlib and libpng, both widely used libraries which usually compile easily. I think there are ways to omit them at a loss of functionality; both are about handling the import of bitmaps.
So you have three sets of sources and essentially three libraries, which as a final step are linked together with the code built from the libharu sources.
Alternatively you could find a pre-built version.

Is it possible to install more than one ghc and change each installation's binary name?

Suppose I want to use different versions of GHC, each of them with a different binary name.
Question 1. Can I use ./configure --prefix=ghc-some-version-dir for each of the installations and create symbolic links ghc-7.4.1, ghc-7.6.2, ghc-head without problems?
That is, after installing and creating the binaries from source code. Virtual environments would still be needed for building projects and their dependencies.
Question 2. What prevents us from uploading ghc to Hackage with a package name ghc-version having a binary name that depends on its version? e.g. one could cabal install ghc-version-7.6.2 and get a binary ghc-7.6.2 in ~/.cabal/bin
You don't need to do anything special. GHC already installs all of its executables with versioned names and links from the non-versioned name to the most recently installed version, e.g. a link from "ghc" to "ghc-7.6.1" or whatever you installed last. When you build from the repository, the version number is quite long and includes the date you built it.
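To make that concrete for Question 1, here is a sketch of the prefix-per-version scheme (paths are illustrative; a binary distribution only needs the install step, while a source build needs make first):
./configure --prefix=/opt/ghc/7.4.1
make install
# GHC's install step already creates versioned names like bin/ghc-7.4.1,
# so the symlinks just expose them somewhere on your PATH
ln -s /opt/ghc/7.4.1/bin/ghc-7.4.1 ~/bin/ghc-7.4.1
ln -s /opt/ghc/7.6.2/bin/ghc-7.6.2 ~/bin/ghc-7.6.2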
I don't know for sure why GHC isn't on Hackage, but I presume it's because the build system is very complicated, and that cabal-izing it (and maintaining the cabalization) would be more work than it's worth.
There are several solutions:
Just use chroot
Use a package manager that handles multiple versions of the same library/software, such as Nix
There are scripts which have been written to handle this such as https://github.com/spl/multi-ghc
Use GNU Stow as described in Brent Yorgey's blog post.
Ben Millwood has a solution where he just uses the -w flag; read his comment at: https://plus.google.com/u/0/100165496075034135269/posts/VU9FupRvRbU

How can one make a private copy of Hackage

I'd like to snapshot the global Hackage database into a frozen, smaller one for my company's deploys. How can one most easily copy out some segment of Hackage onto a private server?
Here's one script that does it in just about the simplest way possible: https://github.com/jamwt/mirror-hackage
You can also use the MirrorClient directly from the hackage2 repo: http://code.haskell.org/hackage-server/
This is not an answer to the question in the title, but an answer to my interpretation of what the OP wishes to achieve.
Depending on what level of stability you want in your production cycle, you can approach the problem in several ways.
I have split the dependencies in two parts: things that are in the Haskell Platform (keep every platform version used in production) and, beyond that, only a small number of outside packages. Don't let anyone (including yourself) add more packages into your dependency tree just out of developer laziness. For those extra packages, use some kind of script to collect them from Hackage (locked to a version) using cabal fetch, as sketched below. Keep them safe. Create an install script that uses your safe packages, and if a new machine (developer) is added to your team, use that script.
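The fetch-and-lock step could be as simple as this sketch (package names and versions are hypothetical, and the cache layout may differ between cabal versions):
# download pinned source tarballs into cabal's local cache
cabal fetch text-0.11.2.3 aeson-0.6.1.0
# copy the cached tarballs somewhere safe for your install script
mkdir -p frozen-deps
cp ~/.cabal/packages/hackage.haskell.org/*/*/*.tar.gz ./frozen-deps/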
yackage is great, but it all comes down to how you ship your product. If you have older versions in production, you need a yackage setup for every version, and that could get quite annoying after a couple of years.
You can download Hackage with Voker57's hackage-mirror.sh. You'll need 'curl' for it to run. If you're using a Debian-based Linux distribution, you can install curl by typing apt-get install curl.
Though it's not a segment of Hackage, I've written a bash script that downloads the whole of Hackage, which can then easily be set up as a mirror using an HTTP server. It also downloads all the required extras, like GHC compilers, ready to be used with Stack.
Currently, a complete Hackage mirror occupies ~10 GiB (~100,000 packages of all versions) and the Stack-related extras like GHC compilers ~21 GiB (~200 files). Subsequent runs of the script skip already-downloaded files and fetch only new ones, so it's a pretty convenient way to "live offline" and sync up to date when online.

Distributing a program in linux without the source

I want to be able to distribute a program in Linux without distributing the source with it. The current solution is distributing a tar.gz with a precompiled binary. What is the easiest way to have this binary placed in the Applications menu? Is there a way to do this that is common across most Linux distributions? Ubuntu, Fedora, and openSUSE would be the priority.
You will want to create a .deb and a .rpm. The former covers Ubuntu (Debian variants), and the latter Red Hat variants. You can also supply a standalone executable for other users who can deal with things like menus themselves.
You will have to deal with GNOME and KDE menu management, and different distributions also lay out their menus differently. There is also the issue of netbook variants such as Moblin, which have a netbook interface that probably has its own "add application" mechanism. I don't know if it is possible for a single .deb to handle both the GNOME and KDE menu systems (for Ubuntu and Kubuntu respectively), but I imagine the capability is there, to reduce duplication of effort for Ubuntu.
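For the .deb side, a minimal binary-only package can be assembled by hand; this is a sketch with a hypothetical package name and paths:
mkdir -p myapp/DEBIAN myapp/usr/bin
cp myapp-binary myapp/usr/bin/myapp
cat > myapp/DEBIAN/control <<EOF
Package: myapp
Version: 1.0
Architecture: amd64
Maintainer: You <you@example.com>
Description: Example application
EOF
dpkg-deb --build myapp          # produces myapp.deb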
All recent distributions should have xdg-utils installed, which provides scripts such as
xdg-desktop-icon
xdg-desktop-menu
which seem to be what you're looking for.
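For example (a sketch; the .desktop contents and names are illustrative, and xdg-desktop-menu expects a vendor prefix such as "example-" on the file name):
cat > example-myapp.desktop <<EOF
[Desktop Entry]
Type=Application
Name=MyApp
Exec=/usr/bin/myapp
Icon=myapp
Categories=Utility;
EOF
xdg-desktop-menu install example-myapp.desktop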
I haven't looked into it lately... but back in the day (which really wasn't all that long ago) when I was using Linux, RPM was the easiest way to distribute pre-compiled binaries (most distributions had, and still have, some kind of support for RPM packages).
Here's an old how-to on building an RPM package:
Linux Online - RPM How-To
You could look at the BitRock installer.
Try Autopackage or other solutions posted in another question.
Ship a tar.gz and then give the community the right to redistribute modified packages. They will make RPMs, DEBs and any other packages for their beloved distributions... which will probably fit their distros much better than anything you could make.
There are really too many differences between distributions, often subtle ones, to make a one-size-fits-all package. For example, some distributions have an "Application" menu section, others "Applications"... and this has made menu items disappear on some distros. Libraries can be different, default settings can be different, and so on...
RPMs and DEBs aren't as portable as is commonly believed. A single package might have problems even across different versions of a single distribution, and there is nothing worse than fighting to install a badly prepared package correctly.
JeeBee is correct that you would want to go with .deb or .rpm.
For Ubuntu/Debian (the .deb), I would add that you do not send it to people; instead, you create a "repository" and have the users add its URL to their /etc/apt/sources.list. That gives you an easy way to update the software as well.
That way you solve the distribution and update problems at the same time.
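The user-facing side of that is just one line in /etc/apt/sources.list (the URL and suite names here are hypothetical):
deb http://packages.example.com/ubuntu stable main
# then: apt-get update && apt-get install myapp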
And here is an example of what this could look like:
http://www.avrfreaks.net/wiki/index.php/Documentation:AVR32_General/Installing_tools_on_Ubuntu_Linux#Ubuntu_8.04_-_Hardy_Heron
And here is what a repository could look like:
http://www.atmel.no/avr32/ubuntu/
But don't repeat Atmel's mistake of only doing i386, because there are a lot of other common architectures out there right now, like amd64.
/Johan
For RPM, this three-part tutorial by IBM is the best beginner's guide to packaging I know:
http://www.ibm.com/developerworks/library/l-rpm1/
http://www.ibm.com/developerworks/library/l-rpm2/
http://www.ibm.com/developerworks/library/l-rpm3.html
