I was installing gperftools:
https://code.google.com/p/gperftools/
Everything worked, and I see that the project installs its libraries to /usr/local/lib.
I'd like to put the library in a folder local to my project, instead.
The reasoning behind this is that I'm putting the project on different machines, and I only need to link against the libprofiler and libtcmalloc libraries, not the entire package, which also comes with pprof and other tools.
The machines also have different architectures, so I actually need to build into that directory instead of copying the files over.
Is this a trivial thing to do?
gperftools uses autoconf/automake, so you can do
./configure --prefix=/path/to/wherever
make
make install
This works for all autotools projects, unless they are severely broken.
On that note, it is generally a good idea to read the INSTALL file in a source tree to find out about this sort of stuff. There is one in the gperftools sources, and this is documented there.
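For example, to keep everything inside the project tree, something like this should work (the vendor/ path is just an illustration; pick whatever layout suits your project):

./configure --prefix=$HOME/myproject/vendor
make
make install

After that you can link against the project-local copies, e.g. with gcc -L$HOME/myproject/vendor/lib -lprofiler -ltcmalloc, and repeat the same build on each machine, since configure adapts to the local architecture.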
Related
What is the canonical path for a custom library and include files? I thought of either /usr/local/lib + /usr/local/include or ~/lib + ~/include. To me the latter looks like the better option, since the former are managed by the distribution's package manager and it is best not to interfere. Though I cannot find any reference to people actually using ~/lib.
Thanks
Is this something that you've created yourself, or a third party installation?
Normally /usr/local/ is a good place to install packages that are not part of the original OS. I do this myself for anything I've built and installed from source. Another place to put things is /opt, which is often used by commercial third-party software.
If you're going to write something of your own then using your home directory "~" sounds fine. This is also good if you don't have root access or don't want it mixed in with the other OS packages.
When compiling and linking you will need to configure things to use those directories. Also, if you're using dynamic shared libraries, LD_LIBRARY_PATH must be set as well.
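For example, assuming a library installed under ~/lib with headers in ~/include (myprog and libmylib are made-up names for illustration):

gcc -o myprog myprog.c -I$HOME/include -L$HOME/lib -lmylib   # compile and link against the home-directory copy
export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH           # let the dynamic loader find the .so at run time
./myprog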
I read a bit about NixOS and tried it out recently, because I got the impression that it would let me configure a Linux system with just one file.
When I used it, I installed a bunch of packages with nix-env, so they didn't end up in configuration.nix, but I could simply uninstall them later and add them to configuration.nix by hand. Is there something like npm i -g <package> that would install a package globally so it ends up in configuration.nix and could simply be copied to another machine?
Also, I installed tools like zsh and atom, and they each have an entirely different approach to configuration and customization (shell script, JavaScript, Less, etc.).
Is there a way for Nix/NixOS to track the package-specific config too?
Does it already happen and I just don't see it? Like, does the Nix expression of the package know where the package will store its config, etc.?
I mean, it's nice that I can add these packages to the main config and get the same software installed when using it on another PC, but I still find myself writing quite a lot of configuration for the installed packages too.
If you want packages installed through configuration.nix, then the easiest way to accomplish that is to add them to the environment.systemPackages attribute. Packages listed in there will be available automatically to all users on the machine. As far as I know, there is no shell command available to automate the maintenance of that attribute, though. The only way to manage that list is by editing configuration.nix and manually adding the packages you'd like to have installed.
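For example, a fragment of configuration.nix could look like this (the package names are just placeholders):

environment.systemPackages = with pkgs; [
  git
  htop
];

After editing the file, nixos-rebuild switch applies the new configuration.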
Nix does not manage package-specific configuration files. As you probably know, NixOS provides such a mechanism for files in /etc, but a similar mechanism to manage config files in $HOME etc. does not exist. The PR https://github.com/NixOS/nixpkgs/pull/9250 on Github contains a concrete proposal to add this capability to Nix, but it hasn't been merged yet because it requires some changes that are controversial.
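For illustration, the /etc mechanism mentioned above looks roughly like this in configuration.nix (the file name and contents are invented):

environment.etc."myapp/settings.conf".text = ''
  verbose = true
'';

This makes NixOS generate /etc/myapp/settings.conf declaratively; the point is that nothing equivalent exists yet for dotfiles in $HOME.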
Nix does not currently offer ways of managing user-specific configuration or language-specific package managers. AFAICT that's because it is very complex and opinionated territory compared to generating configs for sshd etc.
There are, however, Nix-based projects providing solutions to at least some parts of your question. For managing user configuration (zsh etc.), have a look at Home Manager.
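With Home Manager, user-level configuration becomes declarative Nix as well; a small sketch (the zsh module options shown are standard, the alias is made up):

programs.zsh = {
  enable = true;
  shellAliases = {
    ll = "ls -l";
  };
};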
I built ZeroMQ and Sodium from source and have them installed properly on my development machine, which is just a Pi2. I have one other machine that I want to make sure these get installed to properly. Is there a proper way to do this other than just copying .a and .so files around?
So, there are different ways of handling this particular issue.
If you're installing all your built-from-source packages into a dedicated tree (maybe /usr/local, or /opt/mypackages) then simply copying files around is a fine solution, using something like rsync. Particularly since you only have two machines, anything more complicated may not be worth the effort.
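For example, with a dedicated tree under /opt/mypackages, something like this would do (hostname and paths are placeholders):

rsync -av /opt/mypackages/ pi@other-machine:/opt/mypackages/

The trailing slash on the source path tells rsync to copy the directory's contents rather than the directory itself.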
If you're trying to install ZeroMQ and Sodium alongside system-managed files (in, e.g., /usr/lib and /usr/bin)... don't do that. That is, don't try to mix "things installed by packages" with "things installed from source", because that way lies sadness and doom.
That said, a more manageable way of distributing these files is to build custom packages and then set up a local apt repository, so that you can just apt install the packages on your systems. There are various guides out there if you want to go down this route. It's a good skill to have in general, especially if you ever want to share your tools with someone else (because it makes it easy for them to install any necessary dependencies).
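As a rough sketch of that packaging route (the directory name is invented, and a real package needs a DEBIAN/control file inside it):

dpkg-deb --build zeromq-custom            # produces zeromq-custom.deb
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz   # index for a trivial, unsigned repo

The Packages.gz index is what lets apt treat the directory as a repository.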
I am getting into a position where I have to use other people's code for projects, for example OpenTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just using Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also call make? Is my thinking correct that it would be better to edit the code after calling cmake as opposed to before? My finished code does not need to be cross-platform; it will only run on Linux. Is it easier to learn cmake and edit the code before running cmake, as opposed to not learning cmake and using the code afterwards, if that is possible?
Your question is a little open-ended.
Looking at the OpenTLD project, there is a binary and a library available for use. If you are interested in using the binary, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either use the pre-built library or build it during your build process. You would include the header files in your custom application and link with the library.
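On the build-workflow part of the question: cmake only generates the build system, so you do need make afterwards to get a usable binary. A typical out-of-source cycle looks like this (a generic sketch, not specific to OpenTLD):

mkdir build && cd build
cmake ..    # generate Makefiles from CMakeLists.txt
make        # actually compile; binaries end up under build/

You edit the sources in the source tree as usual and just re-run make; make install is only needed if you want the result copied into system directories.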
If you add more details, probably others can pitch in with new answers or refine the older ones.
What is the best practice for deploying dependencies on Linux when shipping an own application?
Some SO posts recommend including all dependencies in the package (utilizing LD_LIBRARY_PATH); other posts recommend shipping only the binary and using the "dependency" feature of DEB/RPM packages instead. I tried the second approach, but immediately ran into the problem that one dependency (libicu52) doesn't seem to be available in certain Linux distributions yet. For example, in my openSUSE test installation only "libicu51" is available in the package manager.
I initially thought that the whole idea of the packaging system is to avoid duplicate .so files on the system. But does it really work (see above), or should I rather ship all dependencies with my app, to make sure it runs on all distributions?
For a custom application that "does not care" about distribution-specific packaging, versioning, upgrades, etc., I would recommend redistributing the dependencies manually.
You can use the RPATH linker option; by setting its value to $ORIGIN you tell the linker to search for libraries in a directory relative to the binary itself, without needing to set LD_LIBRARY_PATH before execution:
gcc -Wl,-rpath,'$ORIGIN/../lib'
Example taken from here.
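If you want to confirm that the RPATH actually made it into the binary (myapp is a placeholder name), the standard tools can show it:

readelf -d myapp | grep -iE 'rpath|runpath'   # print the embedded RPATH/RUNPATH entry
ldd myapp                                     # show which libraries the loader would actually pick up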