I created a derivation and added it to the nix store, now how do I remove it? - nixos

I followed the 7th Nix pill tutorial, and created a derivation that placed an executable in the nix store, i.e. /nix/store/gh66mkic4c1dys8ag8yqnv10x59b7vmh-simple/simple.
I can run that executable, either directly or via symlinks to it. However, how do I remove it? I tried deleting old generations with $ nix-env --delete-generations old, and also garbage collecting with nix-store --gc, but my derivation's output still appears at that path and can be run there.
Now that I've completed the tutorial, how do I get rid of what I've created in the nix store? Does nixos ever clean up such old derivations? Does it need to be somehow marked as irrelevant before running the delete-old-generations or garbage-collect commands?

Garbage collection deletes everything that isn't reachable from any GC root. This means that if something sticks around, there's a GC root somewhere that you're not thinking of. You can find these with the nix-store -q --roots command:
For example, here's why my emacs is "alive":
$ nix-store -q --roots /nix/store/hwial1dr7sd6ydf81d465jrllxn4gpdm-emacs-with-packages-27.2/bin/emacs
/nix/var/nix/gcroots/per-user/user/current-home -> /nix/store/kl2l02697jxy9mzf5yz72ph18hh0vgsd-home-manager-generation
/nix/var/nix/profiles/per-user/user/home-manager-3-link -> /nix/store/kl2l02697jxy9mzf5yz72ph18hh0vgsd-home-manager-generation
/nix/var/nix/profiles/per-user/user/profile-25-link -> /nix/store/fds0f002lq8sxj0kqj9zq6d7vdakbakf-user-environment
/proc/6695/maps -> /nix/store/z7hr6k4apccj824pvymlyma8dppz7f16-home-manager-path
/proc/3589/maps -> /nix/store/z7hr6k4apccj824pvymlyma8dppz7f16-home-manager-path
The first two roots have been created by home-manager, the third by the nix-env -i command that home-manager uses. The ones from /proc are memory-mapped files in active processes.
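In this case, the likeliest leftover root is the result symlink that nix-build drops in the directory where you ran it. A minimal cleanup sketch (the exact root to remove is whatever --roots actually prints for you):
$ nix-store -q --roots /nix/store/gh66mkic4c1dys8ag8yqnv10x59b7vmh-simple
$ rm result        # assuming the root was a ./result symlink left by nix-build
$ nix-store --gc   # now nothing reaches the path, so it gets collected
Alternatively, delete just that one path (this refuses to run while a root still reaches it):
$ nix-store --delete /nix/store/gh66mkic4c1dys8ag8yqnv10x59b7vmh-simple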

Related

How to add a custom command for creating protobuf Python files [duplicate]

I'm trying to use add_custom_command to generate a file during the build. The command never seemed to be run, so I made this test file.
cmake_minimum_required( VERSION 2.6 )
add_custom_command(
OUTPUT hello.txt
COMMAND touch hello.txt
DEPENDS hello.txt
)
I tried running:
cmake .
make
And hello.txt was not generated. What have I done wrong?
The add_custom_target(run ALL ...) solution will work for simple cases when you only have one target you're building, but it breaks down when you have multiple top-level targets, e.g. app and tests.
I ran into this same problem when I was trying to package up some test data files into an object file so my unit tests wouldn't depend on anything external. I solved it using add_custom_command and some additional dependency magic with set_property.
add_custom_command(
OUTPUT testData.cpp
COMMAND reswrap
ARGS testData.src > testData.cpp
DEPENDS testData.src
)
set_property(SOURCE unit-tests.cpp APPEND PROPERTY OBJECT_DEPENDS testData.cpp)
add_executable(app main.cpp)
add_executable(tests unit-tests.cpp)
So now testData.cpp will be generated before unit-tests.cpp is compiled, and any time testData.src changes. If the command you're calling is really slow, you get the added bonus that when you build just the app target, you won't have to wait around for that command (which only the tests executable needs) to finish.
It's not shown above, but careful application of ${PROJECT_BINARY_DIR}, ${PROJECT_SOURCE_DIR} and include_directories() will keep your source tree clean of generated files.
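For example, with the setup above and the Makefile generator, the difference shows when building each target separately (a sketch):
$ make app     # reswrap never runs; only app's sources are built
$ make tests   # testData.cpp is generated first, then compiled into tests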
Add the following:
add_custom_target(run ALL
DEPENDS hello.txt)
If you're familiar with makefiles, this means:
all: run
run: hello.txt
The problem with the two existing answers is that they either make the dependency global (add_custom_target(name ALL ...)), or they assign it to a specific, single file (set_property(...)), which gets obnoxious if you have many files that need it as a dependency. Instead, what we want is a target that we can make a dependency of another target.
The way to do this is to use add_custom_command to define the rule, and then add_custom_target to define a new target based on that rule. Then you can add that target as a dependency of another target via add_dependencies.
# this defines the build rule for some_file
add_custom_command(
OUTPUT some_file
COMMAND ...
)
# create a target that includes some_file, this gives us a name that we can use later
add_custom_target(
some_target
DEPENDS some_file
)
# then let's suppose we're creating a library
add_library(some_library some_other_file.c)
# we can add the target as a dependency, and it will affect only this library
add_dependencies(some_library some_target)
The advantages of this approach:
some_target is not a dependency for ALL, which means you only build it when it's required by a specific target. (Whereas add_custom_target(name ALL ...) would build it unconditionally for all targets.)
Because some_target is a dependency for the library as a whole, it will get built before all of the files in that library. That means that if there are many files in the library, we don't have to do set_property on every single one of them.
If we add DEPENDS to add_custom_command then it will only get rebuilt when its inputs change. (Compare this to the approach that uses add_custom_target(name ALL ...) where the command gets run on every build regardless of whether it needs to or not.)
For more information on why things work this way, see this blog post: https://samthursfield.wordpress.com/2015/11/21/cmake-dependencies-between-targets-and-files-and-custom-commands/
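To see the scoping in practice with the names from the snippet above (a sketch; other_target stands in for any unrelated target):
$ make some_library   # some_target runs first, producing some_file
$ make other_target   # the custom command is not triggered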
This question is pretty old, but even when I follow the suggested recommendations, it does not work for me (at least not every time).
I am using Android Studio and I need to call CMake to build a C++ library. It works fine until I add the code to run my custom script (in fact, at the moment I am just trying to run 'touch', as in the example above).
First off,
add_custom_command
does not work at all.
I tried
execute_process (
COMMAND touch hello.txt
)
it works, but not every time!
I tried to clean the project, remove the created file(s) manually, same thing.
CMake versions tried:
3.10.2
3.18.1
3.22.1
When they work, they produce different results depending on the CMake version: one file or several. That is not so important as long as they work, but that's the issue.
Can somebody shed light on this mystery?

Why can a directory be removed when my shell is using that directory?

I don't know if what I encountered was a bug or is intended behavior, but luckily I figured out pretty quick what was going on.
I had my shell cd'd inside a subdirectory of the git repo, and I performed a git rebase -i squash operation in which the commits involved included the creation of this directory.
After that operation completed without incident, the shell was left in an orphaned state where the git status (both in the zsh theme helper RPROMPT and when run directly) indicated that I wasn't even in a git repository anymore.
Once I ran cd .. everything was fine, and it all made sense. During the course of the rebase operation, Git had rm'd the directory that I was in (at the first step of the rebase) and subsequently put it back. Since it got put back, I could also have run cd $(pwd) to go there in one step.
This is some confusing behavior, although it is not clear that git technically did anything wrong. My question is: would this be considered a bug in git, or should users be expected to know how to deal with this situation?
Also, the wider, actual root question: Why is it permitted to remove a directory that my shell is in, when it is not permitted to eject a disk if I have a shell on a mounted directory? It seems inconsistent to me.
Case in point: fuser <directory> shows what is currently using a directory. If a program is "in" a directory, it is "using" it.
Terminal 1:
$ cd tmp/
$ mkdir test
$ cd test
Terminal 2:
$ rmdir tmp/test
Terminal 1:
$ ls
sh: 0: getcwd() failed: No such file or directory
Inconsistent, yes. But allowed.
PS. And this has nothing to do with git.
To answer your first question:
It is not a bug; it is how Unix systems, including Linux, have always worked.
There's even some nice text in the POSIX spec:
[EBUSY] The directory to be removed is currently in use by the system or some process and the implementation considers this to be an error.
Namely, some implementations (e.g. Windows...) can call it an error, but most implementations, namely Unix variants, do not.
The way this is implemented in a filesystem is by having the processes hold a reference on the object which represents the directory. Another thing that holds such a reference is the parent directory. git rebase has removed the latter reference, but the former remains. Even if the directory is re-created, it is a new filesystem object, whereas your shell was holding a reference to the old object.
That's why cd .. and cd $(pwd) still work - they re-lookup the directory and grab the new reference, and release the old reference.
Officially the old object is not cleaned up until the old reference is released. That means that the metadata of the directory is not removed from the disk until the process releases the old working directory.
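You can watch the old object outlive its name with a small experiment (assuming Linux and GNU coreutils' stat):
Terminal 1:
$ mkdir /tmp/demo && cd /tmp/demo
$ stat -c %i .   # inode of the directory object our shell holds
Terminal 2:
$ rmdir /tmp/demo && mkdir /tmp/demo
Terminal 1:
$ stat -c %i .   # same inode as before: still the old, now-deleted object
$ cd /tmp/demo && stat -c %i .   # new inode: the re-lookup found the new object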
To answer your second question:
In order to eject a drive you have to unmount the mountpoint, and as above, the processes you find with fuser hold references to that mountpoint.
As with directory references, a mount point cannot be cleaned up until all references to it have been removed. That is why you cannot eject a drive which has references.
The consistent thing to do would be to allow detaching the filesystem from the directory tree without actually cleaning up. You still won't be able to eject the drive, but at least you can use the directory tree as you wish.
Well, that is actually possible to do with umount --lazy.
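For example (assuming /mnt/usb is the mountpoint in question):
$ umount --lazy /mnt/usb
The filesystem vanishes from the directory tree immediately, but the actual cleanup - and hence the ability to eject - waits until the last process releases its reference.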

Is there an easy way to verify/compare installation against an RPM file?

I'd like to compare the contents of an RPM to an installed system (that professes to have the RPM already installed), looking for any files/directories that may be different (or missing) -- something like pkgchk on Solaris.
It looks like rpm -V can be used to compare the system's filesystem(s) against the system's RPM database -- but I want to be able to compare an offline "golden" RPM with what's on the system (e.g., to avoid depending on a potentially tainted on-system RPM database or on incorrect version information from pre-release RPM files).
I know I could write something to unpack the RPM and then walk through the contents, comparing everything. But is there any existing tool that can do the comparison in-situ?
After looking through the "Similar Questions", I found a reference to a way to do this: add the "-p" option to the command, yielding rpm -Vp some*.rpm. I'd missed the aside on the man page saying that the query options are also applicable to the verify operation.
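In other words (the package filename here is hypothetical):
$ rpm -Vp golden-1.0-1.x86_64.rpm
Each output line names a file that differs, with flags such as S (size), M (mode), and 5 (digest), or "missing" for files that are absent.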

FreeBSD: pkg_delete -a by accident. How to restore?

I have accidentally run "pkg_delete -a" on FreeBSD 9.1.
Is there any way to undo this operation or revert backwards?
And if not, is there some way to copy the packages installed on another server? (There are basically 4 servers that are alike - they all contain the same packages - and this operation was only performed on one of them.)
You could try the bpkg script to generate binary packages for all installed programs. See its documentation for the -b/-B options.
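Without bpkg, the same idea can be sketched with the old pkg_install tools that FreeBSD 9.x ships (verify the flags against your release). On one of the intact servers:
$ pkg_info -Ea > pkglist                                # names of all installed packages
$ while read p; do pkg_create -b "$p"; done < pkglist   # one binary .tbz per package
Copy the resulting .tbz files over, then on the wiped server:
$ for f in *.tbz; do pkg_add "$f"; done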

Build environment isolation and file system diffing

Alright, so after trying to chase down the dependencies for various pieces of software for the n-th time, and replicating work that various people already do for all the different Linux distributions, I would like to know if there is a better way of bundling various pieces of software into one .rpm or .deb file for easier distribution.
My current setup for doing this is a frankenstein monster of various tools, mainly Vagrant and libguestfs (built from source and running in Fedora, because none of the distributions actually ship virt-diff). Here are the steps I currently follow:
Spin up a base OS using either a Vagrant box or by creating one from live CDs.
Export the .vmdk and call it base-image.
Spin up an exact replica of the previous image and go to town: use the package manager, or some other means, to download, compile, and install all the pieces that I need. Once again, export the .vmdk and call it non-base-image.
Make both images available to the Fedora guest OS that has libguestfs.
Use virt-diff to diff the two images and dump that data to a file called diff.
Run several Ruby scripts to massage diff into another format that contains the information I need and none of the stuff I don't, like things in /var.
Run another script to generate a command script for guestfish with a bunch of copy-out commands.
Run the guestfish script.
Run another script to regenerate the symlinks from diff because guestfish can't do it.
Turn the resulting folder structure into a .deb or .rpm file and ship it.
I would like to know if there is a better way to do this. You'd think there would be, but I haven't figured it out.
I would definitely consider something along the lines of:
A)
yum list (select your packages/dependencies whatever)
use yumdownloader on the previous list (or use the packages you have already downloaded)
createrepo
ship it on media with an install script that adds the CD repo to the repo list, etc.
or B)
first two steps as above; then pack the RPMs into an archive and build a package that contains all of the above and kicks off the actual install of the RPMs (along the lines of rpm -Uvh /tmp/repo/*) as a late script (in the cleanup phase, maybe). Dunno if this can be done while avoiding locks on the RPM database.
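Option A might look like this in practice (a sketch; assumes yum-utils and createrepo are installed, and mypackage stands in for your real package list):
$ yumdownloader --resolve mypackage   # fetches the RPM plus its dependencies
$ createrepo .                        # turns the directory into a yum repository
The install script on the media can then drop a .repo file pointing at it:
$ cat > /etc/yum.repos.d/offline.repo <<'EOF'
[offline]
name=Offline media repository
baseurl=file:///media/cdrom
enabled=1
gpgcheck=0
EOF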
I think you have reached the point of complexity - indeed a frankenstein monster - where you should stop being afraid of making proper packages with dependencies. We did this at my previous job - we had a set of hand-crafted RPM packages - and it was very easy and straightforward, including:
pre/post install scripts
uninstall scripts
dependencies
We never had to do anything like what you just described. And for the customer, installing even a set of packages was very easy!
You can follow a reference manual on how to build an RPM package for more info.
EDIT: If you need a single installation package, then create a master package that contains all the other packages (with dependencies set properly) and installs them in the post-install script (and uninstalls them in the uninstall script).
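A rough sketch of that master-package route (master.spec is hypothetical; it would bundle the sub-RPMs under some payload directory and run rpm -Uvh on them in its %post scriptlet):
$ rpmbuild -bb master.spec
$ rpm -Uvh master-1.0-1.noarch.rpm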
There are mainly 3 steps to making a package with all its dependencies (call them A, B & C).
A. Gather required files.
There are many ways to gather the files of the main software and its dependencies. In order to get all the dependencies and an error-free run, you need to use a base OS (i.e. a live system).
1. Using AppDirAssistant
This app is used by www.portablelinuxapps.org to create a portable app directory. It scans and watches for the files accessed by the app to find the required ones.
2. Using chroot & overlayfs
In this method you don't need to boot into the live CD; instead, chroot into it.
a. Mount the .iso at /cdrom, and
b. mount the filesystem (filesystem.squashfs) at another place, say /tmp/union/root
c. bind-mount /proc at /tmp/union/root/proc
d. overlay on it:
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
e. Chroot
chroot /tmp/union/root
Now you can install packages using apt-get or another method (only from the chrooted terminal). All the changed files are stored at /tmp/union/rw; take the files from there.
3. Using manually collected packages
Use package manager to collect dependencies. For example
apt-get install package --print-uris will print the download URIs for the dependency packages. Using these URIs, download the packages and extract them all (dpkg -x 1.deb ./extracted).
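Put together, that method might look like this (a sketch; mypackage is a placeholder):
$ apt-get install --print-uris -qq mypackage | cut -d"'" -f2 > uris
$ wget -i uris
$ for f in *.deb; do dpkg -x "$f" ./extracted; done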
B. Clean garbage files
After gathering the files, remove the unwanted ones.
C. Pack files
1. Using AppImageAssistant
If you gathered the files manually, you need to copy the appname.desktop file from ./usr/share/applications to the root of the directory tree. Also copy the file named AppRun from another app, or extract it from AppDirAssistant.
2. Make a .deb or .rpm using gathered files.
Is the problem primarily that of ensuring that your customers have installed all the standard upstream distro packages necessary for your package to run?
If that's the case, then I believe the most straightforward solution would be to leverage the yum and apt infrastructure to have those tools track down and install the necessary prerequisite packages.
If you supply a native yum/apt repository with complete prerequisite specs (the hard work you've apparently already completed), then the standard system install tool takes care of the rest. See the links below for more on creating a personal repository for yum/apt.
For off-line customers, you can supply media with your software and a mirror - or mirror subset - of the upstream distro, plus instructions for adding them to the yum/apt config.
Yum
Creating a Yum Repository in the Fedora Deployment Guide
Apt
How To Setup A Debian Repository on the Debian Wiki
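On the apt side, pointing a system at such a media mirror can be as simple as (assuming the mirror was copied to /media/mirror and a reasonably recent apt):
$ echo 'deb [trusted=yes] file:/media/mirror stable main' > /etc/apt/sources.list.d/offline.list
$ apt-get update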
So your customers aren't ever going to install any other software that might specify a different version of those dependencies that you are walking all over, right?
Why not just create your own distro if you're going to go that far?
Or you can just give them a bunch of packages and a single script that does rpm -i dep1 dep2 yourpackage
