Find file(s) in a directory on the node by pattern in Puppet

How can I find a file in a directory on the node by using shell-patterns or regex?
What I want to do:
I download a tar file to /tmp/myfiles on the appropriate client and unpack this archive. Out of it come several deb files (about 10 of them). The file names change over time, because version numbers are embedded in them.
The file names look like:
package1_8.0-22.65.linux.x86_64.deb
package2_6.5-23.89.linux.x86_64.deb
I need to identify some of them (not all) to be able to install them via package with provider => dpkg.
The packages (like package1, package2) do not occur multiple times with different version numbers, so matching can be done easily without having to compare version numbers:
Shell pattern: package1_*.linux.x86_64.deb
Regex: ^package1_.+\.linux\.x86_64\.deb$
Is there a command or module in Puppet to find files by match pattern in a directory?
Or can I grab the result of an exec with command => "ls /tmp/myfiles/..." and evaluate it?

Supposing that you mean that the tar file is downloaded as part of the catalog run, what you describe is not possible with Puppet and its standard Package resource type. Keep in mind the catalog-request cycle:
The client gathers node facts.
The client submits a catalog request bearing the gathered facts.
The server uses the node's identity and facts, the manifests of the appropriate environment, and possibly the output of an external node classifier to build a catalog for the client.
The client applies the catalog.
To create Package resources during step (3), Puppet needs to know at least the names of the packages. As you describe it, the names of the packages cannot be determined until step (4), which is too late.
You could have Puppet use the packages downloaded on a previous run by defining a custom fact that gathers their names from the filesystem, but that's risky and failure-prone.
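For illustration, a minimal sketch of such a fact as an external fact (the path and fact name are my own inventions, and this assumes Facter's facts.d executable-fact convention):

#!/bin/sh
# /etc/puppetlabs/facter/facts.d/myfiles_debs.sh (hypothetical location)
# Emits the package names found in /tmp/myfiles as a comma-separated
# external fact called myfiles_debs, e.g. myfiles_debs=package1,package2
debs=$(ls /tmp/myfiles/*.deb 2>/dev/null | xargs -rn1 basename | sed 's/_.*//' | paste -sd, -)
echo "myfiles_debs=${debs}"

On the next run the master could split that fact and declare Package resources from it, which is exactly the one-run-behind behavior that makes this approach fragile.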
I think the whole endeavor is highly suspect and likely to cause you trouble. It would probably be better to put your packages in a local repository at your site and use either node data or a metapackage or both to control which are installed. But if you insist on downloading a tar full of DEBs, then unpacking and installing them, then your best bet is probably to use an Exec for the installation. Something like this:
# Ensure that the TAR file is in place
file { "${download_dir}":
  ensure => 'directory',
  # ...
}
file { "${download_dir}/packages.tar":
  ensure => 'file',
  # ...
}
-> exec { 'Install miscellaneous DEBs':
  path     => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
  # clear out old debs, unpack into the download dir, then install everything
  command  => "rm -f '${download_dir}'/*.deb || :; tar -x -f '${download_dir}'/packages.tar -C '${download_dir}' && apt-get install -y '${download_dir}'/*.deb",
  provider => 'shell',
}

Related

I am using Coverity to analyse a node-ts template for a service. What should I use to build it?

Steps:
Installed Coverity
Configured the compilers:
cov-configure --javascript
cov-configure --cs
I am stuck at the cov-build step. Yarn is used to run and configure the service, but I am not sure what Coverity wants here.
I tried a couple of npm run commands, but every time I end up getting this:
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
I also tried different compilers, but no luck.
What should be done in this case?
You need to do a file system capture for JavaScript files. You can accomplish this by running cov-build with the --no-command flag:
cov-build --dir CoverityIntermediateDir --no-command --fs-capture-list list.txt
Let's break down these options:
--dir: the intermediate directory to store the emitted results (used by cov-analyze later).
--no-command: do not run a build command; instead, look for certain file types.
--fs-capture-list: use the provided file to specify which files to look at and possibly emit to the intermediate directory.
A recommended way to generate the list.txt file is to grab it from your source control. If using git run:
git ls-files > list.txt
I also want to point out that, if you don't have a convenient way to get a file listing for --fs-capture-list, you can use the --fs-capture-search option instead and pair it with a filter to exclude the node_modules directory.
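For example (a sketch; check your Coverity version's cov-build reference, since the exact option names, particularly the exclude-regex one, may differ across releases):

cov-build --dir CoverityIntermediateDir --no-command \
  --fs-capture-search . \
  --fs-capture-search-exclude-regex node_modules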
The Coverity forums have some useful questions and answers:
Node.js File system capture
Really, though, the best place to look is the documentation. There are several examples of what you want to do in their guides.

dpkg-buildpackage error: missing files (usr/include/*)

I am trying to build a dev package out of some .so files, i.e. libraries:
dh_make -f ../mylibs.tar.gz
Then I choose
-l
since I want a library package.
A debian folder is created, which contains, among others, the control file and the install file.
When I finally create the package with
dpkg-buildpackage
I get the following error message:
dh_install: mylibs-dev missing files (usr/include/*), aborting
I don't understand what is missing, since the headers are copied to
usr/include
I have checked the directory and the headers are copied there. So why do I get this error message?
The install file looks like this:
usr/lib/lib*.so.*
So, might the path to usr/include be missing here? I've tried with
usr/include/h.*
but it does not work.
The debian install files are most often used for source packages that produce several binary packages. In that case, dh_auto_install installs to the directory debian/tmp/ and dh_install moves the files from there to each package. However, when there is a single package, dh_auto_install takes a shortcut and installs directly to debian/package-name/. In that case, you only need to list in the install file the files that make install missed (usually none). If you list the regular, installed files, dh_install will get confused because it is still looking for them in debian/tmp/.
It is possible to override either the directory where dh_auto_install installs, or the directory where dh_install is looking, but you probably don't need either.
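For illustration, a hypothetical split into two binary packages (the package names here are placeholders; yours come from debian/control):

debian/mylibs.install:
usr/lib/lib*.so.*

debian/mylibs-dev.install:
usr/lib/lib*.so
usr/include/*

With this layout, dh_auto_install populates debian/tmp/ and dh_install distributes the files to each package; with a single binary package, you would typically not list the normally-installed files at all.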

Build environment isolation and file system diffing

Alright, so after trying to chase down the dependencies for various pieces of software for the n-th time, and replicating work that various people have done for all the different Linux distributions, I would like to know if there is a better way of bundling various pieces of software into one .rpm or .deb file for easier distribution.
My current setup for doing this is a Frankenstein monster of various tools, mainly Vagrant and libguestfs (built from source, running in Fedora, because none of the distributions actually ship it with virt-diff). Here are the steps I currently follow:
Spin up a base OS using either a Vagrant box or by creating one from live CDs.
Export the .vmdk and call it base-image.
Spin up an exact replica of the previous image and go to town: use the package manager, or some other means, to download, compile, and install all the pieces that I need. Once again, export the .vmdk and call it non-base-image.
Make both base images available to the Fedora guest OS that has libguestfs.
Use virt-diff to diff the two images and dump that data to a file called diff.
Run several Ruby scripts to massage diff into another format that contains the information I need and none of the stuff I don't want, like things in /var.
Run another script to generate a command script for guestfish with a bunch of copy-out commands.
Run the guestfish script.
Run another script to regenerate the symlinks from diff because guestfish can't do it.
Turn the resulting folder structure into a .deb or .rpm file and ship it.
I would like to know if there is a better way to do this. You'd think there would be but I haven't figured it out.
I would definitely consider something along the lines of:
A)
yum list (select your packages/dependencies whatever)
use yumdownloader on the previous list (or use the packages you have already downloaded)
createrepo
ship on media with an install script that adds the CD repo to the repo list, etc. (option A is sketched below)
or B)
the first two steps as above, then pack the rpms into an archive and build a package that contains all of the above and kicks off the actual install of the rpms (along the lines of rpm -Uvh /tmp/repo/*) as a late script (in the cleanup phase, maybe). I don't know if this can be done while avoiding locks on the rpm database.
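A minimal sketch of option A (the package name and paths are placeholders):

# pull a package plus its dependency closure into a local directory
yumdownloader --resolve --destdir=/tmp/repo mypackage
# turn that directory into a yum repository
createrepo /tmp/repo
# burn /tmp/repo to media; the install script then only has to drop a
# .repo file pointing at the mounted media and run yum install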
I think you have reached the point of complexity - indeed a Frankenstein monster - where you should stop being afraid of making proper packages with dependencies. We did this at my previous job - we had a set of hand-built rpm packages - and it was very easy and straightforward, including:
pre/post install scripts
uninstall scripts
dependencies
We never had to do anything like what you just described. And for the customer, installing even a set of packages was very easy!
You can follow a reference manual of how to build RPM package for more info.
EDIT: If you need a single installation package, then create a master package that contains all the other packages (with dependencies set properly) and installs them in the post-install script (and uninstalls them in the uninstall script).
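A sketch of such a master package's spec; note that letting Requires pull the sub-packages from a repo sidesteps the rpm database lock issue mentioned above, whereas running rpm from a post-install script does not (all names are placeholders):

Name: mysuite
Version: 1.0
Release: 1
Summary: Metapackage that pulls in the whole suite
License: Proprietary
Requires: package1, package2, package3
BuildArch: noarch

%description
Empty payload; installing it drags in the whole suite via dependencies.

%files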
There are mainly 3 steps to making a package with all its dependencies (call them A, B, and C).
A. Gather required files.
There are many ways to gather the files of the main software and its dependencies. In order to get all the dependencies and an error-free run, you need to use a base OS (i.e. a live system).
1. Using AppDirAssistant
This app is used by www.portablelinuxapps.org to create a portable app directory. It scans and watches for the files accessed by the app to find the required ones.
2. Using chroot & overlayfs
In this method you don't need to boot into the live CD; instead, you chroot into it (steps a-c are sketched as commands below).
a. Mount the .iso at /cdrom.
b. Mount the filesystem (filesystem.squashfs) at another place, say /tmp/union/root.
c. Bind-mount /proc at /tmp/union/root/proc.
d. Overlay on it:
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
e. Chroot:
chroot /tmp/union/root
Now you can install packages using apt-get or another method (only from the chrooted terminal). All the changed files are stored at /tmp/union/rw. Take the files from there.
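A sketch of steps a-c as commands (the squashfs path inside the iso is distro-specific; casper/ here is the Ubuntu layout; note that newer kernels name the filesystem overlay rather than overlayfs and also require a workdir):

mkdir -p /cdrom /tmp/union/root /tmp/union/rw
mount -o loop distro.iso /cdrom                           # step a
mount /cdrom/casper/filesystem.squashfs /tmp/union/root   # step b
mount --bind /proc /tmp/union/root/proc                   # step c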
3. Using manually collected packages
Use the package manager to collect dependencies. For example,
apt-get install package --print-uris will print the download URIs for the needed packages. Use these URIs to download the packages and extract them all (dpkg -x 1.deb ./extracted); a sketch follows.
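A sketch of that flow (the package name is a placeholder, and the quoting of --print-uris output can vary, so check it on your system):

apt-get install --reinstall --print-uris -qq mypackage \
  | cut -d"'" -f2 > uris.txt          # URIs are wrapped in single quotes
wget -i uris.txt -P ./debs            # fetch every deb
for f in ./debs/*.deb; do
  dpkg -x "$f" ./extracted            # unpack payloads into one tree
done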
B. Clean garbage files
After gathering the files, remove the unwanted ones.
C. Pack files
1. Using AppImageAssistant
If you manually gathered the files, then you need to copy the appname.desktop file from ./usr/share/applications to the root of the directory tree. Also copy the file named AppRun from another app, or extract it from AppDirAssistant.
2. Make a .deb or .rpm using the gathered files.
Is the problem primarily that of ensuring that your customers have installed all the standard upstream distro packages necessary for your package to run?
If that's the case, then I believe the most straightforward solution would be to leverage the yum and apt infrastructure to have those tools track down and install the necessary prerequisite packages.
If you supply a native yum/apt repository with complete prerequisite specs (the hard work you've apparently already completed), then the standard system install tool takes care of the rest. See the links below for more on creating a personal repository for yum/apt.
For offline customers, you can supply media with your software and a mirror - or mirror subset - of the upstream distro, plus instructions for adding them to the yum/apt config.
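For example, on the yum side the media repo can be registered with a file like this (repo id, name, and path are placeholders):

# /etc/yum.repos.d/media.repo
[media]
name=Vendor media repository
baseurl=file:///media/cdrom/repo
enabled=1
gpgcheck=0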
Yum
Creating a Yum Repository in the Fedora Deployment Guide
Apt
How To Setup A Debian Repository on the Debian Wiki
So your customers aren't ever going to install any other software that might specify a different version of those dependencies that you are walking all over, right?
Why not just create your own distro if you're going to go that far?
Or you can just give them a bunch of packages and a single script that does rpm -i dep1 dep2 yourpackage

How to "repackage" a RPM file for example cpio2rpm without installing the RPM?

I'm able to extract files from an RPM file, but how do I "rebuild" it, for example with cpio2rpm?
I have extracted the RPM file using the following command:
rpm2cpio theFileName.rpm | cpio -idmv
I have to modify a few web application files like *.php, *.html or *.js. These files don't require any source recompilation, so I would like to replace or change them without rebuilding the rpm. I need to do this for multiple platforms, like Red Hat Linux and SUSE, and multiple architectures, like 32- and 64-bit OS.
I expect to make these changes on only one system, without rebuilding the rpm, and without any target system architecture dependency (like i386 or 64-bit).
I am not looking for something like rpmbuild --rebuild the.src.rpm, since I don't have the source. I need to rebuild the binary .rpm file (not the source .rpm).
I want to do this without source, in a platform- and architecture-independent way, and without using a spec file if possible.
Could anybody please suggest a solution or any free tools?
Thank you to all who spend time reading and replying to my thread.
You can use rpmrebuild to modify an actual rpm file (it doesn't need to be installed).
Most of the examples for this use complicated inline edit commands to modify known files in particular ways, but you can use a normal editor. I used this to fix a shell script in an rpm file that I didn't have the source for. Call the command as
rpmrebuild -ep theFileName.rpm
This puts you in an editor with the spec file for the RPM. The name of the file will be something like ~/.tmp/rpmrebuild.12839/work/spec.2. If you look in ~/.tmp/rpmrebuild.12839/work (in this example), you will find all of the files used to make the RPM (in my case, the file was in root/usr/sbin within that directory). So, go to another window, cd to that directory, and edit any files you need to change.
When you have finished editing files, go back to the edit window with the spec file, make any changes you need there (I didn't have any, since I wasn't adding or deleting files), save the file, and say "y" to the "Do you want to continue" question. It will then build a new RPM file and tell you where it has put it (in my case, in ~/rpmbuild/RPMS/x86_64/).
You can repackage an installed RPM (including modified files) using rpmrebuild. http://rpmrebuild.sourceforge.net/
Obviously your binaries (if any) would have to be platform/architecture independent to work on all the OS flavors you're hoping for, but it sounds like if they're just web files that shouldn't be a problem.
In principle you can pack anything you want into an RPM file. Just treat what you have as "source" and write a SPEC file that puts the data where the compiled binaries would normally go.
Where RPM is concerned, I consider "source" to mean "what I have" and "binary" to mean "what I need to run". Not very exact terminology, but it helps when working with RPMs.
Your spec file looks like any other spec file as far as the parameters etc. are concerned, but the code part is different:
[...]
%prep
# Here you either have nothing to do, or you already unpack the cpio and possibly modify it.
# %build can be omitted
%install
[ "%{buildroot}" != "/" ] && [ -d "%{buildroot}" ] && rm -rf "%{buildroot}"
# Here you can either unpack the cpio or copy the data unpacked in %prep.
# Be careful to put it into %{buildroot} or $RPM_BUILD_ROOT.
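To make that concrete, a minimal sketch of such a spec for prebuilt web files (all names and paths are placeholders; BuildArch: noarch also gives the architecture independence asked about above):

Name: mywebapp
Version: 1.0
Release: 1
Summary: Prebuilt web application files
License: Proprietary
BuildArch: noarch

%description
Packs already-built files; nothing is compiled.

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/var/www/mywebapp
cp -r %{_sourcedir}/webroot/* %{buildroot}/var/www/mywebapp/

%files
/var/www/mywebapp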

Why does my hand-created deb package fail at install with "unable to create" on files?

I made a Perl script that creates a deb binary package from scratch.
I created the data, control, etc.
But when I run dpkg -i on my deb package, it complains that it is unable to create the files from data.
Example:
unable to create '.dpkg-new' (while processing ''): No such file or directory.
I have downloaded some .deb packages to look at, and they do not use the preinst script to create the directory structure.
I am thinking I am doing something wrong. I considered having to create my own directories in preinst, but it does not seem right... perhaps I am missing something?
Do I have to create the directories where my files from data will be copied in the preinst script, or should dpkg do it and I am doing something wrong?
I had the same problem with a Ruby script I wrote.
I was generating a list of files to pass to tar when building the data.tar.gz archive. When I gunzipped and untarred the archive manually it would deflate fine, but dpkg would fail.
It turns out that the list of files must also include each directory to create.
Note that when I created data.tar.gz, I built it with nearly the same options as dpkg-deb/build.c does in the dpkg-1.15.8.11 source:
execlp(TAR, "tar", "-cf", "-", "--format=gnu", "--null", "-T", "-", "--no-recursion", NULL);
Instead I used
IO.popen("gnutar -czf - --format=gnu -T - --no-recursion", "r+")
In addition to what @Arrowmaster said, check http://www.debian.org/doc/debian-policy/ for a detailed explanation of the files. After you build the package itself, you can check it with the lintian tool to see if there is anything obvious you might have missed.
If anyone is looking for a solution to the problem
"Build a deb package from an rpm spec file,"
look here: http://www.deepnet.cx/debbuild/
I have made my own Perl build script, much simpler than the one mentioned, so I can easily maintain it.
Some useful knowledge gained in the process:
0. The deb is an ar archive that contains 3 files; the order of the files is important (see the sketch after this list).
1. The scripts from control.tar.gz must be made executable.
2. It is good to have a preinstall script that makes directories if they do not exist.
3. Sometimes dpkg decides to unzip your zips (this happened when a zip was the only file in data.tar.gz), so check for that in a postinstall script.
4. When you tar.gz some files, be sure to chdir to the directory that contains the directory structure for your tar.
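A sketch of that ar layout (the member names are fixed by the deb format, and debian-binary must come first):

echo "2.0" > debian-binary
ar rc mypkg.deb debian-binary control.tar.gz data.tar.gz
ar t mypkg.deb    # lists the members in order:
                  # debian-binary control.tar.gz data.tar.gz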
You should not attempt to create a .deb binary package by hand. The Debian New Maintainers' Guide covers how to get started creating a Debian package using the correct tools.
Your hand-created package may look correct to you, but because it does not install, it is obviously flawed in either a minor way that you have not noticed or in a more serious way that is not visible to you (for example, most people don't realize a .deb is actually an ar archive).
There are lots of reasons for this. You really need to run:
dpkg -i -D1110 mydeb.deb
and post the result to have any hope of someone being able to solve the problem. (The -D bits turn on dpkg's debug output; see dpkg --debug=help for what each bit means.)
