Get username from configuration file using Puppet - linux

I need to build an RPM for a set of libraries. I can compile and build it on the dev server, but I need to deploy it to the QA, prod, and test servers, which each use a different username, so the RPM has to pick the right username for the server it is being installed on during installation. Can someone help me with this? How would I do it using Puppet? Is that possible?

This cannot be done cleanly with stock RPMs.
As a hack, you can have a %post script chown all the files after detecting which username to use, but that's going to cause errors later when you do things like try to verify the RPM installation.
I'd recommend (a) having a new username that you use for the RPM/service across all servers or (b) building separate RPMs for each target environment.
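For illustration, a minimal sketch of the %post hack described above; the usernames, the detection logic, and the /opt/mylibs path are all placeholders, not something stock RPM provides:

# Hypothetical %post scriptlet in the .spec file: pick whichever
# service account exists on this box and chown the payload to it.
%post
if id -u qaservice >/dev/null 2>&1; then
    owner=qaservice
else
    owner=prodservice
fi
chown -R "$owner" /opt/mylibs

Note that rpm -V will then report ownership mismatches against the packaged metadata, which is exactly the verification problem mentioned above.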

Related

Build a debian package with user settings

I'm packaging a PyQt application for Linux as a .deb package, following the Debian maintenance guide.
The manual does a good job describing how to build the python binaries with debuild -b, and install global data files in /usr/share/<package>/ through the debian/install file. However, I don't see any mention of installing user settings files - cache files or files for configuration changes that the current user running the program might want to save.
As far as I understand, other programs usually save these in a hidden directory in the user's home path - e.g. Atom's user data in /home/<username>/.atom/.
The manual does mention conffiles. However these seem to be globally installed. I'm also not sure if they're suitable for config files that change frequently as a result of user actions, since package updates will attempt to solve conflicts between new and existing conffiles.
Some other documentation mentions postinstall scripts, but this seems potentially too complicated for something that should be common to many debian packages?
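For reference, the convention programs like Atom follow is that the application itself creates its per-user files on first run, rather than the package installing them; a minimal sketch of that pattern, with "mypackage" as a placeholder name:

# Resolve the per-user config dir at runtime (XDG convention),
# creating it on first run; nothing here is installed by the package.
CONF_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/mypackage"
mkdir -p "$CONF_DIR"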

Installing Node in a linux grid server

So, some background: I'm installing Node on a host server, but it's a grid server, not a server that's solely for my website.
The grid server doesn't give me a root user or administrative powers. So to install Node I found this workaround: http://iantearle.com/blog/media-temple-grid-and-nodejs . It's a Linux grid server, and I've never used Linux, so could someone explain to me what the commands mean, especially: ./configure --prefix=~/opt/
Lastly, I followed the steps, but when I try to run the node command on the server it says node: command not found - which is why I'm trying to understand the steps. Thanks
To explain the process:
Configure
The configure script is responsible for getting ready to build the software on your specific system. It makes sure all of the dependencies for the rest of the build and install process are available, and finds out whatever it needs to know to use those dependencies.
Unix programs are often written in C, so we’ll usually need a C compiler to build them. In these cases the configure script will establish that your system does indeed have a C compiler, and find out what it’s called and where to find it.
Make
Once configure has done its job, we can invoke make to build the software. This runs a series of tasks defined in a Makefile to build the finished program from its source code.
The tarball you download usually doesn’t include a finished Makefile. Instead it comes with a template called Makefile.in and the configure script produces a customised Makefile specific to your system.
Make Install
Now that the software is built and ready to run, the files can be copied to their final destinations. The make install command will copy the built program, and its libraries and documentation, to the correct locations.
--prefix=~/opt/ sets the installation prefix to /home/yourhome/opt, so make install copies the program into that directory instead of a system location like /usr/local.
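Putting the three steps together, the whole sequence looks like this (using $HOME instead of ~ to sidestep tilde-expansion surprises):

# Run from the unpacked node source directory:
./configure --prefix=$HOME/opt   # prepare a Makefile targeting ~/opt
make                             # compile the software
make install                     # copy binaries, libs, docs into ~/opt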
Now, if you didn't get errors while doing the 3 steps explained above, make sure you did the following:
nano ~/.bash_profile
export PATH=~/opt/bin:${PATH}
nano is a text editor, and you are opening the .bash_profile file with it.
You need to add export PATH=~/opt/bin:${PATH} to that file and save it using Ctrl+X.
Then restart your terminal.
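Alternatively, to check the result without restarting, something like this should work (assuming the install went into ~/opt as above):

source ~/.bash_profile   # reload the PATH change in the current shell
which node               # should print /home/yourhome/opt/bin/node
node -v                  # should print the installed version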
The GitHub repository for Node.js specified in that blog post is outdated; use the following instead:
git clone https://github.com/nodejs/node.git
P.S. node: command not found usually happens when the program is not installed correctly or its executable isn't in your terminal's PATH variable.

installing a 3rd party module on a puppet agent

We have a puppetmaster and an agent machine I'm configuring from it, via the puppet agent -t command.
On this agent machine (an Ubuntu box) I need the bc (basic calculator) command installed when it's built. Right now that's not the case.
There appears to be a module for it on the forge (https://forge.puppetlabs.com/rfletcher/bc/readme) but I'm fairly new to puppet and am not sure how to set things up so that when the Ubuntu box is spun up this module is installed?
I'm going over the agent docs but am still learning about how agents communicate with puppetmasters. I'm hoping for a nudge on what to do to make sure this command is installed on my agent when all is said and done (stick something in a manifest somewhere most likely?)
So you are asking how to use Forge modules in Puppet. My first suggestion is to read as much of the relevant documentation as possible, all of it on Puppet Forge.
If you need a quick start, here is something you can try:
log in to the puppet master
cd to the puppet module folder
puppet module install rfletcher-bc
(puppet module install already places the module in a folder named bc; a rename like mv rfletcher-bc bc is only needed if you unpack the tarball by hand)
then find the node definition file (normally manifests/site.pp) and add the line below:
include bc
I don't have your environment, so I'm not sure exactly which .pp file will be targeted.
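As a hedged sketch, a node definition in manifests/site.pp on the master could look like this (the hostname is a placeholder):

# manifests/site.pp on the puppet master: apply the bc module
# to the Ubuntu agent; the node name below is hypothetical.
node 'ubuntu-agent.example.com' {
  include bc
}

On the next puppet agent -t run, the agent should then pick up the catalog containing the bc package resource.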

Build environment isolation and file system diffing

Alright, so after trying to chase down the dependencies for various pieces of software for the n-th time, and replicating work that various people do for all the different Linux distributions, I would like to know if there is a better way of bundling various pieces of software into one .rpm or .deb file for easier distribution.
My current setup for doing this is a Frankenstein's monster of various tools, mainly Vagrant and libguestfs (built from source running in Fedora, because none of the distributions actually ship it with virt-diff). Here are the steps I currently follow:
Spin up a base OS, using either a Vagrant box or by creating one from live CDs.
Export the .vmdk and call it base-image.
Spin up an exact replica of the previous image and go to town: use the package manager, or some other means, to download, compile, and install all the pieces that I need. Once again, export the .vmdk and call it non-base-image.
Make both base images available to the Fedora guest OS that has libguestfs.
Use virt-diff to diff the two images and dump that data to a file called diff.
Run several Ruby scripts to massage diff into another format that contains the information I need and none of the stuff I don't, like things in /var.
Run another script to generate a command script for guestfish with a bunch of copy-out commands.
Run the guestfish script.
Run another script to regenerate the symlinks from diff, because guestfish can't do it.
Turn the resulting folder structure into a .deb or .rpm file and ship it.
I would like to know if there is a better way to do this. You'd think there would be but I haven't figured it out.
I would definitely consider something along the lines of:
A)
yum list (select your packages/dependencies whatever)
use yumdownloader on the previous list (or use the packages you have already downloaded)
createrepo
ship it on media with an install script that adds the CD repo to the repo list, etc. (see the sketch below, after option B)
or B)
first two steps as above, then pack the rpms into an archive and build a package that contains all of the above and kicks off the actual install of the rpms (along the lines of rpm -Uvh /tmp/repo/*) as a late script (in the cleanup phase, maybe). I don't know if this can be done while avoiding locks on the rpm database.
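As a rough sketch of option A, assuming yum-utils and createrepo are installed, with placeholder package names and paths:

# Build a self-contained local repo from selected packages:
mkdir -p /media/localrepo && cd /media/localrepo
yumdownloader --resolve mypkg otherpkg   # fetch the RPMs plus their dependencies
createrepo .                             # generate repo metadata in this directory
# On the target machine, add a .repo entry pointing at
# file:///media/localrepo and install with yum as usual.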
I think you have reached the point of complexity - indeed a Frankenstein's monster - where you should stop being afraid of making proper packages with dependencies. We did this at my previous job - we had a set of custom-built rpm packages - and it was very easy and straightforward, including:
pre/post install scripts
uninstall scripts
dependencies
We never had to do anything you just described. And for the customer, installing even a set of packages was very easy!
You can follow a reference manual on how to build RPM packages for more info.
EDIT: If you need a single installation package, then create a master package that contains all the other packages (with dependencies set properly) and installs them in the post-install script (and uninstalls them in the uninstall script).
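A hedged sketch of what the dependency route could look like in such a master package's .spec file; every name and version here is a placeholder, and a real spec needs the remaining sections too:

# Excerpt from a hypothetical "meta" spec: installing mysuite
# drags in all the component packages via Requires.
Name:     mysuite
Version:  1.0
Release:  1
Summary:  Meta package that installs the whole suite
License:  Proprietary
Requires: mylib-core >= 1.0
Requires: mylib-extras
Requires: mytool

%description
Pulls in every component package as a dependency.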
There are mainly 3 steps to making a package with all dependencies (call them A, B & C).
A. Gather required files.
There are many ways to gather the files of the main software and its dependencies. In order to get all the dependencies and an error-free run, you need to use a base OS (i.e. a live system).
1. Using AppDirAssistant
This app is used by www.portablelinuxapps.org to create portable app directories. It scans and watches the files accessed by the app to find the required ones.
2. Using chroot & overlayfs
In this method you don't need to boot into the live CD; instead, chroot into it (see the command sketch after this list).
a. Mount the .iso at /cdrom.
b. Mount the filesystem (filesystem.squashfs) at another place, say /tmp/union/root.
c. Bind-mount /proc at /tmp/union/root/proc.
d. Overlay on it:
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
e. Chroot:
chroot /tmp/union/root
Now you can install packages using apt-get or another method (only from the chrooted terminal). All the changed files are stored at /tmp/union/rw. Take the files from there.
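As a sketch, steps a-e might translate to the following commands; the ISO name and squashfs path are assumptions that vary by distro, and the exact mount ordering may need adjusting on your kernel:

# a. Loop-mount the live ISO
mount -o loop distro.iso /cdrom
# b. Loop-mount the compressed root filesystem from the ISO
mount -o loop /cdrom/casper/filesystem.squashfs /tmp/union/root
# c. Bind-mount /proc into the tree
mount --bind /proc /tmp/union/root/proc
# d. Overlay a writable layer on top; changes land in /tmp/union/rw
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
# e. Enter the union
chroot /tmp/union/root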
3. Using manually collected packages
Use package manager to collect dependencies. For example
apt-get install package --print-uris will print the download URIs for the dependency packages. Using these URIs, download the packages and extract them all (dpkg -x 1.deb ./extracted), as sketched below.
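A minimal sketch of that flow, with "mypackage" as a placeholder and assuming the packages are not already installed (--print-uris only lists what would actually be fetched):

# Collect the URIs, download every .deb, and unpack them side by side:
apt-get install --print-uris -qq mypackage | cut -d"'" -f2 > uris.txt
wget -i uris.txt
for f in *.deb; do dpkg -x "$f" ./extracted; done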
B. Clean garbage files
After gathering the files, remove the unwanted ones.
C. Pack files
1. Using AppImageAssistant
If you gathered the files manually, then you need to copy the appname.desktop file from ./usr/share/applications to the root of the directory tree. Also copy the file named AppRun from another app, or extract it from AppDirAssistant.
2. Make a .deb or .rpm using the gathered files.
Is the problem primarily that of ensuring that your customers have installed all the standard upstream distro packages necessary for your package to run?
If that's the case, then I believe the most straightforward solution would be to leverage the yum and apt infrastructure to have those tools track down and install the necessary prerequisite packages.
If you supply a native yum/apt repository with complete prerequisite specs (the hard work you've apparently already completed), then the standard system install tool takes care of the rest. See the links below for more on creating a personal repository for yum/apt.
For off-line customers, you can supply media with your software, and a mirror - or mirror subset - of the upstream distro, and instructions for adding them to yum config/apt config.
Yum
Creating a Yum Repository in the Fedora Deployment Guide
Apt
How To Setup A Debian Repository on the Debian Wiki
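For the off-line media case, the instructions you ship could boil down to dropping in a repo definition like this (yum shown; all names and paths are placeholders):

# Hypothetical /etc/yum.repos.d/vendor-media.repo pointing at mounted media:
[vendor-media]
name=Vendor packages (local media)
baseurl=file:///media/cdrom/repo
enabled=1
gpgcheck=0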
So your customers aren't ever going to install any other software that might specify a different version of those dependencies that you are walking all over, right?
Why not just create your own distro if you're going to go that far?
Or you can just give them a bunch of packages and a single script that does rpm -i dep1 dep2 yourpackage

Automating an install of Apache Ant

I've manually installed Ant on many servers, simply by unpacking the Ant files in a location and setting up each user's ~/.bash_profile to configure their PATH to see it.
I need to automate the setup now on servers which do not have internet connectivity.
We are using Nolio for deployment, but I don't care if the automation is done via nolio. If it can be scripted, I can easily just make Nolio call the script.
I don't think editing the users' .bash_profiles is a good way to do the automation.
So, assuming I get Ant on to the servers and unzip it, what's the best way to install it so that all users will have access to it?
You can try using pssh (parallel ssh). It's pretty awesome. Create a file with all your remote hosts (one per line), then run:
pssh -h hosts.txt "command1 && command2 && command3"
You can use pscp to deliver scripts, then use pssh to execute them. Works very well. Alternatively, you could become a puppet master and work everything off Puppet. You can do some cool stuff with it, like automating builds based on hostname convention. LAMP build? Name the host web01.blarg.awesome or whatever, set up Puppet to recognize it based on a regex, then deliver the appropriate packages.
GL.
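For the Ant case specifically, a hedged sketch of the pscp/pssh route; the hosts file, tarball name, version, and install path are all assumptions. Dropping a snippet into /etc/profile.d makes the PATH change apply to all users without touching individual .bash_profiles:

# Push the tarball to every host, unpack it system-wide, and
# publish the PATH for all login shells (assumes root access).
pscp -h hosts.txt apache-ant-1.9.4-bin.tar.gz /tmp/
pssh -h hosts.txt -l root "tar -xzf /tmp/apache-ant-1.9.4-bin.tar.gz -C /opt && \
  echo 'export PATH=/opt/apache-ant-1.9.4/bin:\$PATH' > /etc/profile.d/ant.sh"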
