I am new to Puppet and have been playing around learning the basics. Most of the examples (except the very basic ones) on the Puppet site do not work for me: either some dependency is missing or a package is not found. I do not see any logs explaining what went wrong (even when I run with the --test or --verbose option).
Can anyone clarify:
1. What is the simplest process (set of simple steps) for installing an RPM package on a single Linux box?
2. In general, how does one go about using the modules on forge.puppetlabs.com? Are the providers for these packages installed automatically, or do they have to be installed manually first?
To install a package named pacman from the command line:
puppet resource package pacman ensure=present
The corresponding Puppet code will look like this:
package { 'pacman':
  ensure => '4.0.3-5',
}
Explore more options for the package resource here.
Regarding the question of installing Puppet modules, have a look here. The official docs are your friend :)
Personally, I just copy the module directory manually into the git repo that I use to maintain my Puppet code.
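If you would rather not copy modules around by hand, the usual Forge workflow is just the puppet module command. A minimal sketch (the module name is only an example; any Forge module installs the same way, and declared module dependencies are pulled in automatically):
puppet module install puppetlabs-stdlib
puppet apply -e "package { 'wget': ensure => installed }"
The second line is an ad-hoc one-off apply for testing; for anything larger you would declare the resource in a manifest or module and let the agent apply it. Providers that ship inside a module are synced to agents automatically, but any system packages the module itself needs still have to be managed separately.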
I write Haskell of poor quality each December. This year my environment is broken for some reason.
When I try to run my old scripts with
runhaskell .\myCode.hs
I get
Could not find module `Data.List.Split'
Use -v to see a list of the files searched for.
This question has a comment in one of the answers:
Maybe he doesn't even use a .cabal or .yaml file and only wants to write a stand-alone Haskell script for runhaskell.
That is exactly what I'm after, but the comment thread does not provide an answer. This worked in 2016-2018 and I do not remember having this issue, and I've never had the setup that is written about here or here ("hidden modules").
Anyone have an idea how to fix this?
Edit: I tried the guide here which says to download the package, extract it and run:
runhaskell Setup configure
runhaskell Setup build
runhaskell Setup install
But I just get an error which says:
$ runhaskell Setup configure
Configuring split-0.2.3.3...
Setup: Encountered missing dependencies:
base <4.12
And I do have a Haskell\8.6.3\lib\base-4.12.0.0 in the installation.
Data.List.Split is not part of "base", the core libraries that are distributed with Haskell. It is part of an external package named "split". If you want to use it, you must get that package somehow. This is typically done with cabal or stack. Perhaps there is a way to do this that runhaskell understands; I don't know anything about runhaskell.
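If the goal really is a stand-alone script with no .cabal or .yaml file, one option (assuming stack is installed; the resolver named here is only an example) is stack's script interpreter, which downloads the listed packages on first run:
#!/usr/bin/env stack
-- stack script --resolver lts-18.28 --package split
import Data.List.Split (splitOn)

main :: IO ()
main = print (splitOn "," "1,2,3")
Running stack myCode.hs then behaves much like runhaskell, except that split is available.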
OK, so after following the instructions to do things manually, I double-checked that I had the latest split package. The web page says the package requires base (<4.14), but it still complains "Setup: Encountered missing dependencies: base <4.12" when I try to run runhaskell Setup configure.
But then, after I had tried and failed to install an older 'base' (it seemed like a long shot anyway), I simply followed the 'Installing packages using cabal' part of the guide.
cabal update
cabal install split
I ran those two commands and ignored the warnings that this is legacy v1 cabal usage. It worked: split was installed so that the runhaskell command could find it.
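For what it is worth, on newer cabal (3.x) the non-legacy equivalent appears to be the --lib flag, which writes the package into GHC's default environment file so that runhaskell can see it; I have used it less, so treat this as a hint rather than gospel:
cabal update
cabal install --lib split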
For testing, I have installed two instances of Ubuntu server 18.04 on VirtualBox. I then installed one with Puppet-server 6.1.0 and one with Puppet-agent 6.1.0, as per the documentation at Puppetlabs for version 6.1. Foreman is not installed.
After registering my agent at the puppetserver and signing the certificate, starting a puppet-run (sudo /opt/puppetlabs/bin/puppet agent --test) fails with the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node puppetagent.fritz.box: Exception while executing '/etc/puppetlabs/puppet/node.rb': Cannot run program "/etc/puppetlabs/puppet/node.rb" (in directory "."): error=2, No such file or directory
I was dumbstruck to find that the script /etc/puppetlabs/puppet/node.rb was indeed missing and was also not included in the packages of puppetserver, puppet-agent or facter (sudo dpkg-query -L ...).
Googling for it, I only found a script of the same name that belonged to Foreman.
The file also does not seem to be present in the puppetserver source code on GitHub.
Is anyone able to shed some light on this?
Your server configuration seems to be set up to specify use of an external node classifier. This is optional: Puppet does not require an ENC and does not provide one by default. That's part of what makes them "external". If you obtained the result you describe straight out of the box then it probably reflects a packaging flaw that you should report.
In the meantime, you should be able to update the configuration to disable use of an ENC by changing the value of the node_terminus setting to plain. Alternatively, you should be able to just delete both node_terminus and external_nodes from your configuration, because the default for the former is plain.
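As a rough sketch, the relevant part of puppet.conf on the server would then look something like this (the section name and existing entries depend on how your configuration was generated, so adjust as needed):
# /etc/puppetlabs/puppet/puppet.conf on the puppetserver host
[master]
    node_terminus = plain
    # external_nodes = /etc/puppetlabs/puppet/node.rb   (delete or comment out this line)
Restart puppetserver afterwards so the setting is picked up.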
Tagging on to John's answer, your server is probably configured to talk to Foreman. If you didn't write the configuration yourself or copy it from somewhere, and you're sure you don't have any Foreman packages installed, then it's definitely a packaging error that you should report.
That said, the Puppet repos are almost always the right answer rather than distro packages.
This post summarizes my painful but finally successful (largely by chance) way of building my own conda package for the netgen meshing tool with its Python interface. I found the recipe for the netgen build thanks to tpaviot.
After cloning the repository into a 'netgen-conda' folder I ran:
conda build netgen-conda/netgen-6.2-dev
This reports "Unsatisfiable dependencies": 'oce', 'gcc-5', 'binutils'.
So I tried to install these packages myself. Unfortunately the documentation does not emphasize the important fact that 'conda build' uses its own temporary environment, so it does not matter what you already have installed (see). Nevertheless, even installing 'gcc-5' together with 'binutils' manually turned out to be nearly impossible.
Hint for other newbies: a lot of my problems disappeared after I learned the details of how channels work.
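To make that concrete, this is the kind of thing I mean (the channel names are only the ones appearing later in this post, not a recommendation):
conda config --show channels
conda config --append channels conda-forge
conda install -c salford_systems -c conda-forge some-package
The first command lists the configured channels in priority order, --append adds a channel with the lowest priority, and -c passes channels for a single command, with the first one searched first.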
My first try was installing 'gcc-5' with 'binutils' from the 'salford_systems' channel suggested by anaconda:
conda install -c salford_systems binutils gcc-5
But it results in:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'salford_systems::gcc-5-5.3.0-0'.
LinkError: post-link script failed for package salford_systems::gcc-5-5.3.0-0
running your command again with -v will provide additional information
location of failed script: /home/jb/miniconda3/envs/test/bin/.gcc-5-post-link.sh
Using verbose output ('-v') provides no more info. I was also confused by the fact that the script does not exist at the given path (probably deleted automatically).
With my current experience I admit that the reason for the problem can be dug out of the '-vv' output (reported issue). After some experimenting I found that the only way to install both is to first install 'gcc-5' into a clean environment and then install 'binutils'. Since 'conda build' installs everything from scratch and there is no way to specify the order of installed packages, I was stuck.
Another issue that puzzled me is the 'conda build' long-prefix hack. For an unknown reason it uses an extremely long prefix for an auxiliary folder, which results in various kinds of issues. I have faced three such problems:
As is usual today, I have an encrypted HOME, causing a known issue.
Using the workaround '--croot /tmp' prevents creating hard links from '/tmp' into 'HOME/miniconda3', since they are on different filesystems. There is a fallback to copying instead; for a while I even thought the fallback did not work, but it did, it just made the build run longer.
Trying to install 'gcc' (4.x) from the 'default' channel complained about a too-short prefix. So the ultimate workaround was to set the length of the prefix manually with '--prefix-length 70'.
Finally, I found that the dependency on 'binutils' is not necessary, and I successfully built the package with:
conda build --prefix-length 70 -c salford_systems -c conda-forge -c dlr-sc netgen-conda/netgen-6.2-dev
Summary (of open questions):
Conda channels introduce a new kind of dependency hell, one already forgotten when using 'apt-get'. Is there a way to figure out what the canonical channel for a package is?
Has anyone succeeded in building with the combination of 'gcc-5' and 'binutils'?
There is still a lack of documentation about conda's internal mechanisms, and the error messages do not provide a clue to the problem.
conda-build uses a problematic prefix hack and lacks the ability to control the order of installed packages. Does anybody know the reason for this hack?
I'm using vagrant shell provisioning here.
I've installed Node.js on my VM along with many other packages.
I want to avoid re-running parts of my provisioning script when I don't need them.
For example: I already successfully installed Node.js and nginx via my script. Now, when I want to add packages like mysql or redis, I want to add them to the script and run it again to test that it works properly, but I DO NOT want to re-install Node.js or nginx again...
I need a simple conditional statement that would detect if a package is already installed, and install it only if it is not already installed.
Is there a generic check or will it be different from package to package?
Thanks
Ajar
dpkg -s <pkg-name> 2>/dev/null >/dev/null || sudo apt-get -y install <pkg-name>
This should be what you're looking for.
What's going on here:
This uses the shell's short-circuit operators, of the general form <condition> && <run if true> || <run if false>.
The first part of the expression uses dpkg -s to check whether the package is installed, suppressing all output. The part after || only runs when that check fails (i.e. the package is missing); the && branch for the "true" case is simply omitted here.
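In a Vagrant shell provisioner this pattern can simply be wrapped in a loop, roughly like this (the package names are just the examples from the question):
#!/usr/bin/env bash
# provision.sh - install each package only if dpkg does not already know it
for pkg in nodejs nginx mysql-server redis-server; do
    dpkg -s "$pkg" >/dev/null 2>&1 || sudo apt-get -y install "$pkg"
done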
This depends on the Linux distribution you are using. Usually, a package manager comes with some mechanism to skip already installed packages.
For Ubuntu, this is built in: running apt-get install nodejs with Node.js already installed will not reinstall it; it will skip the target (unless there is a new version available).
For Arch Linux, you can run pacman -Sy node --needed to skip already installed packages.
A platform-independent mechanism would be to check if the executable (or any other known file for that package) exists. In Bash, you can do:
which node > /dev/null && echo "Yup, this is installed"
(the > /dev/null part suppresses which's output - it prints the path where the found executable resides; we do not care about that, we only want to know whether it is installed)
If you want to avoid writing custom Bash scripts for such basic checks I can recommend that you configure your boxes with tools dedicated for exactly what you are trying to achieve. The usual suspects here are:
Ansible
Puppet
Chef
CFEngine
All of these are supported by Vagrant so integrating them should not be a problem. You can find detailed guides on integrating these into your existing Vagrant recipe here.
PS. For a simple example you can check out my Ansible provisioning recipe for a Banana Pi machine running Arch Linux (note: it does not really follow best practices, but it might be a good starting point). There are many examples available online, check them out, too.
I'm trying to do the initial work to get our dev shop to start using vagrant + puppet during development. At this stage in my puppet manifest development, I need to install several RPMs that are available via an internal http server (not a repo) with very specific flags ('--nodeps').
So, here's an example of what I need to install:
http://1.2.3.4/bar/package1.rpm
http://1.2.3.4/bar/package2.rpm
http://1.2.3.4/bar/package3.rpm
I would normally install them in this way:
rpm --install --nodeps ${rpm_uri}
I would like to be able to do something like this
$custom_rpms = [
  'http://1.2.3.4/bar/package1.rpm',
  'http://1.2.3.4/bar/package2.rpm',
  'http://1.2.3.4/bar/package3.rpm',
]
# edit: just realized I was instantiating the parameterized
# class wrong. :)
class { 'custom_package': package_file => $custom_rpms }
With this module
# modules/company_packages/manifests/init.pp
define company_package($package_file) {
  exec { "/bin/rpm --install --nodeps ${package_file}": }
}
But, I'm not sure if that's right. Can some of you puppet masters (no pun intended) school me on how this should be done?
You may have already worked around this by now, but if not:
Using a repository is the preferred method, as it will auto-resolve all the dependencies, but if that's not available you can try the following (I'm using EPEL as an example RPM).
package {"epel-release":
provider=>rpm,
ensure=>installed,
install_options => ['--nodeps'],
source=>"http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm",
}
It used to be that 'install_options' was only supported on Windows; it appears that it is now supported on Linux as well.
If a particular install order would be helpful, add require => Package['package3.rpm'] to a resource to sequence the installs.
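If you want to drive this from your $custom_rpms array, a rough sketch on Puppet 4+ could look like the following; the title derived from the URL is only illustrative, and with the rpm provider the title should ideally be the real package name so Puppet can detect that it is already installed:
$custom_rpms.each |String $url| {
  package { regsubst($url, '^.*/', ''):
    ensure          => installed,
    provider        => rpm,
    install_options => ['--nodeps'],
    source          => $url,
  }
}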
Answered by Randm over irc.freenode.net#puppet
Create or use an existing repo and install them with yum so that it resolves the dependencies for you.