Puppet not recognising my module - FreeBSD

I am trying to create a custom provider for package, but for some reason I keep on getting:
err: Could not run Puppet configuration client: Parameter provider
failed: Invalid package provider 'piprs' at
/usr/local/src/ops/services/puppet/modules/test/manifests/init.pp:5
I have added pluginsync=true to puppet.conf on both the client and the server. I have created the following .rb file at module/test/lib/puppet/provider/package/piprs.rb. I am basically trying to create a custom provider for the package resource type:
#require 'puppet/provider/package'

Puppet::Type.type(:package).provide(:piprs,
  :parent => ::Puppet::Provider::Package) do

  commands : pip => "/usr/local/bin/pip"

  desc "Python packages via `pip`."

  def create
    pip "freeze"
  end

  def destroy
  end

  def exists?
  end
end
In puppet.conf, there is the following source attribute:
pluginsource = puppet://puppet/plugins
I am not sure what it is. If you need any more details, please do post a comment.

First things first - you do realize there is already a Python pip provider in core?
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/package/pip.rb
If that isn't what you want - then let's move on...
For starters - try your module without a Puppet master - this is going to be better for development anyway. You need to make sure Ruby can find the library path:
export RUBYLIB=<path_to_module>/lib
Then, try writing a small test in a .pp file:
package { "mypackage": provider => "piprs" }
And run it locally:
puppet apply mytest.pp
This will rule out a code bug in your provider versus a plugin sync issue.
I notice there is a space between the colon and the command - that isn't your problem, is it?
commands : pip => "/usr/local/bin/pip"
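For comparison, the usual provider DSL form has no space between the colon and the name:
commands :pip => "/usr/local/bin/pip"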
If you can get this working without a puppetmaster, your problem is sync-related.
There are a couple of things that can go wrong - make sure the file is synced properly on the client:
ls /var/lib/puppet/lib/puppet/provider/package
You should see the piprs.rb file there. If it is, you may need to make sure your libdir is set correctly:
puppet --configprint libdir
This should point to /var/lib/puppet/lib in most cases.
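If it doesn't, a minimal agent-side configuration for plugin sync would look something like this (these are the common Linux defaults; the config and lib paths may differ on FreeBSD):

# puppet.conf on the agent
[main]
    pluginsync = true
    libdir     = /var/lib/puppet/lib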

Related

Puppet noop When Executable does not exist yet

The following is a simplified manifest I am running:
package { 'ruby2.4':
  ensure => installed
}

exec { "gem2.4_install_bundler":
  command => "/usr/bin/gem2.4 install bundler",
  require => Package['ruby2.4']
}
Puppet apply runs this manifest correctly, i.e. it:
installs the ruby2.4 package (which includes gem2.4)
installs bundler using gem2.4
However, puppet apply --noop FAILS because Puppet cannot find the executable '/usr/bin/gem2.4', since ruby2.4 is not installed under --noop.
My question is whether there is a standard way to test a scenario like this with puppet apply --noop, to validate that my Puppet manifest is executing correctly.
It occurs to me that I may have to parse the output and validate the order of the executions. If this is the case, is there a standard way/tool for this?
A last resort is a very basic check that the Puppet run at least succeeds, which can be determined with the --detailed-exitcodes option (an exit code other than 1).
Thank you in advance
rspec-puppet is the standard tool for that level of verification. It can build a catalog from the manifest (e.g. for a class, defined type, or host) and then you can write tests to verify the contents.
In your case you could verify that the package resource exists, that the exec resource exists, and verify the ordering between them. This would be just as effective as running the agent with --noop mode and parsing the output - but easier and cheaper to run.
rspec-puppet works best with modules, so assuming you follow the setup for your module from the website (adding rspec-puppet to your Gemfile, running rspec-puppet-init), and let's say this is in a class called ruby24, a simple spec in spec/classes/ruby24_spec.rb would be:
require 'spec_helper'

describe 'ruby24' do
  it { is_expected.to compile.with_all_deps }
  it { is_expected.to contain_package('ruby2.4').with_ensure('installed') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').with_command('/usr/bin/gem2.4 install bundler') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').that_requires('Package[ruby2.4]') }
end
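For completeness, the Gemfile side of that setup can be as small as the sketch below; puppetlabs_spec_helper is not required by rspec-puppet, but it is a common companion that provides the rake spec task:

# Gemfile (minimal sketch; version pins omitted)
source 'https://rubygems.org'

gem 'puppet'
gem 'rspec-puppet'
gem 'puppetlabs_spec_helper'

After bundle install and rspec-puppet-init, something like bundle exec rspec spec/classes/ruby24_spec.rb runs the spec without ever touching a real node.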

puppet - How to debug and test to see if your module is working properly

I wrote a simple module to install a package (BioPerl) on a Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is that it doesn't work, and it gives me no feedback to let me know why. There are three simple steps in the module. I checked, and it didn't do any of them. Here are the first two:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget']
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download']
}
Pretty simple. But I have no idea where the problem is, because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me anything about whether the steps did their job properly or not. I know that it didn't download the archive to that directory (/usr/local/lib), and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve the module (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  path   => "${archive_path}",
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  ensure => "present"
}

exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => "${bioperl_target_dir}",
  require => File['bioperl-download']
}
A problem: It gives me an error telling me that the source cannot be http://. I see in the docs that they do indeed allow http:// files as the source for the file resource. Maybe I'm using an older version of puppet?
I want to try out the puppet-archive module, but I'm not sure how I can set it as a required dependency. By that, I mean how I can make sure it's installed first. Do I need to get my module to download the module from GitHub and save it to the modules directory? Or is there a way to let Puppet install it automatically? I added it as a dependency to the metadata.json file, but that doesn't install it. I know I can just get my module to download the package, but I was wondering what the best practice for this is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
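To actually run these, Serverspec's own scaffolding applies: serverspec-init generates a spec_helper and Rakefile, and rake spec then executes the checks against the provisioned machine, either locally or over SSH depending on the backend you choose.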
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and an HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
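For instance, a rough sketch with puppet/archive, reusing the variables from your manifest (double-check the parameter names against the module's README):

include ::archive  # helper class from the module; it manages the module's own dependencies

archive { "${archive_path}":
  ensure       => present,
  source       => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  extract      => true,
  extract_path => "${bioperl_target_dir}",
  cleanup      => true,  # delete the tarball once it has been extracted
}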
If you have questions on how to use these tools to provide unit and acceptance testing, or on how to refactor your module, then don't hesitate to write follow-up questions on Stack Overflow and we can help you.

Puppet error when using classes

I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class, New Relic for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you indeed did install the module such that such a file exists in your modulepath, and make sure that it is readable by the puppetmaster process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
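A quick way to confirm both points is to ask Puppet on the master which directories it searches and whether it can see the module at all:

puppet config print modulepath
puppet module list

If newrelic does not appear in that listing, the module is installed somewhere outside the modulepath.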
Find out where you are actually installing the new Puppet Forge modules, perhaps using a Unix utility like locate.
Then look in /etc/puppet/puppet.conf at the basemodulepath and check that the place where the module is installed is in that path.
Here is my basemodulepath:
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules.

Masterless puppet with hiera

I'm trying to figure out a masterless environment with Puppet. I'm using this link to install the newest version of Puppet on Ubuntu.
I'm using this repository https://github.com/szymonrychu/puppet-masterless and running the script: modules/os/files/puppet.sh.
It downloads the current Puppet repository to the /opt/puppet directory and then runs the code specified in it. (It sets a cron job pointing to the script, so it will run every hour.)
After the first run, the Hiera environment is prepared (hiera.yaml) and deployed. From that point the code should start pulling data from Hiera, but that's not happening.
Most probably there is an issue in modules/os/files/hiera.yaml or in manifests/site.pp, but after several days of struggling I can't get it to work.
Ok! I know what was broken :)
First, the missing part in common.yaml:
(...)
classes:
  - os

os::version: 'ugabuga'
(...)
Second, the mistake in modules/os/manifests/init.pp; it should be:
class os (
  $version = 'v0.0.0'
) { (...) }
instead of:
class os {
  $version = 'v0.0.0'
  (...)
}
And finally, the classes should be included in manifests/site.pp like this:
node default {
  hiera_include('classes')
  include os
}
And that's it! But it wasn't trivial - at least for me. The documentation isn't very specific about this case, and there are no complete examples for it.
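For anyone landing here, a minimal Hiera 3 style hiera.yaml that makes the common.yaml above resolvable could look like the following; the datadir is an assumption and has to match wherever your checkout keeps its data files:

---
:backends:
  - yaml
:yaml:
  :datadir: /opt/puppet/hieradata
:hierarchy:
  - common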

What's the best approach to create a repo of installers to be used for installing and upgrading on Puppet-managed nodes?

Let's take an example: I have a jboss-4.2.3 installer as a .tar file. In general, to install JBoss, I will:
1. untar jboss-4.2.3 into a predefined folder (/opt/server/jbossas/) on multiple servers
2. untar OpenJDK into a predefined path (/opt/software/java) and set the path in bash.profile
3. create a server profile in the place where JBoss is installed
4. start the server.
Let's say that I have to do this on 16 nodes (servers).
Now, I should store the JBoss and OpenJDK installers at a central location, and they should be transferred to the nodes before the first step can begin.
I wrote the manifest to perform requirements 1 to 4, but I am not sure how to automate the transfer of the installers from a central repo. I am not worried about the type of central repo; it can be FTP or Puppet or anything else.
Please help me. I was looking at filebucket. Will this help, or should I write a manifest to get the files from an FTP server?
How do I create a file repo which can be referred to in Puppet manifests?
I am not sure about your exact problem, but you can have a look at this and get an idea...
In most usage, the files are transferred from the puppetmaster to the clients. If you have your policies defined in a module to untar and install the packages (e.g. a module named jboss), you can keep the tarball in this kind of structure on the puppet master and run puppet agent from the puppet client:
/etc/puppet/module/jboss/files/jboss_pkg.tar
Your policy for your clients should then say something like the following, e.g. in:
/etc/puppet/modules/jboss/manifests/init.pp
class jboss {
  file { '/tmp/installation/jboss_pkg.tar':
    source => "puppet:///modules/jboss/jboss_pkg.tar",
  }

  # You can then write a small script that will execute the whole installation
  # process. You can use 'exec' in puppet to do that.
  exec { 'install_jboss':
    command => "/path/to/install_jboss.sh",
    require => File["/tmp/installation/jboss_pkg.tar"],
    onlyif  => "/check/that/it/is/not/installed/already",
  }

  ## and write other execs to start the server or enable services etc...
}
# In site.pp
node 'client.mytest.org' {
  include jboss
}
The general solution for providing installers to Puppet is to set up your own package repository (rather than just a file repo).
http://www.techrepublic.com/blog/opensource/create-your-own-yum-repository/609
Then, you can use Puppet's built-in package resource for easy install/upgrade/uninstall:
http://docs.puppetlabs.com/references/latest/type.html#package
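Once such a repository exists, the per-node manifest stays declarative. A rough sketch for a yum-based system (the repo URL and package name are placeholders):

yumrepo { 'internal-apps':
  descr    => 'Internal application packages',
  baseurl  => 'http://repo.example.com/el6/$basearch/',
  enabled  => '1',
  gpgcheck => '0',
}

package { 'jboss':
  ensure  => installed,
  require => Yumrepo['internal-apps'],
}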
The following projects seem to provide an rpm/deb version of JBoss that you can publish to your repository:
https://github.com/floreal/jboss-deb-package
http://code.google.com/p/jboss-rpm/
