I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class, New Relic for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you did indeed install the module such that a file exists at that path somewhere in your module path, and make sure that it is readable by the puppetmaster process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
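For example, on the master you could check both points like this (a sketch; the /etc/puppet/modules path and the puppet user are assumptions matching a default open source install):
# is the class file where the autoloader will look for it?
ls -l /etc/puppet/modules/newrelic/manifests/server/linux.pp
# can the master's user actually read it, parent directories included?
sudo -u puppet cat /etc/puppet/modules/newrelic/manifests/server/linux.pp >/dev/null && echo readable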
Find out where the Puppet Forge modules are actually being installed, perhaps using a Unix utility like "locate".
Then look at the "basemodulepath" setting in /etc/puppet/puppet.conf and check that the location where the modules are installed is on that path.
Here is my basemodulepath:
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules.
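A quick way to cross-check both sides on the master (a sketch assuming Puppet 3.5 or later, which is when basemodulepath appeared):
# where does Puppet think the module is installed?
puppet module list
# what module path is actually in effect?
puppet config print basemodulepath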
I'm hoping to use Puppet to manage my rc files (i.e. sharing configuration files between work and home). I keep my rc files in a Subversion repository. On some machines I have sudo privileges, on some I don't. And none of the machines are on the same network.
I have a simple puppet file:
class bashResources ( $home, $svn ) {
  file { "$home/.bash":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d/bashrc":
    ensure => present,
    target => "$home/$svn/rc/bashrc",
  }
}
node 'ubuntuwgu290' {
  class { 'bashResources':
    home => '/home/dshaw',
    svn  => 'mysvn',
  }
}
I have a simple config file that I'm using to squelch some errors:
[main]
report=false
When I run puppet, I get an annoying error about not being able to execute chown:
dshaw@ubuntuwgu290:~/mysvn/rc$ puppet apply rc.pp --config ./puppet.conf
Notice: Compiled catalog for ubuntuwgu290.maplesoft.com in environment production in 0.12 seconds
Error: Failed to apply catalog: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/state.yaml20170316-894-rzkggd
Error: Could not save last run local report: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/last_run_summary.yaml20170316-894-l9embs
I have attempted to squelch the error by adding reports=none to my config file, but it has not been effective.
How can I squelch these errors? Alternatively, is there a more lightweight tool for managing rc files?
Thanks,
Derek
The error is related to Puppet trying to manage its own metadata in /home/dshaw/.puppet, not any of the files enrolled in Puppet's catalog for management. This is not normally a problem, even when you run Puppet as an ordinary user. In fact, supporting this sort of thing is one of the reasons why per-user Puppet metadata exists.
The files that Puppet is trying to chown do not already belong to you (else Puppet would not be trying to chown them), but they should belong to you, where "you" means the puppet process's (e)UID and (e)GID. You might be able to solve the problem by just removing Puppet's state directory and letting it rebuild it on the next run. Alternatively, you might be able to perform or arrange for a manual chown such as the one Puppet is trying to perform.
On the other hand, it's unclear how this situation arose in the first place, and some of the mechanisms I can imagine would render those suggestions ineffective.
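For what it's worth, a possible cleanup looks like this (a sketch assuming the metadata really is under ~/.puppet as the error output shows, and that you don't mind losing that cached state):
# let Puppet rebuild its per-user state directory on the next run...
rm -rf ~/.puppet/var/state
# ...or take ownership of the existing metadata, e.g. if an earlier sudo run created it as root
sudo chown -R "$USER": ~/.puppet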
I have set up Puppet master/agent in Oracle VirtualBox using Vagrant and installed netdev_stdlib on both master and agent according to the instructions in the README.
I have set up the module path to /etc/puppet/modules/netdev_stdlib, where the standard library stdlib also exists.
The master node is puppet.example.com and agent is node01.example.com.
My manifest file is as follows:
node default {
  file { "/tmp/example_ip": ensure => present }
  include stdlib          # No error on this line
  # include netdev_stdlib # Uncommenting this line causes the error
  netdev_device { $hostname: }
}
However, when I run puppet agent -t on the client I got
Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid
resource type netdev_device at /etc/puppet/manifests/site.pp:17 on node
node01.example.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I tried to use include netdev_stdlib in the manifest file site.pp, but Puppet couldn't find the class netdev_stdlib. However, include stdlib is fine.
The netdev_stdlib module doesn't provide a class that can simply be included. It's more of a programming framework for writing new Puppet types to manage network devices in Ruby.
You should either use the module to write a new network device module of your own for interfacing to some new type of device, per the README, or remove it and don't include the class.
If you're trying to manage a network device, you should look for an existing module that supports that type of device - it may then use this module as a dependency instead. If you're not managing a network device, I don't think you should be trying to use the module.
Note that most module READMEs will indicate class names that can be included; not all modules will contain Puppet classes.
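For illustration, a node block that uses a type shipped by a module declares it directly, with no include at all; a sketch based on the manifest above:
node default {
  file { "/tmp/example_ip": ensure => present }
  # no "include netdev_stdlib" -- types in a module's lib/ directory are
  # loaded automatically (and copied to agents by pluginsync)
  netdev_device { $hostname: }
}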
Missing } at the end of the file resource:
Change from:
file { "/tmp/example_ip": ensure => present,
to:
file { "/tmp/example_ip": ensure => present, }
I am installing Nutch 2.2.1 on my CentOS virtual machine and getting an error injecting the seed URLs (directory name). I used this command:
/usr/share/apache-nutch-2.1/src/bin/nutch inject root/apache-nutch-2.1/src/testresources/testcrawl urls
And I got an error:
Error: Could not find or load main class org.apache.nutch.crawl.InjectorJob
Similarly, for the command
/usr/share/apache-nutch-2.1/src/bin/nutch readdb
gives me an error:
Error: Could not find or load main class org.apache.nutch.crawl.WebTableReader
What should I do to fix these errors?
I am following the tutorial from: http://wiki.apache.org/nutch/Nutch2Tutorial and followed the same steps as suggested.
My query also revolves around setting the path for Ant. Every time I open a new session I have to set the ANT_HOME and PATH environment variables manually, and then everything works fine. The same is the case with setting JAVA_HOME.
You should go to the $NUTCH_HOME/runtime/local/ directory to run the commands.
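Something like this (a sketch; $NUTCH_HOME and the seed directory "urls" are placeholders for your own layout, and the inject arguments should stay as your tutorial gives them):
# build the local runtime first if it doesn't exist yet, then run nutch from there
cd $NUTCH_HOME
ant runtime
cd runtime/local
bin/nutch inject urls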
Let's take an example: I have the JBoss 4.2.3 installer as a .tar file. In general, to install JBoss, I will:
1. Untar jboss-4.2.3 into a predefined folder (/opt/server/jbossas/) on multiple servers
2. Untar OpenJDK into a predefined path (/opt/software/java) and set the path in the bash profile
3. Create a server profile in the place where JBoss is installed
4. Start the server.
Let's say that I have to do this on 16 nodes (servers).
Now, I should store the JBoss and OpenJDK installers at a central location, and they should be transferred to the nodes before the first step can begin.
I wrote the manifest to perform requirements 1 to 4, but I am not sure how I can automate the transfer of the installers from a central repo. I am not worried about the type of central repo. It can be FTP or Puppet or anything else.
Please help me. I was going through filebucket. Will this help, or should I write a manifest to get this file from an FTP server?
How do I create a file repo which can be referred to in Puppet manifests?
I am not sure about your exact problem, but you can have a look at this and get an idea...
In most usage, the files are transferred from the puppetmaster to the clients. If you have your policies defined in a module to untar and install the packages (e.g. a module named jboss), you can keep the tarball in this kind of structure on the puppet master and run puppet agent from the puppet client:
/etc/puppet/modules/jboss/files/jboss_pkg.tar
Your policy for your clients should then say something like the following in, e.g.,
/etc/puppet/modules/jboss/manifests/init.pp
class jboss {
  file { '/tmp/installation/jboss_pkg.tar':
    source => "puppet:///modules/jboss/jboss_pkg.tar",
  }
  # You can then write a small script that executes the whole installation
  # process. You can use 'exec' in Puppet to do that.
  exec { 'install_jboss':
    command => "/path/to/install_jboss.sh",
    require => File["/tmp/installation/jboss_pkg.tar"],
    onlyif  => "/check/that/it/is/not/installed/already",
  }
  ## and write other execs to start the server or enable services etc...
}
# In site.pp
node 'client.mytest.org' {
  include jboss
}
The general solution to provide installers to Puppet is to set up your own package repository (rather than just a file repo).
http://www.techrepublic.com/blog/opensource/create-your-own-yum-repository/609
Then, you can use Puppet's built-in package resource for easy install/upgrade/uninstall:
http://docs.puppetlabs.com/references/latest/type.html#package
The following projects seem to provide RPM/deb versions of JBoss that you can publish to your repository:
https://github.com/floreal/jboss-deb-package
http://code.google.com/p/jboss-rpm/
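Once the packages are in a repository your nodes can reach, installing JBoss collapses to an ordinary package resource; a minimal sketch (the package name jboss is an assumption, substitute whatever your repository actually publishes):
# hypothetical package name; use the one your repo actually provides
package { 'jboss':
  ensure => installed,
}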
I am trying to create a custom provider for package, but for some reason I keep on getting:
err: Could not run Puppet configuration client: Parameter provider
failed: Invalid package provider 'piprs' at
/usr/local/src/ops/services/puppet/modules/test/manifests/init.pp:5
I have added pluginsync=true to puppet.conf on both the client and the server. I have created the following .rb file at module/test/lib/puppet/provider/package/piprs.rb. I am basically trying to create a custom provider for the package resource type.
#require 'puppet/provider/package'
Puppet::Type.type(:package).provide(:piprs,
  :parent => ::Puppet::Provider::Package) do
  commands : pip => "/usr/local/bin/pip"
  desc "Python packages via `pip`."

  def create
    pip "freeze"
  end

  def destroy
  end

  def exists?
  end
end
In the puppet.conf, there is the following source attribute
pluginsource = puppet://puppet/plugins
I am not sure what it is. If you need any more details, please do post a comment.
First things first - you do realize there is already a Python pip provider in core?
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/package/pip.rb
If that isn't what you want - then let's move on ...
For starters - try your module without a Puppet master - this is going to be better for development anyway. You need to make sure Ruby can find the library path:
export RUBYLIB=<path_to_module>/lib
Then, try writing a small test in a .pp file:
package { "mypackage": provider => "piprs" }
And run it locally:
puppet apply mytest.pp
This will rule out a code bug in your provider versus a plugin sync issue.
I notice there is a space between the colon and the command - that isn't your problem is it?
commands : pip => "/usr/local/bin/pip"
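If that space is indeed the culprit, the line would need the usual Ruby symbol syntax, as in the core providers:
commands :pip => "/usr/local/bin/pip"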
If you can get this working without a puppetmaster, your problem is sync related.
There are a couple of things that can go wrong - make sure the file is sync'd properly on the client:
ls /var/lib/puppet/lib/puppet/provider/package
You should see the piprs.rb file there. If it is, you may need to make sure your libdir is set correctly:
puppet --configprint libdir
This should point to /var/lib/puppet/lib in most cases.