I'm developing a Ganglia recipe in Chef.
It's very simple: I build four different configuration files. I already tried using templates, but to keep things simple I just ship these configuration files.
This is my recipe:
return if tagged?('norun::ganglia')
case node[:platform]
when "ubuntu", "debian"
  pkg = "ganglia-monitor"
when "redhat", "centos", "fedora"
  pkg = "ganglia-gmond"
end
package "#{pkg}" do
  action :install
end
cookbook_file "/etc/ganglia/gmond.conf" do
  owner "root"
  group "root"
  mode "0644"
  source "gmond/" + node['base']['dc'] + "/node/gmond.conf"
end
# Adding ganglia-gmond as service
service "gmond" do
  supports :status => true, :restart => true
  action [ :enable, :start ]
end
And this is how my cookbook is structured:
cookbooks/ganglia/
cookbooks/ganglia/files/default/gmond/* (I have other sub-folders here too)
cookbooks/ganglia/files/default/gmond/diveo/node/gmond.conf
cookbooks/ganglia/recipes/default.rb
But when I try to run my recipe, it gives the following error:
[2013-05-14T14:23:38+00:00] FATAL: Chef::Exceptions::FileNotFound: cookbook_file[/etc/ganglia/gmond.conf] (ganglia::default line 25) had an error: Chef::Exceptions::FileNotFound: Cookbook 'ganglia' (0.1.0) does not contain a file at any of these locations:
files/centos-5.7/gmond/diveo/node/gmond.conf
files/centos/gmond/diveo/node/gmond.conf
files/default/gmond/diveo/node/gmond.conf
This cookbook _does_ contain: ['diveo/monitor/gmond.conf','diveo/node/gmond.conf','awsvir/monitor/gmond.conf','awsvir/node/gmond.conf','awssp/monitor/gmond.conf','awssp/node/gmond.conf','alog/monitor/gmond.conf','alog/node/gmond.conf']
Basically it says that I don't have the file, but I do have it, in the right path, right?
If node['base']['dc'] is a platform name, then the cookbook_file statement should look like
cookbook_file "/etc/ganglia/gmond.conf" do
  owner "root"
  group "root"
  mode "0644"
  source "gmond.conf"
end
and the structure of your conf files should be like this:
cookbooks/ganglia/
cookbooks/ganglia/files/default/gmond.conf
cookbooks/ganglia/files/centos-5.7/gmond.conf
...
And a little advice: use template instead of cookbook_file. One day you'll want to add some parameters to your gmond.conf anyway.
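For example, a minimal template sketch (the gmond.conf.erb name and the node['ganglia']['cluster_name'] attribute are just illustrations, not something taken from your cookbook):
template "/etc/ganglia/gmond.conf" do
  source "gmond.conf.erb"    # would live in templates/default/ of the cookbook
  owner "root"
  group "root"
  mode "0644"
  variables(:cluster_name => node['ganglia']['cluster_name'])   # hypothetical attribute
  notifies :restart, "service[gmond]"
end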
Also, see the cookbook_file doc page on opscode.com.
I am trying to create a Puppet manifest using inifile. This is for a configuration file where I need to have the following format:
[safe]
directory = /home/foo
directory = /home/test
directory = /home/something
I know that there is a way to use directory1 and directory2, but I was wondering if there is a way to do it without changing the setting name, since it needs that specific attribute. This implementation is meant for a Puppet manifest.
Also, I was thinking of the puppetlabs/inifile module, but if there is another option to achieve this, that would be great too.
Thanks for the help in advance
So far, I have an implementation like:
ini_setting { 'procedure cache size':
  ensure      => present,
  path        => '/var/lib/somethning/test.config',
  section     => 'safe',
  setting     => 'directory',
  value       => '/home/foo',
  indent_char => "\t",
}
This is for each directory. The purpose of this implementation is to address the new git configuration for safe.directory in the recent update. My understanding is that for multiple directories, it adds a new value as directory = <directory>. I don't believe it likes directories separated by commas.
First I thought about file_line, but this is not idempotent for multi-line settings (you get problems when you run again). You can try:
Sample Puppet code, dir.pp:
$safe_directories = "directory = /home/foo
directory = /home/test
directory = /home/something"
notice "Testing\n${safe_directories}"
file { "/tmp/result.ini":
  ensure  => present,
  content => template('/tmp/layout.erb'),
}
notice "Check /tmp/result.ini"
Sample template /tmp/layout.erb
[unsafe]
directory=/unsafe
[safe]
<%= @safe_directories %>
otherfield=secure
[header3]
nothing = here
Now run this command from the command line:
puppet apply dir.pp
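With the template variable rendered, /tmp/result.ini should come out roughly like this:
[unsafe]
directory=/unsafe
[safe]
directory = /home/foo
directory = /home/test
directory = /home/something
otherfield=secure
[header3]
nothing = here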
I want to create a recipe for this https://github.com/kuscsik/streamfs in my new layer (meta-example) and include it in my image.
My layer is added in bblayers.conf:
~/rdk/build-raspberrypi-rdk-hybrid$ bitbake-layers show-layers
layer                 path                                                        priority
meta-example          /home/xyz/rdk/build-raspberrypi-rdk-hybrid/meta-example    6
This is my path to layer.conf and its content:
~/rdk/build-raspberrypi-rdk-hybrid/meta-example/conf$ vi layer.conf
#We have a conf and classes directory, add to BBPATH
BBPATH .=":${LAYERDIR}"
#We have recipes-* directories, add to BBFILES
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb\
${LAYERDIR}/recipes-*/*/*.bbapend"
BBFILE_COLLECTIONS += "example"
BBFILE_PATTERN_example = "^${LAYERDIR}/"
BBFILE_PRIORITY_example = "6"
Then I created a directory (example) inside meta-example which contains streamfs_git.bb, with contents as shown below:
~/rdk/build-raspberrypi-rdk-hybrid/meta-example/example$ vi streamfs_git.bb
DESCRIPTION = "First recipe"
HOMEPAGE = "https://github.com/kuscsik/streamfs"
LICENSE = "LGPL-2.1"
LIC_FILES_CHKSUM = "file://LICENSE;md5=fc178bcd425090939a8b634d1d6a9594"
inherit cmake pkgconfig
SRC_URI = "git://github.com/kuscsik/streamfs"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"
Then I run this command: ~/rdk/build-raspberrypi-rdk-hybrid/meta-example/example$ bitbake streamfs_git
It shows me this error:
WARNING: No bb files matched BBFILE_PATTERN_example '^/home/xyz/rdk/build-raspberrypi-rdk-hybrid/meta-example/'
ERROR: Nothing PROVIDES 'streamfs_git'
I have even tried bitbake streamfs_git.bb and bitbake streamfs as well; all give the same error.
How can I fix the error? Do I have to add something in my layer.conf or .bb file or is there an error in any of my steps?
I've noticed two potential errors right away within your BBFILES declaration. You have
BBFILES += "${LAYERDIR}/recipes-//.bb
${LAYERDIR}/recipes-//.bbapend"
The first issue is that BBFILES is looking for recipes in, AND ONLY in, a directory called 'recipes-'. I'm assuming (I could be wrong) that your recipes directory isn't called 'recipes-'. To ensure any recipes within a folder get parsed, make sure BBFILES properly points to the directory the recipes reside in. In this case, perhaps change 'recipes-' to 'recipes-[directory-name]', although you can also use wildcards here, so something like 'recipes-*' would work as well, giving you the ability to parse any recipes within any directory named 'recipes-[anything]'.
Additionally, you seem to have an extraneous '/' in that same line, and no declaration of your recipe name. It's generally good practice in Yocto to keep your recipes in a subfolder of a recipes-* directory, though it's only for organizational reasons. You're also only pointing to a recipe named '.bb'; you should have either a wildcard or the exact recipe name here. Also, don't forget to separate your multiline variables with a \ at the end of each line. It should look something like:
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
Moving to the second line, it's the same issue with the lack of wildcards/direct definitions, although there also seems to be a typo in 'bbapend', which should be 'bbappend'.
Ultimately, you should end up with something like this:
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"
There may be more issues, though work on this stuff for now, and I'll help out more if this doesn't solve your issue.
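For illustration, a layout along these lines would match the corrected BBFILES (the recipes-example and streamfs directory names are just placeholders):
meta-example/conf/layer.conf
meta-example/recipes-example/streamfs/streamfs_git.bb
With the recipe in such a path, the target to build is streamfs (BitBake derives the recipe name from the part of the file name before the underscore), so bitbake streamfs should then find it.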
I am trying to come up with a sane layout for my RPMs that follows this path structure:
<repo_name>/<module_name>/<module_name>-0.0.0-<epoch>.<arch>.rpm
For example, this is a test path:
rpm-rhel7-dev/python-opstools/python-opstools-2.7.6-1.noarch.rpm
Anyone have any hints?
Related documentation
https://www.jfrog.com/confluence/display/RTF/Repository+Layouts
Cleared all packages from 'my-repo'
Created layout 'rpm-default'
Artifact Path Pattern:
[orgPath]/[module]-baseRev-[classifier].[ext]
Folder Integration Revision RegExp
.*
File Integration Revision RegExp
.*
Once I did this and assigned this layout to my empty repo, I pushed to this path (Jenkins):
upload_spec = """{
  "files": [
    {
      "pattern": "$RPM_ROOT/*.rpm",
      "target": "$REPO_NAME/my-module/"
    }
  ]
}"""
Where RPM_ROOT is your path to the RPM(s), per the documentation:
https://www.jfrog.com/confluence/display/RTF/RPM+Repositories
https://www.jfrog.com/confluence/display/RTF/Working+With+Pipeline+Jobs+in+Jenkins
https://www.jfrog.com/confluence/display/RTF/Using+File+Specs#UsingFileSpecs-UploadSpecSchema
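For context, here is a minimal scripted-pipeline sketch of how that spec gets sent, assuming the Jenkins Artifactory plugin DSL (the 'my-artifactory' server ID is a placeholder):
def server = Artifactory.server 'my-artifactory'   // server ID as configured in Jenkins
server.upload(upload_spec)                         // upload_spec is the file spec shown above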
The key here is to make sure you have a module ID after a push:
Module ID: python-opstools:python-opstools:2.8.0:1
After this, you should see versions to delete or manage when you right-click the module folder / repo root. Don't ask me yet how to fully deconstruct all the pieces of the path pattern :P; instead, refer to the documentation:
https://www.jfrog.com/confluence/display/RTF/Repository+Layouts
This is most likely an anti-pattern, but I'd like to know nonetheless:
I need to extract a tgz which is in Puppet and then move the contents somewhere else. Is it possible, in a puppet exec { }, to refer to the file where it is stored on disk?
For example, Puppet is available at /usr/local/puppet, and the tgz file I need is in /usr/local/puppet/modules/components/files/file.tgz. In the exec { }, can I do something like command => "/bin/cp $modules/components/files/file.tgz /somewhere_else"? Or do I have to declare a file { source => "..." } block first?
Both approaches work if you run Puppet with puppet apply.
In a master-agent architecture, using exec to copy the file probably will not work at all.
In my opinion, using the file resource is more "Puppet-like", but it has one significant drawback.
You can use:
file { '/some_path/somewhere_else':
  source => '/usr/local/puppet/modules/components/files/file.tgz',
}
This will create file /some_path/somewhere_else with the same content as /usr/local/puppet/modules/components/files/file.tgz (it will make a copy of the original file).
But if /some_path doesn't exist in the file system, the resource will fail.
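For what it's worth, one way around that is to also manage the parent directory, e.g.:
file { '/some_path':
  ensure => directory,
}
Puppet autorequires a managed parent directory for file resources, so no explicit require is needed.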
If you are working with tgz files, you can also consider using one of the archive Puppet modules, e.g. gini.
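A rough sketch using the newer puppet/archive module's interface (not gini's; parameter names there may differ, and the paths below are just placeholders):
archive { '/tmp/file.tgz':
  ensure       => present,
  source       => 'puppet:///modules/components/file.tgz',
  extract      => true,
  extract_path => '/somewhere_else',
  creates      => '/somewhere_else/extracted_dir',  # placeholder guard so it only extracts once
  cleanup      => true,
}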
UPDATE:
I can propose two approaches:
Use the Puppet file server to serve files (or define a module path for old Puppet versions), then just use it, e.g.:
file { '/some_path/somewhere_else':
  source => 'puppet:///modules/components/file.tgz',
}
Define a custom Facter fact that points to the path in your filesystem containing the required files, e.g.:
file { '/some_path/somewhere_else':
  source => "${::my_custom_fact}/some_path/file.tgz",
}
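A minimal custom fact sketch for that (the file name, fact name and returned path are placeholders):
# modules/components/lib/facter/my_custom_fact.rb
Facter.add(:my_custom_fact) do
  setcode do
    '/usr/local/puppet/modules/components/files'
  end
end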
I do not think that any of the core facts might be useful for you.
I'm installing a package from a module (Nginx in this specific case) and would like to include a configuration file from outside of the module, i.e. from a top-level files directory parallel to the top-level manifests directory. I don't see any way to source the file, though, without including it in a module or, in my current Vagrant environment, referring to the absolute local path.
Does Puppet allow for sourcing files from outside of modules as described in the documentation?
If I understand your question correctly, you can.
In your module, simple code like this
file { '/path/to/file':
  ensure => present,
  source => [
    "puppet:///files/${fqdn}/path/to/file",
    "puppet:///files/${hostgroup}/path/to/file",
    "puppet:///files/${domain}/path/to/file",
    "puppet:///files/global/path/to/file",
  ],
}
will do the job. The /path/to/file will be sourced using a file located in the "files" Puppet share.
(in the example above, it searches 4 different locations).
Update: maybe you're talking about a directory for storing files which is not shared by the Puppet fileserver (look at http://docs.puppetlabs.com/guides/file_serving.html); in that case I think you can't, Vagrant or not, but you can add the directory to your Puppet fileserver to do it. I think that's the best (and maybe only) way to do it.
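For example, a minimal extra mount in fileserver.conf could look like this (the mount name and path are placeholders):
[extra_files]
  path /srv/puppet/extra_files
  allow *
Files under it then become addressable as puppet:///extra_files/<filename> in your manifests.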
If you have a number of Vagrant VMs, you can simply store files within your Vagrant project directory (the one containing your Vagrantfile).
This directory is usually available to all VMs as /vagrant within the VM on creation.
If you want other directories on your computer to be available to your VMs, just add the following to your Vagrantfile:
# see http://docs.vagrantup.com/v1/docs/config/vm/share_folder.html
config.vm.share_folder "v-packages", "/vagrant_packages", "../../dpkg"
Then, to use the files within Puppet, you can simply treat them as local files on the VM:
# bad example, but basically use source => 'file:///vagrant/foo/bar'
file { '/opt/cassandra':
  ensure  => directory,
  replace => true,
  purge   => true,
  recurse => true,
  source  => 'file:///vagrant/conf/dist/apache-cassandra-1.2.0',
}
This is probably only wise to do if you are only using local Puppet manifests/modules.
Probably too late to help bennylope, but for others who happen across this question, as I did before figuring it out for myself ...
Include stuff like this in your Vagrantfile ...
GUEST_PROVISIONER_CONFDIR = "/example/destination/path"
HOST_PROVISIONER_CONFDIR = "/example/source/path"
config.vm.synced_folder HOST_PROVISIONER_CONFDIR, GUEST_PROVISIONER_CONFDIR
puppet.options = "--fileserverconfig='#{GUEST_PROVISIONER_CONFDIR}/fileserver.conf'"
Then make sure /example/source/path contains the referenced fileserver.conf file. It should look something like ...
[foo]
path /example/destination/path
allow *
Now, assuming example-file.txt exists in /example/source/path, the following will work in your manifests:
source => "puppet:///foo/example-file.txt",
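In context, a complete resource using that mount might look like this (the destination path is a placeholder):
file { '/etc/example-file.txt':
  ensure => file,
  source => 'puppet:///foo/example-file.txt',
}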
See:
Puppet configuration reference entry for fileserverconfig
Serving Files From Custom Mount Points