I am trying to copy files from my puppetmaster to my webserver, but it is not working.
On my puppetmaster I have edited the file fileserver.conf and added:
[extra_files]
path /etc/puppet/files
allow *
After that, I restarted the puppetmaster and puppet services on the puppetmaster.
I have a test.txt in the /etc/puppet/files folder.
On the webserver I have this apache2.pp script:
file { "/test.txt":
mode => "600",
owner => 'root',
group => 'root',
ensure => present,
source => "puppet:///files/test.txt",
}
I am receiving this error, which I am really unsure how to solve:
**Error: /Stage[main]/Main/File[/test.txt]: Could not evaluate: Could not retrieve information from environment production source(s)**
I hope someone can help me with some steps to troubleshoot what is going wrong.
According to the description in fileserver.conf:
# [extra_files]
# path /etc/puppet/files
# allow *
#
# In the example above, anything in /etc/puppet/files/<file name> would be
# available to authenticated nodes at puppet:///extra_files/<file name>.
#
change
source => "puppet:///files/test.txt",
to
source => "puppet:///extra_files/test.txt",
Don't use file server mounts unless you have a very good reason to do so.
Instead, create a module that holds the file you need to sync, for example a module named webserver:
mkdir -p /etc/puppet/modules/webserver/files
In your file resource, reference the file as follows:
source => 'puppet:///modules/webserver/test.txt'
Be careful not to include the files directory in the URL when referencing files that are served from within modules.
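Putting that together, a minimal sketch (assuming the file has been copied to /etc/puppet/modules/webserver/files/test.txt, and reusing the attributes from the question) might look like this:

file { '/test.txt':
  ensure => present,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/webserver/test.txt',
}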
I'm running Puppet code that creates a file with text. It works when I run it locally (with puppet apply <.pp file> on the same machine), but it does not work when I run it on an agent from a puppet master server (with puppet agent -t, with the .pp file in the master's manifests directory). My code:
node default {
  file { '/test544/newdirha1': # the path of the new file
    ensure  => 'present',
    content => 'this is the content', # this text will be inside the file
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
  }
}
The problem is that the master does not read or process your manifest file at all.
Puppet 3.8 is obsolete and unsupported. The latest is Puppet 6.2, and since you're just getting going I recommend starting there. The expected layout and behavior of that and other more recent Puppet versions differ in some important and relevant ways, but in Puppet 3, the starting point for the master's processing is a single file, the "site manifest", which by default is /etc/puppet/manifests/site.pp.
Because your master has neither a site manifest nor an external node classifier to rely on, it does not assign any classes or resources to any node. It generates only empty catalogs, which is exactly what you observe. Your manifest woot3.pp is ignored. The simplest and most direct way to solve the problem would be to rename woot3.pp to site.pp.
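In other words, a sketch of what /etc/puppet/manifests/site.pp would then contain is just the node block from the question:

# /etc/puppet/manifests/site.pp
node default {
  file { '/test544/newdirha1':
    ensure  => 'present',
    content => 'this is the content',
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
  }
}

After the rename, puppet agent -t on the node should compile a catalog that includes the file resource.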
I'm hoping to use puppet to manage my rc files (i.e. sharing configuration files between work and home). I keep my rc files in a subversion repository. On some machines I have sudo privileges; on some I don't. And none of the machines are on the same network.
I have a simple puppet file:
class bashResources ( $home, $svn ) {
  file { "$home/.bash":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d/bashrc":
    ensure => present,
    target => "$home/$svn/rc/bashrc",
  }
}

node 'ubuntuwgu290' {
  class { 'bashResources':
    home => '/home/dshaw',
    svn  => 'mysvn',
  }
}
I have a simple config file that I'm using to squelch some errors:
[main]
report=false
When I run puppet, I get an annoying error about not being able to execute chown:
dshaw@ubuntuwgu290:~/mysvn/rc$ puppet apply rc.pp --config ./puppet.conf
Notice: Compiled catalog for ubuntuwgu290.maplesoft.com in environment production in 0.12 seconds
Error: Failed to apply catalog: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/state.yaml20170316-894-rzkggd
Error: Could not save last run local report: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/last_run_summary.yaml20170316-894-l9embs
I have attempted to squelch the error by adding reports=none to my config file, but it has not been effective.
How can I squelch these errors? Alternatively, is there a more lightweight tool for managing rc files?
Thanks,
Derek
The error is related to Puppet trying to manage its own metadata in /home/dshaw/.puppet, not any of the files enrolled in Puppet's catalog for management. This is not normally a problem, even when you run Puppet as an ordinary user. In fact, supporting this sort of thing is one of the reasons why per-user Puppet metadata exists.
The files that Puppet is trying to chown do not already belong to you (else Puppet would not be trying to chown them), but they should belong to you, where "you" means the puppet process's (e)UID and (e)GID. You might be able to solve the problem by just removing Puppet's state directory, and letting it rebuild it on the next run. Alternatively, you might be able to perform or arrange for a manual chown such as Puppet is trying to perform.
On the other hand, it's unclear how this situation arose in the first place, and some of the mechanisms I can imagine would render those suggestions ineffective.
I am playing around with puppet and am trying to copy a file from my local directory (my laptop) onto my puppet agent. I have two VMs running: one is the puppet master and one is the puppet agent. I looked at this answer here, but it seems it was for an older version of Puppet. I am running Puppet 3.4.3. I have gone through the Pro Puppet book and the Puppet tutorials but find them way too confusing (the former having very glaring typos). It would be a BIG help if someone walked me through the process in simple steps. This is what I have till now.
I created a folder named my_module in /etc/puppet/.
In /etc/puppet/my_module I created two folders, files and manifests, and a file init.pp.
init.pp looks like this:
class myfile {
  file { "/home/me/myfolder/file.py":
    mode   => "0440",
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/module_name/datas.xls',
  }
}
I then copied the file file.py to the files folder I created above. I am unsure how to proceed after this step. Any help?
Please read the documentation on creating your own modules. The module you created is in the wrong location right now. It should be under /etc/puppet/modules, or wherever the modulepath in /etc/puppet/puppet.conf points to on the puppet master.
The file given with source => 'puppet:///modules/module_name/datas.xls' is the one that will be placed in /home/me/myfolder/file.py on the client where you run the puppet agent -t command to roll out your changes.
Another good source of examples for how to use the standard built-in Puppet features is the Puppet Labs Type Reference.
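Putting that advice together, a sketch of the corrected layout: move the module to /etc/puppet/modules/my_module, place the file at /etc/puppet/modules/my_module/files/file.py, and rename the class to match the module (the class in a module's init.pp must have the module's name). init.pp would then look like this:

class my_module {
  file { '/home/me/myfolder/file.py':
    ensure => present,
    mode   => '0440',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/my_module/file.py',
  }
}

Then assign the my_module class to the node (for example with include my_module in site.pp) and run puppet agent -t on the agent.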
Some disclosure:
I'm using a master/agent setup in which I own the agent but do not have permission to the master console. The puppetmaster is git-backed, and I control the source for the module(s) in question.
I have 2 relevant modules for my question. One of them, which appears to work just fine, ensures autofs is installed and has 2 file resources for auto.master and a custom auto.home to mount home directories.
#auto.home
#this file is used by auto.master to automount home directories
#from the nfs cluster when a user logs in.
* -fstype=nfs,rw,nosuid,soft <IPaddress>:/homedirs/&
In the module to add home directories, I'm creating users and deploying their public ssh keys via a file resource. This module "works" on systems when I comment out the Class dependency and I'm not mounting /home over NFS, and it sometimes works when I'm deploying it as-is over NFS.
define local_user(
  $fullname,
  $username = $title,
  $userid,
  $gid = 9999,
  $homedir_mode = 0700
) {
  $white_gid = $gid

  user { $username:
    ensure  => present,
    comment => $fullname,
    gid     => $white_gid,
    uid     => $userid,
    home    => $homedir,
    require => Group[ "white" ],
  }

  exec { "chage -M 99999 ${username}":
    command     => "chage -M 99999 ${username}",
    path        => "/bin:/sbin:/usr/bin:/usr/sbin",
    # chage(1) only works on local users, not on LDAP users,
    # so make sure this is a local user before we try to
    # change their password expiration.
    onlyif      => "grep -q '^${username}:' /etc/passwd",
    subscribe   => User[ $username ],
    refreshonly => true,
  }

  file { $homedir:
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => $homedir_mode,
    require => User[ $username ],
  }

  file { "$homedir/.ssh":
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => 0700,
    require => File[ "$homedir" ],
  }

  file { "$homedir/.ssh/authorized_keys":
    ensure  => present,
    owner   => $username,
    group   => $white_gid,
    mode    => 0600,
    source  => "puppet:///modules/ssh_keys/${username}_authorized_keys",
    require => File["$homedir/.ssh"],
  }
}
class ssh_keys {
  group { "white":
    ensure  => present,
    gid     => 9999,
    require => Class["nfs_homedirs"],
  }

  #### add users below this line
  local_user { "userA": fullname => "userA", userid => "123" }
}
Some things I'm puzzled by and could use expertise with:
In order for the NFS home directories to work at all, I had to run the module on a machine to create the users locally, then mount the root directory of the NFS home-directory share and create those users' folders, owned by their uid/gid, for autofs to actually work when they log in.
When the module fails to "work" against the NFS-mounted home directories, the error is 'Permission denied' when it tries to create the home folder. I've tried no_root_squash to combat the error, but to no avail. I have tried running the agent as root, as not-root via sudo, as not-root at all, etc.
Error:
/Stage[main]/Ssh_keys/Local_user[userA]/File[/home/userA]/ensure:
change from absent to directory failed: Could not set 'directory' on ensure:
Permission denied - /home/userA at
/app/puppet/conf/environments/puppet_dev/modules/ssh_keys/manifests/init.pp:80
It's seemingly harmless to put ensure => present statements on these directories and file resources. They're technically already created on the NFS share, but the way autofs seems to work is that it won't actually "mount" that user's share until they log in. It's not my expertise, but that's what I experience. When this module does run successfully, every user's home directory it creates shows up as a mount in the df output.
I suspect that there's something on the machine itself that's preventing this module from working the way it should. Knowing that there are probably 500 things that I could diff between a machine where this module runs clean and one where it doesn't, what are some places I should investigate?
Any assistance would be greatly appreciated.
The way auto.home works is to mount the directory when the user logs in. If the user hasn't logged in, no mount exists -- and thus your directory/file resources fail.
Personally I wouldn't try creating home directories over an nfs mount. Plus you don't want multiple servers trying to manage the same physical resources. Split this out to run only on the NFS server if possible and run all your file resources related to the home directories there. Have the nfs clients just ensure nfs is configured and the local user accounts exist.
If you can't run puppet on the NFS server, pick one server to mount it as a regular mount -- i.e. mount the root of the home-directory export so all the home directories are visible. Have no_root_squash set as well. Then you should be able to have puppet create the directories.
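As a rough sketch of that split, inside the local_user define from the question you could guard the home-directory resources so that only the designated node applies them; $manage_nfs_homedirs here is a hypothetical parameter or fact that you would set to true only on the node that mounts the NFS root with no_root_squash:

# Only the node that mounts the NFS root manages the directory itself;
# every other node still manages the user account.
if $manage_nfs_homedirs {
  file { $homedir:
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => $homedir_mode,
    require => User[ $username ],
  }
}

The same guard would go around the .ssh and authorized_keys resources.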
Also the ssh_authorized_key resource is handy. I use it often.
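For example, a sketch (the user and key string are placeholders for your real values):

ssh_authorized_key { 'userA@workstation':
  ensure => present,
  user   => 'userA',
  type   => 'ssh-rsa',
  # key takes just the base64 body of the public key, without the type prefix or comment
  key    => 'AAAAB3Nza-replace-with-the-real-public-key',
}

It manages entries in that user's ~/.ssh/authorized_keys directly, so you don't need a separate source file per user.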
It sounds to me like SELinux is being enforced, which would cause a 'Permission denied' error as described even if the right user/uid owns the directories. If SELinux is enforcing, then you'll want to check whether the use_nfs_home_dirs boolean is enabled. First, check by running:
getsebool use_nfs_home_dirs
If it comes back as use_nfs_home_dirs --> off, then you can either manually correct this using setsebool -P use_nfs_home_dirs 1, or you can use puppet to manage this as well:
include selinux
selinux::boolean { 'use_nfs_home_dirs':
  ensure => 'on',
}
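If you would rather not depend on a selinux module, a sketch using Puppet's built-in selboolean type should achieve the same thing:

selboolean { 'use_nfs_home_dirs':
  value      => 'on',
  persistent => true,
}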
I am trying to do the following using Puppet on Ubuntu 10.04:
1. Copy a file that I have to a specific directory, which will be owned by a specific user/group that does not exist yet, since the package has not been installed.
2. Install the package in such a way that it does not remove the directory and file that I created.
To accomplish item #1, I basically tell Puppet to create a user and group before copying the file. But the problem is that if I do not give Puppet a specific uid, it will pick a number at random, from the range for regular users rather than the range for system accounts.
So, how do I tell Puppet to choose a uid of anything more than 1000?
If this is not possible, how do I tell Puppet not to start the service when it installs the package? Then I would just let Puppet install the package without starting the service, copy my file, and then start the service.
The user type has a system parameter, which defaults to false but can be set to true. This will generate the user with a UID below 500, which seems to be what you want.
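For instance, a sketch (the foo user and group names are placeholders for whatever the package expects):

# system => true asks Puppet to allocate the UID/GID from the system range
group { 'foo':
  ensure => present,
  system => true,
}
user { 'foo':
  ensure  => present,
  gid     => 'foo',
  system  => true,
  require => Group['foo'],
}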
Ultimately, in my opinion, what you'll want to do is manage the config directory and the config file via Puppet as well.
That gives you the ability to do something like this:
package { 'foo': ensure => present }

file {
  'fooconfdir':
    path    => '/path/to/fooconfdir',
    ensure  => directory,
    owner   => whatev,
    group   => alsowhatev,
    require => Package['foo'],
    mode    => morewhatev;
  'fooconf':
    path    => '/path/to/fooconfdir/fooconf',
    ensure  => present,
    owner   => whatev,
    content => template('whatev');
}

service { 'foo': ensure => running, enable => true, subscribe => File['fooconf'] }
What that will do is install your package, then manage the config, and then restart the service, which will pick up your new config on restart.