Some disclosure:
I'm using a master/agent setup in which I own the agent but do not have permission to the master console. The puppetmaster is git-backed, and I control the source for the module(s) in question.
I have two relevant modules for my question. One of them, which appears to work just fine, ensures autofs is installed and has two file resources for auto.master and a custom auto.home to mount home directories.
#auto.home
#this file is used by auto.master to automount home directories
#from the nfs cluster when a user logs in.
* -fstype=nfs,rw,nosuid,soft <IPaddress>:/homedirs/&
In the module that adds home directories, I'm creating users and deploying their public ssh keys via a file resource. This module "works" on systems when I comment out the class dependency and /home is not NFS-mounted, and it *sometimes* works when I deploy it as-is over NFS.
define local_user (
  $fullname,
  $username     = $title,
  $userid,
  $gid          = 9999,
  $homedir      = "/home/${title}",
  $homedir_mode = '0700'
) {
  $white_gid = $gid
  user { $username:
    ensure  => present,
    comment => $fullname,
    gid     => $white_gid,
    uid     => $userid,
    home    => $homedir,
    require => Group['white'],
  }
  exec { "chage -M 99999 ${username}":
    command     => "chage -M 99999 ${username}",
    path        => '/bin:/sbin:/usr/bin:/usr/sbin',
    # chage(1) only works on local users, not on LDAP users,
    # so make sure this is a local user before we try to
    # change their password expiration.
    onlyif      => "grep -q '^${username}:' /etc/passwd",
    subscribe   => User[$username],
    refreshonly => true,
  }
  file { $homedir:
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => $homedir_mode,
    require => User[$username],
  }
  file { "${homedir}/.ssh":
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => '0700',
    require => File[$homedir],
  }
  file { "${homedir}/.ssh/authorized_keys":
    ensure  => present,
    owner   => $username,
    group   => $white_gid,
    mode    => '0600',
    source  => "puppet:///modules/ssh_keys/${username}_authorized_keys",
    require => File["${homedir}/.ssh"],
  }
}
class ssh_keys {
  group { 'white':
    ensure  => present,
    gid     => 9999,
    require => Class['nfs_homedirs'],
  }
  #### add users below this line
  local_user { 'userA': fullname => 'userA', userid => '123' }
}
Some things I'm puzzled by and could use expertise with:
In order for the NFS home directories to work at all, I had to run the module on a machine to create the users locally, then mount the root of the NFS home-directory export and create each user's folder owned by their uid/gid before autofs would actually work when they log in.
When the module fails to "work" against the NFS-mounted home directories, the error is 'Permission denied' when it tries to create the home directory. I've tried no_root_squash on the export to combat the error, but to no avail. I have also tried running the agent as root, as not-root via sudo, and as not-root entirely.
Error:
/Stage[main]/Ssh_keys/Local_user[userA]/File[/home/userA]/ensure:
change from absent to directory failed: Could not set 'directory' on ensure:
Permission denied - /home/userA at
80:/app/puppet/conf/environments/puppet_dev/modules/ssh_keys/manifests/init.pp
It's seemingly harmless to put ensure => present on these directory and file resources. They technically already exist on the NFS share, but the way autofs seems to work is that it won't actually mount a user's share until they log in. That's not my area of expertise, but it's what I observe. When this module does run successfully, every home directory it creates shows up as a mount in the df output.
I suspect that there's something on the machine itself that's preventing this module from working the way it should. Knowing that there are probably 500 things that I could diff between a machine where this module runs clean and one where it doesn't, what are some places I should investigate?
Any assistance would be greatly appreciated.
The way auto.home works is to mount the directory when the user logs in. If the user hasn't logged in, no mount exists -- and thus your directory/file resources fail.
Personally I wouldn't try creating home directories over an nfs mount. Plus you don't want multiple servers trying to manage the same physical resources. Split this out to run only on the NFS server if possible and run all your file resources related to the home directories there. Have the nfs clients just ensure nfs is configured and the local user accounts exist.
If you can't run puppet on the NFS server, pick one server to mount it as a regular mount -- i.e. mount the root of the home-dirs tree so all the directories are visible. Set no_root_squash on the export as well. Then you should be able to have puppet create the directories.
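As a sketch, that static mount could itself be managed with Puppet's built-in mount type (the export path and server address are placeholders carried over from the question); note that no_root_squash is an option in the server's /etc/exports, not a client-side mount option:

```puppet
# Hypothetical static mount of the whole home-directory export,
# so every user's directory is visible without autofs.
mount { '/mnt/homedirs':
  ensure  => mounted,
  device  => '<IPaddress>:/homedirs',
  fstype  => 'nfs',
  options => 'rw,soft',
}
```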
Also the ssh_authorized_key resource is handy. I use it often.
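For reference, a minimal use of that resource might look like this (the username and key material are placeholders):

```puppet
# Manages an entry in ~userA/.ssh/authorized_keys natively,
# instead of shipping the whole file from the master.
ssh_authorized_key { 'userA@workstation':
  ensure => present,
  user   => 'userA',
  type   => 'ssh-rsa',
  key    => 'AAAAB3NzaC1yc2E...',  # placeholder public key body
}
```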
It sounds to me like selinux is being enforced, which would cause a permission denied as described even if you have the right user/uid owning the directories. If you have selinux enforced, then you'll want to check to see if using nfs_home_dirs is allowed. First, check by running:
getsebool use_nfs_home_dirs
If it comes back as use_nfs_home_dirs --> off, then you can either manually correct this using setsebool -P use_nfs_home_dirs 1, or you can use puppet to manage this as well:
include selinux
selinux::boolean { 'use_nfs_home_dirs':
  ensure => 'on',
}
Related
I'm hoping to use puppet to manage my rc files (i.e. sharing configuration files between work and home). I keep my rc files in a Subversion repository. On some machines I have sudo privileges; on some I don't. And none of the machines are on the same network.
I have a simple puppet file:
class bashResources ( $home, $svn ) {
  file { "$home/.bash":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d":
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d/bashrc":
    ensure => 'link',   # 'target' only applies to symlinks
    target => "$home/$svn/rc/bashrc",
  }
}
node 'ubuntuwgu290' {
  class { 'bashResources':
    home => '/home/dshaw',
    svn  => 'mysvn',
  }
}
node 'ubuntuwgu290' {
class { 'bashResources':
home => '/home/dshaw',
svn => 'mysvn',
}
}
I have a simple config file that I'm using to squelch some errors:
[main]
report=false
When I run puppet, I get an annoying error about not being able to execute chown:
dshaw@ubuntuwgu290:~/mysvn/rc$ puppet apply rc.pp --config ./puppet.conf
Notice: Compiled catalog for ubuntuwgu290.maplesoft.com in environment production in 0.12 seconds
Error: Failed to apply catalog: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/state.yaml20170316-894-rzkggd
Error: Could not save last run local report: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/last_run_summary.yaml20170316-894-l9embs
I have attempted to squelch the error by adding reports=none to my config file, but it has not been effective.
How can I squelch these errors? Alternatively, is there a more lightweight tool for managing rc files?
Thanks,
Derek
The error is related to Puppet trying to manage its own metadata in /home/dshaw/.puppet, not any of the files enrolled in Puppet's catalog for management. This is not normally a problem, even when you run Puppet as an ordinary user. In fact, supporting this sort of thing is one of the reasons why per-user Puppet metadata exists.
The files that Puppet is trying to chown do not already belong to you (else Puppet would not be trying to chown them), but they should belong to you, where "you" means the puppet process's (e)UID and (e)GID. You might be able to solve the problem by just removing Puppet's state directory, and letting it rebuild it on the next run. Alternatively, you might be able to perform or arrange for a manual chown such as Puppet is trying to perform.
On the other hand, it's unclear how this situation arose in the first place, and some of the mechanisms I can imagine would render those suggestions ineffective.
I am currently hitting what is, for me, somewhat unintuitive behaviour in Puppet - most likely because I don't completely understand the Puppet ethos yet.
OK, I have a simple puppetsimple.sh running on the puppet agent, which applies configurations from the puppet master. This is all running smoothly and as expected.
Unintuitive (for me): however, when I, as part of setting up the master, introduce an error and then run puppetsimple.sh on the agent, it will hit the error, notify me of it, and continue to apply all the other changes in that configuration.
This effectively leaves the agent in a broken state, because it pushes ahead even when there is an error.
Is there a setting somewhere to say "hey, if you strike an error, stop, revert to how you were, and carry on your merry way"?
Given the example below, I am intentionally enabling an invalid conf file (.confX) - I get notified of the error, but it continues to populate index.html with "Hello World 3".
define a2ensite {
  exec { 'a2ensite':
    path    => [ '/bin', '/usr/bin', '/usr/sbin' ],
    command => "a2ensite ${title}",
    notify  => Service['apache2'],
  }
}
class mysite {
  include apache
  file { '/etc/apache2/sites-available/mysite.example.org.conf':
    owner  => root,
    group  => root,
    mode   => '0644',
    source => 'puppet:///files/mysite/mysite_apache.conf',
    notify => Service['apache2'],
  }
  a2ensite { 'mysite.example.org.confX': }
  file { ['/home/', '/home/www/', '/home/www/mysite.example.org']:
    ensure => directory,
    owner  => root,
    group  => root,
    mode   => '0755',
  }
  file { '/home/www/mysite.example.org/index.html':
    owner   => www-data,
    group   => www-data,
    mode    => '0755',
    content => 'Hello World 3',
  }
}
If one resource failing means that another resource should not be modified, then that is a dependency relationship that you need to model via require. A failing dependency will cause Puppet to skip those resources.
But, in general, puppet does not stop or rollback runs when it hits an error. If you need to rollback, it is on you to either revert to an older puppet configuration or use some other capability to revert the node.
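For example, under the assumption that the conf file must be valid before the site is enabled, you could chain the resources from the question like this (a sketch, not the asker's exact manifest):

```puppet
# If the file resource fails, this a2ensite instance is
# skipped rather than run against a broken config.
a2ensite { 'mysite.example.org.conf':
  require => File['/etc/apache2/sites-available/mysite.example.org.conf'],
}
```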
I am working on copying files from my puppetmaster to my webserver, which is not working.
On my puppetmaster I have edited the file fileserver.conf and added:
[extra_files]
path /etc/puppet/files
allow *
After that, I restarted puppetmaster and puppet on the puppetmaster.
I have a test.txt file in the /etc/puppet/files folder.
On the webserver I have this apache2.pp manifest:
file { "/test.txt":
  ensure => present,
  mode   => "600",
  owner  => 'root',
  group  => 'root',
  source => "puppet:///files/test.txt",
}
I am receiving this error, which I am really unsure how to solve:
**Error: /Stage[main]/Main/File[/test.txt]: Could not evaluate: Could not retrieve information from environment production source(s)**
Hope someone can maybe help me with some steps to troubleshoot what is going wrong.
According to the description in fileserver.conf:
# [extra_files]
# path /etc/puppet/files
# allow *
#
# In the example above, anything in /etc/puppet/files/<file name> would be
# available to authenticated nodes at puppet:///extra_files/<file name>.
#
change
source => "puppet:///files/test.txt",
to
source => "puppet:///extra_files/test.txt",
Don't use file server mounts unless you have a very good reason to do so.
Instead, create a module that holds the file you need to sync - for example, a webserver module:
mkdir -p /etc/puppet/modules/webserver/files
In your file resource, reference the file as follows:
source => 'puppet:///modules/webserver/test.txt'
Note that you do not include files in the source URL, even though the file lives in the module's files directory.
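Putting it together with the resource from the question, a sketch using the hypothetical webserver module would be:

```puppet
file { '/test.txt':
  ensure => present,
  mode   => '0600',
  owner  => 'root',
  group  => 'root',
  # note: no 'files/' segment in the URL, even though the
  # file sits in the module's files/ directory on the master
  source => 'puppet:///modules/webserver/test.txt',
}
```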
I am trying to develop a CakePHP application, and I am using Vagrant to run a testing environment. However, I was getting this error in the browser
Warning (2):
session_start() [http://php.net/function.session-start]:
open(/var/lib/php/session/sess_speva7ghaftl8n98r9id5a7434, O_RDWR) failed:
Permission denied (13) [CORE/Cake/Model/Datasource/CakeSession.php, line 614]
I can get rid of the error by SSHing to the vm and doing
[vagrant@myserver ~]$ sudo su -
[root@myserver ~]# chown -R vagrant. /var/lib/php/session/
I don't want to have to do this every time I restart the vm, so I tried adding this to myserver.pp
exec { 'chown':
  command => 'chown -R vagrant. /var/lib/php/session/',
  path    => '/bin',
  user    => 'root',
}
but it gets an error while starting up the vm...
err:
/Stage[main]/Myserver/Exec[chown]/returns: change from notrun to 0 failed:
chown -R vagrant. /var/lib/php/session/
returned 1 instead of one of [0] at /tmp/vagrant-puppet/manifests/myserver.pp:35
I was unable to find any useful examples of how to use exec on the internet, and I have never used Vagrant or Puppet before, so the above code is just the best guess I could come up with, and I apologize if it is a simple fix to get this working.
I have verified using which chown within the vm that the path is /bin, and the command is exactly the same as when I run it in the vm myself. I'm thinking it is the user that is causing problem. Do I have that line right? Is it even possible to exec commands as root from a .pp file?
When using exec, you normally have to give the full path to the command you execute. So if you change your command to
exec { 'chown':
  command => '/bin/chown -R vagrant:vagrant /var/lib/php/session/',
  path    => '/bin',
  user    => 'root',
}
it should work imo.
However, it depends a lot on how you install your application. If the setup/start of the application is also managed with Puppet, you can manage the directory you're interested in with Puppet as well, like this:
file { "/var/lib/php/session":
  ensure  => directory,
  group   => "vagrant",
  owner   => "vagrant",
  recurse => true,
}
before you start your app. This would be much more the Puppet way, as you then manage a resource instead of executing commands. However, normally /var/lib/... should not be owned by anyone other than root.
So you should maybe look into how your app is started and make it start as another user or as root. If it is started with an exec, you can add the property user => root to it, and that should also do the trick.
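A sketch of that, assuming a hypothetical start command:

```puppet
exec { 'start-app':
  command => '/usr/local/bin/start-app',  # hypothetical start script
  user    => 'root',                      # run the command as root
}
```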
I am trying to do the following with Puppet on Ubuntu 10.04:
Copy a file that I have to a specific directory, which will be owned by a specific user/group that does not exist yet since the package has not been installed
Install the package in such a way that it does not remove the directory and file that I created
To accomplish item #1, I basically tell Puppet to create the user and group first, before copying the file. But the problem is that if I do not give Puppet a specific uid, it will pick one at random - a number for a regular user, not a number for a system package.
So, how do I tell Puppet which range to choose a uid from?
If this is not possible, how do I tell Puppet not to start the service when it installs the package? I would let Puppet install the package but not start the service, then copy my file, then start the service myself.
The user type has a system => parameter, which defaults to false but can be set to true. This will generate the user with a UID from the system range (below 500), which seems to be what you want.
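A minimal sketch (the user and group names are placeholders):

```puppet
# system => true asks Puppet to allocate an id from the
# system range instead of the normal user range.
group { 'appgroup':
  ensure => present,
  system => true,
}
user { 'appuser':
  ensure  => present,
  system  => true,
  gid     => 'appgroup',
  require => Group['appgroup'],
}
```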
Ultimately, in my opinion, what you'll want to do is manage the config directory and the config via Puppet as well.
That gives you the ability to do something like this:
package { 'foo': ensure => present }

file { 'fooconfdir':
  path    => '/path/to/fooconfdir',
  ensure  => directory,
  owner   => 'whatev',
  group   => 'alsowhatev',
  mode    => 'morewhatev',
  require => Package['foo'],
}

file { 'fooconf':
  path    => '/path/to/fooconfdir/fooconf',
  ensure  => present,
  owner   => 'whatev',
  content => template('whatev'),
}

service { 'foo':
  ensure    => running,
  enable    => true,
  subscribe => File['fooconf'],
}
What that will do is install your package, then manage the config, then restart the service, which will obviously pick up your new config on restart.