I have the following code in my manifests/site.pp
file { "/etc/motd":
mode => '0664',
owner => 'root',
group => 'root',
content => 'THE MESSAGE I WANT TO APPEAR'
}
When I ssh into the server I get no message, even though the /etc/motd file exists. If I then open that file and save it without making any changes, exit ssh, and reconnect, the MOTD appears.
Any ideas on this?
Thanks
I have Puppet 6 installed in my environment and would like to ensure that the user centos cannot sudo on any of my agents. I can create something like this:
modules/sudoers/manifests/init.pp
# Manage the sudoers file
class sudoers {
  file { '/etc/sudoers':
    source => 'puppet:///modules/sudoers/sudoers',
    mode   => '0440',
    owner  => 'root',
    group  => 'root',
  }
}
Then I create a modules/sudoers/files/sudoers file, put the content I want in there, and make sure the centos line is commented out:
#centos ALL=(ALL) NOPASSWD: ALL
But this is very lengthy, and in Puppet 3 I could simply set sudo::disable_centos: true in Hiera. Is there a better way to have Puppet prevent the user centos from using sudo? Thank you
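One way to get that convenience back is to drive the behaviour from Hiera with a class parameter. A minimal sketch, assuming the centos rule lives in its own sudoers drop-in (the /etc/sudoers.d/90-cloud-init-users path is an assumption, typical of cloud images; check where the rule actually lives on your agents):

class sudoers (
  Boolean $disable_centos = true,
) {
  if $disable_centos {
    # Assumed location of the centos rule; adjust to match your systems.
    file { '/etc/sudoers.d/90-cloud-init-users':
      ensure => absent,
    }
  }
}

With that in place, sudoers::disable_centos: true in Hiera toggles the behaviour, much like sudo::disable_centos did in Puppet 3.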
exec { "stop old application instance":
cwd => "${install_dir}",
path => ['/usr/bin','/bin','/usr/sbin','/sbin', '/bin/unlink', '/usr/local', '/usr/local/bin/'],
onlyif => "test -e '${install_dir}/${app_package_dir}/processes.json'",
group => 0,
user => 'root',
command => "pm2 delete /var/lib/application_folder/processes.json"
}
Puppet is getting stuck here and is not able to execute the command, and I don't understand the reason. The error log is given below:
Error: Command exceeded timeout
Wrapped exception:
execution expired
Error: /Stage[main]/application::Install/Exec[stop old application instance]/returns: change from notrun to 0 failed: Command exceeded timeout
Any help will be much appreciated.
https://ask.puppet.com/question/1308/command-exceeded-timeout/
By default, Puppet expects an exec command to finish within 300 seconds; after that it gives up waiting and treats the command as failed.
I know nothing about pm2, so I can't help you figure out why your command is taking so long. But if that duration is normal, then the link above suggests you add
timeout => 1800,
to your exec resource.
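For reference, a sketch of the resource with the longer timeout in place (timeout is a standard exec attribute; setting it to 0 disables the limit entirely):

exec { 'stop old application instance':
  command => '/usr/bin/pm2 delete /var/lib/application_folder/processes.json',
  timeout => 1800, # seconds; use 0 to wait forever
}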
I would recommend manually finding where the pm2 command comes from ($ which pm2) and then using the full path to the command instead of relying on the path attribute. Something like this:
exec { "stop old application instance":
cwd => "${install_dir}",
onlyif => "test -e '${install_dir}/${app_package_dir}/processes.json'",
user => 'root',
command => "/usr/bin/pm2 delete /var/lib/application_folder/processes.json",
logoutput => 'onfailure',
}
Notice the logoutput attribute, which shows the output of the command only when something breaks. I don't think you need to specify the group.
You can also try putting the following resource default in your site manifest; it disables the timeout for every exec resource:
Exec {
  timeout => 0,
}
I am currently hitting what is, for me, somewhat unintuitive behaviour in Puppet, most likely because I don't completely understand the Puppet ethos yet.
I have a simple puppetsimple.sh running on the puppet agent, applying configurations from the puppet master. This all runs smoothly and as expected.
Unintuitive (for me): when I introduce an error while setting up the master and then run puppetsimple.sh on the agent, it hits the error, notifies me of it, and continues to apply all the other changes for that configuration.
This effectively leaves the agent in a broken state, because it pushes ahead even when there is an error.
Is there a setting somewhere to say "hey, if you strike an error, stop, revert to how you were, and carry on your merry way"?
Take the example below: I am intentionally enabling an invalid conf file (.confX). I get notified of the error, but Puppet continues to populate "index.html" with "Hello World 3".
define a2ensite {
  exec { "a2ensite ${title}":
    path    => ['/bin', '/usr/bin', '/usr/sbin'],
    command => "a2ensite ${title}",
    notify  => Service['apache2'],
  }
}

class mysite {
  include apache

  file { '/etc/apache2/sites-available/mysite.example.org.conf':
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///files/mysite/mysite_apache.conf',
    notify => Service['apache2'],
  }

  a2ensite { 'mysite.example.org.confX': }

  file { ['/home/', '/home/www/', '/home/www/mysite.example.org']:
    ensure => directory,
    owner  => 'root',
    group  => 'root',
    mode   => '0755',
  }

  file { '/home/www/mysite.example.org/index.html':
    owner   => 'www-data',
    group   => 'www-data',
    mode    => '0755',
    content => 'Hello World 3',
  }
}
If one resource failing means that another resource should not be modified, then that is a dependency relationship that you need to model via require. A failing dependency will cause Puppet to skip the dependent resources.
But, in general, Puppet does not stop or roll back a run when it hits an error. If you need to roll back, it is on you to either revert to an older Puppet configuration or use some other capability to revert the node.
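A minimal sketch of that dependency for the example above: if index.html should only be managed once the site has been enabled, declare it, and Puppet will skip the file when the a2ensite resource fails.

file { '/home/www/mysite.example.org/index.html':
  owner   => 'www-data',
  group   => 'www-data',
  mode    => '0644',
  content => 'Hello World 3',
  require => A2ensite['mysite.example.org.confX'],
}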
Some disclosure:
I'm using a master/agent setup in which I own the agent but do not have permission to the master console. The puppetmaster is git-backed, and I control the source for the module(s) in question.
I have 2 relevant modules for my question. One of them, which appears to work just fine, ensures autofs is installed and has 2 file resources for auto.master and a custom auto.home to mount home directories.
#auto.home
#this file is used by auto.master to automount home directories
#from the nfs cluster when a user logs in.
* -fstype=nfs,rw,nosuid,soft <IPaddress>:/homedirs/&
In the module to add home directories, I'm creating users and deploying their public ssh keys via a file resource. This module "works" on systems when I comment out the class dependency and am not mounting /home over NFS, and it *sometimes* works when I deploy it as-is over NFS.
define local_user (
  $fullname,
  $username = $title,
  $userid,
  $gid = 9999,
  $homedir_mode = '0700',
) {
  $white_gid = $gid
  $homedir   = "/home/${username}"   # assumed home directory layout

  user { $username:
    ensure  => present,
    comment => $fullname,
    gid     => $white_gid,
    uid     => $userid,
    home    => $homedir,
    require => Group['white'],
  }

  exec { "chage -M 99999 ${username}":
    command     => "chage -M 99999 ${username}",
    path        => '/bin:/sbin:/usr/bin:/usr/sbin',
    # chage(1) only works on local users, not on LDAP users,
    # so make sure this is a local user before we try to
    # change their password expiration.
    onlyif      => "grep -q '^${username}:' /etc/passwd",
    subscribe   => User[$username],
    refreshonly => true,
  }

  file { $homedir:
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => $homedir_mode,
    require => User[$username],
  }

  file { "${homedir}/.ssh":
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => '0700',
    require => File[$homedir],
  }

  file { "${homedir}/.ssh/authorized_keys":
    ensure  => file,
    owner   => $username,
    group   => $white_gid,
    mode    => '0600',
    source  => "puppet:///modules/ssh_keys/${username}_authorized_keys",
    require => File["${homedir}/.ssh"],
  }
}
class ssh_keys {
  group { 'white':
    ensure  => present,
    gid     => 9999,
    require => Class['nfs_homedirs'],
  }

  #### add users below this line
  local_user { 'userA':
    fullname => 'userA',
    userid   => '123',
  }
}
Some things I'm puzzled by and could use expertise with:
In order for the NFS home directories to work at all, I had to run the module on a machine to create the users locally, then mount the root directory of the NFS home-directory export and create those users' folders, owned by their uid/gid, for autofs to actually work when they log in.
When the module fails to "work" against the NFS-mounted home directories, the error is 'Permission denied' when it tries to create the home folder. I've tried no_root_squash to combat the error, but to no avail. I have tried running the agent as root, as not-root via sudo, as not-root at all, etc.
Error:
/Stage[main]/Ssh_keys/Local_user[userA]/File[/home/userA]/ensure:
change from absent to directory failed: Could not set 'directory' on ensure:
Permission denied - /home/userA at
/app/puppet/conf/environments/puppet_dev/modules/ssh_keys/manifests/init.pp:80
It's seemingly harmless to put ensure => present statements on these directory and file resources. They're technically already created on the NFS share, but the way autofs seems to work is that it won't actually "mount" that user's share until they log in. It's not my expertise, but that's what I experience. When this module does run successfully, every user's home directory it creates shows up as a mount in the df output.
I suspect that there's something on the machine itself that's preventing this module from working the way it should. Knowing that there are probably 500 things that I could diff between a machine where this module runs clean and one where it doesn't, what are some places I should investigate?
Any assistance would be greatly appreciated.
The way auto.home works is to mount the directory when the user logs in. If the user hasn't logged in, no mount exists -- and thus your directory/file resources fail.
Personally, I wouldn't try creating home directories over an NFS mount. Besides, you don't want multiple servers trying to manage the same physical resources. Split this out to run only on the NFS server if possible, and run all the file resources related to the home directories there. Have the NFS clients just ensure NFS is configured and the local user accounts exist.
If you can't run Puppet on the NFS server, pick one server to mount it as a regular mount -- i.e. mount the root of the home-directory export so all the directories are visible. Have no_root_squash set as well. Then you should be able to have Puppet create the directories.
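A sketch of such a regular mount using Puppet's built-in mount type (the server name and export path here are assumptions):

mount { '/mnt/homedirs':            # mountpoint directory must already exist
  ensure  => mounted,
  device  => 'nfsserver:/homedirs', # hypothetical server and export
  fstype  => 'nfs',
  options => 'rw,soft',
}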
Also the ssh_authorized_key resource is handy. I use it often.
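For example (a sketch; the key name and material are placeholders), it manages entries in authorized_keys directly, so the .ssh file resources become unnecessary:

ssh_authorized_key { 'userA@laptop':  # hypothetical key name
  ensure => present,
  user   => 'userA',
  type   => 'ssh-rsa',
  key    => 'AAAAB3...',              # public key blob, without type or comment
}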
It sounds to me like SELinux is enforcing, which would cause a 'Permission denied' as described even if the right user/uid owns the directories. If SELinux is enforcing, you'll want to check whether NFS home directories are allowed. First, check by running:
getsebool use_nfs_home_dirs
If it comes back as use_nfs_home_dirs --> off, then you can either fix it manually with setsebool -P use_nfs_home_dirs 1, or use Puppet to manage this as well:
include selinux

selinux::boolean { 'use_nfs_home_dirs':
  ensure => 'on',
}
I am trying to develop a CakePHP application, and I am using Vagrant to run a testing environment. However, I was getting this error in the browser:
Warning (2):
session_start() [http://php.net/function.session-start]:
open(/var/lib/php/session/sess_speva7ghaftl8n98r9id5a7434, O_RDWR) failed:
Permission denied (13) [CORE/Cake/Model/Datasource/CakeSession.php, line 614]
I can get rid of the error by SSHing to the vm and doing
[vagrant@myserver ~]$ sudo su -
[root@myserver ~]# chown -R vagrant. /var/lib/php/session/
I don't want to have to do this every time I restart the vm, so I tried adding this to myserver.pp
exec { 'chown':
  command => 'chown -R vagrant. /var/lib/php/session/',
  path    => '/bin',
  user    => 'root',
}
but it gets an error while starting up the vm...
err:
/Stage[main]/Myserver/Exec[chown]/returns: change from notrun to 0 failed:
chown -R vagrant. /var/lib/php/session/
returned 1 instead of one of [0] at /tmp/vagrant-puppet/manifests/myserver.pp:35
I was unable to find any useful examples of how to use exec on the internet, and I have never used Vagrant or Puppet before, so the above code is just my best guess. I apologize if it is a simple fix to get this working.
I have verified with which chown inside the VM that the path is /bin, and the command is exactly the same as when I run it in the VM myself. I'm thinking it is the user that is causing the problem. Do I have that line right? Is it even possible to exec commands as root from a .pp file?
When using exec, you normally have to give the full path to the command you execute. So if you change your command to
exec { 'chown':
  command => '/bin/chown -R vagrant:vagrant /var/lib/php/session/',
  path    => '/bin',
  user    => 'root',
}
it should work imo.
However, it depends a lot on how you install your application. If the setup/start of the application is also managed with Puppet, you can manage the directory you're interested in with Puppet as well, like this
file { "/var/lib/php/session" :
ensure => directory,
group => "vagrant",
owner => "vagrant",
recurse => true,
}
before you start your app. This would be much more the Puppet way, as you then manage a resource instead of executing commands. However, /var/lib/... normally should not be owned by anyone other than root.
So you should maybe look into how your app is started and make it start as another user or as root. If it is started with an exec, you can add an additional attribute
user => root
to it, and that should also do the trick.
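A sketch of that last suggestion, with a purely hypothetical start command:

exec { 'start application':
  command => '/usr/bin/php /var/www/app/server.php', # hypothetical
  user    => 'root',
}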