Vagrant puppet change owner of folder in pp exec - puppet

I am trying to develop a CakePHP application, and I am using Vagrant to run a testing environment. However, I was getting this error in the browser
Warning (2):
session_start() [http://php.net/function.session-start]:
open(/var/lib/php/session/sess_speva7ghaftl8n98r9id5a7434, O_RDWR) failed:
Permission denied (13) [CORE/Cake/Model/Datasource/CakeSession.php, line 614]
I can get rid of the error by SSHing to the vm and doing
[vagrant@myserver ~]$ sudo su -
[root@myserver ~]# chown -R vagrant. /var/lib/php/session/
I don't want to have to do this every time I restart the vm, so I tried adding this to myserver.pp
exec { 'chown':
  command => 'chown -R vagrant. /var/lib/php/session/',
  path    => '/bin',
  user    => 'root'
}
but it gets an error while starting up the vm...
err:
/Stage[main]/Myserver/Exec[chown]/returns: change from notrun to 0 failed:
chown -R vagrant. /var/lib/php/session/
returned 1 instead of one of [0] at /tmp/vagrant-puppet/manifests/myserver.pp:35
I was unable to find any useful examples of how to use exec on the internet, and I have never used Vagrant or Puppet before, so the above code is just the best guess I could come up with, and I apologize if it is a simple fix to get this working.
I have verified using which chown within the vm that the path is /bin, and the command is exactly the same as when I run it in the vm myself. I'm thinking it is the user line that is causing the problem. Do I have that line right? Is it even possible to exec commands as root from a .pp file?

When using exec, you normally have to enter the full path to the command you execute. So if you change your command into
exec { 'chown':
  command => '/bin/chown -R vagrant:vagrant /var/lib/php/session/',
  path    => '/bin',
  user    => 'root'
}
it should work imo.
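Building on that, an idempotence guard keeps the exec from re-running (and from reporting spurious changes) on every vagrant up. A sketch, assuming GNU stat is available; the shell provider lets the $() substitution in the guard work:

```puppet
# "unless" makes the exec idempotent: it only fires when the session
# directory is not already owned by vagrant.
exec { 'chown-php-sessions':
  command  => '/bin/chown -R vagrant:vagrant /var/lib/php/session',
  unless   => 'test "$(stat -c %U /var/lib/php/session)" = vagrant',
  provider => shell,
  user     => 'root',
}
```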
However, it depends a lot on how you install your application. If the setup/start of the application is also managed with Puppet, you can also manage the directory you're interested in with Puppet, like this
file { "/var/lib/php/session" :
  ensure  => directory,
  group   => "vagrant",
  owner   => "vagrant",
  recurse => true,
}
before you start your app. This would be much more the Puppet way, as you then manage a resource instead of executing commands. However, normally /var/lib/... should not be owned by anyone other than root.
So you should maybe look into how your app is started and make it start with another user or as root. If it is started with an exec, you can add an additional property
user => root
to it and that should also do the trick.

Related

Squelch puppet state chown

I'm hoping to use puppet to manage my rc files (i.e. sharing configuration files between work and home). I keep my rc files in a subversion repository. Some machines I have sudo privileges on, some I don't. And none of the machines are on the same network.
I have a simple puppet file:
class bashResources ( $home, $svn ) {
  file { "$home/.bash" :
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d" :
    ensure => 'directory',
  }
  file { "$home/.bash/bashrc.d/bashrc" :
    ensure => present,
    target => "$home/$svn/rc/bashrc",
  }
}
node 'ubuntuwgu290' {
  class { 'bashResources':
    home => '/home/dshaw',
    svn  => 'mysvn',
  }
}
I have a simple config file that I'm using to squelch some errors:
[main]
report=false
When I run puppet, I get an annoying error about not being able to execute chown:
dshaw#ubuntuwgu290:~/mysvn/rc$ puppet apply rc.pp --config ./puppet.conf
Notice: Compiled catalog for ubuntuwgu290.maplesoft.com in environment production in 0.12 seconds
Error: Failed to apply catalog: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/state.yaml20170316-894-rzkggd
Error: Could not save last run local report: Operation not permitted @ rb_file_chown - /home/dshaw/.puppet/var/state/last_run_summary.yaml20170316-894-l9embs
I have attempted to squelch the error by adding reports=none to my config file, but it has not been effective.
How can I squelch these errors? Alternatively, is there a more lightweight tool for managing rc files?
Thanks,
Derek
The error is related to Puppet trying to manage its own metadata in /home/dshaw/.puppet, not any of the files enrolled in Puppet's catalog for management. This is not normally a problem, even when you run Puppet as an ordinary user. In fact, supporting this sort of thing is one of the reasons why per-user Puppet metadata exists.
The files that Puppet is trying to chown do not already belong to you (else Puppet would not be trying to chown them), but they should belong to you, where "you" means the puppet process's (e)UID and (e)GID. You might be able to solve the problem by just removing Puppet's state directory, and letting it rebuild it on the next run. Alternatively, you might be able to perform or arrange for a manual chown such as Puppet is trying to perform.
On the other hand, it's unclear how this situation arose in the first place, and some of the mechanisms I can imagine would render those suggestions ineffective.
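A minimal sketch of the first suggestion, assuming the Puppet 3.x-era per-user layout under ~/.puppet:

```shell
# Remove Puppet's per-user state so the next run rebuilds it owned by
# the current user. Only Puppet's own metadata lives here, so it is
# safe to delete; Puppet recreates anything it needs on the next run.
state_dir="$HOME/.puppet/var/state"
rm -rf "$state_dir"
mkdir -p "$state_dir"
```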

Cleaner way to restart daemontools services

In our product, we had created services using daemontools. One of my service looks like this,
/service/test/run
/service/test/log/run (has multilog command to log into ./main dir)
/service/test/log/main/..
The processes and their directories are all owned by root. Now there is a security requirement to change this:
Service should run in non-root user.
Log main directory should be readable only to user and groups.
For this, I have to change the 'run' file under 'log' directory. Also I need to change the permissions of 'main' directory under it.
Note that all these files under '/service' were installed by test-1.0-0.rpm. When I update my rpm, it overwrites the existing run file and I get an error like this,
multilog: fatal: unable to lock directory ./main: access denied
I know we shouldn't overwrite the 'run' file at run time. I have planned to follow these steps in my rpm script's %post section,
# Stop the service
svc -d /service/test/log
# Move the main directory aside
mv /service/test/log/main /service/test/log/main_old
# (the updated run file creates main with restricted permissions)
# Start the service
svc -u /service/test/log
In some articles, they suggested recreating the 'lock' file under 'log/main'. Is there any cleaner way of doing this without moving the 'main' directory? If not, is it safe to go with the above steps?

puppet module gets 'Permission denied' applying NFS-based home directories

Some disclosure:
I'm using a master/agent setup in which I own the agent but do not have permission to the master console. The puppetmaster is git-backed, and I control the source for the module(s) in question.
I have 2 relevant modules for my question. One of them, which appears to work just fine, ensures autofs is installed and has 2 file resources for auto.master and a custom auto.home to mount home directories.
#auto.home
#this file is used by auto.master to automount home directories
#from the nfs cluster when a user logs in.
* -fstype=nfs,rw,nosuid,soft <IPaddress>:/homedirs/&
In the module to add home directories, I'm creating users and deploying their public ssh keys via a file resource. This module "works" on systems when I comment out the Class dependency and I'm not mounting /home to NFS, and it sometimes works when I'm deploying it as-is over NFS.
define local_user(
  $fullname,
  $username = $title,
  $userid,
  $gid = 9999,
  $homedir_mode = 0700
) {
  $white_gid = $gid
  user { $username:
    ensure  => present,
    comment => $fullname,
    gid     => $white_gid,
    uid     => $userid,
    home    => $homedir,
    require => Group[ "white" ],
  }
  exec { "chage -M 99999 ${username}":
    command     => "chage -M 99999 ${username}",
    path        => "/bin:/sbin:/usr/bin:/usr/sbin",
    # chage(1) only works on local users, not on LDAP users,
    # so make sure this is a local user before we try to
    # change their password expiration.
    onlyif      => "grep -q '^${username}:' /etc/passwd",
    subscribe   => User[ $username ],
    refreshonly => true,
  }
  file { $homedir:
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => $homedir_mode,
    require => User[ $username ],
  }
  file { "$homedir/.ssh":
    ensure  => directory,
    owner   => $username,
    group   => $white_gid,
    mode    => 0700,
    require => File[ "$homedir" ],
  }
  file { "$homedir/.ssh/authorized_keys":
    ensure  => present,
    owner   => $username,
    group   => $white_gid,
    mode    => 0600,
    source  => "puppet:///modules/ssh_keys/${username}_authorized_keys",
    require => File["$homedir/.ssh"],
  }
}
class ssh_keys {
  group { "white":
    ensure  => present,
    gid     => 9999,
    require => Class["nfs_homedirs"],
  }
  #### add users below this line
  local_user { "userA" : fullname => "userA", userid => "123" }
}
Some things I'm puzzled by and could use expertise with:
In order for the NFS home directories to work at all, I had to run the module on a machine to create the users locally, then mount the root of the NFS home-directory share and create those users' folders owned by their uid/gid so that autofs would actually work when they log in.
When the module fails to "work" against the NFS-mounted home directories, the error is 'Permission denied' when it tries to create the home folder. I've tried no_root_squash to combat the error, but to no avail. I have tried running the agent as root, as not-root via sudo, as not-root at all, etc.
Error:
/Stage[main]/Ssh_keys/Local_user[userA]/File[/home/userA]/ensure:
change from absent to directory failed: Could not set 'directory' on ensure:
Permission denied - /home/userA at
80:/app/puppet/conf/environments/puppet_dev/modules/ssh_keys/manifests/init.pp
It's seemingly harmless to put ensure => present statements on these directories and file resources. They're technically already created on the NFS share, but the way autofs seems to work is that it won't actually "mount" that user's share until they login. It's not my expertise, but that's what I experience. When this module does run successfully, every user's home directory it creates shows as a mount in the df output.
I suspect that there's something on the machine itself that's preventing this module from working the way it should. Knowing that there are probably 500 things that I could diff between a machine where this module runs clean and one where it doesn't, what are some places I should investigate?
Any assistance would be greatly appreciated.
The way auto.home works is to mount the directory when the user logs in. If the user hasn't logged in, no mount exists -- and thus your directory/file resources fail.
Personally I wouldn't try creating home directories over an nfs mount. Plus you don't want multiple servers trying to manage the same physical resources. Split this out to run only on the NFS server if possible and run all your file resources related to the home directories there. Have the nfs clients just ensure nfs is configured and the local user accounts exist.
If you can't run puppet on the NFS server, pick 1 server to mount it as a regular mount -- i.e. mount the root of the home dirs section so they are all visible. Have no_root_squash set also. Then you should be able to have puppet create the directories.
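The split suggested above might be sketched like this; the class names are hypothetical placeholders for modules you would write:

```puppet
# NFS server owns the physical home directories; clients only ensure
# autofs is configured and the local accounts exist.
node 'nfs-server' {
  include homedirs::server   # file resources for the home dirs
}
node /^client\d+$/ {
  include homedirs::client   # autofs config + local user accounts
}
```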
Also the ssh_authorized_key resource is handy. I use it often.
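For reference, the built-in ssh_authorized_key type looks like this; the user name and key value are placeholders:

```puppet
# Manages a single line in ~userA/.ssh/authorized_keys.
ssh_authorized_key { 'userA@workstation':
  ensure => present,
  user   => 'userA',
  type   => 'ssh-rsa',
  key    => 'AAAAB3Nza...base64-key-body...',
}
```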
It sounds to me like SELinux is enforcing, which would cause a 'Permission denied' as described even if the right user/uid owns the directories. If SELinux is enforcing, you'll want to check whether NFS home directories are allowed. First, check by running:
getsebool use_nfs_home_dirs
If it comes back as use_nfs_home_dirs --> off, then you can either manually correct this using setsebool -P use_nfs_home_dirs 1, or you can use puppet to manage this as well:
include selinux
selinux::boolean { 'use_nfs_home_dirs':
  ensure => 'on',
}

Authentication error from server: SASL(-13): user not found: unable to canonify

Ok, so I'm trying to configure and install svnserve on my Ubuntu server. So far so good, up to the point where I try to configure sasl (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (also installed it as a startup script with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has following configuration (to be found in /var/svn/myrepo/conf/svnserve.conf) (I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to sasl, I created a svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because it existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
Which also asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME@my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all in what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything but that I might need to move the svn.conf file to another location - for example, the install location of subversion itself. which svn results in /usr/bin/svn, thus I moved the svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by root:root with mode -rw-------, svnserve falls back to the default values.
You will not be warned by any log file entry.
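Since this thread is full of Puppet anyway, a hedged sketch of managing that readability; the user/group "svn" is an assumption, substitute whatever account svnserve runs as:

```puppet
# Keep the SASL config world-readable so the svnserve process owner
# can always open it and does not silently fall back to defaults.
file { '/etc/sasl2/svn.conf':
  ensure => file,
  owner  => 'svn',
  group  => 'svn',
  mode   => '0644',
}
```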
see section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(it took me more than 3 days to understand both svnserve-sasl-ldap and this pitfall at the same time..)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
I expect this is caused by the way svnserve handles the result of the SASL call
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, an access failure on the config file is not handled specially by svnserve; only SASL_OK versus error is distinguished.
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at that default path and got the permissions right, it worked. It looks like svnserve ignores, or never sees, the configured sasldb path.
There was another suggestion that rebooting solved the problem but that option was not available to me.

Puppet Dashboard permissions: Permission denied - /var/lib/puppet/reports/

I'm setting up the Puppet Dashboard for the first time. I have it running with the passenger module in Apache.
sudo rake RAILS_ENV=production reports:import
When I run this command, the tasks appear in the dashboard as failed.
630 new failed tasks
The details for each failure look something like this:
Importing report 201212270754.yaml at 2012-12-27 09:21 UTC
Permission denied - /var/lib/puppet/reports/rb-db1/201212270754.yaml
Backtrace
/usr/share/puppet-dashboard/app/models/report.rb:86:in `read'
/usr/share/puppet-dashboard/app/models/report.rb:86:in `create_from_yaml_file'
The report files were owned by puppet:puppet with a 640 permission by default.
I ran chmod a+rw on the reports directory, but I still get the same errors.
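For what it's worth, chmod a+rw on the directory alone does not change the 640 report files inside it; a recursive chmod with capital X covers both. A sketch on a scratch directory standing in for /var/lib/puppet/reports:

```shell
# Scratch directory standing in for /var/lib/puppet/reports.
reports=$(mktemp -d)
touch "$reports/201212270754.yaml"
chmod 640 "$reports/201212270754.yaml"

# -R recurses into the files; capital X adds execute (search)
# permission to directories only, leaving the files non-executable.
chmod -R a+rX "$reports"
stat -c '%a' "$reports/201212270754.yaml"   # prints 644
```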
Any ideas on what I might be doing wrong here?
If you are running the puppet-dashboard server as the puppet-dashboard user, it cannot read report files owned by puppet, and you will see this error. My system is using /usr/share/puppet-dashboard/script/server on CentOS 6.4 with the puppet-dashboard-1.2.23-1.el6.noarch rpm from puppetlabs.
[root@hadoop01 puppet-dashboard]# cat /etc/sysconfig/puppet-dashboard
#
# path to where you installed puppet dashboard
#
DASHBOARD_HOME=/usr/share/puppet-dashboard
#DASHBOARD_USER=puppet-dashboard
DASHBOARD_USER=root
DASHBOARD_RUBY=/usr/bin/ruby
DASHBOARD_ENVIRONMENT=production
DASHBOARD_IFACE=0.0.0.0
DASHBOARD_PORT=3000
Edit the file as above, then run:
/etc/init.d/puppet-dashboard restart && /etc/init.d/puppet-dashboard-workers restart
my puppet-dashboard version is 1.2.23
