Chef: Enabling Jenkins security causes plugin installation to fail

I am currently using Chef to deploy a Jenkins instance on a managed node. I am using the following public supermarket cookbook: https://supermarket.chef.io/cookbooks/jenkins .
I am using the following code in my recipe file to enable authentication:
jenkins_script 'activate global security' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*
    import hudson.security.*

    def instance = Jenkins.getInstance()

    def hudsonRealm = new HudsonPrivateSecurityRealm(false)
    hudsonRealm.createAccount("Administrator", "Password")
    instance.setSecurityRealm(hudsonRealm)
    instance.save()

    def strategy = new GlobalMatrixAuthorizationStrategy()
    strategy.add(Jenkins.ADMINISTER, "Administrator")
    instance.setAuthorizationStrategy(strategy)
    instance.save()
  EOH
end
This works great for setting up security on the instance the first time the recipe runs on the managed node: it creates an Administrator user with administrator permissions on the Jenkins server. The same recipe also installs plugins.
Once security has been enabled, installation of plugins which do not yet exist (but are specified to be installed) fails:
ERROR: anonymous is missing the Overall/Read permission
I assume this error is related to the newly created Administrator account: Chef is attempting to install the plugins as the anonymous user rather than as Administrator. Is there anything I should set in my recipe file to work around this permissions issue?
The goal is that if a plugin is upgraded to an undesired version or uninstalled completely, running the recipe will reinstall or roll back the change. Currently this does not appear to be possible while security is enabled on the Jenkins instance.
EDIT: It should also be noted that each time I need to repair plugins this way, I currently have to disable security and then run the entire recipe (plugin installation + security enablement).
Thanks for any help!

The jenkins_plugin resource doesn't appear to expose any authentication options, so you'll probably need to build your own resource. If you dive into the cookbook's code you'll see that the underlying executor layer does support authentication (and a whole bunch of other things), so it might be easy to do in a copy-fork of just that resource (and send us a patch).
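For a quick workaround without forking: the cookbook's executor reads credentials from node.run_state, so you can hand it the admin account created above before any jenkins_plugin resources run. A minimal sketch, assuming the Administrator account from the question (and note the caveat in the next answer: this username/password mechanism relies on the deprecated remoting protocol):

# Hand the cookbook's executor the admin credentials before plugin resources.
# Assumes the Administrator/Password account created in the question; relies
# on the (deprecated) remoting protocol.
node.run_state[:jenkins_username] = 'Administrator'
node.run_state[:jenkins_password] = 'Password'

# Subsequent plugin resources now authenticate as Administrator.
jenkins_plugin 'greenballs' # hypothetical example plugin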

We ran into this because we had previously been defining :jenkins_username and :jenkins_password, but those only work with the remoting protocol, which is being deprecated in favor of the REST API accessed via SSH or HTTPS, and which defaults to DISABLED in newer releases.
We ended up combining the logic from @StephenKing's cookbook with the information from chef-cookbooks/jenkins and a GitHub issue comment on that repo to get plugin installation working after enabling authentication via Active Directory on our instances (we used SSH).
We basically pulled the example from https://github.com/TYPO3-cookbooks/jenkins-chefci/blob/e1b82e679074e96de5d6e668b0f10549c48b58d1/recipes/_jenkins_chef_user.rb, removed the portion that automatically generated the key if it didn't exist (our instances stick around and need to be mostly deterministic), and replaced the File.read with a lookup in our encrypted data bag (or functional equivalent).
recipes/authentication.rb
require 'aws-sdk'
require 'net/ssh'
require 'openssl'

ssm = Aws::SSM::Client.new(region: 'us-west-2')

unless node.run_state[:jenkins_private_key]
  key_path = node['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path']
  key_contents = ssm.get_parameter(name: key_path, with_decryption: true).parameter.value
  key = OpenSSL::PKey::RSA.new key_contents

  # We use `log` here so we can assert the correct path was queried without
  # exposing or hardcoding the secret in our tests
  log 'Successfully read existing private key from ' + key_path

  public_key = [key.ssh_type, [key.to_blob].pack('m0'), 'auto-generated key'].join(' ')

  # Create the Chef Jenkins user with the public key
  jenkins_user 'chefjenkins' do
    id 'chefjenkins' # This also matches up with an Active Directory user
    full_name 'Chef Client'
    public_keys [public_key]
  end

  # Set the private key on the Jenkins executor
  node.run_state[:jenkins_private_key] = key.to_pem
end
# This was our previous implementation that stopped working recently
# jenkins_password = ssm.get_parameter(name: node['jenkins_wrapper']['secrets']['chefjenkins']['path'], with_decryption: true).parameter.value
# node.run_state[:jenkins_username] = 'chefjenkins' # ~FC001
# node.run_state[:jenkins_password] = jenkins_password # ~FC001
recipes/enable_jenkins_sshd.rb
port = node['jenkins']['ssh']['port']

jenkins_script 'configure_sshd_access' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*

    def instance = Jenkins.getInstance()
    def sshd = instance.getDescriptor("org.jenkinsci.main.modules.sshd.SSHD")
    def currentPort = sshd.getActualPort()
    def expectedPort = #{port}

    if (currentPort != expectedPort) {
      sshd.setPort(expectedPort)
    }
  EOH
  not_if "grep #{port} /var/lib/jenkins/org.jenkinsci.main.modules.sshd.SSHD.xml"
  notifies :execute, 'jenkins_command[safe-restart]', :immediately
end
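Once converged, you can sanity-check the endpoint with the Jenkins CLI over SSH, which lists the available commands when authentication works. A hypothetical spot check (adjust the host, port, and key path to your setup):

ssh -i ~/.ssh/chefjenkins_id_rsa -p 8222 chefjenkins@jenkins.example.com help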
attributes/default.rb
# Enable/disable SSHd.
# If the port is 0, Jenkins will serve SSHd on a random port.
# If the port is > 0, Jenkins will serve SSHd on that port specifically.
# If the port is -1, SSHd is turned off.
default['jenkins']['ssh']['port'] = 8222
# This happens to be our lookup path in AWS SSM, but
# this could be a local file on Jenkins or in databag or wherever
default['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path'] = 'jenkins_wrapper.users.chefjenkins.id_rsa'
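With the key in run_state and SSHd enabled, plugin repair becomes a matter of recipe ordering. A minimal sketch (the jenkins_wrapper recipe names reflect our layout; the plugin name and version are placeholders):

include_recipe 'jenkins_wrapper::enable_jenkins_sshd'
include_recipe 'jenkins_wrapper::authentication'

# From here on, jenkins_plugin resources run as chefjenkins over SSH, so
# re-running the recipe can reinstall or roll back plugins even with
# security enabled.
jenkins_plugin 'git' do
  version '3.9.1' # placeholder version to pin
  notifies :execute, 'jenkins_command[safe-restart]', :delayed
end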

Related

SSH on console google cloud permission denied (publickey) with google-cloud-sdk file error

I'm new to cloud computing and I'm trying to use SSH to control my VM instance, but when I use the command (with debug)
gcloud compute ssh my-instance-name --verbosity=debug
it shows this error:
DEBUG: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Traceback (most recent call last):
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
    resources = calliope_command.Run(cli=self, args=args)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 784, in Run
    resources = command_instance.Run(args)
  File "/google/google-cloud-sdk/lib/surface/compute/ssh.py", line 262, in Run
    return_code = cmd.Run(ssh_helper.env, force_connect=True)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/command_lib/util/ssh/ssh.py", line 1256, in Run
    raise CommandError(args[0], return_code=status)
CommandError: [/usr/bin/ssh] exited with return code [255].
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I tried to solve the problem using this link, but it didn't work:
https://groups.google.com/forum/#!topic/gce-discussion/O-c10TM4ZLM
SSH error code 255 is a general error returned by GCP. You can try one of the following options.
1. Wait a few minutes and try again. It is possible that:
The instance has not finished starting up.
Metadata for SSH keys has not finished being propagated to the project or instance.
The Guest Environment has not yet read the SSH keys metadata.
2. Verify that SSH access to the instance is not blocked by a firewall.
gcloud compute firewall-rules list | grep "tcp:22"
If necessary, create a firewall rule to allow TCP 22 for a given VPC network, subnet, or instance tag.
gcloud compute firewall-rules create ssh-allow-incoming --priority=0 --allow=tcp:22 --network=[VPC-Network]
3. Make sure that the root volume is not out of disk space. Messages like the following will be visible in the console log when it is out of disk space:
...No space left on device...
...google-accounts: ERROR Exception calling the response handler. [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']...
4. Make sure that the instance has not run out of memory.
5. Verify that temporary SSH keys metadata is set for either the project or the instance; see the commands below.
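For example, to inspect the SSH key metadata at both levels (the instance name is a placeholder):

gcloud compute project-info describe --format="value(commonInstanceMetadata.items)"
gcloud compute instances describe my-instance-name --format="value(metadata.items)"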
Finally, you could fall back to any of GCP's supported or third-party connection methods.
Assuming you have the correct IAM permissions, it is much easier (and preferred by GCP) to use OS Login to SSH into an instance, rather than managing SSH keys.
In Cloud Shell, enter this:
gcloud compute --project PROJECTID project-info add-metadata --metadata enable-oslogin=TRUE
This enables OS Login on all instances in the project; instead of using SSH keys, GCP will check your IAM permissions and authenticate based on those.
If you are not the project owner, make sure you have the roles/compute.osLogin or roles/compute.osAdminLogin role in Cloud IAM.
Once enabled, try SSHing into the instance again using the command you posted.
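If you prefer to enable it on a single instance instead of the whole project, the per-instance equivalent is:

gcloud compute instances add-metadata my-instance-name --metadata enable-oslogin=TRUE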
This is not a concrete answer, but I think you should first set your project:
gcloud config set project PROJECT_ID
Then
gcloud compute ssh my-instance-name --verbosity=debug
This link would be useful:
https://cloud.google.com/sdk/gcloud/reference/compute/ssh

Puppet error when using classes

I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class, New Relic for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services, etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that you did indeed install the module such that such a file exists in your module path, and make sure that it is readable by the puppetmaster process; see the checks below.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
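A quick way to verify both points, assuming the default module location (adjust paths to your setup):

ls -l /etc/puppet/modules/newrelic/manifests/server/linux.pp
puppet --configprint modulepath
# Readability as the master's user (assuming it runs as "puppet"):
sudo -u puppet cat /etc/puppet/modules/newrelic/manifests/server/linux.pp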
Find out where you are actually installing the Puppet Forge modules, perhaps using a Unix utility like locate.
Then look in /etc/puppet/puppet.conf at the "basemodulepath" and check that the place where the modules are installed is in that path.
Here is my basemodulepath:
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules.

Authentication error from server: SASL(-13): user not found: unable to canonify

OK, so I'm trying to install and configure svnserve on my Ubuntu server. So far so good, up to the point where I try to configure SASL (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (I also installed it as a startup script, with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has the following configuration, found in /var/svn/myrepo/conf/svnserve.conf (comments omitted):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to sasl, I created a svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because the folder existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
It asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location, for example the install location of Subversion itself. which svn results in /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw------- (600), svnserve falls back to the default values.
You will not be warned by any log file entry.
see section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(it took me more than 3 days to understand both svnserve-sasl-ldap and this pitfall at the same time..)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
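A minimal fix, assuming svnserve runs as user svn (substitute the user/group your svnserve actually runs as):

chgrp svn /etc/sasl2/svn.conf /etc/my_sasldb
chmod 640 /etc/sasl2/svn.conf /etc/my_sasldb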
I expect this is caused by the SASL API for this call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve has no specific handling for the file-access failure; only OK or an error is expected...
I looked in /var/log/messages and found:
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at that location and got the permissions right, it worked. It looks like svnserve ignores, or does not use, the configured sasldb_path.
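A sketch of that workaround, reusing the saslpasswd2 invocation from the question against the default /etc/sasldb2 path (the svn group is an assumption; use whatever account svnserve runs as):

saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
chgrp svn /etc/sasldb2
chmod 640 /etc/sasldb2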
There was another suggestion that rebooting solved the problem but that option was not available to me.

Groovy: Antbuilder fileset is not created (launched from Jenkins)

I have following code in my script:
def ant_fs = new AntBuilder()
def fs = ant_fs.fileset(dir: <path>)
fs.each {
    println("Fileset item: $it")
}
When I launch it from Maven (mvn ... on the command line) or from IntelliJ IDEA, I see that the fileset object is initialized successfully (I see the correct file paths).
When I launch this code via Jenkins, the fs object is not created, but I do not see any exception in the output.
Could you please help me resolve the issue?
Thanks in advance!
Note: I have the Surefire plugin for Maven 2.
It looks like this issue was caused by incorrect user settings for the Jenkins agent.
I set the Jenkins service (on a Windows host) to run as Administrator, and my script started to work. The cause was that I work with a shared folder on another host that requires authentication. I had set up authentication on that host for the Administrator account, but by default Jenkins launches tests as the System account.
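For reference, the service account can be switched from an elevated command prompt (the service name jenkins is an assumption; check yours with sc query):

sc config jenkins obj= ".\Administrator" password= "PASSWORD"
net stop jenkins
net start jenkins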

Puppet not recognising my module

I am trying to create a custom provider for package, but for some reason I keep getting:
err: Could not run Puppet configuration client: Parameter provider failed: Invalid package provider 'piprs' at /usr/local/src/ops/services/puppet/modules/test/manifests/init.pp:5
I have added pluginsync=true to puppet.conf on both the client and the server. I have created the following .rb file at modules/test/lib/puppet/provider/package/piprs.rb. I am basically trying to create a custom provider for the package resource type.
#require 'puppet/provider/package'

Puppet::Type.type(:package).provide(:piprs,
  :parent => ::Puppet::Provider::Package) do

  commands : pip => "/usr/local/bin/pip"

  desc "Python packages via `pip`."

  def create
    pip "freeze"
  end

  def destroy
  end

  def exists?
  end
end
In puppet.conf, there is the following source attribute:
pluginsource = puppet://puppet/plugins
I am not sure what it is. If you need any more details, please post a comment.
First things first: you do realize there is already a Python pip provider in core?
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/package/pip.rb
If that isn't what you want, then let's move on...
For starters, try your module without a Puppet master; this is going to be better for development anyway. You need to make sure Ruby can find the library path:
export RUBYLIB=<path_to_module>/lib
Then, try writing a small test in a .pp file:
package { "mypackage": provider => "piprs" }
And run it locally:
puppet apply mytest.pp
This will rule out a code bug in your provider versus a plugin sync issue.
I notice there is a space between the colon and the command name; that isn't your problem, is it?
commands : pip => "/usr/local/bin/pip"
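For reference, the conventional form has no space between the colon and the name:

commands :pip => "/usr/local/bin/pip"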
If you can get this working without a puppetmaster, your problem is sync related.
There are a couple of things that can go wrong - make sure the file is sync'd properly on the client:
ls /var/lib/puppet/lib/puppet/provider/package
You should see the piprs.rb file there. If it is not there, you may need to make sure your libdir is set correctly:
puppet --configprint libdir
This should point to /var/lib/puppet/lib in most cases.
