If a client node gets compromised, can it ask the Puppet master for anything, like the account passwords of the other clients?
The manifest is secure. So are variables declared inside the manifest.
Things that may need security precautions:
- files that are available via source => "puppet:///..."
- hiera values, if your hierarchy relies on facts such as hostname
The background is that an agent can impersonate another one only to a certain extent, i.e. by manipulating the hostname and fqdn facts, etc.
The certname fact cannot be overridden, so the master will never use a "foreign" node block for a rogue agent. Make sure that Hiera only relies on $certname. Try configuring the built-in fileserver to protect private files from rogue agents via auth.conf.
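As an illustration, a per-node private mount can be sketched in fileserver.conf so that each agent only ever sees files under a directory derived from its own certname (mount name and path are illustrative; on newer Puppet versions the access rules live in auth.conf instead):

    # fileserver.conf (sketch)
    [private]
        # %H expands to the client's full certname, so each agent
        # can only fetch files from its own directory
        path /etc/puppetlabs/code/private/%H
        allow *

An agent would then reference source => "puppet:///private/somefile" and receive only its own copy.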
I've found my answer. By default, a node can only access its own resources, so it can't query anything else.
As part of a custom Puppet fact I need to make a database query to fetch some dynamic data. This data will then be used by some resource elsewhere in the Puppet manifests. However, to make the database connection I need to be able to read some encrypted data stored within Hiera (a password), and I'm not sure how to access this data from within Ruby. Perhaps it is not even possible, since the fact will run agent-side whilst Hiera, used when compiling the catalogue, runs server-side. However, I am currently making the assumption that I can access Hiera using something like the following:
Facter.add(:metadata) do
  setcode do
    database_password = Hiera.lookup('profile::runner::agent::database_password')
    # make the DB connection and run the query...
    make_database_query_and_return_result_as_hash(database_password, Facter.value(:hostname))
  end
end
Is it possible to access hiera data from within a fact this way? At present there is a long feedback loop to test this (something we're working on to reduce), so I'd appreciate being pointed in the right direction.
Facter runs on the target node at the beginning of a Puppet run and sends the facts to the Puppet server, which is where any Hiera lookups are done. So that will never work: the fact runs before any Hiera lookups, and on a different machine.
The way to do this is not as a fact but as a custom function: a custom function is Ruby code that runs on the Puppet server at the time of catalog compilation.
https://puppet.com/docs/puppet/7/lang_write_functions_in_puppet.html
And if you're querying the Puppet DB for information you're already on the localhost so authentication should be easy. https://puppet.com/docs/puppetdb/6/api/query/tutorial.html
This forge module might do what you want https://forge.puppet.com/modules/dalen/puppetdbquery
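A minimal sketch of the custom-function approach in the Puppet language (the function name and lookup key are illustrative; in practice the actual DB query would be done in a Ruby function or via the puppetdbquery module):

    # modules/profile/functions/runner_metadata.pp (illustrative path)
    function profile::runner_metadata(String $hostname) >> Hash {
      # This runs on the Puppet server during catalog compilation,
      # so lookup() can see Hiera data, including encrypted values.
      $password = lookup('profile::runner::agent::database_password')
      # ... perform the query here using $password ...
      { 'hostname' => $hostname }
    }

The key point is that, unlike a fact, this code executes server-side where Hiera is available.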
I am facing an issue passing DB credentials to agents for custom facts.
I am unable to fetch the credentials from Hiera with puppet lookup on the agent.
That's to be expected. puppet lookup performs a local lookup, not a lookup on the server. As such, it generally is not useful on agent nodes.
It's unclear how exactly you have in mind to use these DB credentials, but "for custom facts" suggests that you want agents to perform queries on local databases as part of the computation of the values of some of your custom facts. There are at least these alternatives that might work better:
Have the server perform the queries and expose the results as class variables, instead of making agents provide the same data as facts.
Embed the required credentials in the custom fact implementation(s).
If in fact the queries are to be performed against a central database instead of a local one, then you could also consider a variation on the first option, in which you set up a custom Hiera back end that uses the database as its data source.
The question may sound odd, but I have a worst case scenario.
My application server is at http://10.10.10.10/app (call it app-server) and the Apache HTTP server is at http://some.dns.com/app (call it http-server). They are two separate servers.
I know app-server shouldn't be directly accessible publicly, but let's assume it is. Shibboleth is installed on http-server, securing the path http://some.dns.com/app/secure, while one servlet is mapped to get attributes from the /secure path.
If someone manages to create a fake Apache server (call it fake-http-server) that also points to app-server, then fake-http-server has direct access to the /secure path, and that server can manually send Shibboleth-like attributes and log in to the system without protection.
My question is: is there a mechanism in Shibboleth that lets me check the Shibboleth session in my application itself, not only in the HTTP layer?
The mod_shib Apache module sets environment variables by default. These variables cannot be spoofed by a proxying Apache server.
From the docs:
The safest mechanism, and the default for servers that allow for it, is the use of environment variables. The term is somewhat generic because environment variables don't necessarily always imply the actual process environment in the traditional sense, since there's often no separate process. It really refers to a set of controlled data elements that the web server supplies to applications and that cannot be manipulated in any way from outside the web server. Specifically, the client has no say in them.
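For example, a typical Apache configuration that refuses to serve the protected path without a valid Shibboleth session looks like the following (the location is taken from the question; the rest uses standard mod_shib directives):

    <Location /app/secure>
        AuthType shibboleth
        ShibRequestSetting requireSession 1
        Require shib-session
    </Location>

With requireSession enabled, the attributes are only exported once the SP has validated the assertion, so the backing application can trust them as long as only the real http-server can reach it.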
If you don't trust the Apache webserver, you can parse the SAML assertion in your code and validate the signatures in the assertion using the certificate provided by the Identity Provider (IdP) making the SAML assertion. But checking signatures is difficult and you need to deal with cases like key rotation and how to handle new certificates being used by the IdP. Shibboleth handles these very difficult and important tasks for you.
I am using TACACS+ to authenticate Linux users using pam_tacplus.so PAM module and it works without issues.
I have modified the pam_tacplus module to meet some of my custom requirements.
I know that by default TACACS+ does not have any means to support Linux groups or access-level control over Linux bash commands. However, I was wondering whether some information could be passed from the TACACS+ server side to the pam_tacplus.so module, which could then be used to allow/deny access or modify the user's group on the fly (from the PAM module itself).
Example: if I could pass the priv-lvl number from the server to the client, it could be used for some decision making in the PAM module.
PS: I would prefer a method that involves no modification on the server side (code); all modifications should be done on the Linux side, i.e. in the pam_tacplus module.
Thanks for any help.
Eventually I got it working.
Issue 1:
The issue I faced was that there is very little documentation available on configuring a TACACS+ server for a non-Cisco device.
Issue 2:
The tac_plus version that I am using
tac_plus -v
tac_plus version F4.0.4.28
does not seem to support the
service = shell protocol = ssh
option in the tac_plus.conf file.
So eventually I used
service = system {
    default attribute = permit
    priv-lvl = 15
}
On the client side (pam_tacplus.so),
I sent the AVP service=system in the authorization phase (pam_acct_mgmt), which forced the server to return the priv-lvl defined in the configuration file, which I then used to decide the privilege level of the user.
NOTE: Some documentation mentions that service=system is not used anymore, so this option may not work with Cisco devices.
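The client-side PAM configuration for this setup might look roughly like the following (server address and shared secret are placeholders; the options shown are pam_tacplus's standard server/secret/service/protocol parameters):

    # /etc/pam.d/sshd (fragment, sketch)
    auth     sufficient   pam_tacplus.so server=192.0.2.10 secret=testkey123
    # the account phase sends the authorization request; service=system
    # makes the server return the priv-lvl attribute configured above
    account  sufficient   pam_tacplus.so server=192.0.2.10 secret=testkey123 service=system protocol=ip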
HTH
Depending on how you intend to implement this, PAM may be insufficient to meet your needs. The privilege level from TACACS+ isn't part of the 'authentication' step, but rather the 'authorization' step. If you're using pam_tacplus, then that authorization takes place as part of the 'account' (aka pam_acct_mgmt) step in PAM. Unfortunately, however, *nix systems don't give you a lot of ability to do fine grained control here -- you might be able to reject access based on invalid 'service', 'protocol', or even particulars such as 'host', or 'tty', but probably not much beyond that. (priv_lvl is part of the request, not response, and pam_tacplus always sends '0'.)
If you want to vary privileges on a *nix system, you probably want to work within that environment's capabilities. My suggestion would be to use groups as a means of producing a sort of 'role-based' access control. If you want these to exist on the TACACS+ server, then you'll want to introduce custom AVPs that are meaningful, and then associate those with the user.
You'll likely need an NSS (name service switch) module to accomplish this -- by the time you get to PAM, OpenSSH, for example, will have already determined that your user is "bogus" and sent along a similarly bogus password to the server. With an NSS module you can populate 'passwd' records for your users based on AVPs from the TACACS+ server. More details on NSS can be found in glibc's documentation for "Name Service Switch".
I have the following setup:
A server (centOS) with git and a repository for a project on the same server.
What I need is to be able to pull from the repository without being asked for a password (because it is annoying).
Note: I am logged as root when I pull.
Can anyone help me with that?
There are a few options, depending on what your requirements are, in particular your security needs. For both HTTP and SSH, there is password-less, or password required access.
HTTP
==============
Password-Less
Useful for fetch-only requirements; by default push is disabled. Perfect if anonymous cloning is the intention. You definitely shouldn't enable push for this type of configuration. The man page for git-http-backend contains good information; an online copy is at http://www.kernel.org/pub/software/scm/git/docs/git-http-backend.html. It provides an example of how to configure apache for this.
User/password in .netrc or url embedded
where .netrc files are used in the form:
machine <hostname> login <username> password <password>
And embedded urls would be in the form:
http://user:pass@hostname/repo
Since git won't do auth for you, you will need to configure a webserver such as apache to perform the auth before passing the request on to the git tools. Also keep in mind that the embedded method is a security risk even if you use https, since the credentials are part of the URL being requested.
If you want to be able to pull non-interactive, but prevent anonymous users from accessing the git repo, this should be a reasonably lightweight solution using apache for basic auth and preferably the .netrc file to store credentials. As a small gotcha, git will enable write access once authentication is being used, so either use anonymous http for read-only, or you'll need to perform some additional configuration if you want to prevent the non-interactive user from having write access.
See:
httpd.apache.org/docs/2.4/mod/mod_auth_basic.html for more on configuring basic auth
www.kernel.org/pub/software/scm/git/docs/git-http-backend.html for some examples on the apache config needed.
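Putting those two references together, an Apache configuration for basic-auth-protected smart HTTP might be sketched like this (paths are illustrative and distribution-dependent; the git-http-backend man page has the authoritative version):

    SetEnv GIT_PROJECT_ROOT /srv/git
    SetEnv GIT_HTTP_EXPORT_ALL
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
    <LocationMatch "^/git/">
        AuthType Basic
        AuthName "Private Git"
        AuthUserFile /etc/apache2/git.htpasswd
        Require valid-user
    </LocationMatch>

A non-interactive client then stores the matching credentials in ~/.netrc so the pull never prompts.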
SSH
==============
Passphrase-Less
This opens up security issues, since anyone who gets hold of the ssh private key can now update the remote git repo as this user. If you want to use this non-interactively, I'd recommend installing something like gitolite to make it a little easier to ensure that those with the ssh private key can only pull from the repo, and that a different ssh key pair is required to update it.
See github.com/sitaramc/gitolite/ for more on gitolite.
stromberg.dnsalias.org/~strombrg/ssh-keys.html - for creating passphrase-less ssh keys.
May also want to cover managing multiple ssh keys: www.kelvinwong.ca/2011/03/30/multiple-ssh-private-keys-identityfile/
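Generating such a passphrase-less key might look like this (the key path and host alias are illustrative):

```shell
# create an ed25519 key pair with an empty passphrase (-N '')
ssh-keygen -t ed25519 -N '' -f ~/.ssh/git_pull_key -q

# point ssh at that key for the git host via ~/.ssh/config:
#   Host gitserver
#       IdentityFile ~/.ssh/git_pull_key
```

After adding the .pub file to the server's authorized_keys (or to gitolite), a cron or root 'git pull' runs without any prompt.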
Passphrase protected
Can use ssh-agent to unlock on a per-session basis, only really useful for interactive fetching from git. Since you mention root and only talk about performing 'git pull', it sounds like your use case is non-interactive. This is something that might be better combined with gitolite (github.com/sitaramc/gitolite/).
Summary
==============
Using something like gitolite will abstract a lot of the configuration away for SSH-type setups, and is definitely recommended if you think you might have additional repositories or need to specify different levels of access. Its logging and auditing are also very useful.
If you just want to be able to pull via http, the git-http-backend man page should contain enough information to configure apache to do the needful.
You can always combine anonymous http(s) for clone/pull, with passphrase protected ssh access required for full access, in which case there is no need to set up gitolite, you'll just add the ssh public key to the ~/.ssh/authorized_keys file.
See the answer to this question. You should use the SSH access instead of HTTPS/GIT and authenticate via your SSH public key. This should also work locally.
If you're using ssh access, you should have ssh agent running, add your key there and register your public ssh key on the repo end. Your ssh key would then be used automatically. This is the preferred way.
If you're using https access, you would either
use a .netrc file that contains the credentials, or
provide user/pass in the target url in the form https://user:pass@domain.tld/repo
With any of these three ways, it shouldn't ask for a password.