Sharing credentials from puppet master to agents - puppet

I am facing an issue passing DB credentials to agents for custom facts.
Unable to fetch the credentials from hiera with puppet lookup in agent

Unable to fetch the credentials from hiera with puppet lookup in agent
That's to be expected. puppet lookup performs a local lookup, not a lookup on the server. As such, it generally is not useful on agent nodes.
It's unclear how exactly you have in mind to use these DB credentials, but "for custom facts" suggests that you want agents to perform queries on local databases as part of the computation of the values of some of your custom facts. There are at least these alternatives that might work better:
Have the server perform the queries and expose the results as class variables, instead of making agents provide the same data as facts.
Embed the required credentials in the custom fact implementation(s).
If the queries are in fact to be performed against a central database rather than a local one, then you could also consider a variation on the first option, in which you set up a custom Hiera back end that uses the database as its data source.
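A minimal sketch of the second option (embedding credentials in the fact implementation): rather than looking anything up in Hiera, the fact reads the credentials from a root-protected file that was deployed to the agent beforehand. The file path, field names, and the run_query helper are all assumptions, not part of any standard API:

```ruby
require 'json'

# Hypothetical location; assume Puppet deployed this file to the agent
# earlier (e.g. via a file resource with owner root and mode 0600).
DB_CREDS_FILE = '/etc/puppetlabs/facter/db_credentials.json'.freeze

# Read the credentials, returning nil if the file is missing so the
# fact simply resolves to nothing on nodes without credentials.
def read_db_credentials(path = DB_CREDS_FILE)
  return nil unless File.readable?(path)
  JSON.parse(File.read(path))
end

# In the actual custom fact (e.g. lib/facter/metadata.rb) you would
# then use something like:
#
#   Facter.add(:metadata) do
#     setcode do
#       creds = read_db_credentials
#       creds && run_query(creds['user'], creds['password'])
#     end
#   end
```

The point of the nil guard is that the same fact file can ship to every node; only nodes that actually received a credentials file will resolve the fact.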


Access hiera data from within a Puppet fact

As part of a custom Puppet fact I need to make a database query to fetch some dynamic data, which will then be used by a resource elsewhere in the Puppet manifests. To make the database connection, however, I need to read some encrypted data stored within Hiera (a password), and I'm not sure how to access this data from within Ruby. Perhaps it is not even possible, since the fact runs agent side whilst Hiera, used when compiling the catalogue, runs server side. For now I am assuming that I can access Hiera using something like the following:
Facter.add(:metadata) do
  setcode do
    database_password = Hiera.lookup('profile::runner::agent::database_password')
    # make the DB connection and run the query...
    make_database_query_and_return_result_as_hash(database_password, Facter.value(:hostname))
  end
end
Is it possible to access hiera data from within a fact this way? At present there is a long feedback loop to test this (something we're working on to reduce), so I'd appreciate being pointed in the right direction.
Facter runs on the target node at the beginning of a Puppet run and sends the facts to the Puppet server, which is where any Hiera lookups are done. So that will never work: the fact runs before any Hiera lookups, and on a different machine.
The way to do this is not as a fact but as a custom function: a custom function is Ruby code that runs on the Puppet server at catalog compilation time.
https://puppet.com/docs/puppet/7/lang_write_functions_in_puppet.html
And if you're querying PuppetDB for information, you're already on localhost, so authentication should be easy. https://puppet.com/docs/puppetdb/6/api/query/tutorial.html
This forge module might do what you want https://forge.puppet.com/modules/dalen/puppetdbquery
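To make the first approach concrete, here is a hedged sketch of a server-side profile class; the class name and Hiera key are assumptions, and query_nodes comes from the dalen/puppetdbquery module linked above:

```puppet
# Hypothetical profile class. This is compiled on the Puppet server,
# so both lookup() and PuppetDB query functions are available here,
# unlike in a fact, which runs on the agent.
class profile::runner {
  # Hiera lookup happens server-side during catalog compilation.
  $db_password = lookup('profile::runner::agent::database_password')

  # Example from dalen/puppetdbquery: all nodes with Class[apache].
  $web_nodes = query_nodes('Class[apache]')
}
```

Resources in this class can then consume $db_password and $web_nodes directly, with no credentials ever leaving the server.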

Multiple remote databases, single local database (fancy replication)

I have a PouchDB app that manages users.
Users have a local PouchDB instance that replicates with a single CouchDB database. Pretty simple.
This is where things get a bit complicated. I am introducing the concept of "groups" to my design. Groups will be different CouchDB databases but locally, they should be a part of the user database.
I was reading a bit about "fancy replication" on the PouchDB site, and this seems to be the solution I am after.
Now, my question is, how do I do it? More specifically, how do I replicate from multiple remote databases into a single local one? Some code examples would be super.
From my diagram below, you will notice that I need to essentially add databases dynamically based on the groups the user is in. A critique of my design will also be appreciated.
Should the flow be something like this:
Retrieve all user docs from his/her DB into localUserDB
var groupDB = new PouchDB('remote-group-url');
groupDB.replicate.to(localUserDB);
(any performance issues with multiple pouchdb instances 0_0?)
Locally, when the user makes a change related to a specific group, we determine the corresponding database and replicate by doing something like:
localUserDB.replicate.to(groupDB) (Do I need filtered replication?)
Replicate from many remote databases to your local one:
remoteDB1.replicate.to(localDB);
remoteDB2.replicate.to(localDB);
remoteDB3.replicate.to(localDB);
// etc.
Then do a filtered replication from your local database to the remote database that is supposed to receive changes:
localDB.replicate.to(remoteDB1, {
  filter: function (doc) {
    return doc.shouldBeReplicated;
  }
});
Why filtered replication? Because your local database contains documents from many sources, and you don't want to replicate everything back to the one remote database.
Why a filter function? Since you are replicating from the local database, there's no performance gain from using design docs, views, etc. Just pass in a filter function; it's simpler. :)
Hope that helps!
Edit: okay, it sounds like the names of the groups that the user belongs to are actually included in the first database, which is what you mean by "iterate over." No, you probably shouldn't do this. :) You are trying to circumvent CouchDB's built-in authentication/privilege system.
Instead you should use CouchDB's built-in roles, apply those roles to the user, and then use a "database per role" scheme to ensure users only have access to their proper group DBs. Users can always query the _users API to see what roles they belong to. Simple!
For more details, read the pouchdb-authentication README.

Is a puppet node allowed to query anything?

If a client node gets compromised, can it ask the Puppet master for anything, like the passwords for other clients' accounts?
The manifest is secure. So are variables declared inside the manifest.
Things that may need security precautions:
files that are available via source => "puppet:///..."
hiera values, if your hierarchy relies on facts such as hostname
Background: an agent can impersonate another one only to a certain extent, i.e. by manipulating facts such as hostname and fqdn.
The certname fact cannot be overridden, so the master will never use a "foreign" node block for a rogue agent. Make sure that Hiera relies only on $certname, and configure the built-in fileserver via auth.conf to protect private files from rogue agents.
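As a concrete illustration of keying Hiera on the certificate name, here is a hiera.yaml fragment (the hierarchy names and paths are assumptions) that uses the trusted.certname fact, which comes from the agent's certificate rather than from self-reported facts:

```yaml
# hiera.yaml (version 5) -- per-node data keyed on trusted.certname,
# which an agent cannot spoof, instead of facts like hostname or fqdn.
version: 5
hierarchy:
  - name: "Per-node secrets"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"
```

With this layout, a rogue agent manipulating its hostname fact still only ever receives the data filed under its own certificate name.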
I've found my answer. By default, a node can only access its own resources, so it can't query anything else.

Securing the data accessed by Neo4j queries

I wish to implement security on the data contained in a Neo4j database down to the level of individual nodes and/or relationships.
Most data will be available to all users but some data will be restricted by user role. I can add either properties or labels to the data that I wish to restrict.
I want to allow users to run custom cypher queries against the data but hide any data that the user isn't authorised to see.
If I have to do something from the outside then not only do I have to filter the results returned but I also have to parse and either restrict or modify all queries that are run against the data to prevent a user from writing a query which acted on data that they aren't allowed to view.
The ideal solution would be if there is a low-level hook that allows intercepting the reads of nodes and relationships BEFORE a cypher query acts on those records. The interceptor would perform the security checks and if they fail then it would behave as though the node or relationship didn't exist at all. i.e. the same cypher query would have different results depending on who ran it. And this would apply to all possible queries e.g. count(n) not just those that returned the nodes/relationships.
Can something like this be done? If it's not supported already, is there a suitable place in the code that I could add such a security filter or would it require many code changes?
Thanks, Damon
As Chris stated, it's certainly not trivial at the database level, but if you're looking for a solution at the application level, you might have a look at Structr, a framework on top of and tightly integrated with Neo4j.
It provides node-level security based on ACLs, with users, groups, and different access levels. The security in Structr is implemented at the lowest level possible; e.g., we only instantiate objects if the querying user has the appropriate access rights.
All higher access levels like REST API and UI see only the records available in the user's context.
[1] http://structr.org, https://github.com/structr/structr

neo4j: authentication - only allow reading cypher queries

I'm using neo4j 1.9.4 and I would like to display some information about the graph on a (public) website using neo4jphp. In order to fetch some data I use cypher queries within neo4jphp. Those queries obviously only read data from the graph.
I have to make sure that visitors of the website are unable to modify any data in the graph. Therefore, I set up the authentication-extension plugin and created two users (one with read-only 'RO' and one with read-write 'RW' access rights) as documented there. However, the cypher queries within neo4jphp only work for the user with RW rights but not for the one with RO rights.
I know that http://docs.neo4j.org/chunked/stable/security-server.html#_security_in_depth pretty much explains how to secure neo4j, but I absolutely can't figure out how to do that. Especially the section "arbitrary_code_execution" seems to be interesting, but I don't know how to make use of it.
How can I make it possible to run read-only Cypher queries from the web server? BTW: the web server (which displays some results) and Neo4j run on different machines.
I would appreciate any help, thank you!
EDIT: My scenario is actually not that complicated, so I'm sure there must be a solution for that: From localhost any access (read write) is granted, whereas access from a remote web server is restricted to reading from the graph. How can I achieve that? If that is not possible: How could I restrict access from remote web server to some predefined (cypher) queries, where only some parameters can be supplied by the user?
You should use apache proxy as explained in http://docs.neo4j.org/chunked/stable/security-server.html#_security_in_depth
The information you need is the URL to post a cypher query:
http://localhost:7474/db/data/cypher
neo4jphp is only a wrapper and will end up posting to that URL. You can find more details here: http://docs.neo4j.org/chunked/milestone/rest-api-cypher.html
So basically this means that you only allow requests to the cypher URL to reach the Neo4j server.
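As a sketch of such a proxy, the Apache configuration could forward only the cypher endpoint and nothing else of the REST API (the hostnames and ports here are assumptions):

```apacheconf
# Reverse proxy that forwards only the cypher endpoint; every other
# path of the Neo4j REST API stays unreachable from outside.
ProxyPass        /db/data/cypher http://localhost:7474/db/data/cypher
ProxyPassReverse /db/data/cypher http://localhost:7474/db/data/cypher
```

Since no other ProxyPass rules exist, requests to any other Neo4j path simply never reach the database server.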
Regarding read-only cypher queries:
I didn't check with neo4jphp, but if you use the REST API directly, you can set the database to read-only by adding this to conf/neo4j.properties:
read_only=true
You can check in the webadmin that the server is indeed in read-only mode.
Just tested it: the server will accept only read queries, and will reject write queries with the following response:
{
  "message": "Expected to be in a transaction at this point",
  "exception": "InternalException",
  "fullname": "org.neo4j.cypher.InternalException",
  "stacktrace": [...],
  "fullname": "org.neo4j.graphdb.NotInTransactionException"
}
An alternative answer is to use the Cypher-RS plugin. There is a 1.9 branch.
This allows you to create endpoints that are in essence a single Cypher query (so the query must be predefined).
You could use mod_proxy to restrict access to only these predefined queries. I'm not sure if mod_proxy allows you to restrict to only GET requests, but if it does, you could allow access to GET requests for the plugin, because it won't allow modification queries to be GET requests.
https://github.com/jexp/cypher-rs
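mod_proxy itself doesn't filter HTTP methods, but the surrounding Apache configuration can. A hedged sketch (the mount path for the plugin and the port are assumptions, check your server configuration for the actual values):

```apacheconf
<Location "/db/data/ext">
    ProxyPass http://localhost:7474/db/data/ext
    ProxyPassReverse http://localhost:7474/db/data/ext
    # Apache 2.4 syntax: deny every method except GET (HEAD is
    # implicitly allowed along with GET).
    <LimitExcept GET>
        Require all denied
    </LimitExcept>
</Location>
```

That way only the plugin's predefined, read-only endpoints are reachable, and POST/PUT/DELETE requests are rejected at the proxy.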
