Access Hiera data from within a Puppet fact

As part of a custom Puppet fact I need to make a database query to fetch some dynamic data. This data will then be used by a resource elsewhere in the Puppet manifests. However, to make the database connection I need to be able to read some encrypted data (a password) stored within Hiera, and I'm not sure how to access this data from within Ruby. Perhaps it is not even possible, since the fact runs agent side whilst Hiera, used when compiling the catalogue, runs server side. However, I am currently making the assumption that I can access Hiera using something like the following:
Facter.add(:metadata) do
  setcode do
    database_password = Hiera.lookup('profile::runner::agent::database_password')
    # make the DB connection and run the query...
    make_database_query_and_return_result_as_hash(database_password, Facter.value(:hostname))
  end
end
Is it possible to access Hiera data from within a fact this way? At present there is a long feedback loop to test this (something we're working on to reduce), so I'd appreciate being pointed in the right direction.

Facter runs on the target node at the beginning of a Puppet run, and the facts are then sent to the Puppet server, which is where any Hiera lookups are done. So this will never work: the fact runs before any Hiera lookups, and on a different machine.
The way to do this is not as a fact but as a custom function. A custom function is code that runs on the Puppet server at the time of catalog compilation:
https://puppet.com/docs/puppet/7/lang_write_functions_in_puppet.html
And if you're querying PuppetDB for information, you're already on localhost, so authentication should be easy: https://puppet.com/docs/puppetdb/6/api/query/tutorial.html
This Forge module might do what you want: https://forge.puppet.com/modules/dalen/puppetdbquery
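For example, the server-side lookup from the question could live in a Puppet-language function along these lines. This is only a sketch: mymodule::runner_metadata and mymodule::make_database_query are hypothetical names, and the DB query itself would still need to be implemented (e.g. as a Ruby function in the same module).

```puppet
# mymodule/functions/runner_metadata.pp -- evaluated on the Puppet server
# during catalog compilation, where Hiera data is available.
function mymodule::runner_metadata(String $hostname) >> Hash {
  $database_password = lookup('profile::runner::agent::database_password')
  # The actual DB query would be done by a Ruby function shipped in the
  # module (hypothetical name):
  mymodule::make_database_query($database_password, $hostname)
}
```

A manifest can then call mymodule::runner_metadata($facts['networking']['hostname']) and use the resulting hash, with the password never leaving the server.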

Related

How to get the data source for an AWS CloudFront Origin Access Identity in Terraform

We have Terraform code in a separate project (which must remain separate) that creates three AWS CloudFront Origin Access Identities - one that we want to use for all of our qa environments, one for all of our pprd environments, and one for all of our prod environments.
From another project, how can I use Terraform to get a data source for these, in order to use them when creating a CloudFront distribution?
Does the data source have to use the OAI ID or name to filter on, and how? What happens if the OAI changes? I guess what I am getting at is that I would prefer to avoid hard-coding the ID or name if possible - or is that the only way to do this?
We have three OAI's that we will need to use separately - In other words, we will be creating multiple qa distributions that will use the qa OAI, multiple pprd distributions that will use the pprd OAI, and multiple prod distributions that will use the prod OAI.
Let's assume that the ID's are AAAAAAA for the qa one, BBBBBBBB for the pprd one, and CCCCCCC for the prod one (blurred out the real ones in case there is a security issue in posting them).
Yes, you can get the Origin Access Identity created by another stack. In fact there are multiple ways to get it.
The easiest way would be to use an aws_cloudfront_origin_access_identity data source. You can define a data source as follows:
data "aws_cloudfront_origin_access_identity" "example" {
  id = "EDFDVBD632BHDS5"
}
The id is the identifier of the origin access identity (not of a distribution). For the attribute references of the data block, you would want to check out the docs.
What happens if the OAI changes?
The data block assumes that the resource already exists in AWS and was created outside of the current state. This means that it will be refreshed every time you do a terraform plan. If something changes on the resource, it will be detected at the next plan.
I guess what I am getting at is I would prefer to avoid hard coding the ID or name if possible.
In case of a data block, you have to provide the ID somehow. This can be either using a variable or hard-coding it. Now, if you really want to avoid this, you can use another method for importing remote resources.
The other option would be to read from a Terraform remote state by using a terraform_remote_state data source. This option is a bit more complex, since the remote state has to expose the attributes as outputs. Also, you have to provide the location of the remote state, so that can also be considered a hardcoded value.
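A sketch of that second option, with assumed backend settings and output names (example-terraform-state, qa_oai_id and the state key are placeholders, not values from the question): the project that creates the OAIs exposes their IDs as outputs, and the consuming project reads them through a terraform_remote_state data source.

```hcl
# In the project that creates the OAIs (output name is illustrative):
output "qa_oai_id" {
  value = aws_cloudfront_origin_access_identity.qa.id
}

# In the consuming project (backend, bucket, and key are assumed values):
data "terraform_remote_state" "oai" {
  backend = "s3"

  config = {
    bucket = "example-terraform-state"
    key    = "oai-project/terraform.tfstate"
    region = "us-east-1"
  }
}

# The OAI ID is then available as:
#   data.terraform_remote_state.oai.outputs.qa_oai_id
```

Note that the hardcoding moves from the OAI ID to the state location, as mentioned above.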

Sharing credentials from puppet master to agents

I am facing an issue passing DB credentials to agents for use in custom facts.
I am unable to fetch the credentials from Hiera with puppet lookup on the agent.
Unable to fetch the credentials from hiera with puppet lookup in agent
That's to be expected. puppet lookup performs a local lookup, not a lookup on the server. As such, it generally is not useful on agent nodes.
It's unclear how exactly you have in mind to use these DB credentials, but "for custom facts" suggests that you want agents to perform queries on local databases as part of the computation of the values of some of your custom facts. There are at least these alternatives that might work better:
Have the server perform the queries and expose the results as class variables, instead of making agents provide the same data as facts.
Embed the required credentials in the custom fact implementation(s).
If in fact the queries are to be performed against a central database instead of a local one, then you could also consider a variation on that first option, in which you set up a custom Hiera back end that uses the database as its data source.
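The first option can be sketched in the Puppet language as follows. All names here (profile::runner, mymodule::query_metadata) are hypothetical, and to_json is assumed to come from puppetlabs-stdlib; the point is that both the lookup and the query happen on the server at compile time, and agents only ever see the result inside the catalog.

```puppet
class profile::runner {
  # Both of these run on the Puppet server during catalog compilation.
  $db_password = lookup('profile::runner::agent::database_password')
  $metadata    = mymodule::query_metadata($db_password, $trusted['certname'])

  # The agent receives only the rendered result, never the credentials.
  file { '/etc/runner/metadata.json':
    content => to_json($metadata), # to_json: assumed from puppetlabs-stdlib
  }
}
```

Keying the query on $trusted['certname'] rather than a fact also means a rogue agent cannot request another node's data by faking its hostname.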

Database connection security in Node.js

When I'm connecting to a database in Node, I have to add the DB name, username, password, etc. If I'm right, every user can access the JS file when he knows the address. So... how does this work? Is it safe?
Node.js server side source files should never be accessible to end-users.
In frameworks like Express, the convention is that requests for static assets are handled by the static middleware, which serves files only from a specific folder in your solution. Explicit requests for other source files that exist in your code base are thus ignored (a 404 is passed down the pipeline).
Consult https://expressjs.com/en/starter/static-files.html for more details.
Although there are other possible options to further limit the visibility of sensitive data, note that anyone with admin rights who gets access to your server would, of course, be able to retrieve the data (and this is perfectly acceptable).
I am assuming from the question that the DB and Node are on the same server. I am also assuming you have created either a JSON or env file or a function which picks up your DB parameters.
The one server = everything (code + DB) setup is not the best in the world. However, if you are limited to it, then it depends on the DB you are using. MongoDB Community Edition will allow you to set up limited security protocols, such as creating users within the DB itself. This is a {username, password, rights} combination which grants scaled rights based upon the type of user you set up. This is not foolproof, but it provides some protection even if someone gets hold of your DB parameters. If you are using a more extended version of MongoDB, then this question would be superfluous. As for other DBs, you need to consult the documentation.
However, all that being said, you should really have the DB set up behind a public server, allow only SSH into it, and keep one open port to receive information from your program. The one server = everything format is not safe in the long run, though it is fine for development.
If you are using MongoDB, you may want to take a look at Mongoose coupled with Mongoose Encryption. I personally do not use them, but they may solve your problem in the short run.
If your DB is MySQL etc., then I suggest you look at the documentation.
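The "env file or a function which picks up your DB parameters" mentioned above can be sketched like this. dbConfig and the variable names are illustrative, not from any particular library: the point is that credentials live in the environment, never in a committed source file.

```javascript
// db-config.js -- hypothetical helper that picks up DB parameters from
// the environment so they never appear in committed source files.
function dbConfig(env = process.env) {
  const required = ['DB_HOST', 'DB_USER', 'DB_PASSWORD', 'DB_NAME'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Fail fast at startup instead of failing on the first query.
    throw new Error('Missing DB settings: ' + missing.join(', '));
  }
  return {
    host: env.DB_HOST,
    user: env.DB_USER,
    password: env.DB_PASSWORD,
    database: env.DB_NAME,
  };
}

module.exports = dbConfig;
```

The returned object can be passed straight to a driver's connect call, and the env file itself stays out of version control (e.g. via .gitignore).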

In a Travis public repository, how to add a secure variable that works on pull requests too

I have Travis CI on a public repository. After finishing the execution it generates an image that I want to upload to cloudinary.com, but it could be any other service.
The problem is that to do this, I need to add the auth token to .travis.yml. But I don't want to expose it publicly, and for that Travis offers a way to secure env variables: http://docs.travis-ci.com/user/environment-variables/#Secure-Variables. However, they do not work on pull requests:
Secure env variables are not available on pull requests from forks due to the security risk of exposing such information to unknown code.
Encryption and decryption keys are tied to the repository. If you fork a project and add it to Travis CI, it will have different keys to the original.
Does anyone have any idea how I could add a hidden value that is available for both pushes and pull requests?
As you already wrote in your question: according to the official Travis CI documentation https://docs.travis-ci.com/user/environment-variables you won't have access to these variables from untrusted builds such as pull requests. This makes sense, since someone could submit a pull request to your repository containing malicious code which then exposes your secret value.
Bottom line: if you want to make secret values available to pull requests, you have to assume they're not secret anymore - therefore you could also just hard code the unencrypted value to your .travis.yml and use it from there. Which doesn't seem like a good idea. ;-)
Possible solution in your case: you could just use an image host which provides anonymous uploading. You wouldn't need an auth token, so your pull requests would be able to upload, too.
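For reference, this is the shape of a secure variable in .travis.yml; the encrypted blob below is a placeholder for the value normally produced with the travis encrypt CLI command. As discussed above, it is only decrypted for trusted builds, never for fork pull requests.

```yaml
env:
  global:
    # Placeholder blob; generate the real one with: travis encrypt AUTH_TOKEN=...
    - secure: "ENCRYPTED-BASE64-BLOB="
```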

Is a puppet node allowed to query anything?

If a client node gets compromised, can it ask the Puppet master for anything, like the account passwords of the other clients?
The manifest is secure. So are variables declared inside the manifest.
Things that may need security precautions:
files that are available via source => "puppet:///..."
hiera values, if your hierarchy relies on facts such as hostname
Background: an agent can impersonate another one only to a certain extent, i.e. by manipulating the hostname and fqdn facts etc.
The certname fact cannot be overridden, so the master will never use a "foreign" node block for a rogue agent. Make sure that Hiera relies only on $certname. Try to configure the built-in fileserver to protect private files from rogue agents via auth.conf.
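For instance, in the legacy auth.conf format, a rule along these lines restricts each agent to its own files (a sketch only; the private/ mount path is an assumed layout, not something from the question):

```
# Only the node whose certname matches the path segment may fetch
# file metadata/content under private/<certname>/
path ~ ^/file_(metadata|content)/private/([^/]+)
allow $2
```

The capture group ties the allowed requester to the directory name, so node a.example.com cannot fetch files under private/b.example.com/.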
I've found my answer. By default, a node can only access its own resources, so it can't query anything else.
