Integrating Puppet (v3.8) with HashiCorp Vault as CA?

I've been putting off making my Puppet master redundant for quite a while, but it's starting to become an issue, so I'm now treating it as a higher-priority item.
Making the master itself highly available isn't much of a problem (I'm already running it behind a load balancer in anticipation of making it HA).
The problem is the CA "part" of Puppet. I suppose it would (theoretically) be possible to put the Puppet master directory on a shared filesystem and have all Puppet masters use that as their storage.
But I need a CA for other purposes anyway, so for the last year I've been looking into Vault (on and off).
From the documentation and the information I've seen so far about it, it could solve a whole bunch of problems for me, not just the distributed CA part.
It can act as a CA, but is there any way to integrate that into Puppet? As in, having Vault act as the CA for the Puppet master(s)?
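For reference, the Vault side seems straightforward enough; here is a minimal sketch of standing up its PKI secrets engine with the current CLI (the names, paths, and TTLs are just illustrative):

    # enable the PKI secrets engine and raise its maximum lease TTL
    vault secrets enable pki
    vault secrets tune -max-lease-ttl=87600h pki

    # generate an internal root CA (the private key never leaves Vault)
    vault write pki/root/generate/internal \
        common_name="puppet-ca.example.com" ttl=87600h

    # define a role, then issue certificates against it
    vault write pki/roles/puppet-nodes \
        allowed_domains="example.com" allow_subdomains=true max_ttl=720h
    vault write pki/issue/puppet-nodes common_name="agent01.example.com"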
PS. Many seem to use Puppet in a masterless capacity, and that would of course sidestep this CA problem, but for various reasons I don't want to do that (please respect that decision).

Related

Terraform providers vulnerability detection

Using a lot of (official and unofficial) Terraform providers, I'm looking for a tool to perform security analysis on those providers before executing terraform plan/apply commands (and thus executing provider code). I want to prevent malicious code in providers from being executed blindly.
I'm basically executing the terraform providers mirror command to save local copies of the required providers, and I'm wondering if I can security-scan that result.
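For example (the directory name is just an illustration), the mirror ends up as a tree of provider zip archives that could in principle be unpacked and fed to a scanner:

    # download the providers required by the current configuration
    terraform providers mirror ./providers-mirror

    # the result is a tree of provider archives, e.g.
    # ./providers-mirror/registry.terraform.io/hashicorp/aws/...
    find ./providers-mirror -name '*.zip'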
I tested kics, checkov and tfsec, but they all look for security issues in my static Terraform code, not in the providers.
Do you have any good advice on this topic?
This is actually quite a good question. There are many other problems that reduce to the same generic question - how do you make sure that the thing you downloaded from the internet does not do anything malicious to you? For example:
How do you make sure that a Minecraft plugin does not hack you?
How do you make sure that a Spring Boot dependency does not hack you?
How do you make sure that a library xxx you attach to your project does not harm you?
Should you use Docker image yyy in your project?
Truth is: everything you use has the potential to explode right in your face (or more correctly: right in the face of the system owner). That's why the system owner (usually a company) defines a set of rules about what is and is not allowed. If you aren't aware of any such rules, below is a set we came up with ourselves when thinking about onboarding a new library for some projects to use:
Do not take random stuff from GitHub. Take only products with a longer history, a small bug backlog, few to no past issues in the CVE list, and active maintenance.
Do static code analysis yourself. Sometimes tools that work at the binary level can do that for you; sometimes you can only do it at the source level. In the case of Java libraries, check what tools like Dependency-Track think about the library and version you are about to use.
Run the code and see how it behaves: what it writes, what it reads, which URLs it communicates with (do a TCP dump if necessary; see the sketch after this list).
Document everything you have done somewhere.
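As one concrete way to carry out the third point, a crude observation session might look like this (the program name is hypothetical, and the tool choice is just one option among many):

    # record every network conversation while exercising the code
    tcpdump -i any -w library-run.pcap &

    # trace which files and network endpoints the process touches (Linux)
    strace -f -e trace=network,open,openat \
        -o library-run.strace ./run-the-thing-under-test

    # afterwards, review the hosts it talked to
    tcpdump -nn -r library-run.pcap | awk '{print $3, $5}' | sort -u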
This gives you no 100% guarantee that things will not go terribly wrong, but it is a systematic approach that will reduce the risk of doing something stupid.

Security-related question regarding private key in repo for localhost

Secure sockets check the certificate's CN against certs in a trust collection for the domain accepting or connecting. For myself, I created a private/public pair for localhost, and that helps me debug locally. If I wanted to offer an SDK, would it be considered secure to distribute an X.509 .key and .cer for this localhost debugging use case? Or is it never considered secure to have a .key in any open space at all, because of its potential for misuse?
Sorry if this is discussed elsewhere, but I cannot find a clear answer on it.
This might be somewhat opinionated and also depends on your project, but I think the main risk is how people will actually use those files. Some of them will use them in production for sure, because it is easier, or because they don't understand keypairs and just want it to work, and so on.
Any project should be secure by default for everybody involved, including end users and also developers if your project is something like a library or component. Secure by default in this case would mean not providing an actual keypair, because that would potentially be a backdoor in at least some of its uses - even though it was not meant to be used like that.
Another thing to consider is the reputation of your project. If you include a key and users misuse it on the internet, it will be easy to find and potentially exploit vulnerable instances of your project with tools like Shodan. Nobody will care that the developers did it wrong - it will be your project that's found vulnerable.
A better approach to consider would be providing something like an init script that generates a key and a certificate for that specific instance. It could still be easy for the user and the developer, and also secure for everybody. In the case of a Linux package, this could even be done by the installer script with most packaging solutions, so it would be fully transparent for the user.
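Such an init script could be little more than a single openssl call; a minimal sketch (the file names are illustrative):

    #!/bin/sh
    # generate a fresh private key and self-signed certificate for
    # this particular installation, intended for localhost debugging only
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout localhost.key -out localhost.cer \
        -days 365 -subj "/CN=localhost"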

Why shouldn't passwords be saved inside of appsettings.json in ASP.NET Core?

The official documentation lists the following practices for appsettings.json:
Never store passwords or other sensitive data in configuration provider code or in plain text configuration files.
Don't use production secrets in development or test environments.
Specify secrets outside of the project so that they can't be accidentally committed to a source code repository.
As far as I know, appsettings.json isn't served when you host the app on IIS and therefore can't be accessed from the web. We also host the source code ourselves (i.e. on our own servers). So as far as I can tell, the only real danger is somebody compromising the whole system and gaining actual access to appsettings.json itself.
But are there other reasons for keeping sensitive data outside of appsettings.json? Are there other security aspects I'm overlooking?
I know there are several questions asking how to keep the appsettings.json secure, but not what the actual risks are.
There are many reasons, but the main one you've already mentioned:
it's usually much, much easier to get access to source code than it is to get to well-guarded secrets (e.g. in Azure Key Vault)
it's much easier to leak the secrets, possibly accidentally (via logs, or someone looking over your shoulder, or someone with access to the CI server)
you typically won't know you've leaked them, as there is little or no auditing compared with proper systems for keeping secrets
there's no way to limit which people have access to specific secrets for specific environments
personally, I also dislike having production secrets anywhere near my development setup. If I run code as a developer, I want to be 100% sure I'll never accidentally run against a production environment ("oops, I tested that mass-delete feature...against production"). If the prod secrets simply aren't there, there's no mistake to make
and probably many more reasons...
Basically, limiting the surface area for mistakes and security leaks will limit the chance of a problem, even if you can't currently see a reasonable combination of factors under which a mistake or leak would happen.
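To make that concrete, the usual alternatives in ASP.NET Core keep the values out of the project tree entirely (the key names and values here are illustrative):

    # development: the Secret Manager stores values outside the repository
    dotnet user-secrets init
    dotnet user-secrets set "ConnectionStrings:Default" "Server=db;Password=example"

    # production: environment variables override appsettings.json
    # (':' in a key becomes '__' in the variable name)
    export ConnectionStrings__Default="Server=db;Password=real-secret"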

Git-crypt workflow - deployment to multiple servers or circleci/travisci

I'm trying to understand the full workflow of a git-crypt-based secret-keeping solution.
The tool itself works pretty nicely on a dev machine, and even scaling to multiple developers seems to work fine.
However, it is not clear to me how this will work when deployed to multiple servers in a cloud, some of which are created on demand:
The challenge of unattended creation of a GPG key on the new server (someone needs to create the passphrase - or is it in source control, and then what is all this even worth?)
Once a GPG key is created, how is it added to the ring?
Say we decide to skip #1 and just share a key across servers - how is the passphrase supplied as part of the "git-crypt unlock" process?
I've really tried to search, and just couldn't find a good end-to-end workflow.
Like many Linux tools, git-crypt is an example of doing only one thing and doing it well. This philosophy dictates that any one utility doesn't try to provide a whole suite of tools or an ecosystem, just one function that can be chained with others however you like. In this case git-crypt doesn't bill itself as a deployment tool or have any particular integrations into a workflow. Its job is just to allow the git repository to store sensitive data that can be used in some checkouts but not others. The use cases can vary, as will how you chain it with other tools.
Based on the wording of your question I would also clarify that git-crypt is not a "secret keeping solution". In fact it doesn't keep your secrets at all, it just allows you to shuffle around where you do keep them. In this case it enables you to keep secret data in a repository alongside non-secret information, but it only does so at the expense of putting the secret-keeping burden on another tool. It exchanges one secret for another: your project's version-controlled secret component(s) for a GPG key. How you manage the secret is still up to you, but now the secret you need to handle is a GPG key.
Holding the secrets is still up to you. In the case of you and other developers that likely means having a GPG private key file kicking around in your home directory, hopefully protected by a passphrase that is entered into an agent before being dispensed to other programs like git-crypt that call for it.
In the case of being able to automatically deploy software to a server, something somewhere has to be trusted with real secrets. This is often the top-level tool like Ansible or Puppet, or perhaps a CI environment like Gitlab, Travis, or Circle. Usually you wouldn't trust anything but your top level deployment tool with knowing when to inject secrets in an environment and when not to (or in the case of development / staging / production environments, which secrets to inject).
I am not familiar with Circle, but I know with Travis there is an Environment Variables section under your project's Settings tab that you can use to pass private information into the virtual machine. There is some documentation for how to use this. GitLab's built-in CI system has something similar, and can pass different secrets to test vs. deploy environments, etc.
I would suggest the most likely workflow for your use case is to:
Create a special secret variable for use on your production machines that has the passphrase for a GPG key used only for deployments. Whatever you use to create your machines should drop a copy of this key into the system and use this variable to unlock it and add it to an agent.
The deploy script for your project would check out your project code, then check for a GPG agent. If an agent is loaded, it can try to decrypt the checkout.
In the case of a developer's personal machine this will find their key, in the case of the auto-created machines it will find the deploy key. Either way you can manage access to the secrets in the deployment environment like one more developer on the project.
Whatever tool you use to create the machines becomes responsible for holding and injecting the secrets, probably in the form of a private key file and a passphrase in an environment variable that is used to load the key file into an agent.
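Put together, an unattended unlock on a freshly created machine might look roughly like this (the variable and repository names are illustrative; the deploy key here deliberately has no passphrase, on the theory that the provisioning tool's secret store is what actually protects it):

    # the provisioning tool injects the deploy key, e.g. as a
    # protected environment variable or a file dropped onto the machine
    echo "$DEPLOY_GPG_PRIVATE_KEY" | gpg --batch --import

    # check out the project and decrypt the secret files
    git clone git@example.com:org/project.git
    cd project
    git-crypt unlock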

Private secured P2P Network

I know the concept of building a simple P2P network without any server. My problem is with securing the network. The network should have some administrative nodes. So there are two kinds of nodes:
Nodes with privileges
Nodes without privileges
The first question is: Can I assign some nodes more rights than others, like the privileges to send a broadcast message?
How can I secure the network against modified nodes that are trying to gain privileges?
I'm really interested in answers and resources that can help me. It is important to me to understand this, and I'm happy to add further information if anything is unclear.
You seem lost, and I used to do research in this area, so I'll take a shot. I feel this question is borderline off-topic, but I tend to err toward leaving things open.
See the P2P networks Chord, CAN, Tapestry, and Pastry for examples of P2P networks as well as pseudo-code. These works are all based on distributed hash tables (DHTs) and have been around for over 10 years now. Many of them have open-source implementations you can use.
As for "privileged nodes", your question contradicts itself. You want a P2P network, but you also want nodes with more rights than others. By definition, your network is no longer P2P because peers are no longer equally privileged.
Your question points to trust within P2P networks - a problem that academics have focused on since the introduction of DHTs. I feel that no satisfactory answer has been found yet that solves all problems in all cases. Here are a few approaches that will help you:
(1) Bitcoin addresses malicious users by forcing all users in the network to perform computationally intensive work. For any member to forge bitcoins, they would need more computational power than everyone else combined, in order to prove they had done more work than everyone else.
(2) Give privileges based on reputation. You can calculate reputation in any number of ways. One simple example: for each transaction in your system (file sent, database lookup, piece of work done), the requester sends a signed acknowledgement (using private/public keys) to the sender. Each peer can then present the accumulation of their signed acknowledgements to any other peer. Any peer who has accumulated N acknowledgements (you determine N) gets more privileges. A minimal sketch of this signing scheme follows this answer.
(3) Own a central server that hands out privileges. This one is the simplest and you get to determine what trust means for you. You're handing it out.
That's the skinny version - good luck.
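As promised, here is a minimal sketch of the signed-acknowledgement idea from (2), using openssl on the command line as a stand-in for whatever crypto library the nodes would embed (all file names and the message are illustrative):

    # one-time: each peer generates a keypair and publishes the public half
    openssl genpkey -algorithm RSA -out requester.key
    openssl pkey -in requester.key -pubout -out requester.pub

    # the requester signs an acknowledgement of a completed transaction
    echo "peer-42 sent file abc.dat" > ack.txt
    openssl dgst -sha256 -sign requester.key -out ack.sig ack.txt

    # any peer holding requester.pub can later verify the acknowledgement
    openssl dgst -sha256 -verify requester.pub -signature ack.sig ack.txt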
I'm guessing that the administrative nodes are different from normal nodes by being able to tell other nodes what to do (and the regular nodes should obey).
You have to give the admin nodes some kind of way to prove themselves that can be verified by other nodes but not forged by them (like a policeman's ID). The most standard way I can think of is by using TLS certificates.
In (very) short, you create pairs of files called a key and a certificate. The key is secret and belongs to one identity; the certificate is public.
You create a CA certificate, and distribute it to all of your nodes.
Using that CA, you create "administrative node" certificates, one for each administrative node.
When issuing a command, an administrative node presents its certificate to the "regular" node. The regular node, using the CA certificate you provided beforehand, can make sure the administrative node is genuine (because the certificate was actually signed by the CA), and it's OK to do as it asks.
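Concretely, the whole chain can be built with a few openssl commands (the names and lifetimes are illustrative):

    # 1. create the CA key and self-signed CA certificate
    #    (ca.crt is what you distribute to every node)
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -keyout ca.key -out ca.crt -subj "/CN=My-P2P-CA"

    # 2. create a key and signing request for an administrative node
    openssl req -newkey rsa:2048 -nodes \
        -keyout admin1.key -out admin1.csr -subj "/CN=admin-node-1"

    # 3. sign the request with the CA, producing the certificate the
    #    administrative node presents when issuing commands
    openssl x509 -req -in admin1.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out admin1.crt -days 365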
Pros:
TLS/SSL is used by many other products to create a secure tunnel, preventing "man in the middle" attacks and impersonations
There are ready-to-use libraries and sample projects for TLS/SSL in practically every language, from .NET to C.
There are revocation lists, to "cancel" certificates that have been stolen (although you'll have to find a way to distribute these)
Certificate verification is offline - a node needs no external resources (except for the CA certificate) for verification
Cons:
Since SSL/TLS is a widely used system, there are many tools to exploit misconfigured or outdated clients and servers
Exploits have been found in such libraries (e.g. "Heartbleed"), so you might need to patch your software a lot.
This solution still requires some serious coding, but it's usually better to rely on an existing and proven system than to go around inventing your own.
