The basic need:
Perform operation on agent X
Perform operation on agent Y
Perform operation on agent X
These operations need to be performed in order, which is easily achieved in SaltStack using publish.publish. Access to perform operations on agent Y is managed by the Salt master.
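For reference, the Salt flow being described can be sketched as follows. The minion ID, state names, and peer configuration are assumptions; publish.publish requires 'peer' rules on the master granting agent X the right to call the published function on agent Y.

```shell
run_sequence() {
  # Step 1: operation on this minion (agent X)
  salt-call state.apply operation_x_pre

  # Step 2: publish a call to agent Y; the master mediates the call and
  # its peer configuration controls which minions may publish what.
  salt-call publish.publish 'agent-y' state.apply operation_y

  # Step 3: second operation on agent X, ordered after the publish returns
  salt-call state.apply operation_x_post
}
```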
The closest thing I have been able to find in Puppet is exported resources (the feature is sometimes misnamed "external resources"), since they can be used to perform operations on other agents, but exported resources fall short in many ways:
they don't support operation ordering
targeting is based on tags, which does not take security into account
they are so decoupled that you can't tell from reading the code where an operation will be performed or where it originates.
there is no way to get feedback on the success or failure of an operation
Is there any Puppet alternative to SaltStack's publish.publish?
See my answer to the same question over on ask.puppetlabs.com.
It could be implemented with SSH.
The setup would consist of the Puppet master managing the private and public authentication keys, distributing the private key to all agent X's and the public key to agent Y.
The sequence could then be implemented as three exec resources on the agent X's.
Security could be tightened by using SSH forced commands, ensuring that the only operation the agent X's can invoke on agent Y is the required one.
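A sketch of what that forced-command setup might look like; the key paths, user, host name, and script name are all hypothetical:

```shell
# On agent Y, a forced command in ~deploy/.ssh/authorized_keys pins this
# key to a single operation, no matter what the client asks to run:
#
#   command="/usr/local/bin/operation-on-y",no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... puppet-agent-x

# On each agent X, the middle step of the sequence then reduces to one call,
# which the exec resource can run between the two local operations:
invoke_operation_on_y() {
  ssh -o BatchMode=yes -i /etc/puppet/keys/xy_deploy deploy@agent-y.example.com
}
```

The exit status of the ssh call gives agent X the success/failure feedback that exported resources cannot provide.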
Comments are welcome, as the security implications are beyond me.
We are using Azure DevOps as our ALM system. When a user story or bug fix is resolved, it shows up in a public query, like a stack, from which our QA team members independently pull tickets for verification. As this is part of a pull-request review, a PR cannot be merged until QA has finished testing, so we aim for fast response times and parallel testing to minimise the potential for merge conflicts. Often, though, we find that multiple work items are self-assigned to the same people while other team members have no work items assigned, which increases response times for our devs (unless people change assignments) and leads to sequential rather than parallel verification of work items.
So we are looking for a way in Azure DevOps to ensure that members of a certain user group can be assigned only one work item of a given type and state at a time. We looked into custom rules in detail but could not get anything like this out of them. I'm thankful for any ideas and hints on how this can be accomplished (extensions also welcome).
There is no such rule or policy in Azure DevOps.
And it won't prevent someone from working on it anyway, to be honest... I assume testing multiple changes in a single go isn't an option? It would simplify things tremendously...
Assume the following scenario: we want to implement an open-source password manager that uses a central service that enables the different clients (on different devices) to synchronize their local databases. It doesn't matter if this service is run by a company or on a server of the user (compare to owncloud usage scenarios). To make our application more "secure", we want to use an Intel SGX enclave for the central service (please ignore current attack research on SGX enclaves).
Then, the typical workflow would be:
local client attests central enclave
user registers / logs in
(local and remote database are synced)
user stores / retrieves passwords
Now my question: Does every user of our password manager need to register with the Intel Attestation Service (IAS)? If yes, wouldn't this imply that, since private key sharing is really bad, every single device needs to be registered?
According to my investigation, the answer is yes, at least for the development and testing phase. I could not find any public information about production scenarios. All I know is that a business registration changes the behavior of the enclave (it can be run in production mode), which does not matter here. I have two thoughts on this:
If I am correct, isn't this another show stopper for SGX? Using SGX without the attestation feature seems to be useless.
How do services such as https://www.fortanix.com/ circumvent or solve the problem? Their documentation gives no hint of any required interaction with Intel.
The scenario described above is only an example; it can be improved, and we do not plan to implement it. But it was much easier to describe a scenario that is easy to imagine and seems a realistic use case for SGX than to describe our current project plans.
P.S.: This question is a follow-up to Intel SGX developer licensing and open-source software
One does not need an Intel-registered certificate to create a quote, but one does need to communicate with the IAS (Intel Attestation Service) to verify a quote, and that requires an Intel-registered certificate. So, in a naive approach, every node checking whether a remote attestation is valid would require such a certificate.
One could of course leverage SGX to provide a proxy, structured somewhat like this:
Generate two certificates and their corresponding private keys; I'll call one the IAS-conn-cert and the other the Proxy-cert.
Register the IAS-conn-cert with the IAS.
Of course, you need to trust that these certificates were indeed generated in an enclave. To establish that, you could remotely attest to another service provider you trust.
Now pin (for example, by hard-coding) the Proxy-cert in your client application. When the client needs to verify a quote, it connects to the enclave using that pinned Proxy-cert, thus knowing it is talking to the enclave. The enclave then connects to the IAS and relays everything it receives from the client to the IAS, and vice versa. The client can now communicate with the IAS without owning an IAS-registered certificate, but can still be assured that the proxy does not tamper with the exchange, given that it trusts that the Proxy-cert was indeed generated in a non-malicious enclave.
Trying to understand the full workflow of a git-crypt-based secret-keeping solution.
The tool itself works pretty nicely on a dev machine, and even scaling to multiple developers seems to work fine.
However, it is not clear to me how this will work when deployed to multiple servers in a cloud, some of which are created on demand:
1. The challenge of unattended creation of a GPG key on the new server (someone needs to create the passphrase, or is it in source control? And then, what is all this worth?)
2. Once a GPG key is created, how is it added to the ring?
3. Say we decide to skip #1 and just share one key across servers: how is the passphrase supplied as part of the "git-crypt unlock" process?
I've really tried to search, and just couldn't find a good end-to-end workflow.
Like many Linux tools, git-crypt is an example of doing only one thing and doing it well. This philosophy dictates that any one utility doesn't try to provide a whole suite of tools or an ecosystem, just one function that can be chained with others however you like. In this case git-crypt doesn't bill itself as a deployment tool or have any particular integrations into a workflow. Its job is just to allow the git repository to store sensitive data that can be used in some checkouts but not others. The use cases can vary, as will how you chain it with other tools.
Based on the wording of your question I would also clarify that git-crypt is not a "secret keeping solution". In fact it doesn't keep your secrets at all; it just lets you shuffle around where you do keep them. It enables you to keep secret data in a repository alongside non-secret information, but only at the expense of putting the secret-keeping burden on another tool. It exchanges one secret for another: your project's version-controlled secret component(s) for a GPG key. How you manage the secret is still up to you, but now the secret you need to handle is a GPG key.
Holding the secrets is still up to you. In the case of you and other developers that likely means having a GPG private key file kicking around in your home directory, hopefully protected by a passphrase that is entered into an agent before being dispensed to other programs like git-crypt that call for it.
In the case of being able to automatically deploy software to a server, something somewhere has to be trusted with real secrets. This is often the top-level tool like Ansible or Puppet, or perhaps a CI environment like Gitlab, Travis, or Circle. Usually you wouldn't trust anything but your top level deployment tool with knowing when to inject secrets in an environment and when not to (or in the case of development / staging / production environments, which secrets to inject).
I am not familiar with Circle, but I know that with Travis, under your project's Settings tab, there is an Environment Variables section you can use to pass private information into the virtual machine. There is some documentation for how to use this. GitLab's built-in CI system has something similar and can pass different secrets to test vs. deploy environments, etc.
I would suggest the most likely shape for your workflow is to:
Create a special secret variable for use on your production machines that has the passphrase for a GPG key used only for deployments. Whatever you use to create your machines should drop a copy of this key into the system and use this variable to unlock it and add it to an agent.
The deploy script for your project would check out your git project code, then check for a GPG agent. If an agent is loaded, it can try to decrypt the checkout.
In the case of a developer's personal machine this will find their key, in the case of the auto-created machines it will find the deploy key. Either way you can manage access to the secrets in the deployment environment like one more developer on the project.
Whatever tool you use to create the machines becomes responsible for holding and injecting the secrets, probably in the form of a private key file and a passphrase in an environment variable that is used to load the key file into an agent.
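As a concrete sketch, the unlock step on an auto-created machine might look like the following. The key path, repository URL, and how the key file lands on the box (provisioning tool, cloud-init, CI secret variable) are all assumptions:

```shell
unlock_secrets() {
  # Import the deploy-only GPG private key dropped by the provisioning
  # tool. Simplest variant: the key file has no passphrase, and access
  # to the file itself is what is controlled.
  gpg --batch --import /etc/deploy/deploy-key.asc

  # Fetch the project and decrypt it; git-crypt uses whichever key on
  # the ring can open the repository's symmetric key.
  git clone git@example.com:org/app.git /srv/app
  cd /srv/app && git-crypt unlock
}
```

If the deploy key does carry a passphrase, it would need to be preset into a gpg-agent (e.g. from a CI secret variable) before git-crypt unlock runs.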
Ubuntu 14.04
I'm not too sure about this. If I look at the contents of ~/.ssh/ I have a few files in there, and I'm just about to set up a key for use with BitBucket.
I'm not sure if I'm meant to have multiple keys for different purposes or if I should have one key that is used for lots of things to identify me.
Cheers
Anyway, the first thing you need is to create a pair of private and public SSH keys. This can be done by executing the ssh-keygen command in the terminal.
In short: the public key (id_rsa.pub) is used by third-party servers and services like BitBucket to identify you, so you need to provide them with this information. For example, add the public key to your BitBucket account settings.
The same private/public key pair can be used by multiple servers and services to identify you at the same time, so you usually don't need to create multiple pairs.
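For example, generating a pair and printing the public half to paste into the BitBucket settings page. The file name and comment are just examples; -N "" makes the key passphrase-less for brevity, but in practice you would set a passphrase and load the key into ssh-agent:

```shell
# Create ~/.ssh if needed, then generate an Ed25519 key pair.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519_bitbucket -N ""

# The public half is what you paste into BitBucket's SSH keys settings:
cat ~/.ssh/id_ed25519_bitbucket.pub
```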
I use one key per workstation. On each workstation, I generate a new public/private key pair, and then add that to the authorized keys file (or GitHub/Bitbucket account) of all of the machines I need to interact with via SSH.
That way, if my machine is lost, stolen, or I need to replace the hard drive, I can just de-authorize that one machine by deleting its public key from all of the services, while not needing to rotate my keys on all machines.
I have never found a good reason to create a separate key pair per service on a given workstation; that just increases the management overhead without much tangible benefit. You might do it if you were very privacy minded, and didn't want separate services to correlate your keys, but if you're that privacy minded you should already be accessing everything through Tor and probably have entirely separate accounts for each to avoid leaking any information at all.
This is a fundamental design question about the service layer in my application, which forms the core application functionality. Pretty much every remote call reaches a service sooner or later.
Now I am wondering whether
every service method should take a User argument naming the user the operation should be performed for,
or the service should always ask the security implementation which User is currently logged in, and operate on that user.
This is basically a flexibility vs. security decision, I guess. What would you do?
There is also a DoS aspect to consider.
One approach is to offer (depending on your context) a publicly available instance / entry point to the services, on a well throttled set-up; and a less restricted instance to an internal trusted environment.
In a similar vein, if you identify where traffic originates you can (or should) be able to provide better QoS to trusted parties.
So, I would possibly keep the core system (the services you write) fairly open / flexible, and handle some of the security related stuff elsewhere (probably in the underlying platform).
Just because you write one set of services doesn't mean you can only expose those in one place and all at the same time (to the same clients).
I think you should decide which methods need a User argument and which need the logged-in user. That gives you the following method types:
1. Type 1: the method is best served by a User argument.
2. Type 2: the method is best served without a User argument.
3. Type 3: a combination of 1 and 2.
Types 1 and 2 are trivial cases, so their solution is simple.
The solution for Type 3 is to overload the method, giving it one version of Type 1 and another of Type 2.
I try to look at security as an aspect. A User argument is required for things other than authentication as well, but I think control should reach the service layer's more important methods only if the user has been authenticated by some other filter. You can't have every method in the service layer querying the security module before proceeding.