I want to try out Puppet in my test environment. I am fine even if the communication between master and agent is insecure, and even if there is no authentication between master and agent; I understand the consequences. Is there a way to achieve this? Is it possible to disable certificate-based authentication between agent and master, and also to disable secure/encrypted communication between them?
I am a web hosting provider, and I automatically issue free HTTPS certificates to the domain owners on my platform using Let's Encrypt.
I need to be able to repeatedly test the certificate issuance process for a domain. Since I do not have unlimited domains to test with, I need a way to fully reset all certificates and any issuance state that has accumulated for a domain, so that I can repeat the process in a test suite.
How can I do this?
I am using Node/Express and greenlock as my server software.
If you also provide DNS
You should be able to set up a testing environment fairly easily that uses DNS (dns-01) challenges instead of HTTP challenges, so you can test all of your domains without affecting production.
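As a sketch of that idea (shown with certbot purely for illustration, not the asker's greenlock stack; the domain is a placeholder), an issuance against the Let's Encrypt staging endpoint with a dns-01 challenge touches neither production rate limits nor real certificates:

    # dns-01 challenge against the staging endpoint; ownership is proven via
    # a TXT record instead of serving a file over HTTP
    certbot certonly --manual --staging \
            --preferred-challenges dns \
            -d test.example.com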
If not, use staging and curl
I think it may be easiest to write your test with bash and curl at first. You can use curl's --cacert (or --capath) option so that the staging environment's root authority passes the certificate checks.
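A minimal sketch of such a check (the CA file and domain are placeholders; the staging root certificate must be downloaded beforehand):

    # Fetch headers from the test domain while trusting the staging root,
    # which ordinary clients would reject as untrusted
    curl --cacert ./staging-root.pem -I https://test.example.com/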
You could also do this more programmatically in Node.js, whose https module likewise lets you supply custom certificate authorities (the ca option) on a per-request basis.
I am looking for good practices for deploying with Capistrano.
I would like to start with a short description of how I have been doing deployments.
Capistrano is installed locally on a developer's computer, and I deploy through a gateway using Capistrano's :gateway option. At first I thought that with the :gateway option I would only need an SSH connection to the gateway host, but it turns out that I need an SSH connection (my public key) on every host I want to deploy to.
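For illustration, this is the kind of hop the :gateway option performs; the connection is only tunnelled through the gateway, and each application server still verifies the developer's own key, which is why the public key must be present on every target host (host names below are placeholders):

    # ~/.ssh/config sketch: connections to the app servers are proxied
    # through the gateway, but each app server still authenticates the user
    Host app-*.internal
        ProxyJump deploy@gateway.example.com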
I would like to find a convenient and secure way to deploy the application.
For example, when a new developer starts working, it is much more convenient to put his *public_key* only on the gateway server rather than on all the application servers. On the other hand, I don't want him to have arbitrary SSH access to the servers, the gateway in particular; he is a developer, and he only needs to run deployments.
If you are aware of good practices for deploying with Capistrano, please let me know.
Create dedicated user accounts for every developer on the gateway machine as well as on the rest of the servers; you will have to do this using the facilities your OS and SSH give you. Make sure the developer accounts cannot log in with an interactive shell on the gateway, and so on.
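One concrete way to do that (a sketch; the wrapper script path and key material are placeholders) is a forced command in the deploy user's ~/.ssh/authorized_keys on each server, which blocks interactive shells while still permitting the deployment task:

    # Every login with this key runs only the deploy wrapper: no PTY, no
    # forwarding, no interactive shell
    command="/usr/local/bin/deploy-only.sh",no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding ssh-rsa AAAA...key-material... dev1@example.com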
I can't provide you with all the details, but I think I have pointed you in the right direction. You can ask on Server Fault for the details of how to allow a user to log in and perform only certain tasks on a server.
Digression/Opinion: it's better to have developers whom you trust doing the deployments. If you do not trust a dev, better not to let him do crucial things like deploying to a production server.
I am not able to connect to Azure from a web app deployed in Tomcat.
I am getting the error below even though I am supplying the correct keystore password:
"Keystore was tampered with, or password was incorrect"
Please advise.
When you connect to the Windows Azure Management Portal using the Service Management API, the connection is made over an SSL tunnel, and a certificate is used to establish that tunnel.
First, I would suggest that this is not a Windows Azure-specific problem; it is more of a Java/Tomcat issue, most likely occurring because the code ran into a problem while selecting the certificate for the SSL tunnel.
To solve this problem, I can suggest the following:
On the VM/physical machine where the Tomcat web application is running, first locate the physical keystore file and delete it.
After that, create a new keystore with the correct password and set that same password properly in your Tomcat configuration, as in the sketch below.
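As a sketch of those two steps (paths, alias, and passwords are placeholders, assuming a JKS keystore):

    # First confirm it really is a password mismatch: a wrong -storepass here
    # reproduces the "Keystore was tampered with, or password was incorrect" error
    keytool -list -keystore /path/to/keystore.jks -storepass changeit

    # Then delete the keystore and recreate it with a known password; use the
    # same password in Tomcat's connector configuration
    rm /path/to/keystore.jks
    keytool -genkeypair -alias azure-mgmt -keyalg RSA -keysize 2048 \
            -keystore /path/to/keystore.jks -storepass changeit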
The truststore (cacerts) password had been supplied incorrectly! It works now.
We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we need the various servers to talk to each other: a new build needs to be scp'd over to a staging server, and the log collator needs to pull logs from the production servers. We are quickly realizing that we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group, and we have ended up copying *.pem files from server to server, rather making a mockery of security. The build server holds the access keys of the staging servers in order to connect via SSH and push a new build, and the staging servers similarly hold the access keys of the production instances (gulp!).
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong; the question is, what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be reachable from an external service (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by @rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you run on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host", and allow port 22 into every other EC2 instance only from the bastion. All keys used by your EC2 fleet are housed on the bastion host, and each user has their own account on it. These users should authenticate to the bastion using a password-protected key file. Once logged in, they have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion, it won't matter, because the other instances only accept SSH connections originating from the bastion.
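As a sketch of that security-group rule (group names are placeholders; the AWS CLI is used for illustration):

    # Allow SSH into the application servers only from members of the
    # "bastion" security group; port 22 is closed to everything else
    aws ec2 authorize-security-group-ingress \
        --group-name app-servers \
        --protocol tcp --port 22 \
        --source-group bastion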
Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
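A sketch of creating those pairs (key names are placeholders; the AWS CLI is used for illustration); the staging key can be handed to developers while the production key stays locked down:

    aws ec2 create-key-pair --key-name staging    --query 'KeyMaterial' --output text > staging.pem
    aws ec2 create-key-pair --key-name production --query 'KeyMaterial' --output text > production.pem
    chmod 400 staging.pem production.pem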
I would put new builds onto S3 and have a Perl script running on the boxes that pulls the latest build from your S3 bucket and installs it on the respective server. That way you don't have to manually scp every build over. You can also automate the process with a continuous-build tool that builds and drops the artifact into your S3 bucket. Hope this helps.
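As a sketch of that pull-and-install idea (shown in shell with the AWS CLI rather than Perl; bucket, paths, and service name are placeholders):

    #!/usr/bin/env bash
    # Pull the newest build from S3 and install it; meant to run on each
    # server, e.g. from cron or triggered by the build server
    set -euo pipefail
    aws s3 cp s3://my-build-bucket/app/latest.tar.gz /tmp/latest.tar.gz
    tar -xzf /tmp/latest.tar.gz -C /opt/app
    sudo service app restart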