Puppet Windows Agent Certificate - puppet

I have a Puppet master on RHEL 6 and an agent on Windows.
The agent shows up properly in the web console, but it is not downloading a new catalog due to a CA error.
I renewed the certificate on the client, but the Windows cert does not show up on the master for signing at all.

It appears the agent has generated a new certificate, and the master will only accept one certificate per machine (keyed on the FQDN, or fully qualified domain name). What you need to do is remove the old certificate from the master so that it will accept the new request from the machine.
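A minimal sketch of that cleanup, assuming a Puppet 3.x-era master and an agent certname of agent01.example.com (a placeholder; use the node's actual FQDN and confirm the ssl path with puppet config print ssldir):
On the master, remove the old signed certificate:
puppet cert clean agent01.example.com
On the Windows agent, from an elevated prompt, wipe the local SSL data and request a new certificate:
rmdir /s /q C:\ProgramData\PuppetLabs\puppet\etc\ssl
puppet agent -t
Back on the master, sign the new request:
puppet cert sign agent01.example.com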
You should also make sure you always run Puppet from an elevated process (unless you are in an advanced scenario where you are deliberately using lower privileges and know all the ins and outs of what that entails on Windows). The reason? The Puppet home for elevated processes is C:\ProgramData\PuppetLabs\Puppet, while for non-elevated processes it is ~/.puppet (usually C:\Users\username\.puppet). A certificate request for each machine can only be accepted once, but a non-elevated process won't see the certificate in ProgramData and will unsuccessfully try to request another.
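To confirm which directories a given prompt is actually using (works on Puppet 3 and later), you can run:
puppet config print confdir ssldir
From an elevated prompt this should point under C:\ProgramData\PuppetLabs, while a non-elevated prompt will point under the user profile instead.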
Also make sure that the firewall on the Windows machine is not preventing it from reaching the Puppet server; the port is usually 8140. A blocked connection can surface as an SSL error when contacting the master.
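A quick connectivity check from the Windows agent, where puppetmaster.example.com stands in for your master's hostname (the Telnet client may need to be enabled on Windows first):
telnet puppetmaster.example.com 8140
If the connection is refused or times out, fix the firewall or network path before chasing certificate errors.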

Related

Unable to use docker due to ZScaler and certificate issues

I have VMware Photon OS running in VMware Player. This will be used as the host OS to run Docker containers.
However, since I'm behind a ZScaler, I'm having issues running commands that access external resources. E.g. docker pull python gives me the following output (I added some line breaks to make it more readable):
error pulling image configuration:
Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/a0/a0d32d529a0a6728f808050fd2baf9c12e24c852e5b0967ad245c006c3eea2ed/data
?Expires=1493287220
&Signature=gQ60zfNavWYavBzKK12qbqwfOH2ReXMVbWlS39oKNg0xQi-DZM68zPi22xfDl-8W56tQmz5WL5j8L39tjWkLJRNmKHwvwjsxaSNOkPMYQmhppIRD0OuVwfwHr-
1jvnk6mDZM7fCrChLCrF8Ds-2j-dq1XqhiNe5Sn8DYjFTpVWM_
&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q:
x509: certificate signed by unknown authority
I have tried to extract the CA root certificates (in PEM format) for ZScaler from my Windows workstation and have appended them to /etc/pki/tls/certs/ca-bundle.crt. But even after restarting Docker, this didn't solve the issue.
I've read through numerous posts, most referencing the command update-ca-trust, which does not exist on my system (even though the ca-certificates package is installed).
I have no idea how to go forward. AFAIK, there are two options. Either:
Add the ZScaler certificates so SSL connections are trusted.
Allow insecure connections to the Docker hub (but even then it will probably still complain because the certificate isn't trusted).
The latter works, by the way; e.g. executing curl with the -k option allows me to access any HTTPS resource.
The problem is that ZScaler is acting as a man-in-the-middle, doing SSL inspection in your organization (see https://support.zscaler.com/hc/en-us/articles/205059995-How-does-Zscaler-protect-SSL-traffic-).
Since you've already tried putting the certificate in Docker, I guess you are familiar with the steps described in https://stackoverflow.com/a/36454369/1443505. That answer is almost correct for the ZScaler scenario. The thing to note is that, because ZScaler intercepts the CA tree, we need to add all the certificates in the chain.
At the moment, the certificate chain behind ZScaler consists of the ZScaler root CA plus its intermediate certificates.
We need to export them all, one by one, and follow the instructions in https://stackoverflow.com/a/36454369/1443505 for each of them.
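A sketch of one way to capture the whole chain from the Photon OS host, using registry-1.docker.io purely as an example endpoint that passes through ZScaler:
openssl s_client -showcerts -connect registry-1.docker.io:443 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > zscaler-chain.pem
cat zscaler-chain.pem >> /etc/pki/tls/certs/ca-bundle.crt
systemctl restart docker
This appends every certificate ZScaler presents, root and intermediates included, to the bundle Docker reads, which is the same idea as the linked answer applied to the entire chain.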

Puppet agent returning error on Windows 7

I am new to Puppet and have very little knowledge of it. I have just started working on it and am trying to configure a Windows machine as an agent.
While trying to run the agent process, it gives the following error:
Error: Could not request certificate: No connection could be made because the target machine actively refused it.
The servers are otherwise able to communicate. I have tried to Google it, but without success.
This was happening because the firewall was active on the master machine, which is why the agent machine was not able to communicate with the master.
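A minimal sketch of the fix, assuming a RHEL/CentOS-style master using the iptables firewall and the default Puppet port of 8140:
iptables -I INPUT -p tcp --dport 8140 -j ACCEPT
service iptables save
Once the port is open, re-running puppet agent -t on the Windows machine should let the certificate request go through.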

Could not open a connection to your authentication agent (REVIEW)

First of all, I am aware that this question has been posted several times on Stack Overflow here, here, here, as well as in some other places.
However, I decided to open a new thread (and take the risk of being downvoted) because I don't think there is an actual issue with my machine, but with PuTTY.
Environment description
In a nutshell, I have a Windows machine running a virtual machine (VMware).
Host machine: Windows 7 (64 bit)
Guest machine: CentOS 6 with graphic windows environment.
Network connection perfectly set up, so no problems with the firewall. Both machines are pingable from each other, and I can surf the internet from both
Selinux disabled on guest machine
PuTTY is properly configured (or so I think). The reason to back up that statement is that I can SSH into the guest machine from the host machine with the encrypted SSH keys that I created for that purpose. However, I think there is still some configuration missing. Keep reading.
I have configured Gitolite on the guest machine, and it is up and running.
Although not relevant for this issue, I have a Samba share configured on the guest machine, where I have all my repos. The share is accessible from the host machine, and I can edit the files with no problem whatsoever.
VM Player 7
Guest machine recently restarted and no additional commands have been issued.
PuTTY installed on the host machine
Case scenario #1 (it works)
This case scenario describes the behaviour I expect to achieve. Basically, this procedure is done within the VM itself, that is, by operating the machine through VM Player.
Open a terminal as root
service sshd status yields openssh-daemon (pid 1557) is running...
ssh-add -l yields 2048 1b:31 [...] b8:de Git Admin (RSA), 2048 d2:58 [...] f6:2b pando (RSA) and (2048) be:9b [...] dc:e9 web (RSA). These are the three users I have configured in my virtual machine. The SSH keys have been automatically loaded and added to the list of identities of the SSH service.
Log out as root from the CLI. I am now a standard user (the pando user).
Edit one file in one of the repos
git commit -a -m "My message" is successful because the Git Admin key is in the identity list of the SSH agent
git push origin master is also successful, for the same reasons
Case scenario #2 (it does not work)
This case scenario describes the same procedure, but from the PuTTY terminal. I added the same SSH keys to Pageant as described in Case Scenario #1, point 3. It looks like everything is OK with PuTTY, because I can successfully SSH into my virtual machine.
Open the PuTTY terminal. I am logged in as user pando (which is one of the identities mentioned in Case Scenario 1).
su
service sshd status yields openssh-daemon (pid 1557) is running... (notice that it is the same result as we got in point #2 of the first case scenario)
ssh-add -l yields Could not open a connection to your authentication agent
Because the previous step failed, I have all the issues described in the hyperlinked threads at the beginning of this post.
Now, I am familiar with the procedure of eval $(ssh-agent) and then manually adding the SSH keys from my SSH folder. In fact, I do that every time I SSH into the virtual machine. But I would rather not have to.
I am also familiar with adding a script to the .bashrc file, but the last time I did that, I got a collateral effect with Puppet.
So the basic question is: What's the difference between both case scenarios, even though I am using the same SSH keys? Is it that Pageant is not forwarding the keys? If so, why am I able to SSH into my machine? Why should I change the .bashrc file of my pando user in the second case scenario, when I can achieve exactly the same thing without it in the first case scenario? I guess I am missing a fundamental piece of information here.
Hope that makes sense.
Regards.
The openssh-daemon and the authentication agent are not the same thing. You are interested in the authentication one, aka ssh-agent, which is your personal key store. The openssh-daemon, aka sshd, is a server that runs system-wide and accepts connections to your computer.
Desktop environments usually start an authentication agent (ssh-agent, seahorse, gnome-keyring) by default, so ssh-add works for you. But the connection to it is stored in environment variables, which are dropped in the transition from your user to the superuser (su).
You can keep the connection by using the -m switch to su. This preserves environment variables, and with them your connection to the authentication agent.
What's the difference between both case scenarios, even though I am using the same SSH keys?
There should be no difference, except that su drops environment variables and does not execute .bashrc and similar scripts when changing user (you can force su to behave like a login shell using su -l, but that is not the problem here). The problem is that the connection to the authentication agent is carried in an environment variable pointing to a UNIX domain socket, and that gets lost during su. Use su -m and it should work for you.
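A short sketch of scenario #2 with that change, assuming the agent connection (whether a local ssh-agent or a forwarded Pageant) is available in the pando session:
ssh-add -l     # as pando: the keys are listed
su -m          # become root but keep the environment, including SSH_AUTH_SOCK
ssh-add -l     # the same keys should still be listed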
Is it that Pageant is not forwarding the keys?
Forwarding needs to be enabled in PuTTY, under Connection > SSH > Auth ("Allow agent forwarding").

How can we remove one server from the Directory in OpenLDAP?

We installed an OpenLDAP 2.4.31 solution on Debian, and several machines on the site are using it. Local authentication is not disabled on the machines, though.
One of the machines has some problems, and its developers asked us to disable central authentication for it. Due to policy, we are not able to change anything on the machine itself and can only configure our LDAP server. How can we stop one specific machine from using our LDAP server?
You can block the machine's address at the LDAP server, but make sure the machine doesn't get locked out!
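One possible sketch, assuming the Debian slapd package was built with TCP wrapper support (historically it is) and using 192.0.2.10 as a placeholder for that machine's IP, is to deny just that host on the LDAP server:
echo "slapd: 192.0.2.10" >> /etc/hosts.deny
slapd then refuses connections from that host, and since local authentication is still enabled there, the machine falls back to its local accounts.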

Microsoft IIS 7.5 FTP SSL problem

I have configured the IIS 7.5 FTP service to use SSL. I have two environments (one for testing purposes, without SSL). When we activate SSL, users can log on, list, and get files maybe one time if they're lucky; then the host (service?) becomes unreachable for some reason. I have no idea what happens or why the FTP "locks" itself. When the FTP is in the "locked" state I am still able to telnet to the FTP service, but logins do not work.
The test environment without SSL works perfectly and never locks itself. I have also tried turning off SSL in the production environment, and that makes that environment work perfectly too.
So the problem must be with SSL (the certificate is from VeriSign). Has anyone experienced the same problem and knows what the cause of this could be?
/ Tommy
See this document
Specifically these sections:
Using Windows Firewall with secure FTP over SSL (FTPS) traffic
More Information about Working with Firewalls
(At the bottom)
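The usual culprit those sections describe is the passive data channel: once the control channel is encrypted, the firewall can no longer watch it to open data ports on the fly, so sessions start failing after the first listing or transfer. A sketch of the common fix on the IIS box, with 5000-5100 chosen purely as an example range:
%windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5100
netsh advfirewall firewall add rule name="FTPS data channel" dir=in action=allow protocol=TCP localport=5000-5100
net stop ftpsvc && net start ftpsvc
Restarting the Microsoft FTP service (ftpsvc) makes the new data-channel range take effect.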
