Could not open a connection to your authentication agent - Linux

First of all, I am aware that this question has been posted several times on Stack Overflow (here, here, here), as well as in some other places.
However, I decided to open a new thread (and take the risk of being downvoted) because I don't think the actual issue is with my machine, but with PuTTY.
Environment description
In a nutshell, I have a Windows machine running a virtual machine (VMware).
Host machine: Windows 7 (64 bit)
Guest machine: CentOS 6 with graphic windows environment.
The network connection is properly set up, so no firewall problems. Both machines can ping each other, and I can browse the internet from both.
SELinux is disabled on the guest machine.
PuTTY is properly configured (or so I think). The reason I believe so is that I can SSH into the guest machine from the host machine with the encrypted SSH keys I created for that purpose. However, I think there is still some configuration missing. Keep reading.
I have configured gitolite on the guest machine, and it is up and running.
Although not relevant for this issue, I have a Samba share configured on the guest machine, where I have all my repos. The share is accessible from the host machine, and I can edit the files with no problem whatsoever.
VM Player 7
Guest machine recently restarted and no additional commands have been issued.
PuTTY installed on the host machine
Case scenario #1 (it works)
This case scenario describes the behaviour I expect to achieve. Basically, this procedure is done within the VM itself, that is, by operating the machine directly through VM Player.
Open a terminal as root
service sshd status yields openssh-daemon (pid 1557) is running...
ssh-add -l yields 2048 1b:31 [...] b8:de Git Admin (RSA), 2048 d2:58 [...] f6:2b pando (RSA) and 2048 be:9b [...] dc:e9 web (RSA). These are the three users I have configured in my virtual machine. The SSH keys have been automatically loaded and added to the list of identities of the SSH agent.
Log out as root from the CLI. I am now a standard user (the pando user).
Edit one file in one of the repos
git commit -a -m "My message" is successful because the Git Admin key is in the identity list of the SSH agent
git push origin master is also successful, for the same reasons
Case scenario #2 (it does not work)
This case scenario describes the same procedure, but from the PuTTY terminal. I added to Pageant the same SSH keys described in case scenario #1, step 3. It looks like everything is OK with PuTTY, because I can successfully SSH into my virtual machine.
Open the PuTTY terminal. I am logged in as user pando (which is one of the identities mentioned in case scenario #1).
su
service sshd status yields openssh-daemon (pid 1557) is running... (notice that this is the same result as in step 2 of the first case scenario)
ssh-add -l yields Could not open a connection to your authentication agent
Because the previous step failed, I have all the issues described in the hyperlinked threads at the beginning of this post.
Now, I am familiar with the procedure of running eval $(ssh-agent) and then manually adding the SSH keys from my ~/.ssh folder, roughly as sketched below. In fact, I do that every time I SSH into the virtual machine. But I would prefer not to have to.
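For reference, a minimal sketch of that manual workaround (the key file name is illustrative; yours may differ):
eval $(ssh-agent)        # start a fresh agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa    # add each private key; repeat for the other identities
ssh-add -l               # confirm the identities are loaded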
I am also familiar with adding some script to the .bashrc file, but the last time I did that, I got a collateral effect with Puppet.
So the basic question is: what's the difference between both case scenarios, even though I am using the same SSH keys? Is it that Pageant is not forwarding the keys? If so, why am I able to SSH into my machine? Why should I change the .bashrc file of my pando user in the second case scenario, when I can achieve exactly the same thing without it in the first case scenario? I guess I am missing a fundamental piece of information here.
Hope that makes sense.
Regards.

The openssh-daemon and the authentication agent are not the same thing. You are interested in the authentication one, a.k.a. ssh-agent, which is your personal key store. The openssh-daemon, a.k.a. sshd, is the server that runs system-wide and accepts incoming connections to your computer.
Desktop environments usually start authentication agents (ssh-agent, seahorse, gnome-keyring) by default, so ssh-add works for you there. But the connection is stored in environment variables, which are dropped in the transition from your user to the superuser (su).
You can make the connection persist by using the -m switch to su. This preserves the environment variables, and with them your connection to the authentication agent.
What's the difference between both case scenarios, even though I am using the same SSH keys?
There should be no difference, except for the su part dropping environment variables and not executing .bashrc and similar scripts when changing user (you can force su to behave as a login shell using su -l, but that is not the problem here). The problem is that the connection to the authentication agent is kept in an environment variable pointing at a UNIX domain socket, and that variable gets lost during su. If you use su -m, it should work for you.
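For example, a rough check from inside the PuTTY session (assuming agent forwarding is enabled in PuTTY as described below):
echo "$SSH_AUTH_SOCK"    # points at the UNIX socket of the forwarded agent
su -m                    # -m (or -p) keeps the environment, including SSH_AUTH_SOCK
ssh-add -l               # should now list the keys loaded in Pageant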
Is it that Pageant is not forwarding the keys?
Forwarding needs to be allowed in PuTTY: in the configuration tree, go to Connection → SSH → Auth and tick "Allow agent forwarding", with Pageant running and the keys loaded before the session is opened.

Related

How to launch a "rogue" CLI server as an unprivileged user

Let's state a situation:
I have the possibility to run arbitrary commands on a server as an unprivileged user, through "unconventional means".
I do not have the possibility to log in to that server using ssh, either as my unprivileged user or as anything else, so I do not currently have a CLI allowing me to run any commands I would like in a "normal" way.
I can ping that server and nothing prevents me from connecting to arbitrary ports.
I would still like to have a command line allowing me to run arbitrary commands as I wish on that server.
Theoretically nothing would prevent me from launching any program as my unprivileged user, including one that would open a port, allow some remote user to connect to it and just forward any commands to bash, returning the result. I just don't know any good program to do that.
So, does anyone know? I looked at ways to launch sshd as an unprivileged user, but some users reported that recent versions no longer allow that. Actually I don't even need ssh specifically; any way to get a working CLI would do the trick. Even a crappy node.js program launching an HTTP server would work, as long as I have a CLI (... and it's not excessively crappy, the goal is to have a clean CLI, not something that glitches every two characters).
In case you are wondering why I would like to do that, it's not related to anything illegal ^^. I just have to work with a very crappy Jenkins server for which I'm not allowed direct access to its agents. Whoever is responsible for that server doesn't give a sh** about its users' needs, so we have to use hacky solutions just to get some diagnostic data about that server (like RAM, CPU and disk usage, installed programs, etc.). Having a CLI that I can launch on demand, instead of altering a build configuration and waiting 20 minutes for an answer about what's going on, would really help.
Thanks in advance for any answer.
So do you have shell access to the server at least once? E.g., during the single day of the month when you are physically present at the site of your client or the outsourcing contractor?
And if you have shell access then, can you or your sysadmin install Cockpit?
It listens on port 9090.
You can then use the credentials of your local user and open a terminal window in your browser. See sidebar item "Terminal" on the screenshots of the cockpit homepage.
According to the documentation
Cockpit has no special privileges and doesn’t run as root. It creates a session as the logged in user and has the same permissions as that user.
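If installing something like Cockpit is not an option, here is a minimal (and insecure) sketch of the bind-shell idea from the question, assuming ncat from the nmap package happens to be present on the server; the port number and hostname are illustrative:
# on the server, as the unprivileged user: hand a shell to whoever connects to TCP port 4444
# (no encryption, no authentication - treat it as a temporary diagnostic hack only)
ncat -l 4444 -e /bin/bash
# from your own machine:
ncat that-jenkins-agent.example.com 4444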

Is it possible to write a shell script that takes input, then will ssh into a server and run scripts from my local machine?

I have several scripts on my local machine. These scripts run install and configuration commands to setup my Elasticsearch nodes. I have 15 nodes coming and we definitely do not want to do that by hand.
For now, let's call them Script_A, Script_B, Script_C and Script_D.
Script_A will be the one to initiate the process; it currently contains:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
ssh root@$hostname
This works fine, obviously, and I can get into any server I need to. My confusion is about running the other scripts remotely. I have read a few other articles/SO questions, but I'm just not understanding the methodology.
I will have a directory on my machine as follows:
Elasticsearch_Installation
|
|=> Scripts
|
|=> Script_A, Script_B, etc..
Can I run Script_A, which remotes into the server, then come back to my local machine and run Script_B and so on against the remote server, without moving the files over?
Please let me know if any of this needs to be clarified; I'm fairly new to the Linux environment in general, much less running remote installs from scripts over the network.
Yes, you can. Use ssh in non-interactive mode; it will be like launching a command in your local environment.
ssh root@$hostname /remote/path/to/script
Nothing will be changed in your local system, you will be at the same point where you launched the ssh command.
NB: this command will ask you for a password; if you want a truly non-interactive flow, set up passwordless login on the host, as explained here:
How to ssh to localhost without password?
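As a rough sketch of that setup, assuming OpenSSH on the local machine (key type and size are just one possible choice):
ssh-keygen -t rsa -b 4096                    # generate a key pair if you do not already have one
ssh-copy-id root@$hostname                   # install the public key on the remote node
ssh root@$hostname /remote/path/to/script    # now runs without a password prompt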
You have a larger problem than just setting up many nodes: you have to be concerned with ongoing maintenance and administration of all those nodes, too. This is the space in which configuration management systems such as Puppet, Ansible, and others operate. But these have a learning curve to overcome, and they require some infrastructure of their own. You would probably benefit from one of them in the medium-to-long term, but if your new nodes are coming next week(ish) then you probably want a solution that you can use immediately to get up and going.
Certainly you can ssh into the server to run commands there, including non-interactively.
My confusion is about running the other scripts remotely.
Of course, if you want to run your own scripts on the remote machine then they have to be present there, first. But this is not a major problem, for if you have ssh then you also have scp, the secure copy program. It can copy files to the remote machine, something like this:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
scp Script_[ABCD] root@${hostname}:./
ssh root@${hostname} ./Script_A
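With 15 nodes coming, you could take this one step further and loop over a list of hostnames; a possible sketch, where the file name nodes.txt is an assumption:
#!/bin/bash
# provision every node listed in nodes.txt (one hostname per line)
while read -r hostname; do
    echo "Now provisioning $hostname"
    scp Script_[ABCD] root@"${hostname}":./
    ssh root@"${hostname}" ./Script_A
done < nodes.txt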
I also manage Elasticsearch clusters with multiple nodes. A hack that works for me is using the Terminator terminal emulator and splitting it into multiple windows/panes, one for each ES node. Then you can broadcast the commands you type in one window to all the windows.
This way, you run commands and view their results almost interactively across all nodes in parallel. You can also save this window layout in Terminator and restore it quickly with a shortcut.
PS: this approach only works if you have a small number of nodes, and only for small tasks. The only thing that will scale with the number of nodes and with the number and variety of tasks you need to perform is probably a config management solution like Puppet or Salt.
Fabric is another interesting project that may be relevant to your use case.

How to use Sudo with a Jenkins bash script over SSH?

I have a physical machine (Win7) and a virtual machine (Red Hat) sharing a network. I am executing a bash script as a Jenkins job using the SSH plugin for Jenkins to deploy an application from the physical machine to the virtual machine.
I cannot use the root user due to the security policy on the machine I want the script executed on, and am instead limited to using standard users with sudo access.
However, I want the script to run without interruption (I don't think Jenkins even allows you to enter user passwords when running bash scripts? Also, this doesn't seem like best practice anyway).
Is there any method of bypassing the sudo request or configuration of the script that would allow this process to run the way I wish?
You can use a NOPASSWD directive in /etc/sudoers; see the sudoers manual page.
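For example, an illustrative /etc/sudoers entry (always edit it with visudo); the user name and command path here are assumptions, not taken from your setup:
# allow the Jenkins deploy user to run exactly one command as root, without a password prompt
jenkins  ALL=(root) NOPASSWD: /opt/deploy/deploy_app.sh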
If that is not possible, you can always use expect to feed the password (stored somewhere in the account) to the sudo process. This is probably against your security policy as well. You need to talk to the people who make the policy for advice on how you can automate these jobs (“digital transformation” and all that).

Trying to set up a Linux service in IBM Tivoli Identity Manager (ITIM)

I am currently trying to set up a Linux service with IBM Tivoli Identity Manager (IBM Security Identity Manager), a.k.a. ITIM, for a Linux development server where I work, and have had some issues. All our Linux servers use ssh for connections. Our eventual goal is to implement single sign-on across our networks using Identity Manager.
In the ITIM web interface, I chose the option MANAGE SERVICES and was shown a page where I clicked the CREATE button to create a new service.
I am then shown a page where I choose the kind of service I want to create; on this page I choose the POSIX LINUX option because I want to connect to a Linux server.
On the next page, I enter the information for the Linux server that I want to connect to; the domain name of the server is phongdev.fit.edu, a server for development work.
Note that on this page there is a field titled TIVOLI DIRECTORY INTEGRATOR (TDI) with default information for the TDI installation. In my case, TDI is installed on the same server as ITIM, so the localhost domain name should be fine. However, when I check the server using the netstat command there is nothing running on that port, 16231. I looked up the instructions for starting the TDI Dispatcher on Google and was told to run /etc/init.d/ITIMAd restart at the command line; that appeared to run successfully, but there is still nothing running on port 16231 on the server.
Since our servers use SSH, I was required by ITIM to set up key-based authentication. I did set up a key and passphrase on this Linux server using ssh and entered the data on the next screen of ITIM, but an error is generated when I choose the TEST CONNECTION button.
I checked the logs and there is no information in them about these errors. I am not sure where to go next in trying to solve this issue; I suspect it may be related to the fact that the TDI Dispatcher does not appear to be running on port 16231.
Apart from what Matt said (the link especially is useful), the /var/ibm/tivoli/common/TDI logs should tell you what the problem with TDI is when you start it up - if there's a problem.
The port number where it's listening ought to be mentioned somewhere in those logs.
Unless there was an upgrade or multiple attempts to configure the RMI dispatcher I don't see why the port shouldn't be 16231 or 1099.
TDI is probably running on a different port. You didn't specify whether TDI is running on Windows or Linux, so my answer assumes Linux, since that is what I am most familiar with.
You can find your port # by looking in the solution.properties file in your TDI/timsol directory. It should be listed as api.remote.naming.port.
TDI runs on the default port 1099. Once you start TDI (service ITIMAd start, or however you start it on your system), use ps auxw | grep -i rmi (or something similar) to find the process. Then use netstat -anp | grep PID, where PID is the process ID of the TDI RMI process. You should see immediately which port it is listening on. I don't have access to a TDI server right now to give you the exact commands, but you should get the idea.
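Put together, the check looks roughly like this; the TDI install path is an assumption and will differ on your system:
grep api.remote.naming.port /opt/IBM/TDI/V7.1/timsol/solution.properties   # configured RMI port, if any
service ITIMAd start                 # start the dispatcher (or however you start it)
ps auxw | grep -i rmi                # note the PID of the RMI dispatcher process
netstat -anp | grep <PID>            # see which port that PID is actually listening on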
Here is a good article for ISIM 6 (should be the same for ITIM 5.1 on TDI 7) on changing the port # for the RMI:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=%2Fcom.ibm.itim_pim.doc%2Fdispatcher%2Finstall_config%2Ft_changeportnum.htm
If you are experiencing error CTGIMT600E and you have multiple network interfaces on TDI 6 or lower, you may need to specify your server IP (or hostname) as a Java system property so the TDI RMI service binds to the correct interface. Edit <tdi_home>/ibmdisrv and insert -Djava.rmi.server.hostname=<yourhost>. For more information refer to this article:
http://www-01.ibm.com/support/docview.wss?uid=swg21381101
If you are still having issues, watch your ITIM msg.log and trace.log when you test the connection and look for clues. Also look at the TDI ibmdi.log which will be located under your TDI directory. That may also help you out.

Best Practice? Restart a CentOS service via ssh securely?

I need to restart a CentOS service remotely via ssh during an automated, unattended process (executing a build of some software from a build server), but am unsure how best to implement security. Help is needed! ;-)
Environment:
Running an ssh login on a remote box, I want to execute on my server something like:
/sbin/service jetty restart.
The ssh call is being made during a maven build process (probably doesn't affect anything, really).
I want the ssh session to login with a user that has practically zero permissions on the server except to execute the above.
I can set up shared key access for the ssh session.
Thanks!
It's a good idea to use an ssh key. You can then use a 'forced command' for that particular key, so it won't be able to run any other command. See http://www.eng.cam.ac.uk/help/jpmg/ssh/authorized_keys_howto.html
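A minimal sketch of such an entry in the restricted user's ~/.ssh/authorized_keys on the CentOS server (the key material is shortened and the user/host names are illustrative):
# no matter what the client asks for, this key only ever runs the jetty restart
command="/sbin/service jetty restart",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza...rest_of_key build@buildserver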
