Verbose logging for svn ssh connection - linux

Does anyone know how to get SVN to log the details of the ssh connection when operating over ssh?
When I can't connect, svn always gives me:
To better debug SSH connection problems, remove the -q option from 'ssh' in the [tunnels] section of your Subversion configuration file.
I've looked in the [tunnels] section of the config and nothing is currently enabled there. It seems you can specify how ssh gets called, and I tried passing -v to ssh this way, but it seemed to have no effect. What I really want is ssh's -v output when SVN tries to connect, although any additional logging would help.
How do I get verbose ssh logging through SVN?
I am using SVN at the command line on Linux.

If you are using a *nix-like system, or Cygwin on Windows, you can try this method:
$ export SVN_SSH="ssh -v "
$ svn checkout svn+ssh://xyz
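The same effect can be made persistent through the [tunnels] section of ~/.subversion/config, which the error message refers to. A minimal sketch, mirroring the entry that ships commented out in a stock config file:
[tunnels]
# the default passes -q, which suppresses ssh's error output;
# -v makes ssh print verbose connection details to stderr instead
ssh = $SVN_SSH ssh -v
Every svn+ssh:// operation should then print the full ssh negotiation.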

I just had the same issue. As described here, removing ~/.ssh/known_hosts resolved it for me.
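Deleting the whole of ~/.ssh/known_hosts discards every saved host key. If only the SVN server's entry is stale, ssh-keygen can remove just that one (svn.example.com below stands in for your actual host):
$ ssh-keygen -R svn.example.com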

Related

Pass password through Jenkins build step

I'm trying to add a build step in Jenkins to copy files from my build server to my web application server. I've got the following command working in the command prompt:
sudo scp -r /var/lib/jenkins/workspace/demoproj/publish root@0.0.0.0:/usr/temp
but when I run this command, it prompts me for a password every time. I found out about sshpass, but when I run this command...
sudo sshpass -p "passwordhere" scp -r /var/lib/jenkins/workspace/demoproj/pub root@0.0.0.0:/usr/temp
the terminal gets stuck and never makes it through.
My main problem is that if I add the first command as a build step in Jenkins, it won't be able to pass the password over. How can I either supply the password in Jenkins, or modify the command to pass my credentials?
Helpful information: I'm using PuTTY on Windows 10 to connect to my Ubuntu 16.04.3 LTS x64 servers from another Ubuntu 16.04.3 server.
First, sshpass needs to be installed on both systems, that is, the one running your Jenkins instance as well as the one you are trying to access as root@0.0.0.0. You can verify this with 'which sshpass' or 'whereis sshpass'. If it's missing from either of them, you need to install it first.
Also, have you ever tried to ssh to the machine in question, root@0.0.0.0, from the system where your Jenkins instance runs? If not, there might not be an entry in the known_hosts file of either system. In that case you can run ssh with the '-o StrictHostKeyChecking=no' option to make an automatic entry in known_hosts.
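That missing known_hosts entry is also the usual reason sshpass appears to hang: sshpass answers only the password prompt, not the yes/no host-key confirmation. A sketch combining the two, reusing the paths and host from the question:
$ sshpass -p "passwordhere" scp -o StrictHostKeyChecking=no -r /var/lib/jenkins/workspace/demoproj/publish root@0.0.0.0:/usr/temp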
Alternatively, if you don't want to enter the password again and again, you should work with keys. Generate a key pair for the two systems and do the scp or ssh with the -i option, as sketched below.
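A minimal sketch of the key-based route, assuming the commands run as the same user that executes the Jenkins build and that the remote host allows key authentication (the key filename jenkins_key is arbitrary):
$ ssh-keygen -t rsa -f ~/.ssh/jenkins_key -N ""   # no passphrase, so the build never prompts
$ ssh-copy-id -i ~/.ssh/jenkins_key.pub root@0.0.0.0   # install the public key on the remote host
$ scp -i ~/.ssh/jenkins_key -r /var/lib/jenkins/workspace/demoproj/publish root@0.0.0.0:/usr/temp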
You should use Jenkins credentials instead of putting sensitive passwords directly into scripts. Put the whole scp or ssh part inside a block which looks like: withCredentials(){}.
What's the point of having CI if you have to be nearby to enter the password every time? Install the "Publish Over SSH" plugin; it has a step for sending files over ssh.
https://wiki.jenkins.io/display/JENKINS/Publish+Over+SSH+Plugin
Look at the "Use SSH during a build" section; you can use the "send files or execute commands over SSH" build step, which becomes available after the plugin is installed.

Shared Library issues when running over SSH (linux)

I am having some difficulty running jobs over SSH. I have a series of networked machines which all have access to the same home folder (where my executable is installed). While working on one machine I would like to be able to run my code through ssh using the following sort of command:
ssh -q RemoteMachine ExecutableDir/MyExecutable InputDir/MyInput
If I ssh in to any of the machines I wish to run the job on remotely and simply run:
ExecutableDir/MyExecutable InputDir/MyInput
It runs without fail; however, when I run it through SSH I get an error saying some shared libraries can't be found. Has anyone come across this sort of thing before?
OK, I figured it out myself.
It seems that when you run things through ssh in the way shown above, the command executes in a non-login shell, so you don't inherit the PATH and other variables that you would if you ssh-ed in 'properly'. You can see this by running:
ssh RemoteMachine printenv
and comparing the output to what you would normally get if you were connected to the remote machine. The solution I then went for was to run something like the following:
ssh -q RemoteMachine 'source ~/.bash_profile && ExecutableDir/MyExecutable InputDir/MyInput'
which then picks up all the paths and other settings you might need (PATH, LD_LIBRARY_PATH, and so on) from the .bash_profile file on the remote machine.
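To see exactly which variables differ, one option (a sketch, assuming bash is the login shell on the remote machine) is to capture both environments and diff them:
$ ssh RemoteMachine printenv | sort > noninteractive.env
$ ssh RemoteMachine 'bash -l -c printenv' | sort > login.env
$ diff noninteractive.env login.env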

SCP error: Bad configuration option: PermitLocalCommand

When I execute this command below:
scp -P 36000 hdfs@192.168.0.114:~/tmp.txt SOQ_log.txt
I get an error:
command-line: line 0: Bad configuration option: PermitLocalCommand
Does anyone know why?
scp runs a copy of the ssh program to create the communications channel, and it runs ssh with the options:
-oForwardAgent=no -oPermitLocalCommand=no -oClearAllForwardings=yes
So that explains where the "PermitLocalCommand" option is coming from in the first place. I'll add that sftp uses the same options to run ssh, so it'll probably display the same behavior.
"PermitLocalCommand" is normally a valid ssh configuration option. If your copy of ssh is complaining about it, then it seems that your copy of ssh isn't the normal copy of ssh that goes with your copy of scp.
This Server Fault question suggests that the error could be due to someone installing a malware version of ssh (i.e., a rootkit) on your system. This forum thread also suggests that the problem is due to having an altered version of ssh, which was fixed by removing and reinstalling the OpenSSH client utilities.
An alternate explanation would be that someone, maybe your Linux distro maintainer, has installed a version of ssh on your system with that option removed, and you're using it unawares. Or you have a very old version of the ssh program for some reason, which doesn't support the option.
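To check which ssh binary scp is actually picking up, a quick diagnostic sketch:
$ which -a ssh    # list every ssh on the PATH, in lookup order
$ ssh -V          # print the version of the first one found
$ ssh -o PermitLocalCommand=no localhost true    # a stock OpenSSH parses this option without complaint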
My system is CentOS 5.9.
I was facing the same problem, and I found it to be due to this configuration line in /etc/ssh/sshd_config:
# override default of no subsystems
Subsystem sftp /opt/libexec/sftp-server
But I could not run /opt/libexec/sftp-server; it was broken for some reason.
It is now solved by reinstalling the remote openssh-server:
yum erase openssh-server
yum install openssh-server
Now the line changes to:
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
and /usr/libexec/openssh/sftp-server is runnable.
Don't forget to execute:
/etc/init.d/sshd restart
Sometimes the command cannot parse this kind of path:
:~/
I'd change it to the full path.
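Applied to the command in the question, that means expanding the ~/ shorthand (assuming the hdfs user's home directory is /home/hdfs; adjust as needed):
$ scp -P 36000 hdfs@192.168.0.114:/home/hdfs/tmp.txt SOQ_log.txt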

SSH Agent no longer starting after installing Cygwin

Installed msysGit, wrote the code to start ssh-agent in .profile, everything worked.
Installed Cygwin, without Git, just ssh and cURL.
The SSH agent no longer starts when Git Bash opens.
I can start a new ssh-agent process; I see it when running ps in the Git Bash, but when I try to use ssh-add, I get this error:
Could not open a connection to your authentication agent.
With Cygwin, lots of .profile and .bashrc files were created in its install folder (C:\cygwin). Not sure if this is the issue.
How can I fix this, please?
Take a look at my answer posted here for the ssh-add issue. Hopefully, this solution should work in your scenario too.
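For what it's worth, the error "Could not open a connection to your authentication agent" usually means an agent is running but SSH_AUTH_SOCK and SSH_AGENT_PID were never exported into the current shell. A minimal sketch of the usual .profile snippet:
# start the agent and export its variables into this shell
eval "$(ssh-agent -s)"
# keys can then be added
ssh-add ~/.ssh/id_rsa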

Command not found via ssh with single command, found after connecting to terminal [duplicate]

Possible Duplicate:
Why does an SSH remote command get fewer environment variables then when run manually?
If I run the command
ssh user@$IP ant
then I receive
bash: ant: command not found
but when I log in with
ssh user@$IP
and run
ant
then it works fine.
Ant is installed on both the remote and local machines.
Where is the problem?
I've tried to find a solution on Google and found nothing.
Thanks in advance for any help!
--EDIT--
I need to invoke some bash scripts and I don't want to change all the paths to full paths.
By default, profiles aren't loaded when connecting via ssh. To enable this behaviour, set the following option in /etc/ssh/sshd_config:
PermitUserEnvironment yes
Afterwards, restart ssh:
/etc/init.d/ssh restart
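Note that PermitUserEnvironment on its own only tells sshd to read ~/.ssh/environment on the remote machine; the variables still have to be put there. A sketch (the /opt/ant/bin path is a placeholder for wherever ant actually lives):
# ~/.ssh/environment on the remote machine, read by sshd
# only when PermitUserEnvironment is enabled
PATH=/usr/local/bin:/usr/bin:/bin:/opt/ant/bin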
Specify the absolute path to ant; if I recall correctly, your profile doesn't get run when you run a remote ssh command.
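Alternatively, forcing a login shell on the remote end makes the profile run, so a bare ant resolves; a sketch using the same host variable as the question:
$ ssh user@$IP 'bash -l -c ant'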
