I'm on AWS EMR:
Master: 172.31.1.172
slave1: 172.31.1.11
slave2: 172.31.1.245
I have configured /etc/hosts:
Then the ${SPARKPATH}/conf/spark-env.sh file:
Then the ${SPARKPATH}/conf/slaves.template file:
And the ${SPARKPATH}/conf/workers file:
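For reference, given the addresses above, these files would presumably contain something like the following (the hostnames master, slave1, and slave2 are assumptions based on the list at the top):
# /etc/hosts (the same on every node)
172.31.1.172  master
172.31.1.11   slave1
172.31.1.245  slave2
# ${SPARKPATH}/conf/workers (one worker hostname per line)
slave1
slave2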
I have configured SSH so that the master can connect directly, without any password, to:
root@slave1
root@slave2
However, when I try ./start-slaves.sh, I get the output below:
In start-all.sh, start-master.sh runs without problems, but start-slaves.sh does not.
It seems to be caused by the SSH connection, yet I can run ssh root@slave1 and it works.
Just follow these steps:
Switch to the superuser with:
su
Then:
ssh-keygen -t rsa
It will create id_rsa.pub in /root/.ssh/
Then, copy /root/.ssh/id_rsa.pub into the authorized_keys file on your localhost:
cat /root/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
If ssh localhost works without a password, you have succeeded.
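Note that for start-slaves.sh the master also needs passwordless SSH to each worker, not just to itself. A minimal sketch, assuming the hostnames from the question (ssh-copy-id appends the public key to the remote authorized_keys for you):
ssh-copy-id root@slave1
ssh-copy-id root@slave2
# verify that each hop is passwordless:
ssh root@slave1 hostname
ssh root@slave2 hostname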
I have some Jenkins jobs defined using a Jenkins Pipeline Model Definition, which build npm projects. I use Docker containers to build these projects (using a common image with just Node.js + npm + yarn).
The results of the builds are contained in the dist/ folder that I zipped using a zip pipeline command.
I want to copy this ZIP file to another server using SSH/SCP (with private key authentication). My private key is added to the Jenkins environment (credentials manager), but when I use Docker containers, an SSH connection cannot be established.
I tried to add agent { label 'master' } to use the master Jenkins node for the file transfer, but it seems to create a clean workspace with a fresh Git fetch, and without my built files.
After I tried the SSH Agent Plugin, I have this output:
Identity added: /srv/jenkins3/workspace/myjob-TFD@tmp/private_key_370451445598243031.key (rsa w/o comment)
[ssh-agent] Started.
[myjob-TFD] Running shell script
+ scp -r dist test@myremotehost:/var/www/xxx
$ docker exec bfda17664965b14281eef8670b34f83e0ff60218b04cfa56ba3c0ab23d94d035 env SSH_AGENT_PID=1424 SSH_AUTH_SOCK=/tmp/ssh-k658r0O76Yqb/agent.1419 ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 1424 killed;
[ssh-agent] Stopped.
Host key verification failed.
lost connection
How do I add a remote host as authorized?
I had a similar issue. I did not use the label 'master', and I identified that the file transfer works across slaves when I do it like this:
Step 1 - Create SSH keys on the remote host server and add the public key to its authorized_keys file (a sketch follows below).
Step 2 - Create a credential in Jenkins using those SSH keys; use the private key from the remote host.
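On the remote host, Step 1 might look like this (a sketch using the default key path; the private key it produces is what you paste into the Jenkins credential in Step 2):
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys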
Use the SSH agent plugin:
stage ('Deploy') {
    steps {
        sshagent(credentials : ['use-the-id-from-credential-generated-by-jenkins']) {
            // first contact: skip host key verification so the build doesn't hang on the prompt
            sh 'ssh -o StrictHostKeyChecking=no user@hostname.com uptime'
            sh 'ssh -v user@hostname.com'
            sh 'scp ./source/filename user@hostname.com:/remotehost/target'
        }
    }
}
Use the SSH Agent Plugin.
When using this plugin you can use the global credentials.
To add a remote host to known hosts and hopefully cope with your error, try manually SSHing from the Jenkins host to the target host as the Jenkins user.
Get on the host where Jenkins is installed. Type
sudo su jenkins
Now use ssh or scp like
ssh username@server
You should be prompted like this:
The authenticity of host 'server (ip)' can't be established.
ECDSA key fingerprint is SHA256:some-weird-string.
Are you sure you want to continue connecting (yes/no)?
Type yes. The server will be permanently added as a known host. Don't even bother entering a password; just press Ctrl + C and try running a Jenkins job.
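If an interactive login as the jenkins user is awkward, a non-interactive alternative is ssh-keyscan (the hostname here is a placeholder; note that this blindly trusts whatever key the host presents at that moment):
sudo su jenkins
ssh-keyscan -H myremotehost >> ~/.ssh/known_hosts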
Like @haschibaschi recommends, I also use the ssh-agent plugin. I need to use my personal UID credentials on the remote machine, because it doesn't have any Jenkins UID account. The code looks like this (using, for example, my personal UID="myuid" and remote server hostname="non_jenkins_svr"):
sshagent(['e4fbd939-914a-41ed-92d9-8eededfb9243']) {
// 'myuid@' required for scp (this is from UID jenkins to UID myuid)
sh "scp $WORKSPACE/example.txt myuid@non_jenkins_svr:${dest_dir}"
}
The ID e4fbd939-914a-41ed-92d9-8eededfb9243 was generated by the Jenkins credentials manager after I created a global domain credentials entry.
After creating the credentials entry, the ID is found under the "ID" column on the credentials page. When creating the entry, I selected type 'SSH Username with private key' ('Kind' field), and copied the RSA private key I had created for this purpose under the myuid account on host non_jenkins_svr without a passphrase.
I am new to Ansible. I tried to configure it on an Amazon Linux instance to learn the basics of Ansible. After installing Ansible, I created an SSH key pair using the ssh-keygen command. Once it was generated, I tried to run "ssh-copy-id localhost", but it ended with the error below:
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Permission denied (publickey).
Could someone guide me to resolve this?
ssh-copy-id localhost
won't work if you don't have password authentication enabled in the ssh server on localhost.
If you need to set up pubkey authentication without allowing password authentication, just copy the public key locally (since it is localhost):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/authorized_keys
# make sure the SELinux labels are correct (run from your home directory):
type restorecon && restorecon -F .ssh .ssh/authorized_keys
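To confirm that pubkey authentication now works without any password fallback, you can force password auth off for a single test connection:
ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no localhost true && echo "pubkey auth OK"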
I'm trying to trigger an executable file, 'post-receive', after pushing some changes to a git repo on a remote machine. This file contains some commands that require elevated privileges, such as:
sudo -S rm -f $HOME/.build
sudo -S rm -f $HOME/Packages
I've added a remote to my local repo:
git remote add live ssh://dev@ip/home/dev/app/.git
So I can push changes to my remote repo, like this:
git push live master
The 'post-receive' file executes whenever I push.
However, a password is requested for the sudo commands within the 'post-receive' file.
remote: [sudo] password for dev: Sorry, try again.
remote: [sudo] password for dev:
remote: sudo: 1 incorrect password attempt
remote: [sudo] password for dev:
This was unexpected, since I had configured my access through SSH keys and specified my identity file.
Locally I have setup my SSH keys:
~/.ssh/id_rsa
~/.ssh/id_rsa.pub
Then, I've copied the local '~/.ssh/id_rsa.pub' file contents into the remote '~/.ssh/authorized_keys' file.
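As an aside, ssh-copy-id does that copy in one step (using the same dev@ip placeholder as in this question):
ssh-copy-id -i ~/.ssh/id_rsa.pub dev@ip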
I've also set up a 'config' file, locally, specifying the location of my identity (the Host line names the entry):
Host ip
HostName ip
IdentityFile ~/.ssh/id_rsa
At this point, I'm able to ssh into the remote machine, without any passwords, like this:
ssh dev@ip
This was already expected. However, when pushing changes to my remote repo:
git push live master
...it asks me for a password when running the remote 'post-receive' file.
Why am I asked for this password?
What step am I not seeing clearly?
Running:
OS X El Capitan locally
Ubuntu 16.04.1 LTS remotely
Following the Digital Ocean Deployment Tutorial
This has nothing to do with Git or SSH. By default, Linux distributions require any user running a sudo command, even one with the right permissions, to enter a password. This can be overridden, as follows.
Check this answer for an example.
You need to add a NOPASSWD directive in your sudoers file for the relevant user. Modified from that answer:
dev ALL = NOPASSWD: ALL
You could replace ALL with a specific command for safety.
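A minimal sketch of that safer variant, assuming the hook only needs passwordless rm (the path may be /usr/bin/rm on some distros; always edit sudoers via visudo, since a syntax error can lock you out of sudo entirely):
sudo visudo
# then add a line such as:
dev ALL = NOPASSWD: /bin/rm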
I have several remote machines that need to pull from a repo after I've completed testing and am ready to push updates to production (a Python Flask app and supporting classes). A couple of the machines need to pull from a different branch as well. I've been SSHing into each machine to run the git pull, but this is getting annoying and time-consuming.
I'm trying to run an ssh command that completes a git pull. This is what I've tried:
ssh dev@<remote IP> "cd /home/dev/<repo> && git pull"
And I'm getting a
Permission denied (publickey).
fatal: Could not read from remote repository.
I'm able to run other git commands that don't interact with the remote origin just fine, such as:
ssh dev@<remote IP> "cd /home/dev/<repo> && git remote -v"
When I actually SSH onto the remote machine, I have no problem navigating to the directory and running a git pull.
I also made sure that I added the ssh key to an ssh-agent so that password prompts on the key wouldn't be an issue.
I thought it could potentially be a key permissions issue, so I double-checked that the key is readable by the user I'm running the command as.
It's frustrating that I am able to SSH onto the remote machine and run the pull just fine, but cannot run the command in the format above.
Thanks a ton for any help!
Use the -A option:
ssh -A dev@<remote IP> "cd /home/dev/<repo> && git pull"
I ran across the option in a comment here when trying to find the answer to this problem: https://serverfault.com/questions/762983/ssh-and-git-pull-from-remote-server
From https://linux.die.net/man/1/ssh:
If the ForwardAgent variable is set to "yes" (or see the description of the -A and -a options above) and the user is using an authentication agent, the connection to the agent is automatically forwarded to the remote side.
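If you don't want to remember the flag, forwarding can also be switched on per host in your local ~/.ssh/config. A sketch; substitute the real address, and only forward your agent to hosts you trust, since root there can use it:
Host <remote IP>
ForwardAgent yes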
From what I understood of your issue, here is my suggestion:
[The information is somewhat incomplete, though.]
Git reads your id_rsa.pub in the root user's directory: /root/.ssh/id_rsa.pub
That's why your key in /home/your_username/.ssh/id_rsa.pub might not be read by git.
Hence, please check and create the key in /root/.ssh/:
$ sudo su
$ ssh-keygen
$ cd ~/.ssh
$ cat id_rsa.pub
Hope it helps.
I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to run ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what creates the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still utilizing the .pem given by Amazon.
How can I tell the shell to just always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering if this plays into that.
You could solve this in 3 ways:
1. By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are SSHing into the second instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
2. Using ssh-agent (ssh-agent will manage the keys for you):
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
3. Using an SSH config file:
From the machine from which you are trying to SSH, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can SSH like this:
ssh host2
After performing any of the above steps, you should be able to SSH into your second instance without specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make that permanent, see this SO question.
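One way to make it stick, assuming OpenSSH 7.2 or later: the AddKeysToAgent option in ~/.ssh/config loads the key into a running agent automatically the first time it is used, so you don't have to re-run ssh-add after each login. A sketch reusing the host2 entry above:
Host host2
HostName 54.987.6.54
User ubuntu
IdentityFile ~/.ssh/mykey.pem
AddKeysToAgent yes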