I'm trying to trigger an executable 'post-receive' hook file after pushing changes to a git repo on a remote machine. Within this file are some commands that require elevated privileges, such as:
sudo -S rm -f $HOME/.build
sudo -S rm -f $HOME/Packages
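For context, a minimal post-receive sketch containing such commands might look like this (the checkout line and paths are illustrative assumptions, not taken from the actual hook):
#!/bin/bash
# post-receive: runs on the remote repo after each push
sudo -S rm -f $HOME/.build
sudo -S rm -f $HOME/Packages
# e.g. deploy the pushed code into a working directory (hypothetical path)
GIT_WORK_TREE=$HOME/app git checkout -f master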
I've added a remote to my local repo:
git remote add live ssh://dev@ip/home/dev/app/.git
So I can push changes to my remote repo, like this:
git push live master
The 'post-receive' file executes whenever I push.
However, a password is requested for the sudo commands within the 'post-receive' file.
remote: [sudo] password for dev: Sorry, try again.
remote: [sudo] password for dev:
remote: sudo: 1 incorrect password attempt
remote: [sudo] password for dev:
An unexpected event, given that I had configured my access through SSH keys and specified my identity file.
Locally, I have set up my SSH keys:
~/.ssh/id_rsa
~/.ssh/id_rsa.pub
Then, I've copied the local '~/.ssh/id_rsa.pub' file contents into the remote '~/.ssh/authorized_keys' file.
I've also set up a 'config' file locally, specifying the location of my identity:
HostName ip
IdentityFile ~/.ssh/id_rsa
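For reference, a complete ~/.ssh/config entry would normally include a Host line as well; something like (the host alias is illustrative):
Host ip
    HostName ip
    User dev
    IdentityFile ~/.ssh/id_rsa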
At this point, I'm able to ssh into the remote machine, without any passwords, like this:
ssh dev@ip
This was expected. However, when pushing changes to my remote repo:
git push live master
...it asks me for a password when running the remote 'post-receive' file.
Why am I asked for this password?
What step am I not seeing clearly?
Running:
OS X El Capitan locally
Ubuntu 16.04.1 LTS remotely
Following the Digital Ocean Deployment Tutorial
This has nothing to do with Git or SSH. By default, Linux distributions require any user running a sudo command, even one who has the necessary permissions, to enter a password. This can be overridden (see below).
The step to override this :)
Check this answer for example.
You need to add a NOPASSWD directive in your sudoers file for the relevant user. Modified from that answer:
dev ALL = NOPASSWD: ALL
You could replace ALL with a specific command for safety.
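For example, edited via visudo (the restricted form below uses /bin/rm only because the hook above calls rm; treat it as a sketch):
# Run: sudo visudo, then add a line such as
dev ALL = NOPASSWD: /bin/rm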
Everything was working fine but suddenly I am getting the error:
fatal: unable to access
'https://username@bitbucket.org/name/repo_name.git/':
gnutls_handshake() failed: Handshake failed
I am getting this on my computer as well as on an EC2 instance. When I tried another computer, it worked fine there.
I have tried many solutions from Stack Overflow and other forums, but nothing worked!
The computer runs Linux Mint 17, and the EC2 instance runs Ubuntu 14.04.6 LTS.
What could the issue be, and what should I do to fix it?
Ran into the same issue on a server with Ubuntu 14.04, and found that on Aug 24, 2020, bitbucket.org stopped allowing old cipher suites; see https://bitbucket.org/blog/update-to-supported-cipher-suites-in-bitbucket-cloud
This affects https:// connections to bitbucket, but does not affect ssh connections, so the quickest solution for me was to add an ssh key to bitbucket, and then change the remote from https to ssh.
The steps to change the remote, which I found here, are essentially:
# Find the current remote
git remote -v
origin https://user@bitbucket.org/reponame.git (fetch)
origin https://user@bitbucket.org/reponame.git (push)
# Change the remote to ssh
git remote set-url origin git@bitbucket.org:reponame.git
# Check the remote again to make sure it changed
git remote -v
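If the change took effect, that second git remote -v should now print the SSH form, roughly:
origin  git@bitbucket.org:reponame.git (fetch)
origin  git@bitbucket.org:reponame.git (push)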
There is more discussion about the issue on the Atlassian forums at https://community.atlassian.com/t5/Bitbucket-questions/fatal-unable-to-access-https-bitbucket-org-gnutls-handshake/qaq-p/1468075
The quickest solution is to use SSH instead of HTTPS. I tried other ways to fix the issue, but they did not work.
The following are the steps to replace HTTPS with SSH:
Generate an SSH key using ssh-keygen on the server.
Copy the public key from the id_rsa.pub file generated in step 1 and add it at one of the following links, depending on the repository host -
Bitbucket - https://bitbucket.org/account/settings/ssh-keys/
GitHub - https://github.com/settings/ssh/new
GitLab - https://gitlab.com/profile/keys
Now run the following command to test authentication from the server's command-line terminal:
Bitbucket
ssh -T git@bitbucket.org
GitHub
ssh -T git@github.com
GitLab
ssh -T git@gitlab.com
Go to the repo directory and open the .git/config file using emacs, vi, or nano
Replace the remote "origin" URL (which starts with https) with the appropriate one of the following (or use git remote set-url, as sketched after this list) -
For Bitbucket - git@bitbucket.org:<username>/<repo>.git
For GitHub - git@github.com:<username>/<repo>.git
For GitLab - git@gitlab.com:<username>/<repo>.git
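Alternatively, instead of editing .git/config by hand, the same change can be made with git remote set-url; a sketch for the Bitbucket case (substitute your own username and repo):
# From inside the repo directory
git remote set-url origin git@bitbucket.org:<username>/<repo>.git
# Verify the change
git remote -v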
An alternative fix, if you want to keep using HTTPS, is to build a newer OpenSSL, curl, and git from source so that the TLS handshake uses modern cipher suites:
sudo bash
mkdir upgrade
cd upgrade
# Build and install a recent OpenSSL
wget https://www.openssl.org/source/openssl-1.1.1g.tar.gz
tar xpvfz openssl-1.1.1g.tar.gz
cd openssl-1.1.1g
./Configure
make ; make install
cd ..
# Build curl against the new OpenSSL
wget https://curl.haxx.se/download/curl-7.72.0.tar.gz
tar xpvfz curl-7.72.0.tar.gz
cd curl-7.72.0
./configure --with-ssl=/usr/local/ssl
make ; make install
cd ..
# Build git against the new curl
git clone https://github.com/git/git
cd git
vi Makefile   # change the prefix= line to /usr instead of $(HOME)
make ; make install
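After the rebuild, it may be worth confirming that the new binaries are the ones on your PATH and that curl now reports OpenSSL as its TLS backend (a quick sanity check, not part of the original steps):
git --version
curl --version   # the first line should mention OpenSSL rather than GnuTLS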
I have several remote machines that need to pull from a repo after I've completed testing and am ready to make updates to production (a Python Flask app and supporting classes). A couple of the machines need to pull from a different branch as well. I've been SSHing into each machine to run the git pull, but this is getting annoying and time-consuming.
I'm trying to run an ssh command that completes a git pull. This is what I've tried:
ssh dev@<remote IP> "cd /home/dev/<repo> && git pull"
And I'm getting a
Permission denied (publickey).
fatal: Could not read from remote repository.
I'm able to run other git commands that don't interact with the remote origin just fine, such as:
ssh dev@<remote IP> "cd /home/dev/<repo> && git remote -v"
When I actually SSH onto the remote machine, I have no problem navigating to the directory and running a git pull.
I also made sure that I added the SSH key to an ssh-agent so that password prompts for the key wouldn't be an issue.
I thought it could potentially be a key-permissions issue, so I double-checked that the key is readable by the user I'm running the command as.
It's frustrating that I am able to SSH onto the remote machine and run the pull just fine, but cannot run the command in the format above.
Thanks a ton for any help!
Use the -A option.
ssh -A dev@<remote IP> "cd /home/dev/<repo> && git pull"
I ran across the option in a comment here when trying to find the answer to this problem: https://serverfault.com/questions/762983/ssh-and-git-pull-from-remote-server
From https://linux.die.net/man/1/ssh:
If the ForwardAgent variable is set to ''yes'' (or see the description of the -A and -a options above) and the user is using an authentication agent, the connection to the agent is automatically forwarded to the remote side.
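A minimal sketch of the whole sequence on the local machine, assuming the key lives at the default ~/.ssh/id_rsa (adjust key path, user, and host to your setup):
# Start an agent, load the key, then forward the agent to the remote host
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh -A dev@<remote IP> "cd /home/dev/<repo> && git pull"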
From what I understood of your issue, here is my suggestion:
[ Information is somewhat incomplete though ]
Git reads your id_rsa.pub from the root user's directory: /root/.ssh/id_rsa.pub
That's why your key in /home/your_username/.ssh/id_rsa.pub might not be read by Git.
Hence, please check and create the key in /root/.ssh/
$ sudo su
$ ssh-keygen
$ cd ~/.ssh
$ cat id_rsa.pub
Hope it helps.
Error:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h file:///home/myuser/path/to/project HEAD" returned status code 128:
stdout:
stderr: fatal: 'home/myuser/path/to/project' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
I have tried the following:
chmod 777 on the repo folder (the folder containing the .git directory)
chowned the repo folder to jenkins:jenkins
tried to clone into another folder from this local repo folder: this works!
When I run the above command (/usr/bin/git ls-remote -h file:///home/myuser/path/to/project HEAD) on the command line, I get the branches.
My questions are:
Why is the git ls-remote -h ... command called when it should be git clone ...?
How do I configure the Jenkins git plugin to fetch code from a local repo?
My environment:
RHEL 5.9
Jenkins 1.519 installed as a service (no web container)
Git plugin
When you install Jenkins as a service, by default Jenkins does not create a user directory such as /home/jenkins; Jenkins' default home directory is set to /var/lib/jenkins. As you would expect, Jenkins then has trouble accessing local resources in other users' directories.
I moved my cloned repo under Jenkins' default home directory, i.e. under /var/lib/jenkins, so my Repository URL in the Jenkins project configuration looks like: file:///${JENKINS_HOME}/repo/<myprojectname>
UPDATE:
The above works fine, but I found a better way to do it, from this blog.
The steps are outlined here:
Look at the /etc/init.d/jenkins script. There are a few $JENKINS variables defined; these should lead you to the sysconfig for Jenkins, i.e. /etc/sysconfig/jenkins.
Stop your jenkins instance:
sudo /sbin/service jenkins stop
Take a backup
cp /etc/sysconfig/jenkins /etc/sysconfig/jenkins.bak
In this file, change the following property:
JENKINS_USER="<your desired user>"
Change ownership of all related Jenkins directories:
chown -R <your desired user>:<your user group> /var/lib/jenkins
chown -R <your desired user>:<your user group> /var/cache/jenkins
chown -R <your desired user>:<your user group> /var/log/jenkins
Restart Jenkins and the error should go away:
sudo /sbin/service jenkins start
It's been a while since this question was asked, but I had this problem today and there are very few resources about it, most probably because people tend to connect to git repositories remotely.
I checked with strace what exactly Jenkins was doing and, yes, it was a problem with permissions.
But I solved it in a simpler way than answer #2 - by adding jenkins to the git server group - in my case, git1:
root# gpasswd -a jenkins git1
root# service jenkins restart
I'm running Jenkins on Windows and had the same problem. I was able to solve this by having the Jenkins service log in as my user on my laptop.
(Windows 7)
Open Task Manager (Ctrl + Shift + Escape)
(Windows 10 only) Click on More Details in the lower left corner of the pop up window
Go to the Services tab
Click the Services... button
Find "Jenkins" in the list of services
Right-click "Jenkins" and click on Properties
Click the Log On tab in the Jenkins Properties window
Choose This account: under Log on as:
Enter your username and password
Click OK
Restart the Jenkins service
Then Bob's your uncle.
Jenkins uses the git clone command only the first time, when a workspace is configured for a project. Subsequent runs use the git ls-remote command.
I had the same issue when I configured Jenkins. It was resolved by playing around with the SSH keys. This looks like a configuration issue as well. Check whether SSH keys are set up for the Jenkins account.
Also, see the step-by-step procedure for configuring SSH at the link provided. It might not give you the exact solution, but it can point you towards one.
http://oodlestechnologies.com/blogs/How-to-setup-Jenkins-With-Grails-on-Ubuntu
I find the other solutions a bit "hacky" for my taste. What I did was move the Jenkins home folder from /Users/Shared/ to /Users/[myaccount]/. This way, my Jenkins has access to my repos and to my Android SDK (because that's what I use Jenkins for). Then change the JENKINS_HOME environment variable; I did this by setting JENKINS_HOME in my .bash_profile (but there are other ways to do this).
Note: I use OSX
Instead of file:/// you can also use ssh:// as in this answer:
ssh://YOUR_USER#localhost/PATH_TO_YOUR_PROJECT
Note that you need to do the standard SSH setup (see the command sketch after this list):
Generate a keypair using ssh-keygen if you don't already have one in ~/.ssh
Paste private key (default ~/.ssh/id_rsa) into Jenkins (project settings, git repo, credentials)
Paste public key into ~/.ssh/authorized_keys
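A minimal command sketch of those three steps on the Jenkins machine, assuming the default key path and that Jenkins and the repo live on the same host:
# Generate a keypair if ~/.ssh/id_rsa does not exist yet
ssh-keygen -t rsa
# Print the private key so it can be pasted into the Jenkins credential
cat ~/.ssh/id_rsa
# Authorize the public key for local SSH logins
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys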
I have done the following steps to set up SSH deployment keys with our git repo so that it can git pull without a username and password:
Note: I am on AWS EC2 / Ubuntu 14.04.3
Run ssh-keygen -t rsa -b 4096 -C "ownersEmail@gmail.com"; the keys are then saved as id_rsa and id_rsa.pub in ~/.ssh/
The deployment public key (id_rsa.pub) is added on the GitHub online UI in the deployment keys section
The repository is already cloned in the /var/www/ directory; pulling works fine via HTTPS.
When I try sudo git pull git@github.com:ownersUsername/OurRepo.git, I get the following error:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Another note: this repository is private, under another user's account.
Also, when I try ssh git@github.com I get:
Hi userName/Repo! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.
And the deployment key shows up as being used. I have been on this issue for more than 4 hours now, and any help would be very much appreciated, thanks.
The problem is that you're using sudo, which runs the command as root, so it will try to use root's keys, not your user's keys.
What you want to do is:
give your user/group write access to /var/www
run the pull/clone as your user, not the root user (see the sketch below).
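A rough sketch of those two steps, assuming (hypothetically) that your login user is named deploy and the clone lives at /var/www/OurRepo:
# Give your user ownership of the checkout (one of several possible permission schemes)
sudo chown -R deploy:deploy /var/www/OurRepo
# Then pull as that user, without sudo
cd /var/www/OurRepo
git pull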
When you do a git pull, you don't need the full URL:
git pull <remote> <branch>
You only need the full URL for the clone command:
git clone git@github.com:ownersUsername/OurRepo.git
To test whether your SSH key is good, use this:
git fetch --all --prune
I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to setup an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to create an SSH tunnel to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to the directory where you'd like to work, and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
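For reference, the SSH link copied in step 4 typically looks something like this (host, group, and project name here are placeholders):
git@my-gitlab-server.com:mygroup/myproject.git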
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine at DigitalOcean, but they shouldn't differ when using Google Compute Engine.
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation, the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".
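A short sketch of checking for and clearing the lock, assuming a CentOS/Red Hat layout with the log at /var/log/secure:
# Look for the "account is locked" message
sudo grep "account is locked" /var/log/secure
# Remove the locked/empty password so the git account can be used for SSH key logins
sudo passwd -d git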