How to make git not ask for password at pull? - linux

I have the following setup:
A server (CentOS) with git installed and a repository for the project on the same server.
What I need is to be able to pull from the repository without being asked for a password (because it is annoying).
Note: I am logged in as root when I pull.
Can anyone help me with that?

There are a few options, depending on your requirements, in particular your security needs. Both HTTP and SSH offer password-less and password-protected access.
HTTP
==============
Password-Less
Useful for fetch-only requirements; push is disabled by default. Perfect if anonymous cloning is the intention. You definitely shouldn't enable push for this type of configuration. The man page for git-http-backend contains good information (online copy at http://www.kernel.org/pub/software/scm/git/docs/git-http-backend.html); it provides an example of how to configure Apache to provide this.
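As a rough sketch of the kind of Apache configuration the man page describes (the paths below are assumptions and vary by distribution; adjust GIT_PROJECT_ROOT and the path to git-http-backend):
SetEnv GIT_PROJECT_ROOT /var/www/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
With this in place, anonymous users can clone and pull via http://yourserver/git/repo.git, but cannot push.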
User/password in .netrc or url embedded
Where .netrc files are used in the form:
machine <hostname> login <username> password <password>
And embedded urls would be in the form:
http://user:pass@hostname/repo
Since git won't do the authentication for you, you will need to configure a web server such as Apache to perform it before passing the request on to the git tools. Also keep in mind that embedding credentials in the url is a security risk, even if you use https, since they are part of the url being requested.
If you want to be able to pull non-interactively but prevent anonymous users from accessing the git repo, this should be a reasonably lightweight solution: use Apache for basic auth and preferably a .netrc file to store the credentials. As a small gotcha, git will enable write access once authentication is in use, so either stick with anonymous http for read-only access, or perform some additional configuration if you want to prevent the non-interactive user from having write access.
See:
httpd.apache.org/docs/2.4/mod/mod_auth_basic.html for more on configuring basic auth
www.kernel.org/pub/software/scm/git/docs/git-http-backend.html for some examples on the apache config needed.
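A sketch of the authenticated, pull-only variant described above (the realm name and htpasswd path are assumptions): require basic auth for the whole /git location in Apache, then explicitly disable push in the repository so that even the authenticated non-interactive user stays read-only:
<Location /git>
    AuthType Basic
    AuthName "Private Git"
    AuthUserFile /etc/httpd/git.htpasswd
    Require valid-user
</Location>
# on the server, inside the bare repository:
git config http.receivepack false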
SSH
==============
Passphrase-Less
Opens up for security issues, since anyone who can get a hold of the ssh private key can now update the remote git repo as this user. If you want to use this non-interactively, I'd recommend installing something like gitolite to make it a little easier to ensure that those with the ssh private key can only pull from the repo, and it requires a different ssh key pair to update the repo.
See github.com/sitaramc/gitolite/ for more on gitolite.
stromberg.dnsalias.org/~strombrg/ssh-keys.html - for creating passphrase-less ssh keys.
May also want to cover managing multiple ssh keys: www.kelvinwong.ca/2011/03/30/multiple-ssh-private-keys-identityfile/
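A minimal sketch of a dedicated, passphrase-less key used only for pulling (the host name and user below are assumptions; on the server the public key goes in the git user's ~/.ssh/authorized_keys, or in gitolite's key directory):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_gitpull -C "read-only git pull"
Then a host entry in ~/.ssh/config keeps this key separate from your normal one:
Host gitserver
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/id_gitpull
    IdentitiesOnly yes
After which the remote can simply be gitserver:myproject.git.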
Passphrase-protected
You can use ssh-agent to unlock the key on a per-session basis; this is only really useful for interactive fetching from git. Since you mention root and only talk about performing 'git pull', it sounds like your use case is non-interactive. This is something that might be better combined with gitolite (github.com/sitaramc/gitolite/).
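For the interactive case, a typical session looks like this (the key path is an assumption):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519    # prompts for the passphrase once per session
git pull                     # subsequent pulls use the cached key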
Summary
==============
Using something like gitolite will abstract a lot of the configuration away for SSH-type setups, and is definitely recommended if you think you might have additional repositories or need to specify different levels of access. Its logging and auditing are also very useful.
If you just want to be able to pull via http, the git-http-backend man page should contain enough information to configure apache to do the needful.
You can always combine anonymous http(s) for clone/pull, with passphrase protected ssh access required for full access, in which case there is no need to set up gitolite, you'll just add the ssh public key to the ~/.ssh/authorized_keys file.

See the answer to this question. You should use SSH access instead of HTTPS/GIT and authenticate via your SSH public key. This should also work locally.

If you're using ssh access, you should have the ssh agent running, add your key to it and register your public ssh key on the repo end. Your ssh key will then be used automatically. This is the preferred way.
If you're using https access, you would either
use a .netrc file that contains the credentials (see the example below), or
provide user/pass in the target url in the form https://user:pass@domain.tld/repo
With any of these three ways, it shouldn't ask for a password.
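For example, a minimal ~/.netrc might contain (host name and credentials here are placeholders):
machine git.example.com login alice password s3cret
Make sure it is readable only by you:
chmod 600 ~/.netrc
Git's http transport consults ~/.netrc automatically, so git pull should then run without prompting.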

Related

Read only access to svn repository via ssh (svn+ssh)

We want to make our subversion repositories read-only. Doing this for a single repository in a subversion instance did not work where ssh is concerned: svn+ssh access appears to bypass svn's own access controls.
We followed the suggestions here:
Read-only access of Subversion repository
Write access should have been restricted, but that did not happen: the repository is still writable despite the changes made to make it read-only.
The easiest way to restrict access (assuming there are no users who require write access) is to remove the w (write) bit on the files in the SVN repo.
chmod -R gou-w /path/to/svn-repo
That will prevent writes at the filesystem / OS level.
If some users still require write access, you can create separate svn+ssh endpoints for each user class that map to different users on the host server, using the group-write vs other-write bits to determine which group is able to write:
groupadd writers-grp
chgrp -R writers-grp /path/to/svn-repo
chmod -R ug+w /path/to/svn-repo
chmod -R o-w /path/to/svn-repo
I would then register the SSH keys for writers against the writing user on the server, and prevent password access.
The "read-only" users could be allowed a well-known password.
This isn't as "clever" or "elegant" as configuring the SVN server configs, but it works pretty darned well as long as the users keep their SSH keys secret.
Restrict commit access with a start-commit hook.
Description
The start-commit hook is run before the commit transaction is even created. It is typically used to decide whether the user has commit privileges at all.
If the start-commit hook program returns a nonzero exit value, the commit is stopped before the commit transaction is even created, and anything printed to stderr is marshalled back to the client.
Input Parameter(s)
The command-line arguments passed to the hook program, in order, are:
Repository path
Authenticated username attempting the commit
Colon-separated list of capabilities that a client passes to the server, including depth, mergeinfo, and log-revprops (new in Subversion 1.5).
Common uses
Access control (e.g., temporarily lock out commits for some reason).
A means to allow access only from clients that have certain capabilities.
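For illustration, a minimal start-commit hook that only lets listed users commit might look like this (the usernames are placeholders); save it as hooks/start-commit inside the repository and make it executable:
#!/bin/sh
REPOS="$1"
USER="$2"
# allow commits only from these users; everyone else is effectively read-only
case "$USER" in
  alice|bob) exit 0 ;;
esac
echo "User '$USER' has read-only access to this repository" >&2
exit 1
Note that this only works if the filesystem permissions still allow writes for the svn process, and that with svn+ssh the authenticated username is, by default, the ssh login name.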

Store credentials for git commands using HTTP

I would like to store Git credentials for git pulls permanently on a Linux machine, and git credential.helper doesn't work (I think because I'm not using SSH); I get the error "fatal: could not read password for 'http://....': No such device or address". I'm not the administrator of the repository and only HTTP is allowed for authentication, and fortunately I don't care about the safety of the password. What can I do to put the git pull command in a bash file and avoid prompting the user for a password?
I hope there is a way around it.
Two things wrong with this question:
1. Most repositories such as GitHub require HTTPS. Even if you try to clone over HTTP, it just switches on the backend to HTTPS, and pushes require it as well.
2. Pulls don't require a password, unless it's a private repo. Like #1, since you've given no info about your repo it's hard to comment further on this.
Now, what I do is this:
git config --global credential.helper store
Then the first time you push it will ask for your credentials. Once you’ve
entered them they are stored in ~/.git-credentials. Note that they are stored
in plain text, you have been advised.
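For illustration, after the first prompted pull or push, ~/.git-credentials simply contains one URL per line with the credentials embedded (host and credentials here are placeholders):
https://alice:s3cret@mygithost.com
Subsequent git pull/push commands against that host then run without prompting.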
I'm assuming that your repository requires authentication for pulls, or else git wouldn't ask you for a password for the pull.
The recommended way to bypass the user password prompt is to create an SSH key on that machine, add the public key to the git server, then use the SSH url for the remote instead of the HTTP/S url. But since you specifically said:
I don't care about the safety of the password
you can actually just specify the password inline for the git pull like this:
git pull http://username:password@mygithost.com/my/repository

Different password for SSH and Session(KDE, Gnome, etc.)

I'm using a Debian-based OS at work and I've configured a service for the test routines of an ERP app.
This service (Tomcat + Java) is consumed over HTTP on the intranet without problems, but the test leader sometimes needs to change the database used by the application and uses SSH to access my machine, change the database in the config file and restart the service. Occasionally this person changes some other service or OS configuration, which causes problems for me (on my OS and other things).
What I want to know is whether I can change my password only for the SSH service (without changing my KDE/Gnome session password), because company policy requires everyone to have a default password on workstations.
Remember that I'm the one in charge of configuration, maintenance and other service jobs for the test team, and database-change requests can be made to me.
A simple example:
KDE login with user 'carlos' and password '123456'
SSH login with user 'carlos' and password '4nyJokeHere'
Is that possible?
Thanks in advance.
Possible? Maybe. You'd probably have to fiddle with pam.d to get SSH authenticating via a different mechanism to KDE etc.
Coming from a different angle (and I may be missing something): can you not create a second user for the SSH access, keeping your main user for KDE etc. cleanly separate?
I'd really strongly recommend trying to "split" a user into multiple purposes/security groups with differing passwords for each!
You can use authorized_keys to restrict the SSH commands available, and/or sudo...
Update: Some expansion on the subject as requested by the OP
You can limit the commands available via SSH by using the ~/.ssh/authorized_keys file - see O'Reilly for a good explanation.
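As a rough illustration (the script path is a hypothetical wrapper and the key material is elided), a forced-command entry in ~/.ssh/authorized_keys looks like:
command="/usr/local/bin/restart-erp-service",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... testleader@laptop
Whatever the key holder tries to run, sshd executes only the forced command, so the test leader could restart the service without getting a general shell.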
I solved this case by applying a single rule. For the SSH service I locked out my own user 'carlos' (who is in sudoers) and enabled access only for a user called 'padrao' ('padrao' translates to 'default' in English).
This user 'padrao' doesn't have sudoers permissions. If I need to access my machine over SSH I do:
ssh padrao@my.intranet.machine
password: ***
$ su carlos
password: ***
This is not the best way to solve it, but it solved my problem here.
Thanks.
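For reference, one common way to implement this kind of lockout with OpenSSH (an assumption about how it was done here) is an AllowUsers directive in /etc/ssh/sshd_config:
AllowUsers padrao
followed by a reload of the ssh daemon (e.g. service ssh reload, or systemctl reload sshd on newer systems). Only 'padrao' can then log in over SSH, while 'carlos' still works for local KDE logins, since those don't go through sshd.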

Sourcetree on Mac connecting to Gitolite asks for authentication

We've recently set up Gitolite server. All seems well. I can connect to it without a problem.
A new user has been set up; he's on a Mac and trying to use SourceTree. The only way I could get him to connect was for him to attempt to ssh to the server and for me to type in the password (exiting afterwards). Without that, the system kept asking for a password for that server.
Is this normal behaviour?
How do non-sysadmin users gain access to gitolite?
Gitolite is based on a forced command, which means a non-interactive session.
So:
no password should ever be entered (assuming here a private key that is not passphrase-protected), as detailed in "how gitolite uses ssh";
no "non-sysadmin" should ever gain access to the gitolite server itself.
So all he should need is a public key stored in ~/.ssh (making sure both his home directory and .ssh aren't group- or world-writable), registered in gitolite-admin/keydir and published in the gitolite server's .ssh/authorized_keys file.
From there, as mentioned in "Sourcetree and Gitolite":
If you are cloning a remote git repository, you need to tab out of the Source path/ URL field to activate the clone button.
The url will be validated at that point.
The url needs no special syntax working with gitolite, and even respects the host entries in your ssh conf file. So in my case a url of gitolite:workrepo is sufficient.
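A sketch of such a host entry in ~/.ssh/config (the host name and key path are assumptions):
Host gitolite
    HostName git.example.com
    User git
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
With that in place, SourceTree can use gitolite:workrepo as the clone URL and the right key is picked up automatically.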

Obscuring network proxy password in plain text files on Linux/UNIX-likes

Typically in a large network a computer needs to operate behind an authenticated proxy - any connections to the outside world require a username/password which is often the password a user uses to log into email, workstation etc.
This means having to put the network password in the apt.conf file as well as typically the http_proxy, ftp_proxy and https_proxy environment variables defined in ~/.profile
I realise that you could chmod 600 apt.conf (which it isn't by default on Ubuntu/Debian!), but on our system there are people who need root privileges.
I also realise that it is technically impossible to secure a password from someone who has root access; however, I was wondering if there was a way of obscuring the password to prevent accidental discovery. Windows operates with users as admins, yet somehow stores network passwords (probably deep in the registry, obscured in some way) so that in typical use you won't stumble across them in plain text.
I only ask since the other day, I entirely by accident discovered somebody elses password in this way when comparing configuration files across systems.
@monjardin - Public key authentication is not an alternative on this network I'm afraid. Plus I doubt it is supported amongst the majority of command-line tools.
@Neall - I don't mind the other users having web access; they can use my credentials to access the web, I just don't want them to happen across my password in plain text.
With the following approach you never have to save your proxy password in plain text. You just have to type in a password interactively as soon as you need http/https/ftp access:
Use openssl to encrypt your plain text proxy password into a file, with e.g. AES256 encryption:
openssl enc -aes-256-cbc -in pw.txt -out pw.bin
Use a (different) password for protecting the encoded file
Remove plain text pw.txt
Create an alias in e.g. ~/.alias to set your http_proxy/https_proxy/ftp_proxy environment variables (set appropriate values for $USER/proxy/$PORT)
alias myproxy='PW=`openssl aes-256-cbc -d -in pw.bin`; PROXY="http://$USER:$PW@proxy:$PORT"; export http_proxy=$PROXY; export https_proxy=$PROXY; export ftp_proxy=$PROXY'
you should source this file into your normal shell environment (on some systems this is done automatically)
type 'myproxy' and enter your openssl password you used for encrypting the file
done.
Note: the password is available (and readable) inside the users environment for the duration of the shell session. If you want to clean it from the environment after usage you can use another alias:
alias clearproxy='export http_proxy=; export https_proxy=; export ftp_proxy='
I used a modified solution:
Edit /etc/bash.bashrc and add the following line:
alias myproxy='read -p "Username: " USER; read -s -p "Password: " PW; PROXY="$USER:$PW@proxy.com:80"; export http_proxy=http://$PROXY; export Proxy=$http_proxy; export https_proxy=https://$PROXY; export ftp_proxy=ftp://$PROXY'
From the next logon, enter myproxy and input your user/password combination. Then work with sudo -E:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables.
e.g. sudo -E apt-get update
Remark: proxy settings only valid during shell session
There are lots of ways to obscure a password: you could store the credentials in rot13 format, or BASE64, or use the same password-scrambling algorithm that CVS uses. The real trick though is making your applications aware of the scrambling algorithm.
For the environment variables in ~/.profile you could store them encoded and then decode them before setting the variables, e.g.:
encodedcreds="sbbone:cnffjbeq"
creds=`echo "$encodedcreds" | tr n-za-mN-ZA-M a-zA-Z`
That will set creds to foobar:password, which you can then embed in http_proxy etc.
I assume you know this, but it bears repeating: this doesn't add any security. It just protects against inadvertently seeing another user's password.
Prefer applications that integrate with Gnome Keyring. Another possibility is to use an SSH tunnel to an external machine and run apps through that. Take a look at the -D option for creating a local SOCKS proxy interface, rather than single-serving -L forwards.
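A sketch of the tunnel approach (user and host are placeholders):
ssh -N -D 1080 user@external-host    # local SOCKS proxy on localhost:1080
Applications that speak SOCKS can then be pointed at localhost:1080, e.g.:
curl --socks5-hostname localhost:1080 http://example.com/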
Unless the specific tools you are using allow an obfuscated format, or you can create some sort of workflow to go from obfuscated to plain on demand, you are probably out of luck.
One thing I've seen in cases like this is creating per-server, per-user, or per-server/per-user dedicated credentials that only have access to the proxy from a specific IP. It doesn't solve your core obfuscation problem but it mitigates the effects of someone seeing the password because it's worth so little.
Regarding the latter option, we came up with a "reverse crypt" password encoding at work that we use for stuff like this. It's only obfuscation because all the data needed to decode the pw is stored in the encoded string, but it prevents people from accidentally seeing passwords in plain text. So you might, for instance, store one of the above passwords in this format, and then write a wrapper for apt that builds apt.conf dynamically, calls the real apt, and at exit deletes apt.conf. You still end up with the pw in plaintext for a little while, but it minimizes the window.
Is public key authentication a valid alternative for you?
As long as all three of these things are true, you're out of luck:
1. Server needs web access
2. Users need absolute control over the server (root)
3. You don't want users to have the server's web access
If you can't remove #2 or #3, your only choice is to remove #1. Set up an internal server that hosts all the software updates. Keep that one locked down from your other users and don't allow other servers to have web access.
Anything else you try to do is just fooling yourself.
We solved this problem by not asking for proxy passwords on rpm, apt or other similar updates (virus databases, Windows stuff, etc.).
That's a small whitelist of known repositories to add to the proxy.
I suppose you could create a local proxy, point these tools through that, and then have the local proxy interactively ask the user for the external proxy password which it would then apply. It could optionally remember this for a few minutes in obfuscated internal storage.
An obvious attack vector would be for a privileged user to modify this local proxy to do something else with the entered password (as they could with anything else such as an email client that requests it or the windowing system itself), but at least you'd be safe from inadvertent viewing.
