Docker no basic auth credentials after successful login - Linux

I've moved to Linux (Pop!_OS 21.04) on my desktop and I'm having some issues with Docker.
When I try to run docker-compose to pull an image from a private registry, I get:
ERROR: Head "https://my.registry/my-image/manifests/latest": no basic auth credentials
Of course, before running this command I had run:
docker login https://my.registry.com -u user -p pass
which returns
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
And my config.json in my .docker folder shows my credentials:
{
  "auths": {
    "my.registry.com": {
      "auth": "XXXXX"
    }
  }
}
To install Docker I followed the instructions on their page: https://docs.docker.com/engine/install/ubuntu/
And my version is:
Docker version 20.10.8, build 3967b7d
The same command run on a macOS system with Docker version 20.10.8 works without any issues, so my password and all the URLs are definitely correct.
Thanks for any help!

The login command is
docker login my.registry.com
Without the https:// in front of the host. If you still have auth issues after doing that:
if the registry uses an unknown TLS certificate, load that certificate on the host and restart the Docker engine
if the registry is http instead of https, configure it as an insecure registry in /etc/docker/daemon.json (see the snippet after this list)
if the login is successful but the pull fails, verify your user has access to the specific repo on the registry
double-check that your password was entered correctly
check for a network proxy intercepting the request (the http_proxy variable)
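For the insecure-registry case, a minimal /etc/docker/daemon.json sketch (the registry address is a placeholder for your own):
{
  "insecure-registries": ["my.insecure.registry:5000"]
}
After editing the file, restart the engine (e.g. sudo systemctl restart docker) so the change takes effect.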

I reinstalled the whole thing again as the Docker page states; it didn't work, so I uninstalled it and installed the snap version, which didn't work either, and finally I removed that and went with a simple apt-get install docker.io, and it works like a charm! I don't know why it didn't work previously, but I won't lose more sleep over it.

On Ubuntu 20.x, I observed that the credentials are stored in /home/<username>/snap/docker/1125/.docker/config.json.
If older credentials are stored in $HOME/.docker/config.json, they are not used by docker pull. Verify that Docker is indeed picking up the credentials from the right config.json location.
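One way to verify is the DOCKER_CONFIG environment variable, which tells the docker client which config directory to read. A sketch (the snap path follows the observation above; "current" is the symlink snap keeps to the active revision):
ls -l $HOME/.docker/config.json                        # default location
ls -l $HOME/snap/docker/current/.docker/config.json    # snap install location
DOCKER_CONFIG=$HOME/.docker docker pull my.registry.com/my-image:latest   # force a specific config dir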

Related

Jenkins Error 128 / Git Error 403: Jenkins can't connect to my Bitbucket repository

OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT Network with port forwarding to access the VMs through the host IP. I can also ping one VM from another VM.
I'm trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
Do you have an explanation?
EDIT:
I tried some other things, like running the command with and without sudo to see if the permissions problem came from that, and it seems that it's not the case.
But I see that there is no result when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, Git in Jenkins interprets it as no answer and returns the damn error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way, I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a problem with permissions. I would suggest checking common mistakes first:
a) trying https instead of http - my SCM only uses https,
b) checking whether admin is the correct user - the SCM by default uses scmadmin.
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the damn proxy.
I thought that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to have a user named "jenkins" with its own home directory.
I don't know if it is linked or not, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work in a LAN).
My problem was linked to my proxy settings.
I disabled all my proxy settings in Linux, and then the command that didn't work in Jenkins worked from the terminal.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
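To compare the proxy settings seen by your shell and by the jenkins user, a sketch (assuming a standard package install with the Jenkins home at /var/lib/jenkins):
sudo su - jenkins
env | grep -i proxy              # proxy variables in the jenkins user's environment
cat /var/lib/jenkins/proxy.xml   # Jenkins' stored proxy configuration, if the file exists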
There can be a Git version mismatch.
I would suggest updating Git once; maybe it will resolve your issues.
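To check and upgrade, assuming a Debian/Ubuntu host (package names may differ on other distributions):
git --version                                                    # show the installed version
sudo apt-get update && sudo apt-get install --only-upgrade git   # upgrade to the latest packaged version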

Cannot Create Admin Login on CouchDB

I have a fresh install of CouchDB on a new server. I set it up on a dev server, and upon starting the service and accessing the web interface I was able to click the Fix it button and create an admin login. On the new server, using the exact same steps and software, when I click Fix it and enter the new username and password it just spins, keeps thinking, and does nothing else. If I refresh the screen it just starts over with no visible change. Does anyone know where to look to see what the issue is, or why this is happening? I am fairly new to CouchDB.
Note: I am using the Fix link in the lower-right side menu to create the login; it worked before on another server.
I followed this article, see section on creating admin using Fix It
https://www.digitalocean.com/community/tutorials/how-to-install-couchdb-and-futon-on-ubuntu-14-04
You can try to add the admin with curl. If curl isn't installed on your machine, install it with this simple command:
apt-get update && apt-get install curl
then execute the following curl command, where $1 is the username and $2 is the user's password:
curl -X PUT $HOST/_config/admins/$1 -d '"'$2'"'
Source for further information about that topic: http://docs.couchdb.org/en/1.6.1/intro/security.html
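For example, assuming CouchDB is listening on the default port 5984 on localhost (the admin name and password here are placeholders):
HOST=http://127.0.0.1:5984
curl -X PUT $HOST/_config/admins/admin -d '"s3cret"'
Once this succeeds, the server leaves the "admin party" mode and the new credentials are required for admin actions.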

How to allow Jenkins from a local machine to run remote Python test scripts

I have Jenkins running on my local CentOS machine.
I have configured my local Jenkins and was able to run a successful local build.
Now I want to run remote tests, which are Python scripts, on a remote CentOS machine that does not have Jenkins installed. I also don't want to install any Jenkins process on the remote Linux system, as it is "like a" production server and I am advised not to install any apps on it.
How do I use my local Jenkins to run a build that executes those remote tests and reports the output on my local Jenkins console?
Do I need to use the Jenkins master-slave architecture? If yes, how do I configure it given the above requirement?
You might want to have a look at this:
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
For your requirement, precisely this part:
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-Launchslaveagentheadlessly
However, I believe you still have to have Java on your Unix slave node to run slave.jar on it.
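If even a Java-based agent is not an option on the production-like machine, a common workaround (my own sketch, not from the wiki above) is to drive the tests over plain SSH from a build step on the local Jenkins, so all output lands in your local console; host, user, and script path are placeholders:
# "Execute shell" build step on the local Jenkins
ssh testuser@remote-centos-host 'python /opt/tests/run_tests.py'
The step fails when the remote command exits non-zero, which gives pass/fail reporting for free; key-based SSH auth from the local Jenkins user to the remote host is assumed.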
This answer assumes the scripts are in GitHub. Maybe it helps to think through your case.
So... first you need to install Git on your server machine:
$ sudo apt-get update
$ sudo apt-get install git
Now get the path of Git with $ which git;
it will print something like "/usr/local/bin/git".
Copy that path into Manage Jenkins -> Global Tool Configuration -> in the Git section, paste it into "Path to Git executable".
This allows Jenkins to access Git sources.
Now you need to provide SSH keys.
Run sudo su - jenkins on your remote machine. You have to generate an SSH key for the "jenkins" user (see the sketch at the end of this answer).
Now add the public key to your GitHub account (see https://www.youtube.com/watch?v=Vi-WqFKYpnw),
and add the private key to Jenkins:
Go to Credentials
Click Global under Stores scoped to Jenkins
Add Credentials
Kind: SSH Username with private key
Username: your server username
Private Key: paste the private key of the "jenkins" user
Specify the ID as "jenkins-private-key" or anything else that identifies it
Now
go to the job configuration and select the credentials that you have created, and
copy the SSH URL of the repository where your scripts are stored. Now you can run the scripts that are stored in Git.
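A sketch of the key-generation step mentioned above, run as the jenkins user (default key type and location assumed):
sudo su - jenkins
ssh-keygen -t rsa -b 4096    # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
cat ~/.ssh/id_rsa.pub        # this is the public key to add to your GitHub account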

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key access to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privileges on the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to your directory where you'd like to work, and run the following;
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with Digital Ocean. But they shouldn't differ from using Google CE.
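To check that the key is accepted before fetching, you can also test the SSH connection directly (the hostname is the same placeholder used above):
ssh -T git@my-gitlab-server.com    # GitLab should reply with a "Welcome to GitLab, <username>!" greeting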
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
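Note that semanage fcontext only records the labeling rule; to apply it to the already-existing file you typically also need restorecon:
sudo restorecon -v /var/opt/gitlab/.ssh/authorized_keys   # relabel the file with the new context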
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".

Can't execute git command in nodejs

I can successfully execute git pull on the Linux command line on my VPS, but when I execute a bash file containing "git pull" with execFile in Node.js, it gives me an error: Command failed: Host key verification failed. How can I solve this problem?
Update:
The whole error message I get is:
{ [Error: Command failed: Host key verification failed. fatal: Could not read
from remote repository. Please make sure you have the correct access rights
and the repository exists. ] killed: false, code: 1, signal: null }
It seems that it's not the same problem as in the question dylants provided.
The bash script is like this; I use it to auto-deploy my Node.js app:
git pull && pm2 reload www
I am using the ssh protocol instead of the https protocol on my VPS, in order to prevent the password prompt each time I git pull from my Bitbucket repository, so SSH keys were generated in my user directory ~/.ssh/. I think the reason Node.js failed to execute the bash file is this: the user who runs the bash file in the Node.js app is different from the user who runs it on the command line, so the user running Node.js can't use the SSH keys located in ~/.ssh for verification.
Is that right? How to fix it?
I think you have correctly identified the problem: the Node.js application does not have access to your SSH credentials. You have a few options available:
If you can make the repository available for anonymous read-only access via the http:// or git:// protocols, you can have the Node.js app pull changes without requiring any credentials.
You can generate an SSH key for the Node.js user and grant that user read-only access to the repository. You would just need to generate an SSH key pair in the appropriate location for that user.
You could drop your own credentials where your Node.js app can make use of them, but this has a number of security problems: if your web app is compromised, the attacker can write changes to your repository that will appear to come from you. So don't use this option.
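One more detail: the error is "Host key verification failed", which points at the known_hosts file rather than the key pair itself; the user running the Node.js process has likely never accepted Bitbucket's host key. A sketch, assuming the app runs as a (hypothetical) user named nodeapp:
# record bitbucket.org's host key for the app user so a non-interactive git pull can proceed
sudo -u nodeapp sh -c 'mkdir -p ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts'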
