GitLab - Git pull command with username and password in CI

I have configured my CI pipeline to SSH into a remote server and update the git repository there with the latest changes. However, it fails: when the CI runs the git pull command, the response is "could not read Username for 'https://gitlab.com'".
Is there a way to run git pull with the username and password included in a single command?
I know the best way is to add an SSH key to the GitLab variables to avoid being asked for a username and password, but that didn't work for me either, so my only option is to provide the username and password.
My gitlab-ci.yml file looks like this:
deploy_staging:
  stage: deploy
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$DEPLOY_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh ubuntu@example.com "
      cd my_Project &&
      git pull
      "

Your order is incorrect. You add the SSH deploy key to the environment executing your CI job, when you should add it inside the example.com environment that executes the pull command.
It's like preparing a room with a chair and a table, then entering the next room and expecting to sit on the chair from the other room.
You should follow the instructions for Deploy keys and add the public key from the ubuntu@example.com user to your project's deploy keys.
That should be all you need to make the last bit work.
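For reference, a minimal sketch of that setup, run on the server itself (the key type, paths, and the <group>/<project> placeholders are assumptions):
# On the server (ubuntu@example.com), generate a key pair for the user doing the pull.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Copy the public key and add it under the project's Settings > Repository > Deploy keys.
cat ~/.ssh/id_ed25519.pub
# Make sure the repository's remote uses SSH rather than HTTPS, so the deploy key is used.
cd ~/my_Project
git remote set-url origin git@gitlab.com:<group>/<project>.git
git pull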
Note:
Your comment about echo -e "$DEPLOY_KEY" > ~/.ssh/id_rsa; ssh ubuntu@example.com "cd myProject && git pull" doesn't change the order: you still add the deploy key to your current environment and then enter another one through SSH.

I tried very hard, following every single step mentioned in the official GitLab guide here as well as @Stefan's suggestion, and I still wasn't able to run the git pull command on the remote server.
But I could connect to the remote server via SSH.
So I ended up storing the GitLab credentials on the server to avoid interrupting the CI execution:
$ git config credential.helper store
$ git push http://example.com/repo.git
Username: <username>
Password: <password>

Simply use the command below to pull the repo without credentials on GitLab CI:
git pull ${CI_REPOSITORY_URL}
CI_REPOSITORY_URL is a predefined GitLab CI/CD environment variable holding the repository URL.
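This works because the variable already embeds job-scoped credentials. A rough sketch of what it expands to during a job (the host and token shown are illustrative):
# Inside a job, CI_REPOSITORY_URL already carries the job token as credentials:
echo "$CI_REPOSITORY_URL"
# https://gitlab-ci-token:<job-token>@gitlab.com/<group>/<project>.git
# so the pull authenticates over HTTPS with no extra setup:
git pull ${CI_REPOSITORY_URL}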

Related

How to run an Ansible playbook from GitHub Actions - without using an external action

I have written a workflow file that prepares the runner to connect to the desired server via SSH, so that I can run an Ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server:
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works very well from my local machine without using the sudo password. I configured the server so that the user has sufficient rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Finally found it!
First, the basic setup of the action itself:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job, and check out the repository in its first step:
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next, set up SSH correctly:
- name: Setup ssh
  shell: bash
  run: |
    service ssh status
    eval `ssh-agent -s`
First of all, you want to be sure that the SSH service is running. It already was in my case.
However, when I experimented with Docker, I had to start the service manually with service ssh start. Next, be sure that the .ssh folder exists for your user, and copy your private key into it. I added a GitHub secret to my repository where I saved my private key. In my case the user is runner.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key is protected; if it isn't, ssh will refuse to work with it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally, when you start an SSH connection, you are asked whether you want to save the host permanently as a known host. As we are running non-interactively, we can't type yes, and if the prompt goes unanswered, the process will fail.
You have to prevent the process from being interrupted by the prompt. To do so, you add the host to the known_hosts file yourself, using ssh-keyscan. Note that ssh-keyscan can produce output for different key types.
Simply running ssh-keyscan was not enough in my case; I had to add the key type options to the command. The generated output has to be written to the known_hosts file in your user's .ssh folder, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
Now you are almost there. Just call ansible-playbook to run the Ansible script. I created a new step that changes into the folder in my repository where my Ansible files are saved:
- name: Run ansible script
  shell: bash
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Enjoy!
An alternative, without explaining why you get those errors, is to try the dawidd6/action-ansible-playbook action to run your playbook.
That way, you can check whether the "sudo password is missing" error still occurs in that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose

fatal: could not read Username for 'https://gitlab.com': No such device or address

I am trying to push code, with an automated deploy-and-pull process, to the production server, but I get an error in the pipeline build process:
fatal: could not read Username for 'https://gitlab.com': No such device or address
Here is the .gitlab-ci.yml:
script:
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  - echo "$PRIVATE_KEY_STAGING" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh -p22 ec2-user@$SERVER_STAGING "uname -a"
How do I log in to my server and pull my updated code using GitLab CI/CD?
Thanks in advance
Since you got it resolved, how about describing here what you have done?
It may be more useful to the community than a simple
"solved, I use a personal access token to pull my private repo"
😉
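For anyone who lands here, a sketch of what that resolution usually looks like: GitLab accepts a personal access token as the HTTPS password (the placeholders and the read_repository scope are assumptions about the exact setup):
# On the server, pull over HTTPS with a personal access token
# (created under Preferences > Access Tokens, e.g. with read_repository scope):
git pull https://<username>:<personal_access_token>@gitlab.com/<group>/<project>.git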

git push using crontab every hour prompting password

I have created a shell script for pushing a git repository automatically every hour using crontab, as follows:
backup.sh
cd /home/user/share/my_project && git commit -a -m "hourli crontab backup 'date'"
cd /home/user/share/my_project && git push origin branch1
send mypassword\r
wait
The issue with this code is that git is using SSH, and every time we run the script with bash ./backup.sh it asks for the password in the terminal; it does not accept the password specified in the shell script.
I agree with the comments that this isn't a best practice.
Ignoring that and hacking on anyway, you should look into git-credential-store. Example from the docs:
$ git config credential.helper store
$ git push http://example.com/repo.git
Username: <type your username>
Password: <type your password>
[several days later]
$ git push http://example.com/repo.git
[your credentials are used automatically]
The credentials will be stored locally, unencrypted, in ~/.git-credentials. If that is an acceptable trade-off, this will work for you. Just make sure you understand the user context the cron job runs under.
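To illustrate that last point, a minimal crontab sketch (the repository path comes from the question; the backup.sh location and the hourly schedule are assumptions). The entry must live in the crontab of the same user whose ~/.git-credentials holds the stored credentials:
# Edit the crontab of the user who ran 'git config credential.helper store':
# crontab -e
# Run the backup script at the top of every hour:
0 * * * * /bin/bash /home/user/share/my_project/backup.sh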

Is there a way to auto update my github repo?

I have a website running on a cloud server. Can I link the related files to my GitHub repository, so that whenever I make changes to my website, they are automatically reflected in my GitHub repository?
Assuming your cloud server runs an OS that supports bash scripts, add this file to your repository.
Let's say your files are located in /home/username/server, and we name the file below /home/username/server/AUTOUPDATE.
#!/usr/bin/env bash
cd "$(dirname "${BASH_SOURCE[0]}")"
if [[ -n $(git status -s) ]]; then
  echo "Changes found. Pushing changes..."
  git add -A && git commit -m 'update' && git push
else
  echo "No changes found. Skip pushing."
fi
Then add a scheduled task, such as a crontab entry, to run this script as frequently as you want GitHub to be updated. It first checks whether there are any changes, and only commits and pushes if there are.
This entry runs the script at the top of every hour:
0 * * * * /home/username/server/AUTOUPDATE
Don't forget to give this file execute permission with chmod +x /home/username/server/AUTOUPDATE.
This will always push the changes with the commit message "update".
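If you want the history to show when each backup happened, a small variation on the commit line (the date format is an assumption):
# Timestamped commit message instead of a fixed 'update':
git add -A && git commit -m "update $(date '+%Y-%m-%d %H:%M')" && git push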

Is it possible to do a git push within a Gitlab-CI without SSH?

We want to know whether it's technically possible, as in GitHub, to do a git push using the HTTPS protocol rather than SSH, and without putting a username and password directly in the request.
I have seen people who seem to think it is possible, but we weren't able to prove it.
Can anyone out there confirm that you can push using a user access token or the gitlab-ci-token within the CI?
I am sharing my before_script.sh, which can be used within any .gitlab-ci.yml:
before_script:
  - ./before_script.sh
All you need is to set a protected environment variable called GL_TOKEN or GITLAB_TOKEN within your project:
if [[ -v "GL_TOKEN" || -v "GITLAB_TOKEN" ]]; then
if [[ "${CI_PROJECT_URL}" =~ (([^/]*/){3}) ]]; then
mkdir -p $HOME/.config/git
echo "${BASH_REMATCH[1]/:\/\//://gitlab-ci-token:${GL_TOKEN:-$GITLAB_TOKEN}#}" > $HOME/.config/git/credentials
git config --global credential.helper store
fi
fi
It doesn't require changing the default git strategy, and it works fine on a non-protected branch using the default gitlab-ci-token.
On a protected branch, you can use the git push command as usual.
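For clarity, git's credential store is just a plain-text list of URLs with embedded credentials, so after the snippet above runs, the file contains roughly this one line (host and token are illustrative):
# Contents of $HOME/.config/git/credentials after the before_script:
https://gitlab-ci-token:<token>@gitlab.com/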
We stopped using SSH keys; Vít Kotačka's answer helped us understand why it was failing before.
I was not able to push back via HTTPS from a Docker executor after making changes in the repository cloned by gitlab-runner. Therefore, I use the following workaround:
1. Clone the repository to a temporary location via HTTPS, with a user access token.
2. Do some Git work (like merging, or tagging).
3. Push the changes back.
I have a job in the .gitlab-ci.yml:
tagMaster:
  stage: finalize
  script: ./tag_master.sh
  only:
    - master
  except:
    - tags
and then I have a shell script tag_master.sh with Git commands:
#!/usr/bin/env bash
OPC_VERSION=`gradle -q opcVersion`
CI_PIPELINE_ID=${CI_PIPELINE_ID:-00000}
mkdir /tmp/git-tag
cd /tmp/git-tag
git clone https://deployer-token:$DEPLOYER_TOKEN@my.company.com/my-user/my-repo.git
cd my-repo
git config user.email deployer@my.company.com
git config user.name 'Deployer'
git checkout master
git pull
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags
This works well.
Just FYI, personal access tokens have account-level access, which is generally too broad. A deploy key is better, as it only has project-level access and can be given write permissions at creation time. You provide the public SSH key as the deploy key, and the private key can come from a CI/CD variable.
Here's basically the job I use for tagging:
release_tagging:
  stage: release
  image: ubuntu
  before_script:
    - mkdir -p ~/.ssh
    # Settings > Repository > Deploy Keys > "DEPLOY_KEY_PUBLIC" is the public key of the utilized SSH pair
    # Settings > CI/CD > Variables > "DEPLOY_KEY_PRIVATE" is the private key of the utilized SSH pair, type is 'File' and ends with an empty line
    - mv "$DEPLOY_KEY_PRIVATE" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - 'which ssh-agent || (apt-get update -y && apt-get install openssh-client git -y) > /dev/null 2>&1'
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa > /dev/null 2>&1
    - (ssh-keyscan -H $CI_SERVER_HOST >> ~/.ssh/known_hosts) > /dev/null 2>&1
  script:
    # .gitconfig
    - touch ~/.gitconfig
    - git config --global user.name $GITLAB_USER_NAME
    - git config --global user.email $GITLAB_USER_EMAIL
    # fresh clone
    - mkdir ~/source && cd $_
    - git clone git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
    - cd $CI_PROJECT_NAME
    # Version tag
    - git tag -a "v$(cat version)" -m "version $(cat version)"
    - git push --tags
