I would like to run a deploy script with GitLab CI, but the ssh-add $SSH_PRIVATE_KEY step returns an error:
echo "$SSH_PRIVATE_KEY" | ssh-add -
Error loading key "(stdin)": invalid format
Here is my .gitlab-ci.yml:
deploy:
  image: node:9.11.1-alpine
  stage: deploy
  before_script:
    # Install ssh-agent if not already installed, it is required by Docker.
    # (change apt-get to yum if you use a CentOS-based image)
    - 'which ssh-agent || ( apk add --update openssh )'
    # Add bash
    - apk add --update bash
    # Add git
    - apk add --update git
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
    - echo "$SSH_PRIVATE_KEY"
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # For Docker builds disable host key checking. Be aware that by adding that
    # you are susceptible to man-in-the-middle attacks.
    # WARNING: Use this only with the Docker executor, if you use it with shell
    # you will overwrite your user's SSH config.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    # In order to properly check the server's host key, assuming you created the
    # SSH_SERVER_HOSTKEYS variable previously, uncomment the following two lines
    # instead.
    # - mkdir -p ~/.ssh
    # - '[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts'
  script:
    - npm i -g pm2
    - pm2 deploy ecosystem.config.js production
  # only:
  #   - master
In my project settings, I've added an SSH_PRIVATE_KEY variable with the id_rsa key from my production server (cat ~/.ssh/id_rsa.pub).
Can anyone help me?
In my case, it was because I had made my SSH_PRIVATE_KEY variable protected. When I disabled the Protected state, it worked without any error.
In my case I had to put a newline at the end of the SSH_PRIVATE_KEY variable.
I made a stupid mistake and added the key without -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- clauses.
Summing up, you should add:
-----BEGIN RSA PRIVATE KEY-----
<< the key itself goes here >>
-----END RSA PRIVATE KEY-----
Also, ensure the newline after the closing line is present.
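If you are unsure whether what you pasted is a well-formed private key, you can sanity-check the file locally before putting it into the variable; a quick sketch (the path is just a placeholder):
ssh-keygen -y -f /path/to/your_private_key > /dev/null && echo "key format looks valid"
ssh-keygen -y derives the public key from the private key and fails when the key is malformed (for example, when the BEGIN/END lines are missing).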
For anyone reaching this post without finding a solution yet:
Try making the branch protected, because that is required for protected variables.
Protected: Only exposed to protected branches or protected tags.
Add a CI/CD variable to a project
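A quick way to confirm this is your problem is to check inside the job whether the variable actually arrived; a debugging sketch like the following fails fast with a clear message when a protected variable is not exposed to the current branch:
- test -n "$SSH_PRIVATE_KEY" || ( echo "SSH_PRIVATE_KEY is empty - protected variable on an unprotected branch?" && exit 1 )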
It works with variable expansion (curly brackets inside double quotes):
- echo "${SSH_PRIVATE_KEY}" | ssh-add -
While keeping the SSH_PRIVATE_KEY variable protected!
This approach is simply a less ambiguous method for printing variables; in this case it prevents trimming of the last line break.
Make sure the file variable ends with a newline. If not, errors like the following appear:
Load key "/home/.../....tmp/ID_RSA": invalid format
[MASKED]@...: Permission denied (publickey).
ID_RSA was the name of my file variable in this example.
By default, ~/.ssh/id_rsa.pub contains the SSH public key; the private key is in ~/.ssh/id_rsa.
If you export the key from PuTTYgen, use its menu command Conversions - Export OpenSSH key (force new file format) to get the key content.
Then trim trailing spaces and add a final newline.
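A sketch of that cleanup, assuming GNU sed and that the exported file is named id_rsa:
sed -i 's/[[:space:]]*$//' id_rsa                # strip trailing spaces from every line
[ -n "$(tail -c1 id_rsa)" ] && echo >> id_rsa    # append a final newline only if one is missing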
You must copy the entire contents of the file (id_rsa), including the final blank line. I solved the problem this way.
I got it working with a protected variable.
If the variable is a file variable, echo won't work anymore; use:
cat "$SSH_PRIVATE_KEY" | ssh-add -
Otherwise, if the variable is NOT a file variable, use the following:
echo "$SSH_PRIVATE_KEY" | ssh-add -
I had this issue on GitLab and Bitbucket; both were solved by adding a \n at the end of the key file.
echo $'' >> ~/.ssh/id_rsa
In my case, it was because I had made my SSH_PRIVATE_KEY variable available only in a specific environment. I changed it to the one I was using (or you can change it to All, depending on your setup).
It's possible you didn't copy the content of the public key to authorized_keys on the server:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
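If you have SSH access from your workstation, ssh-copy-id does the same thing in one step (the user and host below are placeholders):
$ ssh-copy-id -i ~/.ssh/id_rsa.pub deploy@your-production-server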
I had the same problem, and after spending some hours trying to understand what was wrong, I found that my private key was encrypted (my computer had cached the passphrase for so long that I had forgotten it was encrypted). It's not easy to tell whether a key is encrypted just by looking at it.
You should decrypt the key (set an empty passphrase) and then paste it into a GitLab variable. Then in your .gitlab-ci.yml you can have a configuration like this:
before_script:
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - mkdir -p ~/.ssh
  - touch ~/.ssh/id_rsa
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - echo -e "Host *\nStrictHostKeyChecking no\n" > ~/.ssh/config
  - eval "$(ssh-agent -s)"
  - ssh-add ~/.ssh/id_rsa
*** Note that if you don't want to write the key to a file, you can just load it into the ssh-agent with:
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
*** Note 2: In the GitLab panel, make sure you have created a variable (and not a file); normally it should be protected if you want it exposed only to the main (protected) branch.
*** Important: For security reasons, change the following line:
- echo -e "Host *\nStrictHostKeyChecking no\n" > ~/.ssh/config
to list only your own host(s), rather than permitting all connections like this.
If you set:
StrictHostKeyChecking no
for every host, ssh will not verify the server's host key when connecting, and that can be a big vulnerability!
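A safer sketch is to pin the host key with ssh-keyscan instead of disabling the check entirely (example.com stands in for your real deploy host; ssh-keyscan still trusts whatever key the host presents at scan time, so ideally store the known host key in a CI variable as the commented-out SSH_SERVER_HOSTKEYS lines above suggest):
- mkdir -p ~/.ssh
- ssh-keyscan example.com >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts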
In my case, I had simply used inconsistent variable names.
I defined SSH_PRIVATE_KEY in GitLab's variables and was using OWNER_PRIVATE_KEY in .gitlab-ci.yml.
That's why I hate working straight after lunch..
What worked for me was to replace every line break with '\n', store the key as ONE LINE in my variables, and then use the '-e' switch with echo:
echo -e $SECRET_KEY > key.pem
This worked, and it also let me add the identity to ssh-add directly, like this:
echo -e "$SSH_PRIVATE_KEY" | ssh-add -
Hope this helps someone.
Use
SSH_PRIVATE_KEY: |
  -----BEGIN OPENSSH PRIVATE KEY-----
instead of
SSH_PRIVATE_KEY: >
  -----BEGIN OPENSSH PRIVATE KEY-----
'|' preserves the line breaks ('\n'), whereas '>' folds them into spaces.
Related
Why is this file variable not working? I configured id_rsa as a protected file variable under CI/CD variables. Here is my GitLab pipeline:
my_job:
  script:
    - ssh -o StrictHostKeyChecking=no -i $id_rsa my_host
What else is needed here?
I am getting this error:
Warning: Identity file -----BEGIN not accessible: No such file or directory.
ssh: Could not resolve hostname openssh: Name or service not known
Thank you
I tried running this command with the id_rsa file in my repository and it was working fine. It only stopped working when I moved the key into a file variable.
-i $id_rsa is supposed to be the path to a private key, not the actual private key content itself.
In your case, $id_rsa represents the private key content.
Follow instead "Using SSH keys with GitLab CI/CD" and see if you can add your key to an ssh-agent, instead of using -i.
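A rough sketch of that agent-based approach, assuming $id_rsa stays a file variable (so its value is a path) and my_host is your target:
eval "$(ssh-agent -s)"
chmod 400 "$id_rsa"                        # tighten permissions; ssh refuses keys readable by others
ssh-add "$id_rsa"
ssh -o StrictHostKeyChecking=no my_host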
Good job, guys. Adding double quotes helped here - "$id_rsa". Plus, before running ssh I changed the file mode as well. The final solution is:
my_job:
  script:
    - chmod 700 "$id_rsa"
    - ssh -o StrictHostKeyChecking=no -i "$id_rsa" my_host
Warning: Identity file -----BEGIN not accessible: No such file or directory.
indicates that ssh received a wrong identity file.
It seems "-----BEGIN ..." was passed as $id_rsa; maybe there is a shell syntax error in your script.
Put echo "$id_rsa" above the ssh command to check whether $id_rsa is correct.
I wonder what the need is for using before_script in a job like this; it could all be put inside script itself:
deploy-to-stage:
  stage: deploy
  before_script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
  script:
    - *** some code here ***
They are going to run one after another anyway.
I can understand having a before_script common to all jobs, because that saves some boilerplate.
Effectively, machine-wise the before_script content and the script content are concatenated and executed together in a single shell, but jobs aren't (only) read by machines.
Let's take your current job as an example, and suppose I have to maintain it in the future.
If I need to change something about how the new code is built or how an image is deployed, I can go straight to the script section, because the job is properly structured and I don't have to touch anything related to the SSH/Git setup. On the other hand, if everything is in that one section, I have to read through all of the code, even though the part I'm interested in is probably at the end (and it might not be).
Of course, this only applies when the separation between before_script and script is set deliberately and not as a random split without any thought.
In ~/.ssh I have GitHub and Bitbucket private key files. Both are encrypted, so when I run ssh-add ~/.ssh/github I have to enter a password.
I have a bash script that automates git commands. If the GitHub and/or Bitbucket identities have NOT been added yet, I want to ssh-add them.
I'm looking for a function like:
has_identity_been_added ~/.ssh/github
To simply check if the private, encrypted key file has been added.
I found:
ssh-add -l prints out a string of text for each identity... and I don't know what it is, but it's not the key file name
ssh-add -L prints the public key, which I'm not storing on my local machine, so I'm not sure how to verify against it, without asking for the private key file's password again.
Both of those also print the name I gave the key, like reed@laptop-x1834 (I think that was the automatic name, because I didn't specify -C in ssh-keygen, if memory serves).
I'm not sure where to go from here. I don't want to rely on the ssh-keygen -C "whatever_name".
ssh-add -l prints out the fingerprints of the keys that have been added.
You can get the fingerprint of a public key with :
ssh-keygen -l -f id_rsa.pub
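Building on that, here is a sketch of the helper the question asks for; it assumes the matching .pub file sits next to the private key (you can regenerate it once with ssh-keygen -y -f ~/.ssh/github > ~/.ssh/github.pub, entering the passphrase a single time):
has_identity_been_added() {
  local fp
  fp=$(ssh-keygen -l -f "$1.pub" | awk '{print $2}')   # fingerprint of the key
  ssh-add -l | grep -qF "$fp"                          # succeeds only if that fingerprint is loaded
}

# usage: add the key only when it is not loaded yet
has_identity_been_added ~/.ssh/github || ssh-add ~/.ssh/github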
I can add pem files to my SSH agent very easily using ssh-add, like so:
$ ssh-add /home/jsmith/keys/mytest.pem
But I can't seem to remove them:
$ ssh-add -d /home/jsmith/keys/mytest.pem
Bad key file /home/jsmith/keys/mytest.pem: No such file or directory
The pem file still exists though... I haven't moved or changed it in any way. Why am I having so much trouble removing this pem file from my SSH agent that I just added a moment ago? What's the correct way to do this?
I want to avoid using ssh-add -D (with a capital "D") because that would delete all of the identities from my SSH agent, and I only want to delete the one I've specified.
You have to use the public key for this. So first extract the public key and then remove it from the agent.
ssh-keygen -y -f /home/jsmith/keys/mytest.pem > /home/jsmith/keys/mytest.pub
ssh-add -d /home/jsmith/keys/mytest.pub
The man page mentions the "public" key as well: "if no public key is found at a given path, ssh-add will append .pub and retry".
The best alternative I've found is to re-add the same file, but with a lifetime of 1 second:
ssh-add -t 1 myfile.pem
It is easier to remember than extracting the public key.
If you know the comment associated with the key, you can simply get the public key from the agent and pipe it back in to delete it.
ssh-add -L | grep -F 'test@example.com' | ssh-add -d -
I am working on continuous integration with Travis CI.
This is my configuration:
before_install:
  - echo -e "Host *\n\tStrictHostKeyChecking no\n" > ~/.ssh/config
  - echo -e $id_rsa.pub > ~/.ssh/id_rsa.pub
  - echo -e $id_rsa > ~/.ssh/id_rsa
  - sudo chmod 600 ~/.ssh/*
  - sudo chmod 644 ~/.ssh/config
  - eval `ssh-agent -s`
  - ssh-add ~/.ssh/id_rsa
...
$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /home/travis/.ssh/id_rsa:
At the ssh-add step, it asks me for the passphrase and that stops the deployment. I have tested with another SSH key without a passphrase, but that didn't fix my issue.
I have tried lots of solutions like yes $MY_PASSWORD | ssh-add ~/.ssh/id_rsa or echo "$MY_PASSWORD" | ssh-add ~/.ssh/id_rsa, but they don't work.
I have added this to my .ssh/config (you can see it in my configuration above):
Host *
StrictHostKeyChecking no
Isn't it supposed to stop it from asking me for the passphrase?
Maybe someone has an idea?
Thanks :)
You are using an encrypted private key (which is good), but it needs a passphrase (which is bad for scripting). There are several ways you can proceed:
Remove the passphrase from the key and use it unencrypted (less secure):
ssh-keygen -p -P "old_passphrase" -N "" -f ~/.ssh/id_rsa
Use the sshpass tool to unlock the key (though storing the passphrase next to the key in the script basically defeats the security of an encrypted key):
sshpass -p passphrase ssh-add ~/.ssh/id_rsa
I resolved my problem.
I had separate problems with basic usage of environment variables and echo.
My environment variable names were not good: "$id_rsa.pub" in Travis was interpreted as $id_rsa followed by ".pub", so it added wrong characters to my key content. I renamed the variable to id_rsa_pub.
I had forgotten to escape spaces as "\ " and newlines as "\n"; with Travis environment variables you must write "\\n" instead of just "\n".
My issue was caused partly by bad SSH files and partly because I was using an RSA key with a passphrase. In my case the passphrase is not important, so I removed it.
For that I used jakuje's answer. My SSH key is now installed correctly in every build.
Thank you for your help!