In my project's settings I see this:
Public deploy keys available to any project (15)
Rewind
CFMM Ansible Deployment
LRM Puppet Test
gitlab-runner (lion)
deploy#jasmine
deployer#stridsberg.nu
test-server
gitlab-runner
kijkmijnhuis#SensioLabsInsight
And many more... what are these things for? I know that if I enable one, that key could then clone my repo... but why are these being shown to me? Is there any benefit?
See "Deploy Keys":
Deploy keys allow read-only or read-write (if enabled) access to one or multiple projects with a single SSH key pair.
This is really useful for cloning repositories to your Continuous Integration (CI) server. By using deploy keys, you don't have to set up a dummy user account.
I use them with Jenkins: easy to set up, easy to revoke if needed.
And I use a read-write deploy key for a Maven release task, so it can push back to any repo where that key is deployed.
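For example, a read-write deploy key can be added through the GitLab deploy keys API; the host, project ID, token, and key material below are placeholders:

curl --request POST \
     --header "PRIVATE-TOKEN: <your-access-token>" \
     --header "Content-Type: application/json" \
     --data '{"title": "maven-release", "key": "ssh-ed25519 AAAA... ci@example", "can_push": true}' \
     "https://gitlab.example.com/api/v4/projects/42/deploy_keys"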
Related
We have a GitLab project with multiple developers, and the repo itself is a Conan project.
When creating a release tag, I want to set up a pipeline which creates the Conan package and then uploads it to Artifactory. Uploading to Artifactory requires a username and password login. This is similar to many other deployment jobs where user+pass authentication is required.
I already found a solution that defines secret variables for the project (project level) and uses a single account for the whole project to upload to Artifactory. This is an issue security-wise, as we want to know who uploaded the Conan package, i.e., which user.
Is it somehow possible in GitLab to define secrets at the user level?
I.e., if User1 creates the tag and has his own Artifactory account user+pass secrets set up, the pipeline successfully pushes the Conan package.
If User2 now creates a tag but did not set up secrets, the push should fail.
The following GitLab issue describes a similar problem, but does not contain any solution:
https://gitlab.com/gitlab-org/gitlab/-/issues/15815
Also related: gitlab credentials for specific user (but handles a shared secret with specific user access).
CI secrets are currently only project-level, but you might be able to do something similar with one of the predefined environment variables. There are four variables that hold info about the GitLab user who started the pipeline (by trigger, schedule, push, etc.): $GITLAB_USER_EMAIL, $GITLAB_USER_ID, $GITLAB_USER_LOGIN, $GITLAB_USER_NAME. Then, in your project's secret variables, you can store credentials for each of your users, and in your job grab the correct one based on the USER variables, as sketched below.
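A rough sketch of that idea, assuming project-level variables named ARTIFACTORY_PASS_<login> (a naming convention invented here), a Bash-based runner, and Conan 1.x commands:

upload_conan:
  stage: deploy
  only:
    - tags
  script:
    # Requires Bash on the runner; GITLAB_USER_LOGIN must also be a valid
    # shell identifier for this naming scheme to work (no dots or dashes).
    - CRED_VAR="ARTIFACTORY_PASS_${GITLAB_USER_LOGIN}"
    - ARTIFACTORY_PASS="${!CRED_VAR}"
    - if [ -z "$ARTIFACTORY_PASS" ]; then echo "No Artifactory credentials stored for $GITLAB_USER_LOGIN" >&2; exit 1; fi
    - conan user -p "$ARTIFACTORY_PASS" -r artifactory "$GITLAB_USER_LOGIN"
    - conan upload "mypkg/*" --all -r artifactory --confirm   # package reference is a placeholder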
Let's say I want to make a variable with the value from Vault.
variables:
  $SSH_PRIVATE_KEY: `vault kv get -field=private_key project/production`
before_script:
  - echo "$SSH_PRIVATE_KEY"
Is it possible?
Is there another way to use Vault secrets inside pipelines?
Original answer Jul 2019:
You can see Vault used in the before_script/after_script steps, with the token revoked at the end.
See gitlab.eng.cleardata.com pub/pipelines/gcp-ci.yml as an example:
# Obtains credentials via vault (the gitlab-runner authenticates to vault using its AWS credentials)
# Configures the `gcloud` sdk and `kubectl` to authenticate to our *production* cluster
#
# Note: Do not override the before_script or the after_script in your job
#
.auth-prod: &auth-prod
  image: cleardata/bionic
  before_script:
    - |
      export CLUSTER_NAME=production
      export CLUSTER_LOCATION=us-central1
      export CLUSTER_PROJECT_ID=cleardata-production-cluster
    - vault login -method=aws -path=gitlab-ci -no-print header_value=gitlab.eng.cleardata.com
    - GCP_CREDS=$(vault read -field=private_key_data gitlab-ci/gcp/cleardata-production-cluster/key/deployment-key)
    - gcloud auth activate-service-account --key-file=<(base64 -d <<<$GCP_CREDS)
    - gcloud auth configure-docker
    - gcloud beta container clusters get-credentials $CLUSTER_NAME --region $CLUSTER_LOCATION --project $CLUSTER_PROJECT_ID
  after_script:
    - vault token revoke -self
Update March 2020: This is supported with GitLab 12.9
HashiCorp Vault GitLab CI/CD Managed Application
GitLab wants to make it easy for users to have modern secrets management. We are now offering users the ability to install Vault within a Kubernetes cluster as part of the GitLab CI managed application process.
This will support the secure management of keys, tokens, and other secrets at the project level in a Helm chart installation.
See documentation and issue.
April 2020: GitLab 12.10:
Retrieve CI/CD secrets from HashiCorp Vault
In this release, GitLab adds support for lightweight JSON Web Token (JWT) authentication to integrate with your existing HashiCorp Vault.
Now, you can seamlessly provide secrets to CI/CD jobs by taking advantage of HashiCorp’s JWT authentication method rather than manually having to provide secrets as a variable in GitLab.
See documentation and issue.
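The 12.10-era flow looked roughly like this inside a job: authenticate to Vault with the predefined CI_JOB_JWT, then read the secret yourself (image tag, server URL, role, and secret path are illustrative):

read_secrets:
  image: vault:1.4.1   # any image with the Vault CLI would do
  script:
    - export VAULT_ADDR=https://vault.example.com:8200
    # Exchange the job's JWT for a short-lived Vault token
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=my-ci-role jwt=$CI_JOB_JWT)"
    # Read one field from a KV secret
    - export DB_PASSWORD="$(vault kv get -field=password secret/myproject/production/db)"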
See GitLab 13.4 (September 2020)
For Premium/Silver only:
Use HashiCorp Vault secrets in CI jobs
In GitLab 12.10, GitLab introduced functionality for GitLab Runner to fetch and inject secrets into CI jobs. GitLab is now expanding the JWT Vault Authentication method by building a new secrets syntax in the .gitlab-ci.yml file. This makes it easier for you to configure and use HashiCorp Vault with GitLab.
https://about.gitlab.com/images/13_4/vault_ci.png -- Use HashiCorp Vault secrets in CI jobs
See Documentation and Issue.
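In practice the new syntax looks roughly like this (server URL, role, and secret path are placeholders; check the documentation for the exact path@engine format):

deploy:
  variables:
    VAULT_SERVER_URL: https://vault.example.com:8200
    VAULT_AUTH_ROLE: my-ci-role
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@kv   # field `password` at path `production/db` in the `kv` engine
  script:
    # By default the secret is written to a temporary file and
    # DATABASE_PASSWORD holds the path to that file.
    - ./deploy.sh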
See GitLab 13.9 (February 2021)
Vault JWT (JSON Web Token) supports GitLab environments.
To simplify integrations with HashiCorp Vault, we’ve shipped
Vault JWT token support. From the launch, you could restrict access based on
data in the JWT. This release gives you a new dimension for restricting
access to credentials: the environment a job targets.
This release extends the existing Vault JWT token to support environment-based
restrictions too. As the environment name could be supplied by the user running
the pipeline, we recommend you use the new environment-based restrictions with the
already-existing ref_type values for maximum security.
See Documentation and Issue.
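On the Vault side, an environment-restricted role could look roughly like this (project ID, ref, audience, and policy names are invented for the example):

vault write auth/jwt/role/production-deploy - <<EOF
{
  "role_type": "jwt",
  "policies": ["production-secrets"],
  "user_claim": "user_email",
  "bound_audiences": "https://gitlab.example.com",
  "bound_claims": {
    "project_id": "42",
    "ref_type": "branch",
    "ref": "master",
    "environment": "production"
  }
}
EOF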
We have a helper script baked into our builder images that can convert GitLab CI/CD job variables pointing to secrets into job env vars containing Vault secrets. In our case, we're also using the appRole auth method to limit the validity of the temporary Vault access token.
An example use case would be:
I want a job env var "MY_SECRET_TOKEN" with a value from a Vault secret.
So I add a CI/CD variable called V_MY_SECRET_TOKEN="secret/<path>/<key>"
Then I insert a job step to retrieve the secret value and populate MY_SECRET_TOKEN with the value associated with the key.
Variables added to the CI/CD job setup in GitLab:
VAULT_ADDR=https://vault.example.com:8200
VAULT_ROLE_ID=db02de05-fa39-4855-059b-67221c5c2f63
VAULT_SECRET_ID=6a174c20-f6de-a53c-74d2-6018fcceff64
VAULT_VAR_FILE=/var/tmp/vault-vars.sh
Steps added to the .gitlab-ci.yml job definition:
script:
  - get-vault-secrets-by-approle > ${VAULT_VAR_FILE}
  - source ${VAULT_VAR_FILE} && rm ${VAULT_VAR_FILE}
Here is a reference to the get-vault-secrets-by-approle helper script we use. Here is a writeup of the thinking behind the design.
The 'before_script' option didn't fit our workflows, as we define a combination of privileged and non-privileged stages in our gitlab-ci.yml definition. The non-privileged jobs build and QA code, while the privileged jobs package and release code. The VAULT_ROLE_ID and VAULT_SECRET_ID job variables should only be visible to the privileged package and release jobs.
I also experimented with includes, extends, and YAML anchors, but I wanted to merge items into existing YAML maps (script: {} or before_script: {}) as opposed to replacing all the items in a map with the template.
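For what it's worth, GitLab 13.9+ adds !reference tags, which do allow splicing a template's script lines into another job's script instead of replacing the whole map; a rough sketch:

.vault-secrets:
  script:
    - get-vault-secrets-by-approle > ${VAULT_VAR_FILE}
    - source ${VAULT_VAR_FILE} && rm ${VAULT_VAR_FILE}

release:
  script:
    - !reference [.vault-secrets, script]
    - make release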
I have a problem in my build/release pipeline with Azure Container Registry.
I use an Azure Resource Group Deployment task to deploy Azure Container Registry (and other stuff) and it works perfectly.
I have the loginServer, username and password in output variables to reuse them.
Then I want to build and push an image to ACR, but I can't set the name of the registry (which I get from an output variable) with a variable. I have to choose the registry when I set up the definition, but it is not created at that moment.
Is there a way to do this?
As a workaround, I use the Azure Resource Group Deployment task to create the registry and then I pass the output variables to a PowerShell script which builds, tags and pushes my images to the registry.
If nobody has a better way, I think I will post a UserVoice suggestion to change that.
When you say you use an Azure Resource Group Deployment task, are you referring to VSTS?
If you could provide more specific repro steps, I might be more helpful.
I'd also suggest you take a look at https://aka.ms/acr/build as an easy way to natively docker build images with your registry. ACR Build is now available in all regions and simplifies many of the experiences you may be hitting; see the example below.
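A minimal invocation from the Azure CLI could look like this (registry and image names are placeholders):

# Builds the Dockerfile in the current directory inside ACR and pushes the result to myregistry
az acr build --registry myregistry --image myapp:v1 .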
Daniel just made this post that helps with the VSTS integration: https://www.danielstechblog.io/building-arm-based-container-images-with-vsts-and-azure-container-registry-build/
Steve
Sorry for the delay, I was out of the office.
I just retried to fix my problem, and it seems that I can now enter free text (and therefore a release variable) in the VSTS Docker task to specify the ACR I created beforehand with an Azure Resource Group Deployment task.
So no problem anymore.
Thank you for your response, I will take a look at ACR Build :)
Bastien
I'm using CircleCI for the first time and having trouble publishing to Azure.
The docs don't have an example for Azure, they have an example for AWS and a note for Azure saying "To deploy to Azure, use a similar job to the above example that uses an appropriate command."
If anybody has an example YAML file that would be great; if not, a nudge in the right direction would be handy. So far I think I've worked out the following:
I need a config that will install the Azure CLI
I need to put my Azure deployment credentials in an environment variable and
I need to run a deploy command in the YAML file to zip up all the right files and deploy to my Azure app service.
I have no idea if the above is correct, or how to do it, but that's my understanding right now.
I've also posted this on the CircleCi forum.
EDIT: Just to add a little more info, the AWS version of the config file used the following command:
- run:
    name: Deploy to S3
    command: aws s3 sync jekyll/_site/docs s3://circle-production-static-site/docs/ --delete
So I guess I'm looking for the Azure equivalent.
The easiest way is to set up deployment from source control in the Azure management console; you can follow these two links:
https://medium.com/@strid/automatic-deploy-to-azure-web-app-with-circle-ci-v2-0-1e4bda0626e5
https://www.bradleyportnoy.com/how-to-set-up-continuous-deployment-to-azure-from-circle-ci/
If you want to copy the files from CI to the IIS server or Azure yourself, you will need SSH access (keys, etc.). In your circle.yml you can have a deployment section such as this:
deployment:
  production:
    branch: master
    commands:
      - scp -r circle-pushing/* username@my-server:/path-to-put-files-on-server/
“circle-pushing” is your repo name, which is whatever it’s called in GitHub or Bitbucket, and the rest is the hostname and filepath of the server you want to upload files to.
and this could probably help you understand it better:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp
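If you would rather drive the deployment from the CircleCI 2.0 config itself, an untested sketch using the Azure CLI could look like this (the service-principal variables would live in the CircleCI project settings, and the resource group, app name, and paths are placeholders):

version: 2
jobs:
  deploy:
    docker:
      - image: mcr.microsoft.com/azure-cli
    steps:
      - checkout
      - run:
          name: Log in with a service principal
          command: az login --service-principal -u "$AZURE_SP_APP_ID" -p "$AZURE_SP_PASSWORD" --tenant "$AZURE_TENANT_ID"
      - run:
          name: Zip the site and deploy to the App Service
          command: |
            apk add --no-cache zip
            (cd jekyll/_site && zip -r /tmp/site.zip .)
            az webapp deployment source config-zip --resource-group my-rg --name my-app --src /tmp/site.zip
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - deploy:
          filters:
            branches:
              only: master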
I am attempting to deploy to an Azure App Service from GitLab but ran into a problem. The deployment fails immediately with:
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

D:\Program Files (x86)\Git\cmd\git.exe fetch origin --progress
I've deleted the deployment configuration in the App Service a few times and recreated it to make sure the SSH URL for the GitLab repository is correct. I've also tried adding my key into the GitLab deploy keys, but it won't let me as it's already there, so I know the key is definitely correct.
Searching around on the web suggests removing the host from the known_hosts file, but as this is on Azure there is no known_hosts in the .ssh folder (Kudu -> Console -> D:\home\.ssh), so I'm not sure what else to try.
Thanks