Use GitLab API to create variables on a project

I am trying to create a GitLab variable for a project on GitLab using a cURL command.
I am following the API and the relevant paragraph here.
Looking at the example on the docs, I cannot find any evidence on how Gitlab knows which repo I am trying to add the variable to.
I tried adding a private token and running the command and I ended up with a 403.
Could someone please explain how to use the GitLab API to add variables to a repo through a cURL command?

Looking at the example on the docs, I cannot find any evidence on how Gitlab knows which repo I am trying to add the variable to
But... the whole idea behind "instance-level" variables is to be "global", not tied to a specific project.
See issue 14108:
Global (instance-level) variables for CI builds
Less of an issue and more of a feature/question perhaps ... It would be nice to support global variables in the CI builds; the classic use-case being deployment keys / docker registry credentials.
Problem to solve
Without support for global variables users must manually enter the same credentials repeatedly for dozens of projects when migrating to GitLab for details like:
docker private registry credentials
Kubernetes credentials
deployment keys
Proposal
Implement support for global (instance-level) variables for CI builds by:
Re-using the refactor/re-design of CI variables in project/group settings
Placing it under the CI/CD section in the admin settings
A better API for your case would be:
Create variable (project-level):
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/projects/1/variables" \
--form "key=NEW_VARIABLE" --form "value=new value"


Lint GitLab pipeline templates for syntax issues

I have a project filled with various pipeline templates that other projects can use for their work, keeping my CI DRY. These are not in a .gitlab-ci.yml file, but separate ./templates/language.yml files.
I'm already using yaml lint to make sure it is valid yaml, but I want to make sure I'm using valid GitLab CI syntax also.
I'd like to lint my CI templates when I'm merging. I have rejected running gitlab-runner exec shell because I can't figure out how to trigger specific copies. It looks like there might be something in the API with this, but I haven't been able to nail down the secret sauce.
How are you doing this?
We are using three different approaches to achieve this:
via API - https://docs.gitlab.com/ee/api/lint.html
with a fake project setup within my templates
with gitlab-ci-local
via API
The first approach uses GitLab's linter via the API:
curl --header "Content-Type: application/json" --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/ci/lint" --data '{"content": "{ \"image\": \"ruby:2.6\", \"services\": [\"postgres\"], \"before_script\": [\"bundle install\", \"bundle exec rake db:create\"], \"variables\": {\"DB_NAME\": \"postgres\"}, \"types\": [\"test\", \"deploy\", \"notify\"], \"rspec\": { \"script\": \"rake spec\", \"tags\": [\"ruby\", \"postgres\"], \"only\": [\"branches\"]}}"}'
The problem here is that you cannot utilize the JOB_TOKEN to do this; therefore you need to inject a secret and generate a token to achieve this. There is even a linting library available - https://github.com/BuBuaBu/gitlab-ci-lint
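In practice you do not want to hand-write that JSON payload for every template; a small sketch that lints one of the ./templates/language.yml files by wrapping it with jq (the file path and token are placeholders):
jq --null-input --arg yaml "$(cat templates/language.yml)" '{content: $yaml}' \
  | curl --header "Content-Type: application/json" \
         --header "PRIVATE-TOKEN: <your_access_token>" \
         "https://gitlab.example.com/api/v4/ci/lint" \
         --data @-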
fake project
The second approach mimics the real setup: a fake project with its own .gitlab-ci.yml which includes the templates and executes them, like normal merge request pipelines. This way we ensure the scripts do not have any failures and are safe to use. We do this for Docker images as well as for Gradle build templates, etc.
E.g. for Docker images we build the image, include the template, and override the image property of the jobs to point at the temporary Docker image.
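As a rough sketch, the fake project's .gitlab-ci.yml could look like this (the template project path, file name, and job name here are illustrative assumptions, not our actual setup):
# .gitlab-ci.yml of the fake project: pull in the template under test
include:
  - project: my-group/ci-templates      # assumption: where the templates live
    ref: main
    file: /templates/language.yml

# override a job defined in the template so it runs against the temporary image
build-job:
  image: registry.example.com/tmp/image-under-test:latest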
gitlab-ci-local
The third approach is not as complete and, depending on the feature, lacks functionality. There is the tool gitlab-ci-local (https://github.com/firecow/gitlab-ci-local) which can be used to verify GitLab CI builds and execute them locally. But it is not an official implementation and not all features are present. In the end you again need some kind of project setup.
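For illustration, typical gitlab-ci-local invocations look roughly like this (flags taken from the project's README at the time of writing; they may change between versions):
# list the jobs found in the local .gitlab-ci.yml
gitlab-ci-local --list
# execute a single job locally in Docker
gitlab-ci-local build-job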
If I can choose, I would go with the first approach. In our case it has proven to be useful. The initial effort of faking a project is not that much, for the benefit of a long-term safe solution.

GitLab API to access private projects

I am trying to retrieve the files and folders from a GitLab project. I used the following command:
curl --header "PRIVATE-TOKEN: ${MY_TOKEN}" https://gitlab.com/api/v4/projects/${PROJECT_ID}
The problem is, my project is private. Even though I am a member of the project, I can't use my access token to retrieve the project information. But if I take the ID out of the command and search for all projects, I get a list of all the public projects. Is there something I have to do differently to access private projects?
Thanks.
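For what it's worth, the command above returns only the project's metadata; listing files and folders is done with the repository tree endpoint. A minimal sketch, assuming the default branch is main and the token has the read_api scope:
curl --header "PRIVATE-TOKEN: ${MY_TOKEN}" \
"https://gitlab.com/api/v4/projects/${PROJECT_ID}/repository/tree?ref=main&recursive=true"
Note that for a private project the token must belong to a user who is a member of the project (generally at least Reporter to read the repository); a 404 from this endpoint usually means the token cannot see the project at all.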

Is injecting environment variables into CI/CD scripts really best security practice?

TL;DR
I'm trying to understand how to set up a Jenkins or GitLab installation "properly" security-wise.
We'd like jobs to be able to pull from and push to maven repositories and docker registries, but do this in a safe way, so that even malicious Jenkinsfile or .gitlab-ci.yml files can't get direct access to the credentials to either print them on-screen or send them with e.g. curl somewhere.
It seems the straightforward and documented way to do it for both Jenkins and GitLab is to create "Credentials" in Jenkins and "Variables" in GitLab CI/CD. These are then made available as environment variables for the Jenkinsfile or .gitlab-ci.yml to access and use in their scripts. Which is super-handy!
BUT!
That means that anybody who can create a job in Jenkins/GitLab, or has write access to any repository that has an existing job, can get at the raw credentials if they're malicious. Is that really the best one can hope for? That we trust every single person that has a login to a Jenkins/GitLab installation with the keys to the kingdom?
Sure we can limit credentials so they're only accessible to certain jobs, but all jobs need access to maven repos and docker registries...
In these post-SolarWinds times, surely we can and must do better than that when securing our build pipeline...
Details
I was hoping for something like the ability for e.g. a Jenkinsfile to declare up-front that it wants to use these X Docker images and these Y Java Maven dependencies before any script runs, so the dependencies are downloaded and the credentials used to pull them stay hidden from the scripts. And, likewise, that a build declares a number of artifacts up-front, so that after the script has concluded, "hidden" credentials are used to push the artifacts to e.g. a Nexus repository and/or Docker registry.
But the Jenkins documentation entry for Using Docker with Pipeline describes how to use a registry with:
docker.withRegistry('https://registry.example.com', 'credentials-id') {
bla bla bla
}
And that looks all safe and good, but if I put this in the body:
sh 'cat $DOCKER_CONFIG/config.json | base64'
then it is game over. I have direct access to the credentials. (The primitive security of string matching for credentials in script output is easily defeated with base64.)
GitLab doesn't even try to hide how easy this is in their docs:
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
Could be replaced with
before_script:
- "echo $CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD | base64"
Likewise, game over.
Is there no general way to have credentials that are safely protected from the Jenkinsfile or .gitlab-ci.yml scripts?
These two articles describe the situation perfectly:
Accessing and dumping Jenkins credentials | Codurance "The answers you seek, Jenkins shall leak."
Storing Secrets In Jenkins - Security Blogs - this last article even describes how to print Jenkins' own encrypted /var/lib/jenkins/credentials.xml and then use Jenkins itself to decrypt them. Saves the hacker the trouble.

Failing to update GitLab CI/CD variable using the GitLab API

I am exploring the GitLab APIs to work with CI/CD. To update a GitLab CI/CD variable, I use the following API:
PUT /projects/:id/variables/:key
Here is the documentation on it. This API works perfectly if the variable key is unique. But if I use a duplicate variable name with different environment_scope values, it sometimes updates my variable but most of the time fails to update. It throws the following error:
key: [MY_KEY] has already been taken
I checked the GitLab issues here but didn't find any proper solution for it.
For reference, my gitlab version is GitLab Community Edition 12.8.1
Can anybody help me with this? Thanks in advance.
From the documentation found here, it appears you need to append the following to the end of the command:
--form "value=updated value"
So ultimately you will have:
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/projects/1/variables/MY_KEY" \
--form "value=updated value"
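For the duplicate-key case in the question, newer GitLab versions let you disambiguate by scope with a filter parameter (as far as I can tell this is not available in 12.8.1; project ID 1 and the production scope are placeholders):
curl --request PUT --globoff --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/projects/1/variables/MY_KEY?filter[environment_scope]=production" \
--form "value=updated value"
The --globoff flag stops curl from interpreting the square brackets as a URL glob.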

How to change the fixed CI/CD variables in repo settings via a request?

I'd like to know if it is possible to change the CI/CD variables assigned in the repo settings via a request. I know I can already pass env variables in a curl request that triggers the pipeline.
But now I have another situation: an automatic build runs and a Docker image is pushed to a private registry whenever the master or dev branch receives a merge. To do this, I use the CI/CD variables already set.
And often I will create a new registry, which is done automatically in a bash script. After this, what I'd like is to set up the new access keys in the GitLab repo via a request or some other automatic way.
If someone could help me with some idea, thanks in advance.
If I am not mistaken, this is what you are looking for:
Project-level Variables API
https://docs.gitlab.com/ee/api/project_level_variables.html
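So, from the bash script that creates the new registry, the fresh credentials can be pushed to the project along these lines (a sketch; the variable name and project ID are placeholders):
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/projects/1/variables/REGISTRY_PASSWORD" \
--form "value=<new_registry_password>"
Use POST on /projects/:id/variables instead of PUT the first time, to create the variable rather than update it.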
