Hide GitLab Access Tokens Used in Scheduled Jobs

In my GitLab repo, I have a scheduled job that triggers a pipeline, and this pipeline deletes old job logs using the GitLab API.
These API calls need a GitLab access token to perform the operation. Initially I thought of using the CI_JOB_TOKEN variable, which is an auto-generated token, but it has no access to these GitLab APIs.
Alternatively, I could store a project access token as a variable in my scheduled job, but then it would also be visible to other people in the project with the Maintainer or Owner role.
Is there any other way I can store my token without revealing it to others? Or some mechanism that lets the job run without passing my project access token at all?

Your best bet would be to store the secret in a vault/cloud service such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. GitLab provides the CI_JOB_JWT_V2 token, which can be used to authenticate to cloud services. With this method, you do not need to store any secrets in GitLab at all.
You can also look at GitLab's Vault integration as another option.
The only other option might be to use a runner that has the secret on the system and lock that runner to your project.
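As a rough sketch of that flow (the Vault address, role name, and secret path below are assumptions, not anything GitLab prescribes), a scheduled job could exchange CI_JOB_JWT_V2 for a short-lived Vault token, read a project access token from Vault, and use it to erase old job logs via the API:

```shell
#!/bin/sh
# Sketch only -- assumes a Vault server that trusts GitLab's CI_JOB_JWT_V2
# via a JWT auth role named "ci" and stores a project access token under
# secret/gitlab/pat. All of those names are assumptions for illustration.
VAULT_ADDR="https://vault.example.com"
VAULT_ROLE="ci"

# 1) Exchange the job's JWT for a short-lived Vault token:
#      VAULT_TOKEN=$(vault write -field=token auth/jwt/login \
#                      role="$VAULT_ROLE" jwt="$CI_JOB_JWT_V2")
# 2) Read the project access token from the KV store:
#      PAT=$(vault kv get -field=token secret/gitlab/pat)
# 3) Use it to erase an old job's log and artifacts via the API:
#      curl --request POST --header "PRIVATE-TOKEN: $PAT" \
#           "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/<job_id>/erase"
ERASE_PATH="projects/\$CI_PROJECT_ID/jobs/<job_id>/erase"
echo "POST $ERASE_PATH"
```

This way the token only ever exists in Vault and in the memory of the running job, never as a GitLab CI/CD variable that Maintainers or Owners could read.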

Related

Access environment variables stored in Google Secret Manager from Bitbucket pipelines

I am using Bitbucket Pipelines to run test cases. In order for the test cases to succeed, I need secrets that are stored in Google Secret Manager. Is there any way I can access those secrets within the Bitbucket Pipelines environment?
There are a couple of options.
If these secrets are static, the easiest solution would be to add them to your Repository or Deployment variables. Make sure they are marked as Secured, so that they will be masked, i.e. hidden, in the logs.
Alternatively, if your secrets are rotated and must be fetched from the secret manager on every build in order to stay up to date, you'll need to use the corresponding CLI commands in the build script. For this to work you will have to give Bitbucket Pipelines access to the secrets in your cloud. For details, check out, for example, this page.
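For Google Secret Manager specifically, the fetch-at-build-time approach could look roughly like this; the secured variable name GCP_KEY_FILE and the secret name are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the fetch-at-build-time approach. Assumes a service-account key
# is stored base64-encoded in a secured repository variable GCP_KEY_FILE,
# and that a secret named "test-credentials" exists in Secret Manager.
SECRET_NAME="test-credentials"

# In the bitbucket-pipelines.yml script step you would run:
#   echo "$GCP_KEY_FILE" | base64 -d > /tmp/key.json
#   gcloud auth activate-service-account --key-file /tmp/key.json
#   MY_SECRET=$(gcloud secrets versions access latest --secret="$SECRET_NAME")
ACCESS_CMD="gcloud secrets versions access latest --secret=$SECRET_NAME"
echo "$ACCESS_CMD"
```

The service account only needs the Secret Manager Secret Accessor role on the secrets the pipeline reads, which keeps the blast radius small if the key leaks.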

Do gitlab runners need to be re-registered after a migration if the external url does not change?

I am migrating primary and secondary GitLab nodes to new nodes. In order to do this, I am following the backup and restore documentation.
Do GitLab runners need to be re-registered after a migration if the external url does not change?
Thanks all!!
No, GitLab runners do not need to be re-registered following a migration or backup/restore.
The runner registration is stored in GitLab's database and is associated with the token the runner receives from the gitlab-runner register command. Those tokens will continue to be valid, so long as they were properly backed up, probably even if the GitLab URL changes.
You can use the Runners API to verify a runner token.
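For example, a token can be checked with the POST /runners/verify endpoint; the instance URL and token below are placeholders:

```shell
#!/bin/sh
# Verifying a runner token after the migration. The endpoint returns
# 200 OK if the registered token is still valid, 403 if it is not:
#   curl --request POST --form "token=<runner token>" "$VERIFY_URL"
GITLAB_URL="https://gitlab.example.com"
VERIFY_URL="$GITLAB_URL/api/v4/runners/verify"
echo "$VERIFY_URL"
```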

Obtain shared runners token Gitlab API

My goal is to automatically register a shared GitLab runner on our self-hosted GitLab instance. To do this, I need to obtain the runner registration token via the GitLab API.
Unfortunately, I haven't found an API endpoint to fetch the shared runners token. In the web UI, the token is shown under Admin Area / Overview / Runners / Set up a shared Runner manually.
As far as I know, GitLab has 3 different types of runner tokens:
Specific (assigned to projects)
Group (assigned to a group)
Shared (for unassigned projects)
I am able to access the runners_token in the project details and the group details but I haven't found a place to obtain the shared runners_token.
I am thankful for every help!
Without an API endpoint that supports this, here's an alternative solution. The command has to be run on the server hosting your GitLab instance; the line below will output the current shared runner registration token.
sudo gitlab-rails runner -e production "puts Gitlab::CurrentSettings.current_application_settings.runners_registration_token"

Is it safe to use GitLab CI «protected» variables for secrets?

I haven't found any way to pass secret variables into GitLab CI pipelines except with so-called «protected» variables. Any other variable can be revealed by any committer, since every commit/branch goes through a pipeline and the code can be modified.
I don't like protected variables because they are too complicated. I need to grant certain people access to a variable, the way I do in SQL databases or Linux file systems. Instead, I have to create a protected variable, a protected branch, and a protected environment (a premium feature), and I have to give some users the Maintainer permission level. And then (maybe) they will be the only people able to access my secret variables.
Also, I have no idea how those variables are stored. Usually I use HashiCorp Vault, and now GitLab is the weakest security point.
Is it safe enough?
Are there more reliable methods to keep secrets in CI pipelines?
Issue 13784 refers to encryption at rest, so the security is not... optimal.
There is an epic open to improve that, and you can set up a Vault integration, but there is none by default.
Issue 61053 is about solving that: "Vault integration for key/value secrets MVC"
More and more teams are starting to store their secrets in Vault.
We should provide a secure way to fetch short-lived tokens from Vault that can be used at runtime by a job in a CI/CD pipeline.
This is for GitLab 12.3, Sept. 2019.
Just to add to VonC's answer: here is the general vision GitLab has expressed regarding secrets management and various scenarios of integrating with Vault, including fully embedding it: https://about.gitlab.com/direction/release/secrets_management/

Good strategy to share secrets between client-server builds

Take the following case as an example.
You have a RESTful API layer secured using OAuth2. To let users authenticate against your APIs, you need to request an access token (i.e. grant_type=password).
In order to request a password access token, the client app requires an OAuth client (a key + secret pair).
Now you've configured everything to use continuous integration and continuous deployment.
During a development build, the build script creates test data, including OAuth clients. Naturally, before a build creates test data, it first drops all data created during previous automated tests.
So you'll want your client app to use one of those OAuth clients, and you want to avoid hardcoding one, because they're created by the API infrastructure and are re-created from scratch on each build.
Note that the front end and the back end are built by different build scripts.
Conclusion & question
What would be a good approach to share secrets between the server and client infrastructure, so both get up and running synchronized with the same security secrets?
Some ideas
Operating system environment variables. I could store those secrets in environment variables on the build machine. That way, the client infrastructure will always be built and deployed with the most up-to-date secrets.
Same as #1, but storing those secrets in a shared directory on the build machine.
Regarding the TFS/VSTS build (TFS 2015 or later) and release (TFS 2017 or VSTS) system, you just need to check the Allow Scripts to Access OAuth Token option in the Options/General tab of the build definition or release environment; then you can fetch the OAuth access token by using $(System.AccessToken) in each task.
Regarding other systems, the better approach is to store the access token in system environment variables and remove it at the end, which is similar to sharing a variable value with other build/release tasks by using "##vso[task.setvariable variable=testvar;]testvalue" (PowerShell) in TFS or VSTS.
On the other hand, you can store an encrypted access token in the system environment for security, then decrypt it before use.
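A minimal sketch of that encrypt/decrypt idea using openssl, with a passphrase assumed to live only on the build agent (all values here are made up):

```shell
#!/bin/sh
# Store only the encrypted form in the system environment; decrypt at
# build time with a passphrase that never leaves the build agent.
PASSPHRASE="build-agent-passphrase"   # assumption: known only to the agent
ACCESS_TOKEN="s3cret-token"           # placeholder token

# Encrypt (this string is what you would persist in the environment):
ENCRYPTED=$(printf '%s' "$ACCESS_TOKEN" |
  openssl enc -aes-256-cbc -pbkdf2 -a -A -pass "pass:$PASSPHRASE")

# Decrypt at build time, just before the token is needed:
DECRYPTED=$(printf '%s' "$ENCRYPTED" |
  openssl enc -d -aes-256-cbc -pbkdf2 -a -A -pass "pass:$PASSPHRASE")
echo "$DECRYPTED"
```

This only shifts the secret from the token itself to the passphrase, so it mainly protects against casual reads of the environment, not against an attacker with full access to the agent.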
In the end, I went with the common-build-directory approach: a JSON file holding the latest credentials, accessible to both builds. Each backend build run persists a JSON file containing the full credentials, and the frontend build reads that file.
I did try the environment variables approach, but since both builds run on the same TFS build agent, the client build couldn't see changes to the environment variables unless the whole agent service was restarted.
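A minimal sketch of that shared-file handoff (the directory, file name, and JSON field names are assumptions):

```shell
#!/bin/sh
# Backend build writes the freshly created OAuth client to a JSON file in
# a common build directory; the frontend build reads the same file back.
SHARED_DIR="$(mktemp -d)"                 # stand-in for the common build directory
CREDS_FILE="$SHARED_DIR/oauth-client.json"

# Backend build step: persist the generated key/secret pair.
cat > "$CREDS_FILE" <<'EOF'
{ "client_key": "generated-key", "client_secret": "generated-secret" }
EOF

# Frontend build step: read the pair back (python3 is used here instead of
# jq so the sketch has no extra dependencies).
CLIENT_KEY=$(python3 -c \
  "import json,sys;print(json.load(open(sys.argv[1]))['client_key'])" \
  "$CREDS_FILE")
echo "$CLIENT_KEY"
```

Because the file is rewritten on every backend build, the frontend always picks up the current OAuth client as long as it builds afterwards; the file should live outside source control and be readable only by the build agent account.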
