GitLab: Use Dependency Proxy from outside a group?

Is it possible to use GitLab's Dependency Proxy for a private project not in a group?
It seems wasteful to create a single group per project (as they may not be related) just to cache container images. Besides, what if the project needs to use images from multiple groups (without being associated with multiple groups)?
I am using a Docker runner, and the docs describe a DOCKER_AUTH_CONFIG variable, but:
Setting it in .gitlab-ci.yml would expose secrets in the repo
Setting it elsewhere means hard-coding a username/password, which I'd rather avoid in favour of pre-existing variables such as CI_REGISTRY_USER and CI_JOB_TOKEN
I also thought about creating a Deploy Token in the group, but the docs also say those are only for projects within the group.
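For context, the group-based flow the docs assume looks something like the sketch below; CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX is a predefined CI variable, but it is only populated when the project actually sits inside a group with the Dependency Proxy enabled, which is exactly the limitation in question (image and tag are illustrative):

```yaml
# .gitlab-ci.yml — minimal sketch of the documented, group-based usage
build:
  # pulls alpine through the group's Dependency Proxy cache
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:3.19
  script:
    - echo "image pulled via the group-level Dependency Proxy"
```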

Related

Best practice for maintaining K8S resources in GitHub

I'm working on a cloud project where we have several development repositories in GitHub, and in each we have overlays containing config files specific to a local K8S cluster, a dev Azure cluster and a prod Azure cluster.
To have different repos for these envs, we use a repo with a kustomization file for each service that fetches the dev/test/prod overlay and uses it as its base.
However, the issue is managing these resources: we don't want to share the dev repos with potential clients or other end users who deploy these services into their own K8S environments, but not giving them permissions means they cannot fetch these overlays and bases and deploy them.
What is the best practice for keeping the dev repos protected and restricted while still allowing the deployment operation?
I know this is an abstract question, but I've never dealt with organizing repos at this scale.
To clarify, I am posting a Community Wiki answer.
The solution you suggested in the comments section:
We will have the deployments/namespaces/services manifests in the same repo as the application source code, as well as an overlay with a kustomization containing the necessary resources to fully deploy in the dev environment.
As for the test/prod environments, we created a structure that adds an overlay per app with the same resource files, but with the env details in the files to be used as ConfigMaps.
And a kustomization using the dev repository as the base. Unfortunately, this implies that the cluster admin will have access to all repos of an application.
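As a rough sketch of that structure (org, repo and paths are made up), a test/prod overlay would reference the dev repository as a remote base:

```yaml
# overlays/test/kustomization.yaml — illustrative only; the remote base points
# at the dev repository, which is why the cluster admin ends up needing access
# to all of an application's repos.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/example-org/app//deploy/overlays/dev?ref=main
configMapGenerator:
  - name: app-env-config
    literals:
      - ENVIRONMENT=test
```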

How to share state between different environments via terragrunt?

Suppose I have defined two environments, staging and production, via terragrunt. Each uses common module definitions and parametrizes them by environment. This all makes sense until I want to configure GitLab and inject things like ECR repo URLs into GitLab's CI/CD variables. I've attempted to create a common bucket in a root account and give each of the roles access to it, but there seems to be no way of overriding the remote_state setting via terragrunt.hcl.
What is the right way of doing this?
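For context, a typical root terragrunt.hcl keys each environment's state separately, which is what makes cross-environment sharing awkward (bucket name and region are illustrative):

```hcl
# terragrunt.hcl (root) — sketch of the usual per-environment state layout
remote_state {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"
    # each environment ends up with its own state key
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-west-1"
  }
}
```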

The basics of an .env file

I'm using a hosting website to host my Discord bot, and my .env file stores the token. How does it still work when the file is .gitignored? I don't want people stealing my token and using it for other purposes.
Your initial deployment process on your hosting needs to be more complex than "Pull the application from my Git repository".
For simple applications that generally just means you create the .env file on the hosting manually.
For complex systems (e.g. when you have multiple instances of the application on different servers) you'll generate it from a secure data store as part of a process that involves a deployment tool like Terraform.
You use .gitignore and add the .env file to it to make sure it does not get pushed to the remote repository on GitHub, so that no one can access those variables. To make the .env variables available on a hosting website, you need to add the environment variables externally on that hosting site. The method depends entirely on the service provider.
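A minimal (made-up) example of the two pieces involved:

```
# .gitignore — keeps the token out of the repository
.env

# .env — created by hand on the hosting provider, never committed
DISCORD_TOKEN=replace-with-your-real-token
```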

Keeping ASP.NET Core config out of your source and your pipelines

I'm working on an ASP.NET Core project and I'm trying to figure out how to keep my source and my pipelines 100% secret-free.
I've got a VM running the Azure agent and Azure DevOps pipelines for build and release.
If I delete the site on the VM, the release pipeline will auto-magically recreate it for me and deploy the latest build.
Super cool.
Now I read up on best practices for configuring a .NET Core app and found this article: https://www.humankode.com/asp-net-core/asp-net-core-configuration-best-practices-for-keeping-secrets-out-of-source-control
So it's a bad idea to keep secrets in code; that makes perfect sense.
But if I apply the same security principles to YAML, then surely I shouldn't place secrets in my pipelines either.
But I need the pipelines to be able to recreate the site from scratch, and it should just work. Somehow the site needs to know where its default SQL connection is, or it needs to have a key to the Azure App Configuration service. I shouldn't have to log onto the VM and create an appsettings.json manually after every release!
So whatever the site needs to operate has to be included in the pipeline (as some artifact) or in the code.
I've googled for days, but I can't seem to find any info on how to fully automate this.
I've considered creating a custom configuration provider that reads from the actual VM registry, but that feels wrong too.
I basically need a config option that is NOT hosted in the site itself, so I set it up once on the VM and never again.
The approach that Lex Li lists in the comments is the Microsoft-recommended way of securing "secrets" in pipelines.
Ben Smith's answer is, in my opinion, just as good, maybe slightly less secure.
I use this approach in our organization. All of our release pipelines do the final configuration transformation with the appropriate settings based on the environment they are being deployed to.
i.e. DB connections are transformed at the dev, test, UAT and production deployment stages.
I keep the relevant secrets in the pipeline variables as protected secrets. I do this for two reasons:
Only a select number of trusted personnel have access to the release pipeline definitions.
Even if someone does have access to those definitions, they cannot see a secured variable. Even if you "undo the padlock" on the Variables tab, you cannot see what the setting is.
Our actual secrets are then stored in our enterprise secret vault.
Using Azure Key Vault is definitely a good approach. However, we already have a centralized place to keep our stuff; I don't want it in three spots.
I would be remiss not to include Variable Groups as part of the pipeline process. Same concept as the build/release variables; the difference is you can now share them in one spot.
These are all opinions, of course. This is just one way of doing it, which I feel strikes a pretty good balance of security and flexibility.
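For illustration, linking a variable group into pipeline YAML looks like this (the group name is made up and would be defined under Pipelines > Library):

```yaml
# azure-pipelines.yml — sketch of mixing a shared variable group with
# ordinary pipeline variables
variables:
  - group: shared-release-secrets
  - name: buildConfiguration
    value: Release
```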
In addition to the suggestions in your question's comments, you can also store secrets in the pipeline "Variables" section.
In here you can add variables and then mark them as secret by selecting "Keep this value secret". Once you've saved a secret, its value is obfuscated, i.e. you can make use of it but you can no longer see its original value within Azure DevOps (which admittedly can be rather frustrating if you want to revisit the variable to check it!).
You can then reference the secret variable in your pipeline YAML using the syntax:
$(variable-name)
So this approach keeps secrets safe within Azure DevOps until they need to be resolved by the pipeline YAML script.
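One caveat worth sketching: secret variables are not mapped into script environments automatically, so they are typically passed in explicitly (the variable and script names here are illustrative):

```yaml
# azure-pipelines.yml — sketch; SqlConnectionString is assumed to be defined
# as a secret variable in the pipeline's Variables section
steps:
  - script: ./configure-site.sh
    env:
      # secrets must be mapped explicitly to reach the script
      SQL_CONNECTION: $(SqlConnectionString)
```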

Restrict access to GitLab container registry

I want to be able to let users push a Dockerfile along with code to GitLab and have GitLab build the image, which can then be pulled by authenticated users of the project.
The problem is, I want to make sure users don't push Docker images directly to the GitLab container registry, so that we can review the Dockerfiles, keep control, and make sure the Dockerfiles only pull from the Red Hat registry.
How can we prevent users from pushing their own built images to GitLab?
In other words, how can we make sure that the Docker image in the container registry of a GitLab project is the one built by GitLab from the Dockerfile, and not one pushed by the project users directly from somewhere else?
Deploy tokens are probably the best way forward. You can grant these on a per-repository or per-group basis and specify granular access such as, for your use case, read_registry, as well as an optional expiry date.
Another option is to use personal access tokens. These are set globally for a user, and you can create as many as you like (e.g. one for each client), set an expiry date, and restrict access to read_registry.
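For example, a client holding a deploy token with read_registry could then pull with nothing more than (host, image and credentials are placeholders):

```
docker login registry.gitlab.com -u <deploy-token-username> -p <deploy-token>
docker pull registry.gitlab.com/<group>/<project>/<image>:<tag>
```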
I don't think it's currently possible. If you check GitLab's permissions model, you'll see that user access levels determine what you can do in the container registry:
read rights are available as Reporter+
update rights are available as Developer+
If your users are Developers, then they will be able to push images to the registry. If you want to limit that to GitLab CI builds, you'd need to use protected branches and limit your users to the Reporter access level (probably not what you want).
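A minimal sketch of that CI-built-only flow, assuming Docker-in-Docker is available on the runner (image versions are illustrative):

```yaml
# .gitlab-ci.yml — sketch; the rules clause restricts the push to protected
# branches, so only reviewed commits produce registry images
build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  rules:
    - if: '$CI_COMMIT_REF_PROTECTED == "true"'
```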
An alternative, a bit convoluted, would be to set up a second project that is used as the source for images, and configure its build to pull from the first project's protected branch. Commits to the protected branch in the first project would always have to be reviewed, and Docker images would be pulled from the second project.