Suppose I have defined two environments, staging and production, via Terragrunt. Each uses common module definitions and parametrizes them by environment. This all makes sense until I want to configure GitLab and inject things like ECR repo URLs into GitLab's CI/CD variables. I've attempted to create a common bucket in a root account and grant each of the roles access to it, but there seems to be no way of overriding the remote_state setting via terragrunt.hcl.
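Roughly what I'm trying to achieve, as a sketch (the bucket name, state key, and variables are illustrative, and this assumes the GitLab Terraform provider is configured):

```hcl
# Read the ECR outputs from the shared state bucket in the root account:
data "terraform_remote_state" "ecr" {
  backend = "s3"
  config = {
    bucket = "common-terraform-state" # illustrative bucket name
    key    = "ecr/terraform.tfstate"
    region = "us-east-1"
  }
}

# Inject the repo URL into GitLab as a CI/CD variable:
resource "gitlab_project_variable" "ecr_repo_url" {
  project = var.gitlab_project_id
  key     = "ECR_REPO_URL"
  value   = data.terraform_remote_state.ecr.outputs.repository_url
}
```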
What is the right way of doing this?
Related
As I'm pretty new to Terraform, I'm not sure whether it's suitable for the problem I'd like to solve. Here is the scenario...
We are using Keycloak as our Identity & Access Management tool. Currently, we're running dedicated Keycloak instances in different environments:
local (on each developer's machine)
dev
preproduction
production
The configuration is done completely via the web UI which is, no surprise, cumbersome and error-prone.
Example
For adding a Keycloak client or adding roles, the workflow is similar to the following:
A developer makes the configuration changes on her local Keycloak instance.
If things are working, the same configuration needs to be applied to the dev instance.
Then on the preproduction instance...
Finally, on the production instance.
I was able to create a basic Terraform main.tf which successfully performs all the configuration on my local machine. But thinking this through further, I ran into some difficulties...
The above workflow is not "cloud-centric"; that is, the goal is not to apply the same Keycloak configuration to different environments, but rather to apply different Keycloak configurations, depending on their stage, to a dedicated environment. For example, a Keycloak role app_admin may exist in the dev stage but not yet in the preprod and prod stages.
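What I have so far looks roughly like the sketch below, using the community Keycloak provider (the realm, client, and per-stage toggle variable are illustrative):

```hcl
provider "keycloak" {
  client_id = "admin-cli"
  url       = var.keycloak_url
  username  = var.keycloak_admin_user     # password grant for admin-cli
  password  = var.keycloak_admin_password
}

resource "keycloak_realm" "main" {
  realm = "example" # illustrative realm name
}

resource "keycloak_openid_client" "app" {
  realm_id    = keycloak_realm.main.id
  client_id   = "app"
  access_type = "CONFIDENTIAL"
}

# A role that exists only in some stages, toggled per environment:
resource "keycloak_role" "app_admin" {
  count    = var.enable_app_admin ? 1 : 0 # e.g. true in dev, false in preprod/prod
  realm_id = keycloak_realm.main.id
  name     = "app_admin"
}
```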
The most basic question:
Is Terraform a suitable tool to cover the above workflow?
I am using Bitbucket Pipelines to run test cases. In order for the test cases to succeed, I need secrets that are stored in Google Secret Manager. Is there any way I can access those secrets within the Bitbucket Pipelines environment?
There are a couple of options.
If these secrets are static, the easiest solution would be adding them to your Repository or Deployment variables. Make sure that they're marked as Secured, so that they will be masked, i.e., hidden, in the logs.
Alternatively, if your secrets are rotated and must be fetched from the secrets manager on every build in order to stay up to date, you'll need to use the corresponding CLI commands in the build script. For this to work, you will have to give Bitbucket Pipelines access to the secrets in your cloud. For details, check out, for example, this page.
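For the second option, a minimal sketch of a bitbucket-pipelines.yml, assuming a base64-encoded service-account key is stored in a secured repository variable GCLOUD_KEY_FILE and the secret is named db-password (both names are illustrative):

```yaml
image: google/cloud-sdk:slim

pipelines:
  default:
    - step:
        name: Run tests
        script:
          # Authenticate with the service account stored in the secured variable:
          - echo "$GCLOUD_KEY_FILE" | base64 -d > /tmp/key.json
          - gcloud auth activate-service-account --key-file=/tmp/key.json
          # Fetch the secret on every build so rotations are picked up:
          - export DB_PASSWORD=$(gcloud secrets versions access latest --secret=db-password)
          - ./run_tests.sh # illustrative test command
```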
I followed this article to learn about staging environments https://learn.microsoft.com/en-us/azure/app-service/deploy-staging-slots. What are the code changes required in order to set up staging environments in Azure App Service?
What makes you think code changes are required? A staging slot is, in essence, just a deployment target.
That said, the one thing I can think of is making sure your application does not contain hardcoded configuration that is different between test and prod environments.
If the ARM template is well parameterized, it won't need any changes either. The build pipeline should be able to deploy to multiple environments (dev/test, prod, etc.) by supplying different parameters to the template.
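As a sketch, the same build artifact can go to a slot and then be swapped into production without touching application code (resource group, app, and slot names are illustrative):

```bash
# Create a staging slot, deploy the packaged build to it, then swap:
az webapp deployment slot create --resource-group my-rg --name my-app --slot staging
az webapp deploy --resource-group my-rg --name my-app --slot staging --src-path app.zip
az webapp deployment slot swap --resource-group my-rg --name my-app --slot staging --target-slot production
```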
I'm working on a cloud project where we have several development repositories in GitHub, and in each we have the overlays containing config files specific to a local K8s cluster, a dev Azure cluster, and a prod Azure cluster.
In order to have different repos for these envs, we use a repo with a kustomization file for each service that fetches the overlay of the dev/test/prod repo and uses it as its base.
However, the issue is managing these resources: we don't want to share the dev repos with potential clients or other end users who would deploy these services into their K8s environment, but not giving them permissions means they will not be able to fetch these overlays and bases and deploy them.
What is the best practice for having protected and restrictive dev repos while still being able to perform the deployment?
I know this is an abstract question, but I've never dealt with organizing repos at this scale.
To clarify, I am posting a Community Wiki answer.
The solution you suggested in the comments section:
We will have the deployments/namespaces/services manifests in the same repo as the application source code, as well as an overlay with a kustomization containing the necessary resources to fully deploy in the dev environment.
As for the test/prod environments, we created a structure that adds an overlay per app with the same resource files but with the env details in the files, to be used as ConfigMaps.
And a kustomization using the dev repository as the base. Unfortunately, this implies that the cluster admin will have access to all repos of an application.
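A minimal sketch of such a per-app overlay kustomization, assuming an illustrative repository URL and path for the dev base:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Base fetched from the (private) dev repository:
resources:
  - https://github.com/example-org/app-service.git//deploy/overlays/dev?ref=main

# Environment-specific details injected as a ConfigMap:
configMapGenerator:
  - name: app-config
    envs:
      - prod.env
```

Fetching the remote base is what requires credentials for the dev repo, which is the trade-off noted above.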
Is it possible to use GitLab's Dependency Proxy for a private project that is not in a group?
It seems wasteful to create a single group per project (as they may not be related) just to cache container images. Besides, what if the project needs to use images from multiple groups (without being associated with multiple groups)?
I am using a Docker runner, and the docs describe a DOCKER_AUTH_CONFIG variable, but:
Setting it in .gitlab-ci.yml would expose secrets in the repo
Setting it elsewhere means hard-coding a username/password, which I'd rather avoid in favor of pre-existing variables such as CI_REGISTRY_USER and CI_JOB_TOKEN
I also thought about creating a Deploy Token in the group, but the docs also say those are only for projects within the group.
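For reference, the group-based usage I'd like to replicate looks roughly like this in .gitlab-ci.yml (the image tag is illustrative; CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX is the predefined variable GitLab provides for groups):

```yaml
# Pull the base image through the group's Dependency Proxy cache:
image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:3.19

test:
  script:
    - echo "image pulled via the dependency proxy"
```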