Argo CD and Kustomize

Hi all. I'm using Argo CD v1.6.1 and am trying to deploy an application using Kustomize. Argo CD doesn't seem to recognize my Kustomize manifest files. Looking at the Kustomize documentation on the Argo CD page, it looks like it only supports the following Kustomize options:
namePrefix is a prefix appended to resources for Kustomize apps
nameSuffix is a suffix appended to resources for Kustomize apps
images is a list of Kustomize image overrides
commonLabels is a string map of additional labels
Are these the only things I'm going to be able to manipulate in my base manifest files using Kustomize? I was hoping I'd be able to use the patchesStrategicMerge option with the overlay files I've got, which let me manipulate anything in the base.yaml files. Argo CD doesn't seem to recognize kind: Kustomization and apiVersion: kustomize.config.k8s.io/v1beta1
Thank you.

ArgoCD's main task is to deploy manifests, and Kustomize is the right place for any more complex edits. It sounds like you already have an overlays structure in your kustomize application, so the missing piece may be pointing your Argo Application at the correct overlay.
Assuming you have a repo with the following structure:
repo
|_ app
|_ kustomize
   |_ base
   |  |_ resource.yml
   |  |_ kustomization.yml
   |_ overlays
      |_ prod
         |_ patch.yml
         |_ kustomization.yml
Then you would want your Argo Application to have:
source:
  repoURL: <REPO_URL>
  targetRevision: <REVISION>
  path: kustomize/overlays/prod
This would mean it is using your overlays kustomization file, which should pull in your base kustomization file and the patches.
The additional fields you have mentioned act like an extra overlay, and they are not recommended for more complex actions like a strategic merge.
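For reference, a minimal sketch of what overlays/prod/kustomization.yml could look like with a strategic-merge patch (the base path and patch file name follow the layout above; the actual contents depend on your app):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - patch.yml

Argo CD renders the application by running kustomize build against the configured path, so anything your kustomize version supports in these files should work, including strategic-merge patches.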

Related

Modify the default created README.md in GitLab when a new project is created

I have a locally hosted GitLab omnibus instance running in a docker container.
When a project is created through the web front end, it creates a default README.md from somewhere. I'd like to modify that default file to better suit our project workflow and enable the engineer to fill out the necessaries about their project with some consistency.
The files you are looking for are here:
embedded/service/gitlab-rails/app/views/projects/readme_templates/default.md.tt
embedded/service/gitlab-rails/app/experiments/templates/new_project_readme_content/readme_advanced.md.tt
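If you want to edit them in a running omnibus container, one option is docker cp (a sketch: the container name "gitlab" is illustrative, the paths above normally sit under /opt/gitlab in omnibus installs, and the change will not survive an image upgrade):

# copy the template out, edit it locally, then copy it back
docker cp gitlab:/opt/gitlab/embedded/service/gitlab-rails/app/views/projects/readme_templates/default.md.tt ./default.md.tt
docker cp ./default.md.tt gitlab:/opt/gitlab/embedded/service/gitlab-rails/app/views/projects/readme_templates/default.md.tt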

How can I use terraform to set up in multiple regions?

I have a terraform configuration file without modules and it is hosted in production.
Currently I am trying to deploy it in another region using provider aliases and modules. But when I plan the same, it says that I need to destroy my previous configuration and recreate it via modules.
When I modularise the files, are they treated by terraform as new resources? I have around 35 resources; it says 35 to destroy and 35 to create. In the process of modularising I removed the .terraform folder under the root and initialised it in the main module.
Is this the expected behaviour?
Yes, it is the expected behaviour, because the resources now have different Terraform identifiers. When Terraform runs plan or apply, it looks in the state file and cannot find the new identifiers listed there.
You can solve this situation using the terraform state mv command. I have used it successfully, but it is a tedious task and mistakes are easy to make. I recommend making additional backups of your state file via the -backup-out option and checking the plan after each run.
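A sketch of what one such move looks like (the resource address and module name are illustrative, not taken from the question):

# move an existing resource under its new module address, keeping a backup of the state
terraform state mv -backup-out=pre-move.tfstate aws_instance.web module.app.aws_instance.web
# verify that nothing is scheduled for destroy/create any more
terraform plan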
I would suggest that you look into Terragrunt to make your code more DRY. Once you've integrated with Terragrunt you can start using a hierarchy like this:
--root
    terragrunt.hcl
    --dev_account
        account.hcl
        --us-east-2
            region.hcl
            --services
        --us-east-1
        --us-west-2
    --uat_account
        --us-east-2
            --services
        --us-east-1
        --us-west-2
    --prod_account
        --us-east-2
            --services
        --us-east-1
        --us-west-2
Terragrunt will allow you to have one module, and with a hierarchy similar to this you'll be able to parse the account.hcl and region.hcl files to deploy to different regions without having to re-enter this information.
Terragrunt allows you to run an apply-all if you choose to apply all child terraform configurations, though I wouldn't suggest this for production.
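As a sketch of how that parsing looks in a terragrunt.hcl (the bucket naming and the variable names inside account.hcl/region.hcl are illustrative):

locals {
  # pull in the account- and region-level settings defined higher up the tree
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars  = read_terragrunt_config(find_in_parent_folders("region.hcl"))
}

remote_state {
  backend = "s3"
  config = {
    bucket = "tfstate-${local.account_vars.locals.account_name}"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = local.region_vars.locals.aws_region
  }
}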

Project dependency configuration file generation via Helm with Azure Dev Spaces and Kubernetes

The projects below seem to use Helm to define project dependencies in the requirements.yaml file below, plus an azds.yaml file and a chart folder for each project.
https://github.com/Azure/dev-spaces/blob/master/samples/BikeSharingApp/charts/requirements.yaml
https://github.com/Azure/dev-spaces/tree/master/samples/BikeSharingApp/charts
My questions are:
1. Is my understanding above correct?
2. Is the whole charts folder above generated manually or via a tool?
https://helm.sh/docs/developing_charts/
https://learn.microsoft.com/en-us/azure/dev-spaces/quickstart-team-development
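For context, a Helm v2-era requirements.yaml just declares chart dependencies, which can be written by hand or resolved with helm dependency update; a minimal sketch (the name and repository below are illustrative, not taken from the BikeSharingApp sample):

dependencies:
  - name: some-service
    version: "0.1.0"
    repository: "file://../some-service"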

Different environments for Terraform (Hashicorp)

I've been using Terraform to build my AWS stack and have been enjoying it. If it was to be used in a commercial setting the configuration would need to be reused for different environments (e.g. QA, STAGING, PROD).
How would I be able to achieve this? Would I need to create a wrapper script that makes calls to terraform's cli while passing in different state files per environment like below? I'm wondering if there's a more native solution provided by Terraform.
terraform apply -state=qa.tfstate
I suggest you take a look at the hashicorp best-practices repo, which has quite a nice setup for dealing with different environments (similar to what James Woolfenden suggested).
We're using a similar setup, and it works quite nicely. However, this best-practices repo assumes you're using Atlas, which we're not. We've created quite an elaborate Rakefile, which basically (going by the best-practices repo again) gets all the subfolders of /terraform/providers/aws, and exposes them as different builds using namespaces. So our rake -T output would list the following tasks:
us_east_1_prod:init
us_east_1_prod:plan
us_east_1_prod:apply
us_east_1_staging:init
us_east_1_staging:plan
us_east_1_staging:apply
This separation prevents changes which might be exclusive to dev from accidentally affecting (or worse, destroying) something in prod, as it's a different state file. It also allows testing a change in dev/staging before actually applying it to prod.
Also, I recently stumbled upon this little write up, which basically shows what might happen if you keep everything together:
https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/
Paul's solution with modules is the right idea. However, I would strongly recommend against defining all of your environments (e.g. QA, staging, production) in the same Terraform file. If you do, then whenever you're making a change to staging, you risk accidentally breaking production too, which partially defeats the point of keeping those environments isolated in the first place! See Terraform, VPC, and why you want a tfstate file per env for a colorful discussion of what can go wrong.
I always recommend storing the Terraform code for each environment in a separate folder. In fact, you may even want to store the Terraform code for each "component" (e.g. a database, a VPC, a single app) in separate folders. Again, the reason is isolation: when making changes to a single app (which you might do 10 times per day), you don't want to put your entire VPC at risk (which you probably never change).
Therefore, my typical file layout looks something like this:
stage
  └ vpc
      └ main.tf
      └ vars.tf
      └ outputs.tf
  └ app
  └ db
prod
  └ vpc
  └ app
  └ db
global
  └ s3
  └ iam
For more info, check out How to manage Terraform state. For a deeper look at Terraform best practices, check out the book Terraform: Up & Running.
Please note that from version 0.10.0, Terraform supports the concept of Workspaces (called environments in 0.9.x).
A workspace is a named container for Terraform state. With multiple workspaces, a single directory of Terraform configuration can be used to manage multiple distinct sets of infrastructure resources.
See more info here: https://www.terraform.io/docs/state/workspaces.html
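A sketch of how a single configuration can branch on the selected workspace (Terraform 0.12+ syntax; the resource and sizes are illustrative):

# terraform.workspace resolves to the currently selected workspace name
resource "aws_instance" "app" {
  count         = terraform.workspace == "prod" ? 3 : 1
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
}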
As you scale up your terraform usage, you will need to share state (between devs, build processes and different projects), support multiple environments and regions.
For this you need to use remote state.
Before you execute your terraform, you need to set up your state.
(I'm using PowerShell)
$environment = "devtestexample"
$region = "eu-west-1"
$remote_state_bucket = "${environment}-terraform-state"
$bucket_key = "yoursharedobject.$region.tfstate"

aws s3 ls "s3://$remote_state_bucket" | out-null
if ($lastexitcode)
{
    aws s3 mb "s3://$remote_state_bucket"
}

terraform remote config -backend S3 -backend-config="bucket=$remote_state_bucket" -backend-config="key=$bucket_key" -backend-config="region=$region"
# (see here: https://www.terraform.io/docs/commands/remote-config.html)
terraform apply -var="environment=$environment" -var="region=$region"
Your state is now stored in S3, by region, by environment, and you can then access this state in other tf projects.
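(Note that terraform remote config was replaced by backend configuration in Terraform 0.9; on current versions the equivalent is a backend block plus terraform init. A sketch reusing the names above:)

terraform {
  backend "s3" {
    bucket = "devtestexample-terraform-state"
    key    = "yoursharedobject.eu-west-1.tfstate"
    region = "eu-west-1"
  }
}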
There is absolutely no need to have separate codebases for dev and prod environments. Best practice dictates (I mean DRY) that you are actually better off having one code base and simply parametrizing it, as you would normally do when developing software - you DON'T have separate folders for the development version of the application and for the production version. You only need to ensure the right deployment scheme. The same goes for terraform. Consider this "Hello world" idea:
terraform-project
├── etc
│   ├── backend
│   │   ├── dev.conf
│   │   └── prod.conf
│   └── tfvars
│       ├── dev.tfvars
│       └── prod.tfvars
└── src
    └── main.tf
contents of etc/backend/dev.conf
storage_account_name = "tfremotestates"
container_name = "tf-state.dev"
key = "terraform.tfstate"
access_key = "****"
contents of etc/backend/prod.conf
storage_account_name = "tfremotestates"
container_name = "tf-state.prod"
key = "terraform.tfstate"
access_key = "****"
contents of etc/tfvars/dev.tfvars
environment = "dev"
contents of etc/tfvars/prod.tfvars
environment = "prod"
contents of src/main.tf
variable "environment" {
  type = string
}

terraform {
  backend "azurerm" {
  }
}

provider "azurerm" {
  version = "~> 2.56.0"
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-${var.environment}"
  location = "eastus"
}
Now you only have to pass the appropriate values to the CLI invocation, e.g.:
export ENVIRONMENT=dev
terraform init -backend-config=etc/backend/${ENVIRONMENT}.conf
terraform apply -var-file=etc/tfvars/${ENVIRONMENT}.tfvars
This way:
we have separate state files for each environment (so they can even be deployed in different subscriptions/accounts)
we have the same code base, so we are sure the differences between dev and prod are small, and we can rely on dev for testing purposes before going live
we follow the DRY principle
we follow the KISS principle
no need to use the obscure "workspaces" interface!
Of course, in order for this to be fully secure you should incorporate some kind of git flow and code review, perhaps some static or integration testing, and an automatic deployment process. But I consider this solution the best approach to having multiple terraform environments without duplicating code, and it has worked for us very nicely for a couple of years now.
No need to make a wrapper script. What we do is split our env into a module and then have a top level terraform file where we just import that module for each environment. As long as you have your module set up to take enough variables, generally env_name and a few others, you're good. As an example:
# project/main.tf
module "dev" {
  source          = "./env"
  env             = "dev"
  aws_ssh_keyname = "dev_ssh"
}

module "stage" {
  source          = "./env"
  env             = "stage"
  aws_ssh_keyname = "stage_ssh"
}
# Then in project/env/main.tf
# All the resources would be defined in here
# along with variables for env and aws_ssh_keyname, etc.
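The env module then just needs matching variable declarations; a minimal sketch (names follow the example above):

# project/env/variables.tf
variable "env" {
  type = string
}

variable "aws_ssh_keyname" {
  type = string
}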
Edit 2020/03/01
This answer is pretty old at this point, but it's worth updating. The critique that dev and stage sharing the same state file is bad is a matter of perspective. For the exact code provided above it's completely valid, because dev and stage are sharing the same code as well. Thus "breaking dev will wreck your stage" is correct. The critical thing that I didn't note when writing this answer was that source = "./env" could also be written as source = "git::https://example.com/network.git//modules/vpc?ref=v1.2.0"
Doing that makes your entire repo become something of a submodule to the TF scripts, allowing you to split out one branch as your QA branch and then tagged references as your Production envs. That obviates the problem of wrecking your staging env with a change to dev.
Next, state file sharing. I say that's a matter of perspective because with one single run it's possible to update all your environments. In a small company that time saving when promoting changes can be useful; some trickery with --target is usually enough to speed up the process if you're careful, if that's even really needed. We found it less error prone to manage everything from one place and one terraform run, rather than having multiple different configurations possibly being applied slightly differently across the environments. Having them all in one state file forced us to be more disciplined about what really needed to be a variable vs. what was just overkill for our purposes. It also very strongly prevented us from allowing our environments to drift too far apart from each other. When your terraform plan output shows 2k lines, and the differences are mainly because dev and stage look nothing like prod, the frustration factor alone encouraged our team to bring that back to sanity.
A very strong counter argument to that is if you're in a large company where various compliance rules prevent you from touching dev / stage / prod at the same time. In that scenario it's better to split up your state files, just make sure that how you're running terraform apply is scripted. Otherwise you run the very real risk of those state files drifting apart when someone says "Oh I just need to --target just this one thing in staging. We'll fix it next sprint, promise." I've seen that spiral quickly multiple times now, making any kind of comparison between the environments questionable at best.
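For reference, the --target trickery mentioned above looks like this with the module layout from the example (applying only one environment's module while leaving the rest of the shared state untouched):

terraform apply -target=module.stage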
From terraform version 0.10+ there is a way to maintain the state file using the workspace command:
$ terraform workspace list                      # list all existing workspaces
$ terraform workspace new <workspace_name>      # create a workspace
$ terraform workspace select <workspace_name>   # select a workspace
$ terraform workspace delete <workspace_name>   # delete a workspace
The first thing you need to do is create a workspace for each of your environments:
$ terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
$ terraform workspace new test
Created and switched to workspace "test"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
$ terraform workspace new stage
Created and switched to workspace "stage"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
With the local backend, a terraform.tfstate.d directory will be created. Under it you will see 3 directories - dev, test, stage - and each maintains the state file for its workspace.
All you now need to do is move the environment variable files into another folder, keeping only one variable file in scope for each execution of terraform plan / terraform apply:
main.tf
dev_variable.tfvar
output.tf
Remember to switch to the right workspace to use the correct environment's state file:
$ terraform workspace select test
main.tf
test_variable.tfvar
output.tf
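Putting it together, a run for the test environment might look like this (file names from the listing above):

$ terraform workspace select test
$ terraform plan -var-file=test_variable.tfvar
$ terraform apply -var-file=test_variable.tfvar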
Ref : https://dzone.com/articles/manage-multiple-environments-with-terraform-worksp
There are plenty of good answers in this thread. Let me contribute as well with an idea that worked for me and some other teams.
The idea is to have a single "umbrella" project that contains the whole infrastructure code.
Each environment's terraform file includes just a single module - the "main" module.
The "main" module then includes resources and other modules:
- terraform_project
  - env
    - dev01          <-- Terraform home, run from here
      - .terraform   <-- git ignored of course
      - dev01.tf     <-- backend, env config, includes always _only_ the main module
    - dev02
      - .terraform
      - dev02.tf
    - stg01
      - .terraform
      - stg01.tf
    - prd01
      - .terraform
      - prd01.tf
  - main             <-- main umbrella module
    - main.tf
    - variables.tf
  - modules          <-- submodules of the main module
    - module_a
    - module_b
    - module_c
And a sample environment home file (e.g. dev01.tf) will look like this:
provider "azurerm" {
  version = "~>1.42.0"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tradelens-host-rg"
    storage_account_name = "stterraformstate001"
    container_name       = "terraformstate"
    key                  = "dev.terraform.terraformstate"
  }
}

module "main" {
  source                  = "../../main"
  subscription_id         = "000000000-0000-0000-0000-00000000000"
  project_name            = "tlens"
  environment_name        = "dev"
  resource_group_name     = "tradelens-main-dev"
  tenant_id               = "790fd69f-41a3-4b51-8a42-685767c7d8zz"
  location                = "West Europe"
  developers_object_id    = "58968a05-dc52-4b69-a7df-ff99f01e12zz"
  terraform_sp_app_id     = "8afb2166-9168-4919-ba27-6f3f9dfad3ff"
  kubernetes_version      = "1.14.8"
  kuberenetes_vm_size     = "Standard_B2ms"
  kuberenetes_nodes_count = 4
  enable_ddos_protection  = false
  enable_waf              = false
}
Thanks to that you:
can have separate backends for Terraform remote state files per environment
can use separate system accounts for different environments
can use different versions of providers and Terraform itself per environment (and upgrade them one by one)
can ensure that all required properties are provided per environment (terraform validate won't pass if an environmental property is missing; see the sketch below)
can ensure that all resources/modules are always added to all environments - it is not possible to "forget" about a whole module, because there is just one
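A sketch of why the validation point holds: if the main module declares its variables without defaults, every environment file's module "main" block must pass a value, or terraform validate fails with a missing-argument error (variable names follow the example above):

# main/variables.tf - no defaults, so every environment must set these explicitly
variable "environment_name" {
  type = string
}

variable "kubernetes_version" {
  type = string
}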
Check the source blog post for more details.
Hashicorp recommends different state files and folders, like this:
├── assets
│   └── index.html
├── prod
│   ├── main.tf
│   ├── variables.tf
│   ├── terraform.tfstate
│   └── terraform.tfvars
└── dev
    ├── main.tf
    ├── variables.tf
    ├── terraform.tfstate
    └── terraform.tfvars
There's even documentation on how to refactor a monolithic configuration to support multiple environments according to their best practices. Check it out here:
https://learn.hashicorp.com/tutorials/terraform/organize-configuration#variables-tf
Example: running multiple environments from one configuration
├── env_vars
│   └── qa.tfvars
├── main.tf
├── outputs.tf
├── terraform.tfstate
├── terraform.tfstate.backup
├── _tools
│   └── apply.sh
└── variables.tf
You can then run:
#!/bin/bash
echo "Enter your environment (qa, dev, stage or prod)"
read environment
rm -Rf .terraform/
# note: terraform init takes backend settings via -backend-config, not -var-file
terraform init #-backend-config="key=$environment/$environment.tf" -backend-config="bucket=<bucket_name>"
terraform apply -var-file=env_vars/$environment.tfvars

Creating multiple SVN repos on a single SVN server

I've got an SVN server set up for Project A, which is all good.
I now need to set up another repository on the same server for Project B.
Any guidelines please?
Creating a sub-folder as Arpit suggests is definitely not the solution you're looking for, since both projects would be part of a single SVN repository and hence share revision numbers.
It is definitely possible to have several repositories in a single server, however there are many ways to do it depending on your setup.
First you might want to study these documents to get your concepts straight.
http://svnbook.red-bean.com/en/1.7/svn-book.pdf
http://tortoisesvn.net/support.html
In a Windows Setup, for example, VisualSVN, through its managing tool, VisualSVN Server Manager, allows you to easily create and manage repositories in a straightforward way.
It is important that you understand the difference between repository as a general concept and repository as a technical term in SVN. The cited documentation will assist you in this.
If you are using Apache, you can easily support multiple repositories with the SVNParentPath configuration. See
http://svnbook.red-bean.com/en/1.5/svn.serverconfig.httpd.html#svn.serverconfig.httpd.basic for details.
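A sketch of what that looks like, together with creating the second repository (the filesystem path is illustrative):

# httpd.conf: serve every repository found under one parent directory
<Location /repos>
  DAV svn
  SVNParentPath /var/svn/repositories
  SVNListParentPath on
</Location>

# each new project is then simply a new repository under that parent
svnadmin create /var/svn/repositories/ProjectB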
You have installed a server and set up a project in it, I believe in the following manner:
http://svn.example.com/repos/
|
|- ProjectA
   |
   |- branches
   |- tags
   |- trunk
To add another repository in the same server, just go to the root directory and create another folder structure like this:
http://svn.example.com/repos/
|
|- ProjectA
|  |
|  |- branches
|  |- tags
|  |- trunk
|
|- ProjectB
   |
   |- branches
   |- tags
   |- trunk
Simple! There is no other configuration involved and you can start using them as separate repositories. This is called a Multi-Repository Layout.
NOTE: The revision number will be shared between both of these, i.e., the revision number is for the complete tree, and there will be a unified revision number for both projects.
