I need to create a setup with Terraform and AWS that has the traditional Dev, Stage, and Prod environments. I also need to allow the creation of custom copies of individual services in those environments for developers to work on. I will use workspaces for Dev, Stage, and Prod, but I can't figure out how to achieve these custom copies of services within an environment.
An example would be a lambda in the dev environment called "test-lambda-dev", deployed with the rest of the dev environment. A developer with the initials "de" needs to work on changes to this lambda's code and needs a custom version of it deployed alongside the regular one, so he would create a "test-lambda-dev-de".
I tried to solve this by introducing a suffix after the resource's name. The suffix would normally be empty, but someone could provide a value to signal that a custom version is needed. Terraform destroyed the regular resource and replaced it with the custom one, so it did not work as intended.
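Roughly what I tried (simplified; the handler, runtime, and role are just placeholders):

variable "lambda_role_arn" {}

variable "custom_suffix" {
  # Normally empty; a developer would set e.g. "-de" to request a custom copy.
  default = ""
}

resource "aws_lambda_function" "test_lambda" {
  # Setting the suffix changes the name of this one resource address,
  # so Terraform plans a destroy-and-replace of "test-lambda-dev"
  # instead of creating a second function alongside it.
  function_name = "test-lambda-${terraform.workspace}${var.custom_suffix}"
  filename      = "lambda.zip"
  handler       = "index.handler"
  runtime       = "nodejs6.10"
  role          = "${var.lambda_role_arn}"
}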
Is there a way to get this to work? If not, how can developers work on resources without temporarily breaking the environment?
I have a couple of apps that use the same GCP project. There are dev, stage, and prod projects, but they're basically the same apart from project IDs and project numbers. I would like to have a repo in GitLab (e.g. called config) where I keep these IDs in dev.tfvars, stage.tfvars, and prod.tfvars files. Currently each app's repo has its own config/{env}.tfvars directory, which is really repetitive.
Googling for importing or including Terraform resources just gets me results about Terraform state, so that hasn't been fruitful.
I've considered:
Using a group-level GitLab variable as a key=val env file, having my gitlab-ci YAML source the correct environment's file, and then passing what I need via -var="key=value" in my plan and apply commands.
Creating a Terraform module that uses either TF_WORKSPACE or an input variable to return the correct values (see the sketch below). I think this may be possible, but I'm new to TF, so I'm not sure how to return data from a module, or whether this kind of "side effects only" solution is an abusive workaround for something that has a better answer.
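To illustrate the second idea, an untested sketch (all names invented) of a module in the shared config repo; modules return data to the caller via outputs:

# In the shared config repo, e.g. main.tf:
variable "environment" {}

variable "project_ids" {
  type = "map"
  default = {
    dev   = "my-project-dev"
    stage = "my-project-stage"
    prod  = "my-project-prod"
  }
}

output "project_id" {
  value = "${lookup(var.project_ids, var.environment)}"
}

Each app repo could then pull it in as a module sourced from that GitLab project:

module "env" {
  source      = "git::https://gitlab.com/my-group/config.git"
  environment = "dev"
}

# Referenced elsewhere as "${module.env.project_id}".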
Is there any way to include terraform variables from another Gitlab project?
I am using a Terraform Enterprise instance to manage three workspaces that represent infrastructure for the various environments of an application (development, pre-prod, and prod have isolated infrastructure). The workspaces themselves are configured using a tfe_workspace resource.
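For reference, each workspace is created along these lines (simplified; the names and OAuth token are placeholders):

variable "oauth_token_id" {}

resource "tfe_workspace" "development" {
  name         = "app-development"
  organization = "my-org"

  vcs_repo {
    identifier     = "my-org/app-infrastructure"
    branch         = "master"
    oauth_token_id = "${var.oauth_token_id}"
  }
}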
I'm using a VCS-driven flow to create the configuration versions as I need speculative plan runs on PRs and I'm fine with automatic runs being created for master. I'm using the API to determine when to apply runs so that the staging environment can be applied and have automated tests run against it before the production workspace run is applied.
This works fairly well, except that I have been unable to use the API to apply non-default-branch configuration versions (i.e. from a PR) to the development workspace. Any run I create using a configuration version that was not created from the master branch turns into a plan-only run.
Is there a way via the Terraform Enterprise API to apply a PR configuration version?
I was able to work around this by not reusing the PR's configuration version and instead creating one of my own via the API.
I'm coming from a long SSIS background, and we're looking to use Azure Data Factory v2, but I'm struggling to find any (clear) way of working with multiple environments. In SSIS we would have project parameters tied to the Visual Studio project configuration (e.g. development/test/production etc...), and if there were two parameters, SourceServerName and DestinationServerName, these would point to different servers depending on whether we were in development or test.
From my initial playing around I can't see any way to do this in Data Factory. I've searched Google, of course, but any information I've found seems to be about CI/CD, then talks about Git 'branches', and is difficult to follow.
I'm basically looking for a very simple explanation and example of how this would be achieved in Azure Data Factory v2 (if it is even possible).
It works differently: you create an instance of Data Factory per environment, and your environments are effectively embedded in each instance.
So here's one simple approach:
Create three data factories: dev, test, prod
Create your linked services in the dev environment pointing at dev sources and targets
Create the same named linked services in test, but of course these point at your test systems
Now when you "migrate" your pipelines from dev to test, they use the same logical name (just like a connection manager)
So you don't designate an environment at execution time or map variables or anything... everything in test just runs against test because that's the way the linked services have been defined.
That's the first step.
The next step is to connect only the dev ADF instance to Git. If you're a newcomer to Git it can be daunting, but it's just a version control system: you save your code to it and it remembers every change you made.
Once your pipeline code is in git, the theory is that you migrate code out of git into higher environments in an automated fashion.
If you go through the links provided in the other answer, you'll see how you set it up.
I do have an issue with this approach though - you have to look up all of your environment values in Key Vault, which to me is silly, because why do we need to designate the test server's hostname every time we deploy to test?
One last thing: if you have a pipeline that doesn't use a linked service (say, a REST pipeline), I haven't found a way to make it environment aware. I ended up building logic around the current data factory's name to dynamically change endpoints.
This is a bit of a brain dump, but feel free to ask questions.
Although it's not recommended - yes, you can do it.
Take a look at the Linked Service - in this case, a connection to an Azure SQL Database.
You have the possibility of using dynamic content for both the server name and the database name.
Just add a parameter to your pipeline, pass it to the Linked Service, and use it in the required field.
Let me know whether I explained it clearly enough.
Yes, it's possible, although not as simple as it was in VS for SSIS.
1) First of all: there is no desktop application for developing ADF, only the browser.
Therefore developers should make their changes in the DEV environment, and for many reasons the best way to do that is with a Git repository connected.
2) Then, you "only" need to:
a) Publish the changes (this creates/updates the adf_publish branch in Git)
b) With Azure DevOps, deploy the code from adf_publish, replacing the required parameters for the target environment.
I know that at the beginning it sounds horrible, but the sooner you set up an environment like this the more time you save while developing pipelines.
How to do these things step by step?
I describe all the steps in the following posts:
- Setting up Code Repository for Azure Data Factory v2
- Deployment of Azure Data Factory with Azure DevOps
I hope this helps.
I'm evaluating GitLab Community Edition as a self-hosted version. Everything is fine with the product, except that anyone who has access to pipelines (master or admin) can run a deployment to production.
I saw their issue board, and I see that this is a feature that will not be coming to GitLab anytime soon.
See https://gitlab.com/gitlab-org/gitlab-ce/issues/20261
For now, I plan on deploying my spring boot applications using the following strategy.
A separate runner installed on the production server
There will be an install script with instructions somewhere on the production server
The gitlab-runner user will only have permissions to run the above specific script (somehow; see the sketch after this list).
Have validations in the script for the GITLAB_USER_NAME variable.
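For the "somehow" part, I'm thinking of something like a sudoers entry (illustrative; the paths and deploy user are invented):

# /etc/sudoers.d/gitlab-runner
# Allow the runner user to run only the deploy script, as the deploy user.
gitlab-runner ALL=(deploy) NOPASSWD: /opt/myapp/deploy.sh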
But I also see that there are disadvantages in this approach.
GITLAB_USER_NAME is an environment variable, which can be overridden easily, thus compromising the validation.
Things are complicated when introducing new prod servers.
Too many changes beyond .gitlab-ci.yml... CI/CD was supposed to be simple, not painful...
Are there any alternate approaches or hacks for this?
I'd like to setup Terraform to manage dev/stage/prod environments. The infrastructure is the same in all environments, but there are differences in the variables in every environment.
What does an ideal Terraform project structure look like now that workspaces have been introduced in Terraform 0.10? How do I reference the workspace when naming/tagging infrastructure?
I wouldn't recommend using workspaces (previously 'environments') for static environments because they add a fair bit of complexity and are harder to keep track of.
You could get away with using a single folder structure for all environments, use workspaces to separate the environments and then use conditional values based on the workspace to set the differences. In practice (and especially with more than 2 environments leading to nested ternary statements) you'll probably find this difficult to manage.
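For example, a workspace-based conditional looks like this, and with a third environment the nesting already gets awkward (values are illustrative):

# Two environments is manageable...
instance_type = "${terraform.workspace == "prod" ? "m4.large" : "t2.micro"}"

# ...but each additional environment nests another ternary.
instance_type = "${terraform.workspace == "prod" ? "m4.large" : terraform.workspace == "preprod" ? "t2.medium" : "t2.micro"}"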
Instead I'd still advocate for separate folders for every static environment and using symlinks to keep all your .tf files the same across all environments and a terraform.tfvars file to provide any differences at each environment.
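One possible layout for that (a sketch):

infrastructure/
  main.tf                    # the shared configuration
  dev/
    main.tf -> ../main.tf    # symlink
    terraform.tfvars         # dev-specific values
  stage/
    main.tf -> ../main.tf
    terraform.tfvars
  prod/
    main.tf -> ../main.tf
    terraform.tfvars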
I would recommend workspaces for dynamic environments such as short lived review/lab environments as this allows for a lot of flexibility. I'm currently using them to create review environments in Gitlab CI so every branch can have an optionally deployed review environment that can be used for manual integration or exploratory testing.
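The core of the CI job for those review environments is just a select-or-create on the workspace, assuming GitLab's CI_COMMIT_REF_SLUG variable:

terraform workspace select review-$CI_COMMIT_REF_SLUG || terraform workspace new review-$CI_COMMIT_REF_SLUG
terraform apply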
In the old world you might have passed in the var 'environment' when running terraform, which you would interpolate in your .tf files as "${var.environment}".
When using workspaces, there is no need to pass in an environment variable, you just make sure you are in the correct workspace and then interpolate inside your .tf files with "${terraform.workspace}"
As for how you'd manage all of the variables, I'd recommend using a map variable, like so:
variable "vpc_cidr" {
type = "map"
default = {
dev = "172.0.0.0/24"
preprod = "172.0.0.0/24"
prod = "172.0.0.0/24"
}
}
This would then be referenced in an aws_vpc resource using a lookup:
"${lookup(var.vpc_cidr, terraform.workspace)}"
The process of creating and selecting workspaces is pretty easy:
terraform workspace
Usage: terraform workspace

  Create, change and delete Terraform workspaces.

Subcommands:

    show      Show the current workspace name.
    list      List workspaces.
    select    Select a workspace.
    new       Create a new workspace.
    delete    Delete an existing workspace.
So to create a new workspace for pre-production you'd do the following:
terraform workspace new preprod
and if you ran a plan, you'd see that there are no resources yet. What this does in the backend is create a new folder to manage the state for 'preprod' (with the local backend, for example, terraform.tfstate.d/preprod).