Is it possible to compose a Kustomize file?

I would like to define the yaml for a particular container in one file, then pull that container config into a deployment config when the deployment is built. Is it possible to do this, since a lone container is not a Kubernetes resource?

Currently, it is not possible.
Patches won't work since the path of the podTemplate is not always the same across all workload resource types.
Variables don't support whole objects at the moment (I believe there was some talk about it on GitHub), only strings.
On some versions of k8s you could use something called a PodPreset, but it doesn't seem to be supported anymore.
The best you can do to avoid discrepancies between your podTemplates across all your resources would be to isolate the parts that are similar and/or error prone and use specific transformers/generators.
For example, use the images transformer to ensure that all your pods use the same image, and use a ConfigMap generator together with envFrom to keep all the environment variables the same.
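For instance, a minimal kustomization.yaml sketch along those lines; the resource files, image name, tag and env var names here are just placeholders:

# kustomization.yaml -- keep image and env vars consistent across all workloads
resources:
  - deployment.yaml
  - cronjob.yaml

images:
  # pin every reference to "my-app" to the same image and tag
  - name: my-app
    newName: registry.example.com/my-app
    newTag: "1.2.3"

configMapGenerator:
  # one generated ConfigMap that every podTemplate consumes via envFrom
  - name: shared-env
    literals:
      - LOG_LEVEL=info

Each podTemplate then references the generated ConfigMap with envFrom/configMapRef; kustomize appends a content hash to the generated name and rewrites those references for you.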

Related

Terraform multi-stage resource initialization with temporary resources

I use terraform to initialize some OpenStack cloud resources.
I have a scenario where I would need to initialize/prepare a volume disk using a temporary compute resource. Once the volume is fully initialized, I would no longer need the temporary compute resource, but I would need to attach the volume to another compute resource (a different network configuration and other settings make reuse of the first one impossible). As you might have guessed, I cannot reach the expected long-term goal directly without the intermediary step.
I know I could drive a state machine or some sort of processing queue from outside terraform to achieve this, but I wonder if it was possible to do it nicely in one single run of terraform.
The best I could think of is that a main terraform script would trigger creation/destruction of the intermediate compute resource by launching another terraform run responsible just for the intermediate resources (using terraform apply followed by terraform destroy). However, it requires extra care, such as ensuring a unique folder to deal with concurrent "main" resource initializations, and it makes the whole thing a bit messy, I think.
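For illustration, a rough sketch of that wrapper idea, assuming the intermediate resources live in a ./bootstrap subdirectory (the resource names and the trigger are placeholders):

resource "openstack_blockstorage_volume_v3" "data" {
  name = "prepared-volume"
  size = 100
}

resource "null_resource" "volume_bootstrap" {
  # re-run the bootstrap whenever the volume is recreated (placeholder trigger)
  triggers = {
    volume_id = openstack_blockstorage_volume_v3.data.id
  }

  # bring up the temporary compute resource, initialize the volume, then tear it down again
  provisioner "local-exec" {
    command = "cd ./bootstrap && terraform init && terraform apply -auto-approve && terraform destroy -auto-approve"
  }
}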
I wonder if it was possible to do it nicely in one single run of terraform.
Sadly, no. Any "solution" you could implement for that in a single TF run (e.g. running custom scripts through local-exec, etc.) will only be a convoluted mess and will lead to more issues than it solves in the long term.
The proper way, as you wrote, is to use a dedicated CI/CD pipeline for a multi-stage deployment. Alternatively, don't use TF at all and use another IaC tool.

Terraform download current infrastructure into tf file

I am migrating some manually provisioned infrastructure over to terraform.
Currently I am manually defining the terraform resources in .tf files and importing the remote state with terraform import. I then run terraform plan multiple times, each time modifying the local .tf files until they match the existing infrastructure.
How can I speed this process up by downloading the remote state directly into a .tf resource file?
The mapping from the configuration as written in .tf files to real infrastructure that's indirectly represented in state snapshots is a lossy one.
A typical Terraform configuration has arguments of one resource derived from attributes of another, uses count or for_each to systematically declare multiple similar objects, and might use Terraform modules to decompose the problem and reuse certain components.
All of that context is lost in the mapping to real remote objects, and so there is no way to recover it and generate idiomatic .tf files that would be ready to use. Therefore you would always need to make some modifications to the configuration in order to produce a useful Terraform configuration.
With that caveat in mind, you can review the settings of objects you've added with terraform import by running the terraform show command. Its output is intended to be read by humans rather than machines, but it presents the information in a Terraform-language-like format, so what it produces can be a starting point for a Terraform configuration. It won't always be totally valid, though, and will typically need at least some adjustments before terraform plan accepts it and before it is useful for ongoing use.
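As a rough illustration of that workflow (the resource type, address and ID below are placeholders, and the stub block may need its required arguments filled in before the import will succeed):

# 1. Write a stub resource block for the object in your .tf files, e.g.
#      resource "aws_instance" "web" { }
# 2. Bind the real object to that address in the state
terraform import aws_instance.web i-0123456789abcdef0

# 3. Dump the imported attributes in Terraform-like syntax as a starting point
terraform show -no-color > imported.txt

# 4. Copy the relevant arguments back into the .tf files, then iterate until clean
terraform plan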

Terraform Best Practice Multi-Environment, Modules, and State

Environment Isolation: Dirs vs. Workspaces vs. Modules
The Terraform docs Separate Development and Production Environments seem to take two major approaches for handling a "dev/test/stage" type of CI environment, i.e.
Directory separation - seems messy, especially when you potentially have multiple repos
Workspaces + different var files
Except when you look up workspaces, the docs seem to imply workspaces are NOT a correct solution for isolating environments.
In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.
Instead, use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend. In that case, the root module of each configuration will consist only of a backend configuration and a small number of module blocks whose arguments describe any small differences between the deployments.
I would also like to consider using remote state -- e.g. azurerm backend
Best Practice Questions
When the docs refer to using a "re-usable" module, what would this look like if, say, I had an existing configuration folder? Would I still need to create a separate folder for dev/test/stage?
When using remote backends, should the state file be shared across repos by default, or separated by repo and environment?
e.g.
terraform {
  backend "azurerm" {
    storage_account_name = "tfstorageaccount"
    container_name       = "tfstate"
    key                  = "${var.environment}.terraform.tfstate"
  }
}
vs.
terraform {
  backend "azurerm" {
    storage_account_name = "tfstorageaccount"
    container_name       = "tfstate"
    key                  = "cache_cluster_${var.environment}.terraform.tfstate"
  }
}
When the docs refer to using a "re-usable" module, what would this look like if, say, I had an existing configuration folder? Would I still need to create a separate folder for dev/test/stage?
A re-usable module for your infrastructure would essentially encapsulate the part of your infrastructure that is common to all your "dev/test/stage" environments. So no, you wouldn't have any "dev/test/stage" folders in there.
If, for example, you have an infrastructure that consists of a Kubernetes cluster and a MySQL database, you could have two modules - a 'compute' module that handles the k8s cluster, and a 'storage' module that would handle the DB. These modules go into a /modules subfolder. Your root module (main.tf file in the root of your repo) would then instantiate these modules and pass the appropriate input variables to customize them for each of the "dev/test/stage" environments.
Normally it would be a bit more complex:
Any shared VPC or firewall config might go into a networking module.
Any service accounts that you might automatically create might go into a credentials or iam module.
Any DNS mappings for API endpoints might go into a dns module.
You can then easily pass in variables to customize the behavior for "dev/test/stage" as needed.
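A minimal sketch of what such a root module could look like for one environment (the module paths, inputs and values are hypothetical):

# main.tf of the "staging" configuration
module "networking" {
  source      = "../modules/networking"
  environment = "staging"
  cidr_block  = "10.1.0.0/16"
}

module "compute" {
  source      = "../modules/compute"
  environment = "staging"
  node_count  = 2                              # smaller than production
  subnet_id   = module.networking.subnet_id
}

module "storage" {
  source      = "../modules/storage"
  environment = "staging"
  sku         = "Standard_LRS"
}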
When using remote backends, should the state file be shared across repos by default, or separated by repo and environment?
Going off the Terraform docs and their recommended separation:
In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls.
You would not share tfstorageaccount. Take this with a grain of salt and determine your own needs; essentially, what you need to take into account are the security and data-integrity implications of sharing backends/credentials. For example:
How sensitive is your state? If you have sensitive variables being output to your state, then you might not want your "production" state sitting in the same security perimeter as your "test" state.
Will you ever need to wipe your state or perform destructive actions? If, for example, your state storage provider only versions by folder, then you probably don't want your "dev/test/stage" states sitting next to each other.
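Also keep in mind that backend blocks cannot interpolate variables such as ${var.environment}, so a common pattern is a partial backend configuration plus per-environment settings passed at init time. A sketch with hypothetical file and account names:

# backend.tf -- only the settings shared by all environments
terraform {
  backend "azurerm" {
    container_name = "tfstate"
  }
}

# dev.backend.tfvars (hypothetical file)
#   storage_account_name = "tfstoragedev"
#   key                  = "dev.terraform.tfstate"
#
# prod.backend.tfvars
#   storage_account_name = "tfstorageprod"
#   key                  = "prod.terraform.tfstate"

# Initialize each environment against its own backend:
#   terraform init -backend-config=dev.backend.tfvars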

How to store and update a file in a docker container which is read by a SpringBoot application?

I'm setting up an environment which could contain multiple docker containers. Each container contains the same SpringBoot application. During the runtime of the SpringBoot application an .ini-file is needed to work through different things. Furthermore, the .ini-file might be updated from outside the containers. This update or new .ini-file should be distributed among all the other containers so that it is available to the other SpringBoot apps in the end. Distributing the file is not the problem at this point; the question is how to store the file, because the classpath can't be used.
I'm using Hazelcast for its cluster feature. With its help I'm able to distribute the new file to all the other members in the cluster. At the beginning I stored the .ini-file within the classpath. But if the .ini-file changes, it makes no sense to keep it in the classpath, because you cannot write within a jar. Also, if the container goes down, the Hazelcast data is lost, because it only has an in-memory database.
What I expect is a process where I can easily substitute the .ini-file. For example, a container already knows the file (all newer versions of the .ini-file will have the same name) from build time or something like that. If the container goes down, it is able to find the file by itself again. And, as I already mentioned, I need to change the .ini-file during runtime; then the container, or to be more specific the SpringBoot app, has to recognize this change automatically. In my opinion, changing the file could be done via a REST call which stores the file anywhere within the container, or in a place where it is allowed to write, because the classpath doesn't work.
As your question carries the "kubernetes" tag, I will try to answer in the context of this specific container orchestrator.
The feature you are looking for is called ConfigMap in Kubernetes.
Think of it as key-value pairs created from a data source (in your case the ini config file).
kubectl create configmap game-config --from-file=.ini-file
You can then use ConfigMap data in two ways inside your containers:
As container environment variables
As a populated volume, mounted inside the container under a specific path
The important thing to note here is that mounted ConfigMaps are updated automatically. If you are interested in this concept, please read more about it here.
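A sketch of the second option in a pod/deployment spec, with placeholder image name and mount path:

# pod spec fragment -- mount the ConfigMap created above under /config
spec:
  containers:
    - name: springboot-app
      image: registry.example.com/springboot-app:latest   # placeholder image
      volumeMounts:
        - name: ini-config
          mountPath: /config        # the app would read e.g. /config/settings.ini
          readOnly: true
  volumes:
    - name: ini-config
      configMap:
        name: game-config           # the ConfigMap from the kubectl command above

The SpringBoot app can then read the file from that external path instead of the classpath, and it will see updates to the ConfigMap without a rebuild.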

How to mount a file and access it from application in a container kubernetes

I am looking for the best solution for a problem where, let's say, an application has to access a csv file (say employee.csv) and performs some operations such as getEmployee or updateEmployee.
Which Volume is best suitable for this and why?
Please note that employee.csv will have some pre-loaded data already.
Also, to be precise, we are using azure-cli for handling Kubernetes.
Please Help!!
My first question would be: is your application meant to be scalable (i.e. have multiple instances running at the same time)? If that is the case, then you should choose a volume that can be written to by multiple instances at the same time (ReadWriteMany, https://kubernetes.io/docs/concepts/storage/persistent-volumes/). As you are using Azure, the AzureFile volume could fit your case. However, I am concerned that there could be conflicts with multiple writers (and some data may be lost), so my advice would be to use a database system instead, to avoid this kind of situation.
If you only want to have one writer, then you could use pretty much any of them. However, if you use local volumes you could have problems when a pod gets rescheduled on another host (it would not be able to retrieve the data). Given your requirements (a simple csv file), the reason for choosing one PersistentVolume provider over another comes down to which is less painful to set up. In this sense, just like before, if you are using Azure you could simply use an AzureFile volume type, as it should be more straightforward to configure in that cloud: https://learn.microsoft.com/en-us/azure/aks/azure-files
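For illustration, a sketch of the Azure Files option via a ReadWriteMany claim (the storage class, names and mount path are placeholders; check what is available in your cluster):

# pvc.yaml -- shared ReadWriteMany storage backed by Azure Files
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: employee-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile        # AKS built-in class; verify for your cluster
  resources:
    requests:
      storage: 1Gi
---
# pod spec fragment -- the application reads /data/employee.csv
spec:
  containers:
    - name: employee-api
      image: registry.example.com/employee-api:latest   # placeholder
      volumeMounts:
        - name: employee-data
          mountPath: /data
  volumes:
    - name: employee-data
      persistentVolumeClaim:
        claimName: employee-data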
