Setting up Istio with Terraform, not using Helm

So for background: I am trying to deploy a containerized webapp inside a Kubernetes cluster, secured and monitored by Istio together with Kiali. Since I do not want to configure everything by hand, I am using Terraform to deploy and update configurations inside the cluster (like deploying services and pods).
The benefit is that Terraform automatically configures the services needed to expose the apps, which saves a lot of hassle, especially because this is a pilot project for a larger deployment of the same sort.
The problem is that Terraform does not include Istio as a provider. There is a way to install and configure Istio from within Terraform via Helm, but Helm (v2) relies on Tiller, a permission-elevated pod that executes the given tasks. I do not want a permission-elevated pod inside my cluster due to security concerns at scale.
The question now is: has anyone tried, or managed, to configure Istio resources like a VirtualService to expose the webapp through the istio-ingressgateway with a Terraform config file? I googled it, but there is little to be found on the combination of the two.

Terraform now has an official Helm provider: https://registry.terraform.io/providers/hashicorp/helm/latest/docs. Note that Helm 3 removed Tiller entirely, so the permission-elevated pod is no longer a concern.
You can use that provider to install Istio with Helm (https://istio.io/latest/docs/setup/install/helm/), for example:
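A minimal sketch, assuming kubeconfig-based authentication and the chart names/repository from the Istio Helm install docs (pin chart versions for real use):

```hcl
# Sketch: install Istio through the Terraform Helm provider.
# Helm 3 is Tiller-less, so no permission-elevated pod is involved.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: kubeconfig auth
  }
}

resource "helm_release" "istio_base" {
  name             = "istio-base"
  repository       = "https://istio-release.storage.googleapis.com/charts"
  chart            = "base"
  namespace        = "istio-system"
  create_namespace = true
}

resource "helm_release" "istiod" {
  name       = "istiod"
  repository = "https://istio-release.storage.googleapis.com/charts"
  chart      = "istiod"
  namespace  = "istio-system"
  depends_on = [helm_release.istio_base]
}

resource "helm_release" "istio_ingress" {
  name       = "istio-ingressgateway"
  repository = "https://istio-release.storage.googleapis.com/charts"
  chart      = "gateway"
  namespace  = "istio-system"
  depends_on = [helm_release.istiod]
}
```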
You can use the Kubernetes provider to configure Istio objects such as the VirtualService you mention.
Refer to https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs and https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform — a sketch follows below.
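For the VirtualService the question asks about, a sketch using the kubernetes_manifest resource; the host, gateway, and service names are placeholder assumptions, and note that kubernetes_manifest needs the Istio CRDs to already exist at plan time:

```hcl
# Sketch: expose the webapp through the istio-ingressgateway.
# Assumes the Istio CRDs are installed and a matching Gateway exists.
resource "kubernetes_manifest" "webapp_virtualservice" {
  manifest = {
    apiVersion = "networking.istio.io/v1beta1"
    kind       = "VirtualService"
    metadata = {
      name      = "webapp"
      namespace = "default"
    }
    spec = {
      hosts    = ["webapp.example.com"]          # assumption: your external hostname
      gateways = ["istio-system/webapp-gateway"] # assumption: a matching Gateway
      http = [{
        route = [{
          destination = {
            host = "webapp.default.svc.cluster.local"
            port = { number = 8080 } # assumption: the service port
          }
        }]
      }]
    }
  }
}
```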
PS: Doing it via Pulumi might be easier; check out https://www.pulumi.com/docs/get-started/kubernetes/

Related

Do I really need kubeadm on a managed cloud cluster?

I am fiddling around with Kubernetes on a small managed cluster within AKS.
It looks like I'm ready to deploy, as my node pools were already provisioned and bootstrapped (or so it seems) during setup.
Am I missing something here?
Do I really need kubeadm on a managed cloud cluster?
You DO NOT need the kubeadm tool when using managed Kubernetes offerings such as Azure AKS, AWS EKS, or Google GKE: the provider operates the control plane and bootstraps the nodes for you, so there is nothing left for kubeadm to do.
kubeadm is used to create a self-managed Kubernetes cluster. From the Kubernetes docs: "You can use the kubeadm tool to create and manage Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way."

Restricting allowed kubernetes types to be deployed with ArgoCD

We'd like to allow our developers to deploy their changes to a Kubernetes cluster automatically by merging their code and k8s resources into a git repo that is watched by ArgoCD. The release management team would be responsible for managing the ArgoCD config and setting up new apps, as well as for creating namespaces, roles, and role bindings on the cluster, while the devs should be able to deploy their applications through GitOps without needing to interact with the cluster directly. Devs might have read access on the cluster for debugging purposes.
Now the question: in theory it would be possible for a dev to create a new YAML file specifying a RoleBinding resource that binds his/her account to a cluster-admin role. As ArgoCD has cluster-admin rights, this would be a way for the dev (or an attacker impersonating a developer) to escalate privileges.
Is there a way to restrict which k8s resources are allowed to be created through ArgoCD?
EDIT:
According to the docs, this is possible per project using clusterResourceWhitelist.
Is it possible to do that globally?
You are right about the Argo CD project. The AppProject CRD supports allowing/denying K8s resources via the clusterResourceWhitelist, clusterResourceBlacklist, etc. fields. A sample project definition is also available in the Argo CD documentation; a sketch follows below.
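Since this thread is about driving cluster config declaratively, here is a sketch of such a project created via Terraform's kubernetes_manifest; the project name, repo, destination, and denied kinds are illustrative assumptions:

```hcl
# Sketch: an AppProject that denies all cluster-scoped resources (empty
# whitelist) and blocks RBAC objects at namespace scope.
resource "kubernetes_manifest" "dev_project" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "AppProject"
    metadata = {
      name      = "dev-team" # assumption
      namespace = "argocd"
    }
    spec = {
      sourceRepos = ["https://git.example.com/dev-team/*"] # assumption
      destinations = [{
        namespace = "dev-team"
        server    = "https://kubernetes.default.svc"
      }]
      clusterResourceWhitelist = [] # nothing cluster-scoped may be created
      namespaceResourceBlacklist = [
        { group = "rbac.authorization.k8s.io", kind = "Role" },
        { group = "rbac.authorization.k8s.io", kind = "RoleBinding" },
      ]
    }
  }
}
```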
In order to restrict the list of managed resources globally, you can specify the resource.exclusions/resource.inclusions fields in the argocd-cm ConfigMap; an example is in the Argo CD documentation and sketched below.
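A sketch of the global variant, keeping with the Terraform theme of the thread; argocd-cm normally already exists after installation, so in practice you would patch it or set this through your install method's values rather than owning it outright:

```hcl
# Sketch: global resource restriction in the argocd-cm ConfigMap.
# The exclusions value is a YAML string; the excluded kinds are examples.
resource "kubernetes_config_map" "argocd_cm" {
  metadata {
    name      = "argocd-cm"
    namespace = "argocd"
    labels = {
      "app.kubernetes.io/part-of" = "argocd" # required label on argocd-cm
    }
  }

  data = {
    "resource.exclusions" = <<-YAML
      - apiGroups:
          - rbac.authorization.k8s.io
        kinds:
          - ClusterRole
          - ClusterRoleBinding
        clusters:
          - "*"
    YAML
  }
}
```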

How to set up VNet integration and Access Restriction for an Azure WebApp from Ansible

Please help!
I need to create a web app from Ansible and add it to an existing VNet.
To create the WebApp I use the Ansible module azure_rm_webapp, and everything works fine, but I can't find any way to configure the network for the created web app.
Can I do it from Ansible? How?
I read about the Ansible modules for creating networks (maybe I can add it there, but I did not find a way).
I am afraid VNet integration cannot be done with the Ansible Azure modules; the Ansible documentation lists all available Azure modules along with ansible-playbooks samples.
In this case, you may consider using the shell, script, or win_shell modules to execute commands on the target hosts after provisioning. There is also an ARM template for deploying a web app with VNet integration.
If you are open to other automation tools, I would recommend Terraform. It supports the resource azurerm_app_service_virtual_network_swift_connection, which lets you manage an App Service virtual network association. You can also use the ip_restriction and scm_ip_restriction blocks under the azurerm_app_service resource to configure access restrictions, for example:
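A sketch of both pieces, assuming the resource group, App Service plan, and a subnet delegated to Microsoft.Web/serverFarms are defined elsewhere in the config:

```hcl
# Sketch: access restriction plus VNet integration for an App Service.
resource "azurerm_app_service" "webapp" {
  name                = "example-webapp" # assumption
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan.id

  site_config {
    ip_restriction {
      name       = "office"
      ip_address = "203.0.113.0/24" # assumption: allowed CIDR
      priority   = 100
      action     = "Allow" # everything else is denied once a rule exists
    }
  }
}

resource "azurerm_app_service_virtual_network_swift_connection" "vnet" {
  app_service_id = azurerm_app_service.webapp.id
  subnet_id      = azurerm_subnet.integration.id # delegated to Microsoft.Web/serverFarms
}
```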

Running Kubernetes across cloud providers

Our goal is to run kubernetes in AWS and Azure with minimal customization (setting up kubernetes managed env), support and maintenance. We need portability of containers across cloud providers.
Our preferred cloud provider is AWS. We are planning on running containers in EKS. We wanted to understand the customization effort required to run these containers in AKS.
Would you recommend choosing a container management platform like Pivotal Cloud Foundry or Red Hat OpenShift, or running them on AWS EKS or AKS, where less customization is needed, in order to run containers across different cloud providers?
You need to define a common set of storage classes that map to similar volume types on each provider (see the sketch below). If you are using a provider-based Ingress controller, those can vary, so I would recommend using an internal one like nginx or traefik. If you are using customization annotations for things like networking, those can vary too, but using those is pretty rare. Otherwise, k8s is k8s.
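A sketch of the storage-class idea with the Terraform Kubernetes provider: the same class name ("fast") is created per cloud, so workloads stay portable; var.cloud is an assumed input set per cluster:

```hcl
variable "cloud" {
  type = string # assumption: "aws" or "azure", set per cluster
}

# Sketch: one common StorageClass name backed by a provider-specific volume type.
resource "kubernetes_storage_class" "fast_aws" {
  count = var.cloud == "aws" ? 1 : 0

  metadata {
    name = "fast"
  }
  storage_provisioner = "kubernetes.io/aws-ebs"
  parameters          = { type = "gp2" }
}

resource "kubernetes_storage_class" "fast_azure" {
  count = var.cloud == "azure" ? 1 : 0

  metadata {
    name = "fast"
  }
  storage_provisioner = "kubernetes.io/azure-disk"
  parameters          = { storageaccounttype = "Premium_LRS" }
}
```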

Using Terraform to create a Service Fabric cluster issues

I am trying to use Terraform to create a Service Fabric cluster in Azure.
I have created configurations for the following resources using a template provided by Tvo: https://github.com/TrevorVonSeggern/ServiceFabric_Terraform
This creates the resources in Azure; however, the cluster just sits on "Deploying" and the nodes themselves never display.
There seems to be a distinct lack of configuration resources for creating a Service Fabric cluster using Terraform, and HashiCorp's documentation on this resource is not as in-depth as for other resources.
Provisioning with PowerShell is easier, as there are more resources to guide you.
If anyone has any working examples please can you share them?
Thanks
I have managed to deploy this successfully by deploying once, then going through the extensions in the generated ARM template and adding them (as a JSON string) to the Terraform config for the VMSS; a sketch follows below.
I could not find anything in the Terraform documentation on this resource to assist with this.
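A sketch of the shape that worked, with the extension block carried over from the exported ARM template; every value below is a placeholder assumption, so lift the real settings from your own template (nodeTypeRef must match the cluster's node type name, and overprovisioning must be off for Service Fabric):

```hcl
# Sketch: Service Fabric node VMSS with the ServiceFabricNode extension added
# explicitly. Referenced resources (rg, subnet, cluster, storage account) are
# assumed to be defined elsewhere in the config.
resource "azurerm_virtual_machine_scale_set" "sf_nodes" {
  name                = "nt0"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  upgrade_policy_mode = "Automatic"
  overprovision       = false # Service Fabric requires overprovisioning off

  sku {
    name     = "Standard_D2s_v3"
    tier     = "Standard"
    capacity = 5
  }

  os_profile {
    computer_name_prefix = "nt0"
    admin_username       = "sfadmin"
    admin_password       = var.admin_password # assumption: declared elsewhere
  }

  storage_profile_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }

  storage_profile_os_disk {
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  network_profile {
    name    = "nt0-nic"
    primary = true

    ip_configuration {
      name      = "nt0-ipconfig"
      primary   = true
      subnet_id = azurerm_subnet.sf.id
    }
  }

  # The piece taken from the exported ARM template:
  extension {
    name                       = "ServiceFabricNode"
    publisher                  = "Microsoft.Azure.ServiceFabric"
    type                       = "ServiceFabricNode"
    type_handler_version       = "1.1"
    auto_upgrade_minor_version = true

    settings = jsonencode({
      clusterEndpoint = azurerm_service_fabric_cluster.sf.cluster_endpoint
      nodeTypeRef     = "nt0" # must match the cluster node type name
      durabilityLevel = "Bronze"
    })

    protected_settings = jsonencode({
      StorageAccountKey1 = azurerm_storage_account.sf_logs.primary_access_key
      StorageAccountKey2 = azurerm_storage_account.sf_logs.secondary_access_key
    })
  }
}
```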
