GKE Cluster Audit - security

What are the points to be reviewed while auditing a GKE cluster?
We have a production cluster and I would like to know which points need to be reviewed while auditing my GKE cluster. What needs to be configured or removed for better security and HA?

This is a very broad topic.
Short answer (main points):
Apply the principle of least privilege to IAM and RBAC entities
Enable Binary Authorization
Limit privileges on containers (see the sketch after this list)
Enable image scanning
Use Secret Manager
Create private clusters when possible
Spread your worker nodes across zones
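To make the "limit privileges on containers" point concrete, here is a minimal sketch of a restrictive pod securityContext; the pod name and image are placeholders, not anything from your cluster:
```yaml
# Sketch: run the container unprivileged, drop all Linux capabilities
# and forbid privilege escalation. Names and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example        # placeholder name
spec:
  containers:
    - name: app
      image: gcr.io/my-project/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```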
But I strongly recommend you check the official Google docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview#node_upgrades
See ya

Related

Can I share a k8s cluster securely between many DevOps product teams?

Would there be a secure way to work with different product DevOps teams on the same k8s cluster? How can I isolate workloads between the teams? I know k8s RBAC and namespaces are available, but is that secure enough to run different prod workloads? I know Istio, but as I understand it there is no direct answer to my question. How can we handle differences in ingress configuration from different teams in the same cluster? If it is not possible to isolate workloads securely, how do you orchestrate k8s clusters to reduce maintenance?
Thanks a lot!
The answer is: it depends. First, Kubernetes is not insecure by default, and containers already give a base layer of abstraction. The better questions are:
How much isolation do you need?
What about user management?
Do you need to encrypt traffic between your workloads?
Isolation Levels
If you need strong isolation between your workloads (and I mean really strong), do yourself a favor and use different clusters. There may be business cases where you need to guarantee that some kind of workload is never allowed to run on the same (virtual) machine as another. You could also try to achieve this by adding nodes that are dedicated to one of your sub-projects and using affinities and anti-affinities to handle the scheduling. But if you need this level of isolation, you'll probably run into problems when thinking about log aggregation, metrics, or in general any component that's shared across all of your services.
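As a rough sketch of the dedicated-nodes approach: if the reserved nodes carry a label such as dedicated=team-a (the label key and value are made up here), a pod can require them via node affinity:
```yaml
# Sketch: pin a team's pods to nodes labeled for that team.
# Label and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: team-a-job
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: dedicated
                operator: In
                values: ["team-a"]
  containers:
    - name: app
      image: registry.example.com/team-a/app:latest   # placeholder image
```
In practice you would also taint those nodes so that other teams' pods don't get scheduled onto them by accident.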
For any other use case: build one cluster and divide it by namespaces. You could even create a couple of ingress controllers that belong to just one of your teams.
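A minimal sketch of the namespace-plus-dedicated-ingress idea; all names, the host and the ingress class are illustrative, and it assumes a controller is deployed that watches that class:
```yaml
# Sketch: one namespace per team, and an Ingress bound to a
# team-specific ingress class handled only by that team's controller.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a-web
  namespace: team-a
spec:
  ingressClassName: nginx-team-a      # assumes a controller watching this class
  rules:
    - host: team-a.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```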
User Management
Managing RBAC and users by hand can be a little tricky. Kubernetes itself supports OIDC tokens. If you already use OIDC for SSO or similar, you could re-use your tokens to authenticate users in Kubernetes. I've never used this, so I can't say much about role mapping with OIDC.
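For illustration only, assuming the API server is configured for OIDC and your provider delivers a groups claim: a namespaced RoleBinding can map such a group onto the built-in edit ClusterRole. The group name (and any prefix) depends entirely on your OIDC setup:
```yaml
# Sketch: grant a (hypothetical) OIDC group write access
# inside its own namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-editors
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                      # built-in namespaced read-write role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:team-a-devs        # group claim as delivered by the provider (assumption)
```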
Another solution would be Rancher or another cluster orchestration tool. I can't speak for the others, but Rancher comes with built-in user management. You can also create projects to group several namespaces for one of your audiences.
Traffic Encryption
By using a service mesh like Istio or Linkerd you can encrypt traffic between your pods. Even if it sounds tempting to encrypt all your traffic, be clear about whether you really need it. Service meshes come with downsides, e.g. resource usage. You also have one more component that needs to be managed and updated.
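If you do go the Istio route, mesh-wide mutual TLS is typically enforced with a single PeerAuthentication resource in Istio's root namespace; the sketch below assumes a default installation rooted in istio-system:
```yaml
# Sketch, assuming Istio is installed: require mTLS for all
# workloads in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```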

can I add agent/extension to aks node in Azure?

The nodes in my node pool have default agents installed.
Questions:
1. Can I add agents/extensions on a node?
2. If yes, is it recommended/compatible?
I couldn't find anything on Google about AKS node extensions.
Manually installing extensions is not supported for AKS nodes. Take a look at the troubleshooting docs here:
AKS is a managed service, and manipulation of the IaaS resources is not supported. To install custom components, etc., please leverage the Kubernetes APIs and mechanisms. For example, leverage DaemonSets to install required components.
If you want to manage the nodes yourself, you'd better use aks-engine. Maybe that's a better choice for you.
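To illustrate the DaemonSet approach mentioned in the quoted docs, a sketch might look like this; the image, names and namespace are placeholders, not an official agent:
```yaml
# Sketch: run one agent pod per node instead of installing
# software on the node itself.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
        - operator: Exists      # also schedule onto tainted nodes
      containers:
        - name: agent
          image: myregistry.azurecr.io/node-agent:1.0   # placeholder image
          securityContext:
            privileged: false   # raise only if the agent truly needs host access
```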

AKS cross regional Network policy

We are planning to build an AKS cluster in HA across regions using Azure Traffic Manager. We intend to apply certain network policies in one region, and the same needs to be replicated to the other region.
How can we ensure replication across the regions?
Additionally, how can we ensure storage isolation in each AKS region?
Any leads would be appreciated.
As far as I know, Kubernetes doesn't offer any sync engine. You can use Flux or any other GitOps tool to sync configs between clusters (or rather, to apply the same config to both clusters).
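For example, with Flux v2 installed in each regional cluster, both clusters can point at the same Git path, so a NetworkPolicy committed there gets applied in both regions. A rough sketch; the repository URL, branch and path are placeholders:
```yaml
# Sketch, assuming Flux v2 is bootstrapped in each cluster.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example-org/cluster-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: network-policies
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./policies        # placeholder path holding the NetworkPolicy manifests
  prune: true
```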

How to design Azure HDInsights Cluster

I have a query on Azure HDInsight. How do I design an Azure HDInsight cluster according to my on-premises infrastructure?
What are the major parameters I need to consider before designing the cluster?
For example, if I have 100 servers running on-premises, how many nodes do I need to select in my cloud cluster? In AWS we have an EMR sizing calculator and Cluster Planner/Advisor. Do we have any similar planning mechanism in Azure apart from the Pricing Calculator? Please clarify and provide your inputs. An example would be really great. Thanks.
Before deploying an HDInsight cluster, plan for the desired cluster capacity by determining the needed performance and scale. This planning helps optimize both usability and costs. Some cluster capacity decisions cannot be changed after deployment. If the performance parameters change, a cluster can be dismantled and re-created without losing stored data.
The key questions to ask for capacity planning are:
In which geographic region should you deploy your cluster?
How much storage do you need?
What cluster type should you deploy?
What size and type of virtual machine (VM) should your cluster nodes use?
How many worker nodes should your cluster have?
Each of these questions is addressed in "Capacity planning for HDInsight clusters".

ArangoDB managed services

I am doing research on some requirements for a database and I really like ArangoDB. The only issue is that I couldn't find any managed services or managed hosts for ArangoDB.
For example, on Amazon AWS, RDS allows us to scale up easily without worrying about clustering and configuration.
Is there any service that can manage this for me, or should I manage this myself?
You may start an ArangoDB cluster on AWS with Mesosphere DC/OS. The cluster is fully managed and can be scaled as you go. It is documented here:
https://docs.arangodb.com/3.2/Manual/Deployment/Mesos.html
