Kustomize: no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1" - kustomize

I am new to Kustomize and am getting the following error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
but I am using the boilerplate kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Question: What does the group name (kustomize.config.k8s.io) mean and why does Kustomize not recognize the kind?

So this apiVersion is correct, although I am still not certain why. In order to get past this error message, I needed to run:
kubectl apply -k dir/.
I hope this helps someone in the future!

If you use apply -f on a kustomization.yaml you will see this error: the file is sent to the API server, which has no kind Kustomization registered, because that apiVersion/kind pair is read by the kustomize build step, not by the cluster. Using -k instead runs that build and applies its rendered output.

You are using the Kustomize tool (Kustomize is a standalone tool for customizing the creation of Kubernetes objects through a file called kustomization.yaml). To apply the customization you have to use:
kubectl apply -k foldername
where foldername is the folder containing the deployment and service YAML files.
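To make the -f vs. -k distinction concrete, here is a minimal sketch of the layout from the question (directory name is hypothetical; deployment.yaml and service.yaml omitted):

```shell
# Recreate the boilerplate kustomization from the question.
mkdir -p myapp
cat > myapp/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
EOF

# `kubectl apply -f myapp/kustomization.yaml` fails: the cluster has no CRD for
# kind Kustomization, because this file is input to the kustomize build step,
# not a resource to create. Render-and-apply instead with:
#   kubectl apply -k myapp
echo "wrote myapp/kustomization.yaml"
```

The same build can also be run explicitly with `kustomize build myapp | kubectl apply -f -`.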

Related

resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system"

While trying to add my k8s cluster on an Azure VM, it shows an error like:
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
Here is the output for my command executed
root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
secret/cattle-credentials-e558be7 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
I was also facing the same issue, so I changed the API version for the cattle-admin-binding from beta to stable as below:
Old value:
apiVersion: rbac.authorization.k8s.io/v1beta1
Changed to:
apiVersion: rbac.authorization.k8s.io/v1
Though I ran into some other issues later, the above error was gone.
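For reference, a sketch of what the corrected binding might look like. The metadata name comes from the error message; the roleRef and subjects are assumptions inferred from the other cattle-* resources in the output above, so verify them against the actual manifest:

```yaml
apiVersion: rbac.authorization.k8s.io/v1   # was: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cattle-admin
subjects:
- kind: ServiceAccount
  name: cattle
  namespace: cattle-system
```

Only the apiVersion changes; rbac.authorization.k8s.io/v1beta1 was removed from current clusters, so the v1beta1 manifest has no matching resource mapping.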

Installing a custom grafana datasource through helm / terraform

I would like to install the alertmanager datasource (https://grafana.com/grafana/plugins/camptocamp-prometheus-alertmanager-datasource/) to my kube-prometheus-stack installation which is being built using terraform and the helm provider. I cannot work out how to get the plugin files to the node running grafana though.
Using a modified values.yaml and feeding to helm with -f values.yaml (please ignore values):
additionalDataSources:
- name: Alertmanager
  editable: false
  type: camptocamp-prometheus-alertmanager-datasource
  url: http://localhost:9093
  version: 1
  access: default
  # optionally
  basicAuth: false
  basicAuthUser:
  basicAuthPassword:
I can see the datasource in grafana but the plugin files do not exist.
(Screenshot: Alertmanager visible in the list of datasources.)
However, clicking on the datasource I see
Plugin not found, no installed plugin with that ID
Please note that the grafana pod also seems to require a restart to pick up datasource changes, which I would argue needs fixing at a higher level.
It's actually quite simple to get the files there, and I can't believe I overlooked such a simple solution. Posting this here in the hope others find it useful.
In the kube-prometheus-stack, values.yaml file, just override the grafana section as follows:
grafana:
  # ...
  plugins:
  - camptocamp-prometheus-alertmanager-datasource
  - grafana-googlesheets-datasource
  - doitintl-bigquery-datasource
  - redis-datasource
  - xginn8-pagerduty-datasource
  - marcusolsson-json-datasource
  - grafana-kubernetes-app
  - yesoreyeram-boomtable-panel
  - savantly-heatmap-panel
  - bessler-pictureit-panel
  - grafana-polystat-panel
  - dalvany-image-panel
  - michaeldmoore-multistat-panel
  additionalDataSources:
  - name: Alertmanager
    editable: false
    type: camptocamp-prometheus-alertmanager-datasource
    url: http://prometheus-kube-prometheus-alertmanager.monitoring:9093
    version: 1
    access: default
    # optionally
    basicAuth: false
    basicAuthUser:
    basicAuthPassword:
where the name/type of the plugin can be found in the installation instructions on the Grafana plugins page.
I made some progress by discovering I could get onto the pod running grafana using:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- /bin/sh
The one listed in kubectl get pods was the sidecar.
Then I could run:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
which performed the required file installation. After deleting and recreating the pod, there was progress.
Keen to hear any comments on the approach or better ideas!
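For the terraform/helm-provider route, the same override can be kept in a small values file and passed to the release. A minimal sketch; the file name and the release/chart names below are assumptions:

```shell
# Write a values override asking the chart's grafana subchart to install
# the plugin at startup (hypothetical file name).
cat > grafana-plugins-values.yaml <<'EOF'
grafana:
  plugins:
  - camptocamp-prometheus-alertmanager-datasource
EOF

# Then feed it to the release, e.g. (release/repo names are assumptions):
#   helm upgrade prometheus prometheus-community/kube-prometheus-stack \
#     --reuse-values -f grafana-plugins-values.yaml
# With the terraform helm provider, the same file can be referenced via
#   values = [file("grafana-plugins-values.yaml")]
echo "wrote grafana-plugins-values.yaml"
```

Unlike the exec-into-the-pod approach, this survives pod recreation because grafana reinstalls the plugins on startup.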

Dapr - vaultTokenMountPath Issue

I am trying to set up Dapr secret management using Vault in a k8s environment.
https://github.com/dapr/quickstarts/tree/master/secretstore
I applied the following component YAML for Vault.
Component yaml:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: vault:8270 # Optional. Default: "https://127.0.0.1:8200"
  - name: skipVerify # Optional. Default: false
    value: "true"
  - name: vaultTokenMountPath # Required. Path to token file.
    value: root/tmp/
The token file is created under the root/tmp path, and I tried to execute the service. I am getting a permission denied error (even though I have given all read/write permissions). I tried applying permissions to the file, but it still cannot be accessed. Can anyone please provide a solution?
Your YAML did not format well, but it looks like your value for vaultTokenMountPath is incomplete. It needs to point to the file, not just the folder root/tmp/. I created a file called vault.txt and copied my root token into it, so the path in your case would be root/tmp/vault.txt.
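Applied to the component above, the metadata entry would then read (vault.txt is the answer's example file name):

```yaml
  - name: vaultTokenMountPath # Required. Path to the token file, not its directory.
    value: root/tmp/vault.txt
```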
I was able to make it work in WSL2 by pointing to a file (/tmp/token in my case).
I was unable to make it work in kubernetes as I did not find any way to inject file in the DAPR sidecar, opened issue on github for this: https://github.com/dapr/components-contrib/issues/794

Helm - Spark operator examples/spark-pi.yaml does not exist

I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However when I'm trying to run the Spark Pi example kubectl apply -f examples/spark-pi.yaml I'm getting the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is actually examples/spark-pi.yaml located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file, either local on the system where you are running the kubectl command, or at an http/https endpoint hosting the file.
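A small guard sketch of that rule: check that the path exists on the machine running kubectl before applying (path taken from the question):

```shell
manifest=examples/spark-pi.yaml
if [ -f "$manifest" ]; then
  kubectl apply -f "$manifest"
else
  # This is exactly the situation behind the error in the question: the file
  # only exists inside the operator's source repository, so clone the repo
  # (or download just the example) before applying.
  echo "no such file: $manifest"
fi
```

Deploying the operator chart installs nothing on your local filesystem; the examples directory only exists in the operator's source tree.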

Installing Istio in Kubernetes with automatic sidecar injection: istio-initializer.yaml Validation Failure

I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red (screenshot not reproduced here), and then restarting the master using the Unix command reboot.
The next step in the documentation is to apply the yaml using kubectl. I assume that the documentation intends for the user to clone the Istio repository and cd into it before this step but that is unmentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, validate=false, then this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
- name: sidecar.initializer.istio.io
  rules:
  - apiGroups:
    - "*"
    apiVersions:
    - "*"
    resources:
    - deployments
    - statefulsets
    - jobs
    - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow up question, is it possible to deploy a version of kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
Seems to be a versioning problem, as the alpha feature is only supported for k8s versions > 1.7, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers):
1.7 introduces two alpha features, Initializers and External Admission Webhooks, that address these limitations. These features allow admission controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. Not sure about the deployed version when using the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer to https://github.com/Azure/acs-engine for the procedure. Basically it involves three steps. First, create the JSON file by referring to the clusterDefinition section; to use version 1.7.5, set the attribute "orchestratorRelease" to "1.7" and enable RBAC by setting the attribute "enableRbac" to true. Second, use acs-engine (version >= 0.6.0) to convert the JSON file into the ARM template (azuredeploy.json and azuredeploy.parameters.json should be created). Lastly, use the "New-AzureRmResourceGroupDeployment" command in PowerShell to deploy the cluster to Azure.
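The clusterDefinition fields mentioned might look like this sketch (the "vlabs" apiVersion and the field placement are assumptions based on the acs-engine cluster-definition docs; verify against the repo):

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "enableRbac": true
      }
    }
  }
}
```

The agent/master profiles and SSH keys are omitted here; acs-engine will reject the file without them.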
Hope this helps :)
