Dapr vaultTokenMountPath issue

I am trying to set up Dapr secret management using Vault in a Kubernetes environment, following
https://github.com/dapr/quickstarts/tree/master/secretstore
I applied the following component YAML for Vault.
Component YAML:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: vault:8270 # Optional. Default: "https://127.0.0.1:8200"
  - name: skipVerify # Optional. Default: false
    value: "true"
  - name: vaultTokenMountPath # Required. Path to token file.
    value: root/tmp/
The token file is created under the root/tmp path, and when I try to run the service I get a permission-denied error (even though I have granted full read/write permissions). I tried changing the permissions on the file, but it still cannot be accessed. Can anyone please provide a solution?

Your YAML did not format well, but it looks like your value for vaultTokenMountPath is incomplete. It needs to point to the file, not just the folder root/tmp/. I created a file called vault.txt and copied my root token into it, so in your case the path would be root/tmp/vault.txt.
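For illustration, the corrected metadata entry might look like the following sketch (assuming the token file is named vault.txt, as above):

  - name: vaultTokenMountPath # Required. Must point at the token file itself, not its folder.
    value: root/tmp/vault.txt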

I was able to make it work in WSL2 by pointing to a file (/tmp/token in my case).
I was unable to make it work in Kubernetes, as I did not find any way to inject the file into the Dapr sidecar. I opened an issue on GitHub for this: https://github.com/dapr/components-contrib/issues/794

Related

Node container unable to locate Hashicorp Vault secrets file on startup on AWS EKS 1.24

We have a small collection of Kubernetes pods which run React/Next.js UIs in a Node 16 Alpine container (node:16.18.1-alpine3.15, to be precise). All of this runs in AWS EKS 1.23. We make use of annotations on these pods in order to inject secrets from Hashicorp Vault at startup. The annotations pull the desired secrets from Vault and write them to a file on the pod. An example of said annotations is below:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/role: "onejourney-ui"
vault.hashicorp.com/agent-inject-secret-config: "secret/data/onejourney-ui"
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/onejourney-ui" -}}
  export AUTH0_CLIENT_ID="{{ .Data.data.auth0_client_id }}"
  export SENTRY_DSN="{{ .Data.data.sentry_admin_dsn }}"
  {{- end }}
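For reference, the template above should render into /vault/secrets/config as something like the following (the values are placeholders standing in for the actual Vault data):

export AUTH0_CLIENT_ID="<auth0_client_id from Vault>"
export SENTRY_DSN="<sentry_admin_dsn from Vault>"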
When the pod starts up, we source this file (which is created by default at /vault/secrets/config) to set environment variables, and then delete the file. We do that with the following pod arguments in our Helm chart:
node:
  args:
    - /bin/sh
    - -c
    - source /vault/secrets/config; rm -rf /vault/secrets/config; yarn start-admin;
We recently upgraded some of our AWS EKS clusters from 1.23 to 1.24. After doing so, we noted that our Node applications were failing to start and entering a crash loop. Looking in the logs of these containers, the problem seemed to be that the pod was unable to locate the secrets file anymore.
Interestingly, the Vault init container completed successfully and shows that the file was created successfully.
Out of curiosity, I removed the Node args that source the file, which allowed the container to start successfully. I found that when exec'ing into the pod, the file WAS in fact present and had the content I was expecting. The file also had the correct owner and permissions, as we see in a good working instance on EKS 1.23.
We have other containers (php-fpm) which consume secrets in the same manner; however, these were not affected on 1.24, only Node containers were affected. There were no namespace, pod, or deployment annotations I saw added which would have been a possible cause. After rolling the cluster back down to EKS 1.23, the deployment worked as expected.
I'm left scratching my head as to why the pod is unable to source that file on 1.24. Any suggestions on what to check or a possible cause would be greatly appreciated.

resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system"

While I try to add my k8s cluster in an Azure VM, it shows an error like:
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
Here is the output of the command I executed:
root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
secret/cattle-credentials-e558be7 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
I was also facing the same issue, so I changed the API version for the cattle-admin-binding from beta to stable, as below:
Old value:
apiVersion: rbac.authorization.k8s.io/v1beta1
Changed to:
apiVersion: rbac.authorization.k8s.io/v1
Though I ran into some other issues later, the above error was gone.
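For illustration, the corrected binding might look like the following sketch. The roleRef and subject are assumptions inferred from the cattle-admin ClusterRole and cattle service account created in the output above; the authoritative manifest comes from the Rancher import URL.

apiVersion: rbac.authorization.k8s.io/v1  # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding
  namespace: cattle-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cattle-admin
subjects:
- kind: ServiceAccount
  name: cattle
  namespace: cattle-system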

How to create an Elastic Beanstalk environment through boto3, setting a keypair name

If you use the AWS console or even the command line, you won't have any issue setting a default keypair for your Elastic Beanstalk environment.
But you do if using boto3.
Surprisingly, there's not a single mention of setting a keypair in the official boto3 documentation for Elastic Beanstalk: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html.
I also tried to create a zip file containing the most basic files to make a simple website work. Supposedly, I can set a keypair name in .elasticbeanstalk/config.yml. I did it this way:
branch-defaults:
  default:
    environment: app10-env
    group_suffix: null
global:
  application_name: app10
  branch: null
  default_ec2_keyname: main4
  default_platform: PHP 7.4 running on 64bit Amazon Linux 2
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  repository: null
  sc: null
  workspace_type: null
Yes, the "main4" exists in my AWS account. But creating an environment to my application with a zip containing it, it seems that it have no effect at all. After my environment has sucessfully deployed, I can check afterwards through console and see that have no keypair setted to environment. I need to go to a further step on console to set the keypair and await a new environment deployiment to perform the update.
Is there a real issue with the boto3 elasticbeanstalk when dealing with environment keypairs or I am doing something wrong?
I would set the OptionSettings when calling create_environment, or include the key name in .ebextensions. Boto3 is not reading the EB CLI default config you are using, I guess.
Refs
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html#ElasticBeanstalk.Client.create_environment
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-methods-before.html
Option to set
Namespace: aws:autoscaling:launchconfiguration
Option Names: IamInstanceProfile, EC2KeyName, InstanceType
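In boto3 terms, that suggestion might look like the following sketch. The application, environment, keypair, and solution stack names are placeholders taken from the question; note that the follow-up answer below reports that passing EC2KeyName this way raised an invalid-value exception in their case.

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Create the environment, passing the keypair via OptionSettings
# (namespace and option name per the AWS docs linked above).
response = eb.create_environment(
    ApplicationName="app10",        # placeholder
    EnvironmentName="app10-env",    # placeholder
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running PHP 7.4",  # placeholder
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "EC2KeyName",
            "Value": "main4",       # keypair name
        },
    ],
)
print(response["EnvironmentId"])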
The response of #f7o is not accurate, but it helped to solve the problem.
There's no working option for setting a keypair using the "create_environment" command from the boto3 client. I tried to use "EC2KeyName", but it returned an invalid-value exception.
But using ".ebextensions" does the job. If someone else is interested in doing the same task as me, all that is needed is to create a folder called ".ebextensions" containing a file called "customkey.config" (the file name can be anything, but it must be suffixed with .config), with the following content:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: EC2KeyName
    value: <your_keypair_name>

Kustomize: no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"

I am new to Kustomize and am getting the following error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
but I am using the boilerplate kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Question: What does the group name (kustomize.config.k8s.io) mean and why does Kustomize not recognize the kind?
So this API version is correct, although I am still not certain why. In order to get past this error message, I needed to run:
kubectl apply -k dir/.
I hope this helps someone in the future!
If you used apply -f you would see this error; using -k works. The Kustomization kind is consumed client-side by the kustomize tool, not by the API server, so the cluster has no matching resource type for it.
You are using the kustomize tool (Kustomize is a standalone tool to customize the creation of Kubernetes objects through a file called kustomization.yaml). To apply the customization, you have to use:
kubectl apply -k foldername (the folder where you store the deployment and service YAML files)
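As a concrete illustration, assuming a hypothetical folder named myapp containing the kustomization.yaml shown above along with deployment.yaml and service.yaml:

# myapp/ contains kustomization.yaml, deployment.yaml, service.yaml
kubectl apply -k myapp/

# To inspect the rendered manifests without applying them:
kubectl kustomize myapp/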

Azure Kubernetes - No connection to Server

When I execute the following PowerShell command:
.\kubectl get nodes
I get no nodes in response. I noticed that the config file from kubectl is empty too:
apiVersion: v1
clusters:
- cluster:
    server: ""
  name: cl-kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
When I enter the server address in the config file, I get the message that the connection was refused. I suspect that this is due to missing certificates. During another installation, the following information was (apparently) created automatically, which is now missing:
certificate-authority-data,
contexts - cluster,
contexts - user,
current context,
users - name,
client-certificate-data,
client-key-data,
token,
Could that be it? If so, where do I get this information?
Many thanks for the help
You need to use the Azure CLI first to get the credentials. Run:
az aks get-credentials
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
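Concretely, the command needs the resource group and cluster name; a sketch with placeholder names:

# resource group and cluster name are placeholders
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# kubectl should now have a populated config and a working context
kubectl get nodes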
