Dapr component is not able to find local secret file - dapr

My newly created Dapr component is not able to find the local secret file.
I am getting following error:
FATA[0005] process component my-secret-store error: missing local secrets file in metadata app_id=myapp instance=Prithvipals-MacBook-Pro.local scope=dapr.runtime type=log ver=1.5.1
I have created the component file and the secret file with the following tree structure:
.
├── my-components
│   └── localSecretStore.yaml
└── mysecrets.json
1 directory, 2 files
Below is the content of localSecretStore.yaml file:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-secret-store
  namespace: default
spec:
  type: secretstores.local.file
  version: v1
  metadata:
    - name: secretFile
      value: mysecrets.json
    - name: nestedSeparator
      value: ":"
Below is the content of the mysecrets.json file:
{
  "my-secret": "I'm Batman"
}
I am following this doc. As mentioned in the doc, the secret file path should be relative to where the dapr command is run. Since I am running the dapr command from the parent folder of my-components, I kept just the file name as the relative path.
I am running following command:
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components

The value of the "secretFile" key should contain either the absolute path of the mysecrets.json file, or a path relative to the folder from which you are running the dapr run command.
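For example, assuming the tree above lives in a folder such as ~/projects/myapp (hypothetical path), either of the following should let the component find the file:
# Option A: run dapr from the directory that contains mysecrets.json,
# so the relative value "mysecrets.json" resolves correctly
cd ~/projects/myapp
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
# Option B: use an absolute path in the component metadata instead, e.g.
#   value: /Users/me/projects/myapp/mysecrets.json   (hypothetical path)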

Does Kustomize require you specify an entire resource to change one value?

I'd understood that Kustomize would be the solution to my Kubernetes configuration management needs where, for example, if I want maxReplicas for a resource to be 3 on my dev and test environments but 7 in production, I could do that easily. I imagined there'd be a base file, and then an overlay file that would respecify just the values needing changing. My goal is that if I have multiple configurations (configurations? nodes? clusters? I'm still having trouble with K8s terminology. I mean dev, test, prod.), any time a value common to all of them needed changing, I could change it in one place, the base configuration, instead of in three different config files. Essentially, just as in programming, I want to factor out what's common to all the environments.
But I'm looking at https://www.densify.com/kubernetes-tools/kustomize/ and getting a different impression. Here, the dev version of an hpa.yml file is only meant to change the values of maxReplicas and averageUtilization. So I'd thought the overlay would look as follows, in the same way that, in a .NET Core application, appsettings.dev.json only needs to specify the settings from appsettings.json that it's overriding:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-deployment-hpa
spec:
  maxReplicas: 2
  target:
    averageUtilization: 90
instead of the whole definition that's in the example given. So, if I started with all instances having minReplicas = 1 and I wanted to change it to 3 for all of them, I'd have to make that change in every overlay instead of just in the base.
Is this correct? If so, is there a tool that will allow configuration management to work as I'm looking to have it work?
Is this correct?
No.
Kustomize does not require you to specify the entire resource in order to change a single value; the entire point of Kustomize is its ability to transform manifests through patches and other mechanisms to produce the desired output.
For example, assume the following layout:
.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        └── kustomization.yaml
In base/deployment.yaml we have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: web
          image: docker.io/traefik/whoami:latest
And in base/kustomization.yaml we have:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: kustomize-demo
resources:
  - deployment.yaml
If in my dev environment I want to keep replicas: 1, I would create overlays/dev/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
This simply includes the resources from base verbatim.
On the other hand, if in my production environment I want to run with three replicas, I would patch the Deployment resource in overlays/prod/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: demo
      spec:
        replicas: 3
This type of patch is called a "strategic merge patch"; you can also apply changes using "JSONPatch" patches. That might look like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: demo
    patch: |
      - op: replace
        path: /spec/replicas
        value: 3
The kustomize documentation has a variety of examples of patching and other transformations. For example, the commonLabels directive I show in base/kustomization.yaml uses the labels transformer to produce this output:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kustomize-demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kustomize-demo
  template:
    metadata:
      labels:
        app: kustomize-demo
    spec:
      containers:
        - image: docker.io/traefik/whoami:latest
          name: web
Notice how the labels defined in commonLabels have been applied to:
The top-level /metadata/labels element
The /spec/selector/matchLabels element
The /spec/template/metadata/labels element
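To inspect or apply a rendered overlay, you can use kubectl's built-in Kustomize support; a minimal sketch, assuming the layout above:
# Render the manifests for an overlay without applying them
kubectl kustomize overlays/prod
# Build and apply an overlay in one step
kubectl apply -k overlays/prod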

Prisma Query engine library for current platform "debian-openssl-1.1.x" could not be found

I have a NodeJS/NestJS project consisting of multiple microservices. I've deployed my Postgres database and a microservice pod that interacts with the database on an AWS Kubernetes cluster. I'm using Prisma as the ORM, and when I exec into the pod and run
npx prisma generate
the output is as below:
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
✔ Generated Prisma Client (4.6.1 | library) to ./node_modules/@prisma/client in 1.32s
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
but when I call an API to create an object in my Postgres DB through the Prisma ORM, I get the error below in the microservice pod:
error: PrismaClientInitializationError:
Invalid `prisma.session.create()` invocation:
Query engine library for current platform "debian-openssl-1.1.x" could not be found.
You incorrectly pinned it to debian-openssl-1.1.x
This probably happens, because you built Prisma Client on a different platform.
(Prisma Client looked in "/usr/src/app/node_modules/@prisma/client/runtime/libquery_engine-debian-openssl-1.1.x.so.node")
Searched Locations:
/usr/src/app/node_modules/.prisma/client
C:\Users\MOHSEN\Desktop\cc-g\cc-gateway\cc-gateway\db-manager\node_modules\@prisma\client
/usr/src/app/node_modules/@prisma/client
/usr/src/app/node_modules/.prisma/client
/usr/src/app/node_modules/.prisma/client
/tmp/prisma-engines
/usr/src/app/node_modules/.prisma/client
To solve this problem, add the platform "debian-openssl-1.1.x" to the "binaryTargets" attribute in the "generator" block in the "schema.prisma" file:
generator client {
  provider = "prisma-client-js"
  binaryTargets = ["native"]
}
Then run "prisma generate" for your changes to take effect.
Read more about deploying Prisma Client: https://pris.ly/d/client-generator
at RequestHandler.handleRequestError (/usr/src/app/node_modules/@prisma/client/runtime/index.js:34316:13)
at /usr/src/app/node_modules/@prisma/client/runtime/index.js:34737:25
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async PrismaService._executeRequest (/usr/src/app/node_modules/@prisma/client/runtime/index.js:35301:22)
at async PrismaService._request (/usr/src/app/node_modules/@prisma/client/runtime/index.js:35273:16)
at async AppService.createSession (/usr/src/app/dist/app.service.js:28:28) {
clientVersion: '4.6.1',
errorCode: undefined
}
Also this is generator client in schema.prisma file:
generator client {
  provider = "prisma-client-js"
  binaryTargets = ["native", "linux-musl", "debian-openssl-1.1.x"]
}
Before that I had the same problem, but the error mentioned "linux-musl" instead:
Query engine library for current platform "linux-musl" could not be found.
even though linux-musl was listed in the binaryTargets of the generator block.
After a lot of research I found that I should not use the Alpine Node image in my Dockerfile, so I switched to buster instead. My Dockerfile is as below:
FROM node:buster As development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build db-manager
FROM node:buster as production
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
I think the problem is that the Prisma query engine cannot be found because Prisma is searching the wrong locations for the platform-specific query engine. So I tried to provide the locations where the query engine files are located in my pod as environment variables in the deployment file, as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-manager
spec:
  replicas: 3
  selector:
    matchLabels:
      app: db-manager
  template:
    metadata:
      labels:
        app: db-manager
    spec:
      containers:
        - name: db-manager
          image: image-name
          ports:
            - containerPort: 3002
          env:
            - name: PORT
              value: "3002"
            - name: DATABASE_URL
              value: db url
            - name: KAFKA_HOST
              value: kafka url
            - name: PRISMA_MIGRATION_ENGINE_BINARY
              value: /usr/src/app/node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x
            - name: PRISMA_INTROSPECTION_ENGINE_BINARY
              value: /usr/src/app/node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x
            - name: PRISMA_QUERY_ENGINE_BINARY
              value: /usr/src/app/node_modules/@prisma/engines/libquery_engine-debian-openssl-1.1.x.so.node
            - name: PRISMA_FMT_BINARY
              value: /usr/src/app/node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x
but it doesn't work, and the error still happens when Prisma tries to execute a create query. I would really appreciate it if anyone could help me. Am I doing something wrong, or is this a bug in Prisma when used in an AWS deployment?
Thanks for any comments or guidance.
Try updating to Prisma version 4.8.0, and set the binaryTargets property in the schema.prisma file to:
(...)
binaryTargets = [
  "native",
  "debian-openssl-1.1.x",
  "debian-openssl-3.0.x",
  "linux-musl",
  "linux-musl-openssl-3.0.x"
]
(...)
Don't forget to run yarn prisma generate
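A minimal sketch of that upgrade, assuming a yarn-based project (use the npm equivalents otherwise):
# Upgrade the Prisma CLI and client to 4.8.0, then regenerate the client and engines
yarn add --dev prisma@4.8.0
yarn add @prisma/client@4.8.0
yarn prisma generate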

Create kubernetes env var secrets from .env file

I have a nodejs application which stores variables in environment variables.
I'm using the dotenv module, so I have a .env file that looks like:
VAR1=value1
VAR2=something_else
I'm currently setting up a BitBucket Pipeline to auto deploy this to a Kubernetes cluster.
I'm not very familiar with kubernetes secrets, though I'm reading up on them.
I'm wondering: is there an easy way to pass all of the environment variables I have defined in my .env file to a Docker container / Kubernetes deployment, so they are available in the pods my app is running in?
I'm hoping for an example secrets.yml file or similar which takes everything from .env and turns it into environment variables in the container. But it could also be done at the BitBucket pipeline level, or at the Docker container level... I'm not sure.
Step 1: Create a k8s secret with your .env file:
# kubectl create secret generic <secret-name> --from-env-file=<path-to-env-file>
$ kubectl create secret generic my-env-list --from-env-file=.env
secret/my-env-list created
Step 2: Verify the secret:
$ kubectl get secret my-env-list -o yaml
apiVersion: v1
data:
  VAR1: dmFsdWUx
  VAR2: c29tZXRoaW5nX2Vsc2U=
kind: Secret
metadata:
  name: my-env-list
  namespace: default
type: Opaque
Step 3: Add env to your pod's container:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - secretRef:
            name: my-env-list # <---- here
  restartPolicy: Never
Step 4: Run the pod and check whether the env vars exist:
$ kubectl apply -f pod.yaml
pod/demo-pod created
$ kubectl logs -f demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
VAR1=value1 # <------------------------------------------------------here
VAR2=something_else # <-----------------------------------------------here
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
You can also use Kustomize to create a secret from a file, as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: kust-example
generatorOptions:
  # Prevents adding a hash at the end of the secret name
  disableNameSuffixHash: true
secretGenerator:
  - name: your-secret
    namespace: default
    envs:
      - path/secret.env
Then you just have to run `kubectl apply -k <dir>`.
You can also use this tool to achieve the same result as the Kustomization approach, but with more control for automating the job:
https://github.com/juliosmelo/dotenv2k8s

kubectl apply -f k8s: is unable to recognize service and deployment and has no matches for kind "Service" in version "v1"

I have Kubernetes running on OVH without a problem. But I recently reinstalled the build server because of other issues and set everything up again, and now when trying to apply files it gives this horrible error. Did I miss something? And what does this error really mean?
+ kubectl apply -f k8s
unable to recognize "k8s/driver-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-mysql-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-persistent-volume-claim.yaml": no matches for kind "PersistentVolumeClaim" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried all previous answers on SO but none worked for me. I don't think that I really need it (correct me if I am wrong on that). I would really like to get some help with this.
I have installed kubectl and I have a config file that I use.
When I execute the kubectl get pods command, I get the pods that were deployed before.
These are some of the YAML files:
k8s/driver-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: driver-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: driver-service
  ports:
    - port: 3000
      targetPort: 8080
k8s/driver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: driver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: driver-service
  template:
    metadata:
      labels:
        component: driver-service
    spec:
      containers:
        - name: driver
          image: repo.taxi.com/driver-service
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: taxiregistry
dockerfile
FROM maven:3.6.0-jdk-8-slim AS build
COPY . /home/app/
RUN rm /home/app/controllers/src/main/resources/application.properties
RUN mv /home/app/controllers/src/main/resources/application-kubernetes.properties /home/app/controllers/src/main/resources/application.properties
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:8-jre-slim
COPY --from=build /home/app/controllers/target/controllers-1.0.jar /usr/local/lib/driver-1.0.0.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/driver-1.0.0.jar"]
(Screenshots of the kubectl get pods and kubectl api-versions output are omitted.)
solution found
I had to place the binary file in a .kube folder, which should be placed in the root directory.
In my case I had to manually create the .kube folder in the root directory first.
After that I set my environment variable to point to that folder and placed my config file with my settings in there as well.
Then I had to share the folder with the jenkins user and grant rights to the jenkins group.
Jenkins was not up to date, so I had to restart the Jenkins server.
And it worked like a charm!
Keep in mind to restart the Jenkins server so that the changes you make take effect in Jenkins.
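As a rough sketch of that setup (the paths are assumptions; adjust them to your Jenkins installation):
# Make the kubeconfig available to the jenkins user (hypothetical paths)
mkdir -p /var/lib/jenkins/.kube
cp /root/.kube/config /var/lib/jenkins/.kube/config
chown -R jenkins:jenkins /var/lib/jenkins/.kube
# Point kubectl at it and sanity-check that the API groups are visible
export KUBECONFIG=/var/lib/jenkins/.kube/config
kubectl api-versions | grep -E '^(v1|apps/v1)$'   # both should be listed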

Elastic Beanstalk: Can't read file pulled from S3 within node.js application

I am currently using an .ebextensions file to initialize my system and pull some .pem files from my S3 bucket. However, I constantly get access denied errors when trying to read this file within my node.js application. I've confirmed the contents of the file pulled from S3 are correct.
Is there an issue with my configuration file?
files:
  "/home/ec2-user/certificates/cert.pem":
    mode: "000777"
    owner: nodejs
    group: users
    source: "https://s3-us-west-2.amazonaws.com/myBucket/folder/cert.pem"
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myBucket
Error given by node.js:
Error: EACCES, permission denied '/home/ec2-user/certificates/cert.pem'
Your Node.js application, which runs as the nodejs user, is not allowed to access a file placed under the ec2-user home directory. You can instead download the file to the /tmp directory and then move it with a container command. Container commands are executed from the root of the staged application right before it is deployed.
files:
  "/tmp/cert.pem":
    mode: "000777"
    owner: nodejs
    group: users
    source: "https://s3-us-west-2.amazonaws.com/myBucket/folder/cert.pem"
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myBucket
container_commands:
  file_transfer_1:
    command: "mkdir -p certificates"
  file_transfer_2:
    command: "mv /tmp/cert.pem certificates/."
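Because container commands run from the root of the staged application, the certificate ends up next to your application code after deployment; on the standard Node.js platform that is typically /var/app/current (assumed path), which you can verify over SSH:
eb ssh                                          # open a shell on an instance (EB CLI)
ls -l /var/app/current/certificates/cert.pem    # run this inside the SSH session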
