I'm currently building a deployment for Kubernetes through Helm. However, one of the values that I have to pass is an endpoint that contains the following characters:
Endpoint=https://test.io;Id=001;Secret=test_test_test
The problem is that if I pass the following value:
test01:
  - name: test
    value: Endpoint=https://test.io;Id=001;Secret=test_test_test
the pod never gets created, since the value is not parsed and passed correctly. If I add the following with single quotes, it tells me that the pod is not ready:
test01:
  - name: test
    value: 'Endpoint=https://test.io;Id=001;Secret=test_test_test'
If I pass the value with single quotes it tells me that the pod got created, but the pod and namespace still do not show up in the AKS cluster. However, if I set the same environment variables in Docker, all of them are picked up by my app and I can see the app running as expected.
How can I set up the env variables inside the values file, and how can I run a command from the terminal to set and run multiple variables at the same time? Does anyone have another way to do this?
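For reference, here is a minimal sketch of the quoting I would expect, assuming the chart's deployment template iterates over .Values.test01 (the template snippet and the release/chart names below are my assumptions, not taken from the actual chart):

# values.yaml - quote the whole string so the semicolons stay part of a single scalar
test01:
  - name: test
    value: "Endpoint=https://test.io;Id=001;Secret=test_test_test"

# templates/deployment.yaml (hypothetical) - render the value through | quote
env:
  {{- range .Values.test01 }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}

And from the terminal, single quotes keep the shell from splitting on ';', while --set-string (Helm 3) avoids type coercion:

helm upgrade --install myrelease ./mychart \
  --set-string 'test01[0].name=test' \
  --set-string 'test01[0].value=Endpoint=https://test.io;Id=001;Secret=test_test_test'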
When deploying a release using Azure DevOps, the variable value is preserved between two stages instead of being overwritten as expected.
I am using Azure DevOps to deploy a data factory with its resources to different environments:
Test Environment
UAT Environment
I have 2 variable groups defined in "Library", one for each stage of deployment:
Test-config
Uat-config
Both of them have a variable named varDataFactory that holds the data factory name.
When linking the variable group in the pipeline "Variables" tab, I specify the stage against which I expect it to execute.
So the variable varDataFactory is expected to have a different value at each stage.
Basically the first stage executes and creates the Test data factory, but when the UAT stage deploys, it "sees" varDataFactory with its old value - the one for Test.
And I do not know why, or what to do about it, especially since I have another pair of variable groups for key vaults (2 different vaults for 2 environments) and those come through just fine. Please help!
As you have 2 different variable groups for 2 different stages, I don't understand what you mean by overwriting.
I tried this with a simple command line task in a release pipeline with the below configuration:
Variable Group: test-config, Variable name: varDF, value: testdatafactory
Variable Group: uat-config, Variable name: varDF, value: uatdatafactory
In the pipeline variable configuration, each group is scoped to its own stage.
Running echo $(varDF) in the Test and UAT stages printed the value from each stage's own group (testdatafactory in Test, uatdatafactory in UAT).
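For comparison, the same scoping expressed as a YAML pipeline rather than the classic release editor (a sketch only; the stage, job and pool names are placeholders):

stages:
  - stage: Test
    variables:
      - group: test-config          # varDF = testdatafactory
    jobs:
      - job: Echo
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo $(varDF)   # prints testdatafactory
  - stage: UAT
    variables:
      - group: uat-config           # varDF = uatdatafactory
    jobs:
      - job: Echo
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo $(varDF)   # prints uatdatafactory

Because each group is linked at stage scope, the UAT stage resolves varDF from uat-config and never sees the Test value.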
It might take a while to explain what I'm trying to do but bear with me please.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing but this was the only way to access the deployment without using ingress on minikube)
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215 which is the questo-server-service external IP address (as you can see in the first picture)
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app - and this part is fine, confirming the node app is working as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
So it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I'm assigning in the ConfigMap and which is then used in the questo-server-deployment (job).
Also, when I log:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting results indicating that the app (node.js) tried to connect to the db (dynamodb), but on the wrong port: 443 instead of 8000.
The DB_DOCKER_URL should contain the full address (with port) of the questo-dynamodb-service.
What am I doing wrong here?
Edit ----
I've explicitly assigned port 8000 to DB_DOCKER_URL as suggested in the answer, but now I'm getting a different error. It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using https?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
...
Otherwise it may default to 443.
Answering my own question in case anyone has an equally brilliant idea of running a local dynamodb in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
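A quick way to confirm the corrected value actually reached the pod after redeploying (namespace and pod name taken from the commands above; your pod suffix will differ):

kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv DB_DOCKER_URL
# expected: http://questo-dynamodb-service:8000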
As a side note: when you are running scripts to create a dynamodb table in your amazon/dynamodb-local container, make sure you use the same region both when creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
  --cli-input-json file://questo_db_definition.json \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local
And the same region when querying the data.
Even though this is just a local copy, where you can type anything you want as the value of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and anything in AWS_REGION as well, the region has to match.
If you query the db with a different region than the one it was created with, you get the Cannot do operations on a non-existent table error.
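For illustration, a matching query against the same local table (table name taken from DB_TABLE_NAME above; a sketch, not the exact script used):

aws dynamodb scan \
  --table-name Questo \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local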
I deployed the gcp-spark operator on k8s. It's working perfectly fine: I'm able to run Scala and Python jobs with no issues.
But I am unable to create volume mounts on my pods, so I'm unable to use the local fs. It looks like spark-operator needs to be enabled with webhooks for this to work, going by this.
There was a spark-operator-with-webhooks YAML here, but its name is different from the deployment coming through OperatorHub. I updated the names to the best of my knowledge and tried to apply the deployment, but ran into the issue below.
kubectl apply -f spark-operator-with-webhook.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/spark-operator configured
service/spark-webhook unchanged
The Job "spark-operator-init" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVers......int(nil)}}: field is immutable
Is there an easy way of enabling webhooks on spark-operator? I want to be able to mount a local fs in the SparkApplication. Please assist.
I purged the init object and redeployed. The manifest was successfully applied.
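In case it helps anyone, the fix amounts to something like this (the namespace is a placeholder). A Job's pod template is immutable, so the init Job has to be deleted rather than patched in place:

kubectl delete job spark-operator-init -n <operator-namespace>
kubectl apply -f spark-operator-with-webhook.yaml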
I recently started working on serverless architecture. Here is an example serverless.yml for it:
test:
  name: test
  handler: handler.lambda_handler
  timeout: 6
  environment:
    APP_ID: ${ssm:/path/to/ssm/test~true}
Now when I try to run the serverless offline command, it complains about the ssm variable.
The following is the error that comes up on the console.
I want to run everything on my local machine for development. Can someone help me figure out how to solve this problem?
ServerlessError: Trying to populate non string value into a string for variable ${ssm:/path/to/ssm/test~true}. Please make sure the value of the property is a string.
at Variables.populateVariable (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:464:13)
at Variables.renderMatches (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:386:21)
at C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:406:29
From previous event:
You can solve this by adding the following plugin:
https://github.com/janders223/serverless-offline-ssm
If you're feeling more adventurous, you can also use localstack: https://github.com/localstack/localstack
Note that the free version does not support everything.
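A sketch of the plugin wiring in serverless.yml, based on the plugin's README (the stage name and the local value are assumptions, and the exact keys may differ between plugin versions):

plugins:
  - serverless-offline-ssm
  - serverless-offline

custom:
  serverless-offline-ssm:
    stages:
      - offline
    ssm:
      '/path/to/ssm/test': 'local-dev-value'   # served locally instead of the real SSM parameter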
I'm new to AWS, and I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster: it throws an error about the PRISMA_CONFIG environment variable in the prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma and MongoDB, and it's working fine.
Now on ECS, I created a task definition, and for the prisma container I tried to copy the YAML config from my local docker-compose.yml file to make it work.
There is a field called "ENVIRONMENT"; I inputted the value in the Environment variables there, but it's just not working: it throws the error while the cluster is running, and then the task stops.
The YAML spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG.
The following are the values that I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma#mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
and the errors
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected all containers to run without errors, but instead the container stopped after running for a minute.
Please help with this, or suggest another way to deploy to AWS.
Thank you very much.
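For context, this is roughly how PRISMA_CONFIG is supplied locally in docker-compose.yml as a multi-line block (the image tag, credentials and URI below are illustrative); the ECS console's single-line input is what breaks this:

services:
  prisma:
    image: prismagraphql/prisma:1.34   # tag is an assumption
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: mongodb://prisma:prisma@mongo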
I've been looking for a similar solution to load the prisma config without the multiline string.
There are repositories that load the prisma environment variables separately without a prisma config:
Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig uses the following env variables via an env_file. So I'm assuming you can just pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database, so you would need to tailor it to suit your values. But in theory you should just be able to pass these variables one by one as individual variables in the AWS GUI.
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14