Serverless.yml: Reference existing environment variable in another - python-3.x

I have a serverless.yml which looks like this
service: my-service
provider:
  name: aws
  runtime: python3.7
  versionFunctions: false
  environment:
    ACCOUNT_ID: "${file(./serverless.env.yml):${self:provider.stage}.account_id}"
    ANOTHER_VARIABLE: "some text ${ACCOUNT_ID} some other text"
Here, I want to reference the existing environment variable ACCOUNT_ID inside ANOTHER_VARIABLE. The ${ACCOUNT_ID} reference doesn't work. I also looked through the Serverless documentation, but I couldn't find anything related to this.

You can simply use ${self:provider.environment.ACCOUNT_ID}.
service: my-service
provider:
  name: aws
  runtime: python3.7
  versionFunctions: false
  environment:
    ACCOUNT_ID: "${file(./serverless.env.yml):${self:provider.stage}.account_id}"
    ANOTHER_VARIABLE: "some text ${self:provider.environment.ACCOUNT_ID} some other text"
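For context, the file reference above assumes a serverless.env.yml keyed by stage. A minimal sketch of what that file might look like (the stage names and account IDs are placeholders):

dev:
  account_id: "111111111111"
prod:
  account_id: "222222222222"

With that in place, ${file(./serverless.env.yml):${self:provider.stage}.account_id} resolves to the account_id of the current stage, and ANOTHER_VARIABLE picks it up through the self reference.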

Related

Access environment variables set by configMapRef in kubernetes pod

I have a set of environment variables in my deployment using EnvFrom and configMapRef. The environment variables held in these configMaps were set by kustomize originally from json files.
spec.template.spec.containers[0].
envFrom:
  - secretRef:
      name: eventstore-login
  - configMapRef:
      name: environment
  - configMapRef:
      name: eventstore-connection
  - configMapRef:
      name: graylog-connection
  - configMapRef:
      name: keycloak
  - configMapRef:
      name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
  "EVENT_STORE_LOGIN": "admin",
  "EVENT_STORE_PASS": "changeit"
}
evironment={
  "LOTUS_ENV":"dev",
  "DEV_ENV":"dev"
}
eventstore={
  "EVENT_STORE_HOST": "eventstore-cluster",
  "EVENT_STORE_PORT": "1113"
}
graylog={
  "GRAYLOG_HOST":"",
  "GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I have to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the JSON string that corresponds to my original json file. But I want to be able to do something like this:
process.env.GRAYLOG_HOST
to retrieve my environment variables, without having to modify my deployment to look something like this:
env:
  - name: NODE_ENV
    value: dev
  - name: EVENT_STORE_HOST
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_HOST
  - name: EVENT_STORE_PORT
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_PORT
  - name: KEYCLOAK_REALM_PUBLIC_KEY
    valueFrom:
      configMapKeyRef:
        name: keycloak-local
        key: KEYCLOAK_REALM_PUBLIC_KEY
where every variable is explicitly declared. I could do this, but it is more of a pain to maintain.
Short answer:
You will need to define the variables explicitly, or restructure your ConfigMaps so that each key holds a single value (one environment variable = one value). That way you can refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps are key-value pairs: for each key there is exactly one value. A ConfigMap can hold a string as data, but it cannot hold a nested map.
I tried editing the ConfigMap manually to confirm the above and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is why environment comes up as one string instead of a structure.
For example, this is how the ConfigMap created from such a json file looks:
$ kubectl describe cm test2
Name: test2
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
test.json:
----
environment={
"LOTUS_ENV":"dev",
"DEV_ENV":"dev"
}
And this is how it's stored in kubernetes:
$ kubectl get cm test2 -o json
{
    "apiVersion": "v1",
    "data": {
        "test.json": "evironment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
    },
    ...
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps
Configure a Pod to Use a ConfigMap

Environment Variables with Serverless and AWS Lambda

I am learning the Serverless Framework and I'm building a simple login system.
Here is my serverless.yml file
service: lms-auth
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: $(file(../env.yml):MONOGDB_URI)
    JWT_SECRET: $(file(../env.yml):JWT_SECRET)
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
As you can see, I have two environment variables, and both of them reference a separate env.yml file in the same root folder.
Here is that env.yml file
MONOGDB_URI: <MY_MONGO_DB_URI>
JWT_SECRET: LmS_JWt_secREt_auth_PasSWoRds
When I run sls deploy, I see that both variables come through as null. The environment variables aren't being passed to Lambda.
How can I fix this?
Also, I'm currently adding env.yml to .gitignore to keep these values out of the repository. Is there a more efficient way of hiding sensitive data?
I would do something like this to help you out with the syntax:
service: lms-auth
custom: ${file(env.yml)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: ${self:custom.mongodb_uri}
    JWT_SECRET: ${self:custom.jwt_secret}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
Then in your env.yml you can do:
mongodb_uri: MY_MONGO_DB_URI
jwt_secret: LmS_JWt_secREt_auth_PasSWoRds
Environment variables
1. Add useDotenv: true to your .yml file.
2. Reference your variables like this: ${env:VARIABLE_NAME}.
3. Create a file called .env.dev and write the variables in it (you can add .env.prod, but then you have to change the stage inside your .yml file).
Example:
service: lms-auth
useDotenv: true
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  environment:
    MONGODB_URI: ${env:MONOGDB_URI}
    JWT_SECRET: ${env:JWT_SECRET}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
.env.dev
MONOGDB_URI = The URI Value
JWT_SECRET = The JWT Secret Value
I ended up solving it. I had set up my DynamoDB table in the AWS us-west region. I reinitialized it in us-east-2 and reset the region under provider within the .yml file.

Managing multiple Pods with one Deploy

Good afternoon, I need help defining the structure of my production cluster. I want something like:
1 Deployment that controls the pods
multiple pods (one pod per customer)
multiple services (one service per pod)
But how do I build this structure when each pod has env vars that connect to that customer's database, like this:
env:
  - name: dbuser
    value: "svc_iafox_test#***"
  - name: dbpassword
    value: "****"
  - name: dbname
    value: "ts-demo1"
  - name: dbconnectstring
    value: "jdbc:sqlserver://***-test.database.windows.net:1433;database=$(dbname);user=$(dbuser);password=$(dbpassword);encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
So for each pod I would have to change these env vars. Anyway, what is the best way for me to do this?
You could use a ConfigMap to achieve that:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_LEVEL
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: SPECIAL_TYPE
  restartPolicy: Never
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#use-configmap-defined-environment-variables-in-pod-commands
P.S. I don't think one Deployment per pod makes sense; one Deployment per customer does. I don't think you understand exactly what a Deployment does: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
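To make the per-customer layout concrete, here is a minimal sketch (all names, labels, and the image are hypothetical) of one customer's ConfigMap and Deployment, with the connection settings pulled in via envFrom. You would create one such pair per customer, plus a per-customer Service selecting the same labels:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-a-db            # hypothetical name, one ConfigMap per customer
data:
  dbuser: "svc_iafox_test#***"
  dbname: "ts-demo1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-a               # hypothetical, one Deployment per customer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      customer: customer-a
  template:
    metadata:
      labels:
        app: myapp
        customer: customer-a
    spec:
      containers:
        - name: app
          image: myregistry/myapp:latest     # hypothetical image
          envFrom:
            - configMapRef:
                name: customer-a-db
            - secretRef:
                name: customer-a-db-credentials   # hypothetical Secret holding dbpassword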

Serverless variable from external file nested property

I have a serverless.yml and a config file.
config file:
app: {
  port: 3000,
  db: {
    connectionString: 'xxxxx'
  },
  lambdaDeploy: {
    stage: "DEV",
    region: "es-west-1"
  }
}
I am trying to use these variables in the yml like below.
yml:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${file(./appconfiguration.json).app.stage}
  region: ${file(./appconfiguration.json).app.region}
But it just picks up the default values.
Please advise. Thanks.
The syntax used here is not correct.
stage: ${file(./appconfiguration.json).app.stage}
Use a colon instead:
stage: ${file(./appconfiguration.json):app.stage}
More details here: https://www.serverless.com/framework/docs/providers/aws/guide/variables/#reference-variables-in-other-files
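Also note that stage and region live under app.lambdaDeploy in the config file shown above, and the file has to be valid JSON (quoted keys, closing braces) for the file() reference to parse. A sketch of how the nested references would then look (the DB_CONNECTION_STRING variable name is just an example):

provider:
  name: aws
  runtime: nodejs6.10
  # nested properties are addressed with dots after the colon
  stage: ${file(./appconfiguration.json):app.lambdaDeploy.stage}
  region: ${file(./appconfiguration.json):app.lambdaDeploy.region}
  environment:
    # hypothetical variable name, reads the nested db.connectionString value
    DB_CONNECTION_STRING: ${file(./appconfiguration.json):app.db.connectionString}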

LambdaFunction - Value of property Variables must be an object with String (or simple type) properties

I am using Serverless to deploy my Lambda-based application. It was deploying just fine, and then it stopped for some reason. I pared down the entire package to the serverless.yml below and one function in the handler, but I keep getting this error:
Serverless Error ---------------------------------------
An error occurred: TestLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
Stack Trace --------------------------------------------
Here is the serverless.yml
# serverless.yml
service: some-api
provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1
  iamRoleStatements:
    $ref: ./user-policy.json
  environment:
    config:
      region: us-east-1
plugins:
  - serverless-local-dev-server
  - serverless-dynamodb-local
  - serverless-step-functions
package:
  exclude:
    - .gitignore
    - package.json
    - README.md
    - .git
    - ./**.test.js
functions:
  test:
    handler: handler.test
    events:
      - http: GET test
resources:
  Outputs:
    NewOutput:
      Description: Description for the output
      Value: Some output value
Test Lambda Function in Package
// handler.test
module.exports.test = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: 'sadfasd',
      input: event
    })
  })
}
It turns out this issue has nothing to do with the Lambda function itself. Here is what caused the error.
This does NOT work:
environment:
  config:
    region: us-east-1
This DOES work:
environment:
  region: us-east-1
Simply put, I don't think you can have more than one level of nesting in your YAML environment variables.
Even if you run sls print as a sanity check, this issue will not pop up; it only appears on sls deploy.
You have been warned, and hopefully saved!
Another thing that might cause this kind of error is invalid YAML syntax.
It's easy to get confused about this.
Valid syntax for environment variables
environment:
  key: value
Invalid syntax for environment variables
environment:
  - key: value
Notice the little dash in the invalid example above?
In YAML syntax, - denotes an array item, so that code is interpreted as an array, not an object.
That's why the error says "Value of property Variables must be an object with String (or simple type) properties."
This can be easily fixed by removing the - in front of all keys.
