Serverless variable from external file nested property - node.js

I have a serverless.yml and a config file.
config file:
app: {
  port: 3000,
  db: {
    connectionString: 'xxxxx'
  },
  lambdaDeploy: {
    stage: "DEV",
    region: "es-west-1"
  }
}
Trying to use these variables in the yml like below.
yml:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${file(./appconfiguration.json).app.stage}
  region: ${file(./appconfiguration.json).app.region}
But it's just taking the default values instead.
Please advise.
Thanks

The syntax used here is not correct.
stage: ${file(./appconfiguration.json).app.stage}
Use a colon instead:
stage: ${file(./appconfiguration.json):app.stage}
More details here: https://www.serverless.com/framework/docs/providers/aws/guide/variables/#reference-variables-in-other-files
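Note also that in the config shown above, stage and region sit under app.lambdaDeploy rather than directly under app, so the path after the colon has to walk the full nesting. A minimal sketch of the corrected provider block, assuming the file really is ./appconfiguration.json with the structure from the question:
provider:
  name: aws
  runtime: nodejs6.10
  # the part before the colon selects the file, the part after it walks the JSON keys
  stage: ${file(./appconfiguration.json):app.lambdaDeploy.stage}
  region: ${file(./appconfiguration.json):app.lambdaDeploy.region}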

Related

How to iterate all the variables from a variables template file in azure pipeline?

test_env_template.yml
variables:
  - name: DB_HOSTNAME
    value: 10.123.56.222
  - name: DB_PORTNUMBER
    value: 1521
  - name: USERNAME
    value: TEST
  - name: PASSWORD
    value: TEST
  - name: SCHEMANAME
    value: SCHEMA
  - name: ACTIVEMQNAME
    value: 10.123.56.223
  - name: ACTIVEMQPORT
    value: 8161
and many more variables in the list.
I want to iterate through all the variables in test_env_template.yml with a loop to replace the values in a file. Is there a way to do that rather than referencing each value separately like ${{ variables.ACTIVEMQNAME }}, given that the number of variables in the template is dynamic?
In short, no. There is no easy way to get the pipeline variables that are specific to a template. You can get environment variables, but there you will get regular environment variables together with pipeline variables mapped to environment variables.
You can get those via env | sort, but I'm pretty sure that this is not what you want.
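In a pipeline that would just be an ordinary script step, for example:
steps:
  - bash: env | sort
    displayName: Dump environment variables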
You can't display only the variables that come from the template, but you can get all pipeline variables this way:
steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
and then you will get
{
  system: build,
  system.hosttype: build,
  system.servertype: Hosted,
  system.culture: en-US,
  system.collectionId: be1a2b52-5ed1-4713-8508-ed226307f634,
  system.collectionUri: https://dev.azure.com/thecodemanual/,
  system.teamFoundationCollectionUri: https://dev.azure.com/thecodemanual/,
  system.taskDefinitionsUri: https://dev.azure.com/thecodemanual/,
  system.pipelineStartTime: 2021-09-21 08:06:07+00:00,
  system.teamProject: DevOps Manual,
  system.teamProjectId: 4fa6b279-3db9-4cb0-aab8-e06c2ad550b2,
  system.definitionId: 275,
  build.definitionName: kmadof.devops-manual 123 ,
  build.definitionVersion: 1,
  build.queuedBy: Krzysztof Madej,
  build.queuedById: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedFor: Krzysztof Madej,
  build.requestedForId: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedForEmail: krzysztof.madej#hotmail.com,
  build.sourceVersion: 583a276cd9a0f5bf664b4b128f6ad45de1592b14,
  build.sourceBranch: refs/heads/master,
  build.sourceBranchName: master,
  build.reason: Manual,
  system.pullRequest.isFork: False,
  system.jobParallelismTag: Public,
  system.enableAccessToken: SecretVariable,
  DB_HOSTNAME: 10.123.56.222,
  DB_PORTNUMBER: 1521,
  USERNAME: TEST,
  PASSWORD: TEST,
  SCHEMANAME: SCHEMA,
  ACTIVEMQNAME: 10.123.56.223,
  ACTIVEMQPORT: 8161
}
If you prefix them then you can try to filter using jq.
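For example, a rough sketch of that filtering idea in the same pwsh style (untested; TPL_ is a hypothetical prefix you would add to the template variables, and it assumes the convertToJson dump is valid JSON with no stray quotes in the values):
steps:
  - pwsh: |
      # parse the compile-time dump and keep only the prefixed keys
      $vars = '${{ convertToJson(variables) }}' | ConvertFrom-Json
      $vars.PSObject.Properties |
        Where-Object { $_.Name -like 'TPL_*' } |
        ForEach-Object { Write-Host "$($_.Name)=$($_.Value)" }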

Access environment variables set by configMapRef in kubernetes pod

I have a set of environment variables in my deployment, set using envFrom and configMapRef. The environment variables held in these ConfigMaps were originally generated by kustomize from JSON files.
spec.template.spec.containers[0].
envFrom:
  - secretRef:
      name: eventstore-login
  - configMapRef:
      name: environment
  - configMapRef:
      name: eventstore-connection
  - configMapRef:
      name: graylog-connection
  - configMapRef:
      name: keycloak
  - configMapRef:
      name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
  "EVENT_STORE_LOGIN": "admin",
  "EVENT_STORE_PASS": "changeit"
}
evironment={
  "LOTUS_ENV":"dev",
  "DEV_ENV":"dev"
}
eventstore={
  "EVENT_STORE_HOST": "eventstore-cluster",
  "EVENT_STORE_PORT": "1113"
}
graylog={
  "GRAYLOG_HOST":"",
  "GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I need to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the json string that corresponds to my original json file. But I want to be able to do something like this:
process.env.GRAYLOG_HOST
To retrieve my environment variables. But I don't want to have to modify my deployment to look something like this:
env:
  - name: NODE_ENV
    value: dev
  - name: EVENT_STORE_HOST
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_HOST
  - name: EVENT_STORE_PORT
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_PORT
  - name: KEYCLOAK_REALM_PUBLIC_KEY
    valueFrom:
      configMapKeyRef:
        name: keycloak-local
        key: KEYCLOAK_REALM_PUBLIC_KEY
Where every variable is explicitly declared. I could do this but this is more of a pain to maintain.
Short answer:
You will need to define the variables explicitly, or change the ConfigMaps so they have a "1 environment variable = 1 value" structure; this way you will be able to refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps hold key-value pairs, which means that for one key there's only one value. A ConfigMap can take a string as data, but it can't work with a map.
I tried editing the ConfigMap manually to confirm the above and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is the reason why environment comes up as one string instead of a structure.
For example, this is how the ConfigMap looks:
$ kubectl describe cm test2
Name: test2
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
test.json:
----
environment={
"LOTUS_ENV":"dev",
"DEV_ENV":"dev"
}
And this is how it's stored in kubernetes:
$ kubectl get cm test2 -o json
{
    "apiVersion": "v1",
    "data": {
        "test.json": "evironment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
    },
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps
Configure a Pod to Use a ConfigMap

Serverless.yml: Reference existing environment variable in another

I have a serverless.yml which looks like this
service: my-service
provider:
  name: aws
  runtime: python3.7
  versionFunctions: false
  environment:
    ACCOUNT_ID: "${file(./serverless.env.yml):${self:provider.stage}.account_id}"
    ANOTHER_VARIABLE: "some text ${ACCOUNT_ID} some other text"
Here, I want to reference the existing environment variable ACCOUNT_ID in ANOTHER_VARIABLE. The ${ACCOUNT_ID} reference doesn't work. I also tried looking through the Serverless documentation, but I'm not able to find anything related to that.
You can simply use ${self:provider.environment.ACCOUNT_ID}.
service: my-service
provider:
  name: aws
  runtime: python3.7
  versionFunctions: false
  environment:
    ACCOUNT_ID: "${file(./serverless.env.yml):${self:provider.stage}.account_id}"
    ANOTHER_VARIABLE: "some text ${self:provider.environment.ACCOUNT_ID} some other text"

Environment Variables with Serverless and AWS Lambda

I am learning the Serverless Framework and I'm making a simple login system.
Here is my serverless.yml file
service: lms-auth
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: $(file(../env.yml):MONOGDB_URI)
    JWT_SECRET: $(file(../env.yml):JWT_SECRET)
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
As you can see, I have two environment variables, and both of them reference a separate file in the same root folder.
Here is that env.yml file
MONOGDB_URI: <MY_MONGO_DB_URI>
JWT_SECRET: LmS_JWt_secREt_auth_PasSWoRds
When I do sls deploy, I see that both variables are logged as null. The environment variables aren't sent to Lambda.
How can I fix this?
Also, I'm currently using this approach of adding env.yml to .gitignore to keep the values out of the repository. Is there a more efficient way of hiding sensitive data?
I would do something like this to help you out with the syntax:
service: lms-auth
custom: ${file(env.yml)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
  environment:
    MONGODB_URI: ${self:custom.mongodb_uri}
    JWT_SECRET: ${self:custom.jwt_secret}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
Then in your env.yml you can do
mongodb_uri: MY_MONGO_DB_URI
jwt_secret: LmS_JWt_secREt_auth_PasSWoRds
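For the sensitive-data part of the question: besides keeping env.yml out of git, the framework can also read values from AWS SSM Parameter Store with the ${ssm:...} variable source, so the secret never lives in the repository at all. A sketch, where /lms-auth/jwt-secret is a hypothetical parameter name:
provider:
  environment:
    # resolved from SSM at deploy time instead of being committed to the repo
    JWT_SECRET: ${ssm:/lms-auth/jwt-secret}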
Environment variables
1. Add useDotenv: true to your .yml file.
2. Add your variables like this: ${env:VARIABLE_NAME}
3. Create a file called .env.dev and write the variables there (you can add .env.prod, but then you have to change the stage inside your .yml file).
Example:
service: lms-auth
useDotenv: true
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  environment:
    MONGODB_URI: ${env:MONOGDB_URI}
    JWT_SECRET: ${env:JWT_SECRET}
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: auth/register/
          method: post
          cors: true
  login:
    handler: handler.login
    events:
      - http:
          path: auth/login/
          method: post
          cors: true
plugins:
  - serverless-offline
.env.dev
MONOGDB_URI = The URI Value
JWT_SECRET = The JWT Secret Value
I ended up solving it. I had set up my DynamoDB table in the AWS us-west region. I reinitialized it in us-east-2 and reset the region under 'provider' within the .yml file.

LambdaFunction - Value of property Variables must be an object with String (or simple type) properties

I am using Serverless to deploy my Lambda-based application. It was deploying just fine, and then it stopped for some reason. I pared down the entire package to the serverless.yml below and one function in the handler, but I keep getting this error:
Serverless Error ---------------------------------------
An error occurred: TestLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
Stack Trace --------------------------------------------
Here is the serverless.yml
# serverless.yml
service: some-api
provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1
  iamRoleStatements:
    $ref: ./user-policy.json
  environment:
    config:
      region: us-east-1
plugins:
  - serverless-local-dev-server
  - serverless-dynamodb-local
  - serverless-step-functions
package:
  exclude:
    - .gitignore
    - package.json
    - README.md
    - .git
    - ./**.test.js
functions:
  test:
    handler: handler.test
    events:
      - http: GET test
resources:
  Outputs:
    NewOutput:
      Description: Description for the output
      Value: Some output value
Test Lambda Function in Package
// handler.test
module.exports.test = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: 'sadfasd',
      input: event
    })
  })
}
Turns out, this issue does not have any relationship to the Lambda function. Here is the issue that caused the error.
This does NOT work:
environment:
  config:
    region: us-east-1
This DOES work:
environment:
  region: us-east-1
Simply put, I don't think you can have more than one level of nesting in your YAML environment variables.
Even if you try sls print as a sanity check, this issue will not pop up. Only in sls deploy.
You have been warned, and hopefully saved!
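If you do need grouped settings, a common workaround is to flatten them into prefixed keys, roughly like this (CONFIG_REGION is just an illustrative name):
environment:
  # one level only; the grouping moves into the key name
  CONFIG_REGION: us-east-1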
Another thing that might cause this kind of error is invalid YAML syntax.
It's easy to get confused about this.
Valid syntax for environment variables:
environment:
  key: value
Invalid syntax for environment variables:
environment:
  - key: value
Notice the little dash in the invalid example above?
In YAML syntax, - denotes an array item, and therefore that code is interpreted as an array, not an object.
That's why the error says "Value of property Variables must be an object with String (or simple type) properties."
This can easily be fixed by removing the - in front of the keys.
