I was getting a fairly ambiguous error from my app.config Elastic Beanstalk configuration file.
app.config
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: production
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.26

packages:
  yum:
    GraphicsMagick: []

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 10M;
command line
> eb create --cfg med-env-sc
...
Printing Status:
INFO: createEnvironment is starting.
INFO: Using elasticbeanstalk-us-east-1-466215906046 as Amazon S3 storage bucket for environment data.
ERROR: InvalidParameterValue: Parameter values must be a string or a list of strings
ERROR: Failed to launch environment.
ERROR: Failed to launch environment.
Turns out I had already fixed the error (an incorrect value for NodeVersion). You just need to commit your config file before Elastic Beanstalk will recognise it.
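For reference, this is just the normal commit-then-create flow, since the EB CLI deploys the latest commit rather than your working tree (the .ebextensions/app.config path is assumed from the excerpt later on this page):

git add .ebextensions/app.config
git commit -m "fix NodeVersion in app.config"
eb create --cfg med-env-sc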
Related
I have the following Serverless configuration file:
service: aws-node-scheduled-cron-project
frameworkVersion: '2 || 3'

plugins:
  - serverless-plugin-typescript

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  region: us-west-2
  # Imports all the environment variables from the `env.yml` file
  environment: ${file(./env.yml)}

functions:
  ...
env.yml
DATABASE_HOST: 127.0.0.1
DATABASE_USER: root
DATABASE_PASSWORD: your_mysql_root_password
DATABASE_TABLE: laravel
DATABASE_PORT: 3306
DATABASE_USE_SSL: false
NOTIFICATION_SERVICE_URL: http://localhost:4000
Now I would like to change DATABASE_TABLE on the fly when invoking the function locally. I have tried:
export DATABASE_TABLE=1-30-task-template-schedule && npx serverless invoke local --function notifyTodoScheduleFullDay
but the variable DATABASE_TABLE gets overwritten by the one in env.yml. Is it possible to do this via the command line?
In your yml file you can declare the table name as ${opt:tablename, 'DEFAULT'}.
This means you pass the name as an option on the terminal command, like this: serverless deploy ... --tablename NAME_OF_THE_TABLE. If you do not pass the option, it takes the default value you gave it ('DEFAULT' here). This lets you set the name on the fly.
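As a minimal sketch against the question's setup (hedged: the tablename option name is illustrative, and this relies on Serverless resolving variables inside the file imported via ${file(./env.yml)}; note also that v3 is stricter than v2 about unrecognised CLI options), env.yml would become:

# env.yml – fall back to the old value when --tablename is not passed
DATABASE_TABLE: ${opt:tablename, 'laravel'}

and the local invocation:

npx serverless invoke local --function notifyTodoScheduleFullDay --tablename 1-30-task-template-schedule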
I have recently started working on a project in which we are using the Serverless Framework. We are using Docker to make the dev environment easier to set up.
As part of this Docker setup we have created a script that creates S3 buckets and tables, among other things. We were earlier defining environment variables in the docker-compose file and accessing them in our Node.js app. For deployment to other environments, our devops team defined a few environment variables in the serverless.yaml file, resulting in environment configs being present in two places. We are now planning to move all the environment configs defined in our docker-compose file into serverless.yaml. This works well for our lambda functions, as they are able to read these configs, but it doesn't work for the standalone setup script we have written.
I tried using the serverless-scriptable-plugin in an attempt to read these env variables, but I am still unable to do so.
Here is my serverless.yaml file
service:
  name: my-service

frameworkVersion: '2'
configValidationMode: error

provider:
  name: aws
  runtime: nodejs14.x
  region: 'us-east-1'
  profile: ${env:PROFILE, 'dev'}
  stackName: stack-${self:provider.profile}
  apiName: ${self:custom.environment_prefix}-${self:service.name}-my-api
  environment: ${self:custom.environment_variables.${self:provider.profile}}

plugins:
  - serverless-webpack
  - serverless-scriptable-plugin
  - serverless-offline-sqs
  - serverless-offline

functions:
  myMethod:
    handler: handler.myHandler
    name: ${self:custom.environment_prefix}-${self:service.name}-myHandler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyQueue
              - Arn

resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.queuename}
        Tags:
          - Key: product
            Value: common
          - Key: service
            Value: common
          - Key: function
            Value: ${self:service.name}
          - Key: region
            Value: ${env:REGION}

package:
  individually: true

custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverless-offline:
    host: 0.0.0.0
    port: 3000
  serverless-offline-sqs:
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: ${self:provider.region}
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    debug: true
  environment_prefixes:
    staging: staging
    production: production
    dev: dev
  environment_prefix: ${self:custom.environment_prefixes.${self:provider.profile}}
  queuename: 'myQueue'
  environment_variables:
    dev:
      AWS_ACCESS_KEY_ID: test
      AWS_SECRET_ACCESS_KEY: test
      BUCKET_NAME: my-bucket
      S3_URL: http://localstack:4566
      SLS_DEBUG: '*'
  scriptable:
    commands:
      setup: node /app/dockerEntrypoint.js
In my Dockerfile I try executing the script using an sls setup CMD. I initially thought that going through the sls command might expose the environment variables defined in the serverless.yaml file, but that doesn't seem to happen.
Is there any other way this can be achieved? I am trying to access these variables using process.env which works for lambdas but not for my standalone script. Thanks!
There's not a good way to get access to these environment variables if you're running the lambda code as a script.
The Serverless Framework injects these variables into the Lambda function runtime configuration via CloudFormation.
It does not insert/update the raw serverless.yml file, nor does it somehow intercept calls to process.env via the node process.
You could use the scriptable plugin to run after package, and then export each variable into your local Docker environment. But that seems pretty heavy for the variables in your env.
Instead, you might consider something like dotenv, which will load variables from a .env file into your environment.
There is a serverless-dotenv-plugin you could use, and then your script could also call dotenv before running.
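A hedged sketch of that approach: keep the shared values in a .env file and use dotenv's documented preload hook, so the standalone script sees them via process.env without any Serverless involvement (the values below simply mirror the dev block above):

# .env
BUCKET_NAME=my-bucket
S3_URL=http://localstack:4566

# run the setup script with dotenv preloaded
node -r dotenv/config /app/dockerEntrypoint.js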
I've recently started using AdonisJS for API development.
I'm using AWS Elastic Beanstalk together with AWS CodeCommit and AWS CodePipeline to deploy new code on each git push.
Since .env file is not present in git repository, I've added env variables through Elastic Beanstalk web console.
But deployment failed when I tried to run the node ace migration:run command.
Activity execution failed, because:
Error: ENOENT: no such file or directory, open '/tmp/deployment/application/.env'
1 Env.load
  /tmp/deployment/application/node_modules/@adonisjs/framework/src/Env/index.js:110
2 new Env
  /tmp/deployment/application/node_modules/@adonisjs/framework/src/Env/index.js:42
3 Object.app.singleton [as closure]
  /tmp/deployment/application/node_modules/@adonisjs/framework/providers/AppProvider.js:29
4 Ioc._resolveBinding
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Ioc/index.js:231
5 Ioc.use
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Ioc/index.js:731
6 AppProvider.boot
  /tmp/deployment/application/node_modules/@adonisjs/framework/providers/AppProvider.js:337
7 _.filter.map
  /tmp/deployment/application/node_modules/@adonisjs/fold/src/Registrar/index.js:147
8 arrayMap
  /tmp/deployment/application/node_modules/lodash/lodash.js:639
(ElasticBeanstalk::ExternalInvocationError)
Then I tried adding the ENV_SILENT=true flag before each command, as stated in the AdonisJS documentation, but that did not help.
So then I tried uploading the .env file to an S3 bucket and copying its contents during deployment.
But that does not seem to work either, since I'm getting the same error (no .env file).
These are my two config files from the .ebextensions folder.
01_copy_env.config (I'm using x-xxxxxxxxxxxx here for security)
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-x-xxxxxxxxxxxx"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"

files:
  "/tmp/deployment/application/.env":
    mode: "000755"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://elasticbeanstalk-us-east-x-xxxxxxxxxxxx.s3.us-east-2.amazonaws.com/variables.txt
02_init.config
container_commands:
  01_node_binary:
    command: "ln -sf `ls -td /opt/elasticbeanstalk/node-install/node-v10* | head -1`/bin/node /bin/node"
    leader_only: true
  02_migration:
    command: "node ace migration:run"
  03_init_seed:
    command: "node ace seed"
The only time the whole thing works is when I add the .env file to git and deploy it with the rest of the code. But that is not the way to go, so if anyone knows a solution to my problem I would really appreciate it. Thanks!
Add a new variable, ENV_SILENT=true, to your global variables in the Elastic Beanstalk console.
Adonis documentation
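If you would rather keep it in source control, a sketch of the equivalent .ebextensions setting (this reuses the aws:elasticbeanstalk:application:environment namespace from the first config on this page; option_settings values must be strings):

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ENV_SILENT
    value: "true"  # quoted so YAML does not turn it into a boolean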
I am currently using an .ebextensions file to initialize my system and pull some .pem files from my S3 bucket. However, I constantly get access denied errors when trying to read this file within my Node.js application. I've confirmed that the contents of the file pulled from S3 are correct.
Is there an issue with my configuration file?
files:
  "/home/ec2-user/certificates/cert.pem":
    mode: "000777"
    owner: nodejs
    group: users
    source: "https://s3-us-west-2.amazonaws.com/myBucket/folder/cert.pem"

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myBucket
Error given by node.js:
Error: EACCES, permission denied '/home/ec2-user/certificates/cert.pem'
Your Node.js application user, nodejs, is not allowed to access a file downloaded into the ec2-user home directory. You can instead download the file to the /tmp directory and then move it using a container command. Container commands are executed from the root of the staged application right before it is deployed.
files:
  "/tmp/cert.pem":
    mode: "000777"
    owner: nodejs
    group: users
    source: "https://s3-us-west-2.amazonaws.com/myBucket/folder/cert.pem"

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myBucket

container_commands:
  file_transfer_1:
    command: "mkdir -p certificates"
  file_transfer_2:
    command: "mv /tmp/cert.pem certificates/."
Here is an excerpt of my EB configuration file .ebextensions/app.config:
option_settings:
  - option_name: AWS_SECRET_KEY
    value: xxxxxxxxxx
  - option_name: AWS_ACCESS_KEY_ID
    value: xxxxxxxxxx
  - option_name: APP_ENV
    value: development
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: ProxyServer
    value: nginx
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: GzipCompression
    value: true
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.8.10
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeCommand
    value: npm start

commands:
  test_command:
    command: echo $APPLICATION_ENV > /home/ec2-user/test.txt
    cwd: /home/ec2-user
    ignoreErrors: true
then I do the normal thing:
$ git commit -am "wrote config file"
$ eb init
...
$ eb start
...
would you like to use the most recent commit [y/n]
$ y
Then, after the deploy is complete and in a green state, looking inside the eb-generated .elasticbeanstalk/optionsettings.myapp-env file I found:
[aws:elasticbeanstalk:application:environment]
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=
[aws:elasticbeanstalk:container:nodejs]
GzipCompression=false
NodeCommand=
NodeVersion=0.8.24
ProxyServer=nginx
My environment variable was not set, the NodeCommand directive was not set, and the NodeVersion has been ignored. What gives, EB? How can it ignore certain directives and not others? Any ideas on what I'm doing wrong?
EDIT
according to this post, the JSON holding the environment variables is held here:
/opt/elasticbeanstalk/deploy/configuration/containerconfiguration
which means I can parse this file for the variables, but this is frustrating since it's supposed to be taken care of by the configuration file (otherwise why have one?). There still must be an issue with my configuration file; otherwise EB seems completely broken in this respect...
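For what it's worth, a rough way to eyeball that file on an instance (treat the exact layout as an assumption; the JSON structure is undocumented, so inspect the raw output first):

# search the deploy-time container configuration for a variable name
sudo grep -i "APP_ENV" /opt/elasticbeanstalk/deploy/configuration/containerconfiguration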
I've run into this problem as well. I believe there are two ways to solve it:
1. Run eb update; this should update your environment and hopefully pick up the variables.
2. Create a new environment and deploy your code into it. Once everything is good, point DNS at the new environment and delete the old one.
I also read somewhere (on the AWS forums, I believe) that if you update the environment in the Elastic Beanstalk GUI, those values take precedence over anything you put in the source code.