Laravel - configuration cache on elastic beanstalk - linux

I have a Laravel application running on an Elastic Beanstalk environment.
Since I don't have access to the database, S3 and SQS variables, I wrote an .ebextensions config to copy some environment variables into the .env file during the deploy, using echo in a .sh post-deploy hook like this:
echo -e "AWS_BUCKET=$AWS_BUCKET" >> /var/app/current/.env
The .env file is correctly updated; however, another .sh hook that runs after that one completes contains the code:
php /var/app/current/artisan config:cache
This saves the cached config file as if the .env file had not been updated yet.
Right now the config:cache command has to be run manually after the deploy, but I really want to make the process fully automatic.
Any ideas why that happens?
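For reference, this is roughly what that kind of .ebextensions config looks like (a sketch only: the config and hook file names are made up, and only the AWS_BUCKET variable from the echo above is shown):

files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/99_write_env.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # append selected environment variables to the Laravel .env file
      echo -e "AWS_BUCKET=$AWS_BUCKET" >> /var/app/current/.env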

The EB deploy process is quite interesting; take a look at /var/log/eb-activity.log:
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/app/current
+ '[' -d /var/app/current ']'
+ mv /var/app/current /var/app/current.old
+ mv /var/app/ondeck /var/app/current
+ nohup rm -rf /var/app/current.old
So your config:cache runs in the previous release directory, which is deleted after the deploy.
You should use this post hook in .ebextensions/01-post.config:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      php /var/app/current/artisan config:cache >>/var/log/artisan_test.log
But use it carefully! It takes variables only from .env, not from the EB environment variables!
The right way is to collect all the variables first and then generate the config cache:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      source /opt/elasticbeanstalk/support/envvars && /usr/bin/php /var/www/html/artisan config:cache >>/var/log/artisan_test.log

Related

AWS CodeDeploy not running hooks scripts

I'm learning how to use CodePipeline and have a problem with CodeDeploy for a small test Node app. My goal is to implement CD for a large Express + React app, and I need to use hooks from AppSpec.yml.
For now everything else is working (files are copied, etc.); it just doesn't fire the scripts. I started with BeforeInstall (delete the process from pm2) and ApplicationStart (start the app with pm2) hooks, but now I've switched to using ApplicationStart with a script that removes the process from pm2, just to see whether it works.
My AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/api
permissions:
  - object: /home/ubuntu/api/
    owner: ubuntu
    group: ubuntu
    mode: "777"
# I use appStop.sh just to check if this works:
ApplicationStart:
  - location: scripts/appStop.sh
    runas: ubuntu
    # I tried also running as root, still nothing
    timeout: 60
appStop.sh:
#!/bin/bash
cd /home/ubuntu/api
pm2 delete 0
I tried many things, including running everything as root (though I prefer to use the ubuntu user).
There are no ERRORs in the log file in /var/log/aws/codedeploy-agent.
I can also see all the files and the scripts dir in the revision in /opt/codedeploy-agent/deployment-root/...
When I manually run the appStop script in the home dir it works.
It looks like the CodeDeploy agent is just not running the scripts.
OK, it seems I made it work.
First I cleaned the codedeploy-agent data by removing the /opt/codedeploy-agent/deployment-root/<deployment group id> dir and /opt/codedeploy-agent/deployment-root/deployment-instructions.
I also changed the location; I don't know if this helped, but I had to do it since I decided to go with the root user to make things easier. The app is now in /var/www/api.
I also reinstalled all the JS software (node, pm2, npm) using sudo.
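For reference, the cleanup boils down to commands like these (a sketch: the placeholder must be replaced with your actual deployment group ID, and the paths assume the default agent install location):

# stop the agent so it is not writing while we clean up
sudo service codedeploy-agent stop
# remove the cached data for the deployment group and the agent's deployment instructions
sudo rm -rf /opt/codedeploy-agent/deployment-root/<deployment group id>
sudo rm -rf /opt/codedeploy-agent/deployment-root/deployment-instructions
sudo service codedeploy-agent start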
My working AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/api
permissions:
  - object: /var/www/api/
    mode: 775
    type:
      - file
      - directory
hooks:
  ApplicationStop:
    - location: scripts/appStop.sh
      runas: root
  ApplicationStart:
    - location: scripts/appStart.sh
      runas: root
and working scripts:
appStop.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 delete 0
appStart.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 start server.js

No logs appear on Cloudwatch log group for elastic beanstalk environment

I have an elastic beanstalk environment, which is running a docker container that has a node js API. On the AWS Console, if I select my environment, then go to Configuration/Software I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time from some weeks ago (which I believe corresponds to when the environment was created) and there is no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances, by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated.
I was able to get around this problem.
So CloudWatch (via the awslogs agent) makes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the good AWS support team, I connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the agent to calculate the hash using lines 1 through 20 of the log file. You could change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
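The relevant stanza of beanstalklogs.conf then ends up looking something like this (the log group name will match your own environment):

[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_stream_name={instance_id}
file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log
file_fingerprint_lines=1-20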
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this will not survive a new deployment, the EC2 instance being replaced, or the Auto Scaling group spawning another instance.
To fix this permanently, it is possible to add the log config via the .ebextensions directory, at the root of your application, before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20

commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Changing, of course, EB-ENV-NAME to my environment name on EB.
Hope it can help someone else!
For 64 bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch agent is installed in /opt/aws/amazon-cloudwatch-agent and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log; here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for the file_path it doesn't exist; instead I have a file path that encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This log file is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging you can manually run the following (replacing the Docker ID), and logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent I added an .ebextensions config that fixes the eb-docker-log service so that it re-creates this link. Create a file in your source code in .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1

commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"

App engine ignores symlinks to directories

I'm creating an app which runs on Google's App Engine with the custom flex environment. This app uses several (relative) symlinks which point to other directories in the project. But somehow those symlinks are ignored when I deploy the app.
It seems that the gcloud tool sends the source context (that is, all the files in my project) to the Google container builder before building and deploying the app:
$ gcloud --project=my-project --verbosity=info app deploy
(...)
Beginning deployment of service [default]...
Building and pushing image for service [default]
INFO: Uploading [/tmp/tmpZ4Jha_/src.tgz] to [eu.gcr.io/my-project/appengine/default.20171212t160803:latest]
Started cloud build [some-uid].
If I extract the contents of the .tgz file I can see that all the files and directories in the project are there, except for symlinks pointing to directories (symlinks to files are included, though). So the source context is missing all the symlinks to directories.
Not using symlinks is not an option, so does anybody know how to include symlinks to directories in the source context sent to Google?
Although I don't think it's relevant, here are the contents of the app.yaml:
env: flex
runtime: custom
runtime_config:
  document_root: docroot
manual_scaling:
  instances: 1
resources:
  cpu: 2
  memory_gb: 2
  disk_size_gb: 10
I've worked around this by deploying my Python Cloud Functions from a temp directory, and using tar (on a Mac) to include files inside symlinked directories:
tar hc --exclude='__pycache__' {name} | tar x -C {tmpdirname}
I use a workaround similar to Steve Alexander's, but in a more elaborate way: I have a shell script that creates a temp dir, copies the dependencies into it, sets the environment and runs the gcloud command. It is basically something like this:
#!/bin/bash
. .env.sh
SRC_FILE=$1
SRC_FUNC=$2
TRIGGER_RESOURCE=$3
TRIGGER_EVENT=$4
TMP_DIR=./tmp/deploy
mkdir -p $TMP_DIR
cp -r modules/dep1 $TMP_DIR
cp -r modules/dep2 $TMP_DIR
cp requirements.txt $TMP_DIR
cp $SRC_FILE $TMP_DIR/main.py
gcloud functions deploy $SRC_FUNC \
--source=$TMP_DIR \
--runtime=python39 \
--trigger-resource $TRIGGER_RESOURCE \
--trigger-event $TRIGGER_EVENT \
--env-vars-file=./.env.yml \
--timeout 540s
rm -rf $TMP_DIR
This script is tailored for a Google Storage event, i.e. to deploy a function that should be triggered when a new file is uploaded to a bucket:
./deploy.func.sh functions.py gs_new_file_event project-bucket1 google.storage.object.finalize
So in the example above, gs_new_file_event is a Python function defined in functions.py. The script copies the file with the Python code to the temp dir as main.py, which is what the function deployer expects. This works well for a project with multiple Cloud Functions defined in the same repository that also contains dependencies, where it is not possible to have all of the apps and functions defined in a top-level main.py. The script removes the temp dir after it is done, but it is a good idea to add the path to .gitignore.
Here are a few things you can do to adapt the script to your own needs:
Set up the env files with all the required variables: .env.sh for the build and deployment, .env.yml for the function/app runtime (see the sketch after this list).
Fix the paths and dependencies.
Improve the handling of the command line arguments to make it more flexible and work for all kinds of GCloud triggers.
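As a sketch of what the runtime env file can look like: --env-vars-file expects a flat YAML map of KEY: value pairs, and the variable names below are made up for illustration.

# .env.yml (hypothetical values), passed to the function via --env-vars-file
PROJECT_ID: my-project
OUTPUT_BUCKET: project-bucket2
LOG_LEVEL: info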

AWS Linux CodeDeploy Permission Issues (w. Bitbucket, Tomcat, Shell Script)

I'm trying to deploy files using CodeDeploy to my AWS Beanstalk server with Tomcat installed. Everything is well configured except for an exception that occurs when appspec.yml calls my .sh script and the mvn install command is executed. I've tried every combination of permissions I could imagine (as well as every Stack Overflow answer I've found), but nothing has worked.
Cannot create resource output directory: /opt/codedeploy-agent/deployment-root/f953d455-9712-454b-84b0-2533cf87f79a/d-3UFCDLD0D/deployment-archive/target/classes
I also expected the files section of appspec.yml to be processed before the .sh script gets executed. It should work like this:
appspec.yml moves all files to webapps folder
build.sh gets executed
mvn runs and creates the .war file
build.sh does some cleaning up
appspec.yml (I've tried multiple others):
version: 0.0
os: linux
files:
  - source: /
    destination: /var/lib/tomcat8/webapps
permissions:
  - object: /opt/codedeploy-agent/deployment-root
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
  - object: /var/lib/tomcat8/webapps
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/build.sh
      runas: ec2-user
build.sh
#!/bin/bash
export LANG=en_US.UTF-8
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "Script path: $SCRIPTPATH"
PROJECT_SOURCE_DIR=$SCRIPTPATH/../
cd $PROJECT_SOURCE_DIR
mvn clean install
cd $PROJECT_SOURCE_DIR/target
ls -a
for file in *.war; do
mv $file /usr/share/tomcat8/webapps/ROOT.war
done;
rm -rf $PROJECT_SOURCE_DIR/target
rm -rf $SCRIPTPATH
It's obvious from the exception that Maven tries to create a target folder without having the permissions. So the questions are why it is executing in this folder in the first place, and how to gain proper access.
The way to solve the problem is to add a command that changes to the proper directory before running "mvn clean install", instead of relying on PROJECT_SOURCE_DIR.
Install is the lifecycle event during which the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. The related doc is here: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
The directory where you are getting the error is actually under the deployment archive directory, as shown here: https://github.com/aws/aws-codedeploy-agent/blob/master/lib/instance_agent/plugins/codedeploy/hook_executor.rb#L174
The reason you got the error is that the build.sh script runs in the current directory, which requires root privileges, while scripts/build.sh runs as ec2-user, which caused the permission issue.
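A minimal sketch of what that can look like, assuming you keep the BeforeInstall hook running as ec2-user and simply build in a workspace that user owns (BUILD_DIR is made up for illustration, and the Tomcat webapps path mirrors the original script):

#!/bin/bash
export LANG=en_US.UTF-8
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_SOURCE_DIR="$SCRIPTPATH/.."

# copy the revision into a directory ec2-user can write to, instead of building
# inside /opt/codedeploy-agent/deployment-root, which is owned by root
BUILD_DIR="/home/ec2-user/build-workspace"
rm -rf "$BUILD_DIR"
mkdir -p "$BUILD_DIR"
cp -R "$PROJECT_SOURCE_DIR/." "$BUILD_DIR"

cd "$BUILD_DIR"
mvn clean install

# deploy the resulting .war as Tomcat's ROOT app
# (ec2-user needs write access to the webapps dir for this to succeed)
for file in "$BUILD_DIR"/target/*.war; do
  mv "$file" /usr/share/tomcat8/webapps/ROOT.war
done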

avoid rebuilding node_modules in elastic beanstalk

We have a fairly simple node.js app, but due to the AWS Elastic Beanstalk deployment mechanism it takes about 5 minutes to roll out a new version (via git aws.push), even after a single-file commit.
That is, the commit itself (and the upload) is fast (only one file to push), but then Elastic Beanstalk fetches the whole package from S3, unzips it and runs npm install, which causes node-gyp to compile some modules. Once installation/building completes, Elastic Beanstalk wipes /var/app/current and replaces it with the new app version.
Needless to say, constant node_modules rebuilding is not necessary, and a rebuild that takes 30 seconds on my old MacBook Air takes more than 5 minutes on an EC2 micro instance; not fun.
I see two approaches here:
tweak /opt/containerfiles/ebnode.py and play with the node_modules location to avoid its removal and rebuilding upon deployment.
set up a git repo on the Elastic Beanstalk EC2 instance and basically rewrite the deployment procedure ourselves, so that /var/app/current receives pushes and runs npm install only when necessary (which makes Elastic Beanstalk look like OpsWorks...).
Both options lack grace and are prone to issues when Amazon updates their Elastic Beanstalk hooks and architecture.
Maybe somebody has a better idea how to avoid constant rebuilding of node_modules that are already present in the app dir? Thank you.
Thanks Kirill, it was really helpful!
I'm just sharing my config file for people who are looking for a simple solution to the npm install issue. This file needs to be placed in the .ebextensions folder of the project; it is lighter since it doesn't include installation of the latest Node version, and it is ready to use.
It also dynamically detects the installed Node version, so there is no need to include it in the env.vars file.
.ebextensions/00_deploy_npm.config
files:
  "/opt/elasticbeanstalk/env.vars":
    mode: "000775"
    owner: root
    group: users
    content: |
      export NPM_CONFIG_LOGLEVEL=error
      export NODE_PATH=`ls -td /opt/elasticbeanstalk/node-install/node-* | head -1`/bin
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh":
    mode: "000775"
    owner: root
    group: users
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/env.vars
      function error_exit
      {
        eventHelper.py --msg "$1" --severity ERROR
        exit $2
      }
      #install not-installed yet app node_modules
      if [ ! -d "/var/node_modules" ]; then
        mkdir /var/node_modules ;
      fi
      if [ -d /tmp/deployment/application ]; then
        ln -s /var/node_modules /tmp/deployment/application/
      fi
      OUT=$([ -d "/tmp/deployment/application" ] && cd /tmp/deployment/application && $NODE_PATH/npm install 2>&1) || error_exit "Failed to run npm install. $OUT" $?
      echo $OUT
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh":
    mode: "000666"
    owner: root
    group: users
    content: |
      #no need to run npm install during configdeploy
25/01/13 NOTE: updated scripts to run npm -g version upgrade (only once, on initial instance roll out or rebuild) and to avoid NPM operations during EB configuration change (when app dir is not present, to avoid error and to speed up configuration updates).
Okay, Elastic Beanstalk behaves dodgily with recent node.js builds (including the presumably supported v0.10.10), so I decided to go ahead and tweak EB to do the following:
install ANY node.js version as per your env.config (including the most recent ones that are not yet supported by AWS EB)
avoid rebuilding existing node modules, including the in-app node_modules dir
install node.js globally (and any desired module as well).
Basically, I use env.config to replace deploy&config hooks with customized ones (see below). Also, in a default EB container setup some env variables are missing ($HOME for example) and node-gyp sometimes fails during rebuild because of it (took me 2 hours of googling and reinstalling libxmljs to resolve this).
Below are the files to be included along with your build. You can inject them via env.config as inline code or via a source: URL (as in this example).
env.vars (desired node version & arch are included here and in env.config, see below)
export HOME=/root
export NPM_CONFIG_LOGLEVEL=error
export NODE_VER=0.10.24
export ARCH=x86
export PATH="$PATH:/opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/:/root/.npm"
40install_node.sh (fetch and ungzip desired node.js version, make global symlinks, update global npm version)
#!/bin/bash
#source env variables including node version
. /opt/elasticbeanstalk/env.vars
function error_exit
{
  eventHelper.py --msg "$1" --severity ERROR
  exit $2
}
#UNCOMMENT to update npm, otherwise will be updated on instance init or rebuild
#rm -f /opt/elasticbeanstalk/node-install/npm_updated
#download and extract desired node.js version
OUT=$( [ ! -d "/opt/elasticbeanstalk/node-install" ] && mkdir /opt/elasticbeanstalk/node-install ; cd /opt/elasticbeanstalk/node-install/ && wget -nc http://nodejs.org/dist/v$NODE_VER/node-v$NODE_VER-linux-$ARCH.tar.gz && tar --skip-old-files -xzpf node-v$NODE_VER-linux-$ARCH.tar.gz) || error_exit "Failed to UPDATE node version. $OUT" $?
echo $OUT
#make sure node binaries can be found globally
if [ ! -L /usr/bin/node ]; then
  ln -s /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/node /usr/bin/node
fi
if [ ! -L /usr/bin/npm ]; then
  ln -s /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm /usr/bin/npm
fi
if [ ! -f "/opt/elasticbeanstalk/node-install/npm_updated" ]; then
  cd /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/ && /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm update npm -g
  touch /opt/elasticbeanstalk/node-install/npm_updated
  echo "YAY! Updated global NPM version to `npm -v`"
else
  echo "Skipping NPM -g version update. To update, please uncomment 40install_node.sh:12"
fi
50npm.sh (creates /var/node_modules, symlinks it to app dir and runs npm install. You can install any module globally from here, they will land in /root/.npm)
#!/bin/bash
. /opt/elasticbeanstalk/env.vars
function error_exit
{
  eventHelper.py --msg "$1" --severity ERROR
  exit $2
}
#install not-installed yet app node_modules
if [ ! -d "/var/node_modules" ]; then
  mkdir /var/node_modules ;
fi
if [ -d /tmp/deployment/application ]; then
  ln -s /var/node_modules /tmp/deployment/application/
fi
OUT=$([ -d "/tmp/deployment/application" ] && cd /tmp/deployment/application && /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm install 2>&1) || error_exit "Failed to run npm install. $OUT" $?
echo $OUT
env.config (note node version here too, and to be safe, put desired node version in env config in AWS console as well. I'm not certain which of these settings will take precedence.)
packages:
  yum:
    git: []
    gcc: []
    make: []
    openssl-devel: []

option_settings:
  - option_name: NODE_ENV
    value: production
  - option_name: RDS_HOSTNAME
    value: fill_me_in
  - option_name: RDS_PASSWORD
    value: fill_me_in
  - option_name: RDS_USERNAME
    value: fill_me_in
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.24

files:
  "/opt/elasticbeanstalk/env.vars":
    mode: "000775"
    owner: root
    group: users
    source: https://dl.dropbox.com/....
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/40install_node.sh":
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh":
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh":
    mode: "000666"
    owner: root
    group: users
    content: |
      #no need to run npm install during configdeploy
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/40install_node.sh":
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
There you have it: on a t1.micro instance, deployment now takes 20-30 seconds instead of 10-15 minutes! If you deploy 10 times a day, this tweak will save you 3 (three) weeks in a year.
Hope it helps and special thanks to AWS EB staff for my lost weekend :)
There's an npm package that overrides the default EB behaviour for the npm install command by truncating the following files:
/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh
/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh
https://www.npmjs.com/package/eb-disable-npm
It might be better than just copying a script from SO, since this package is maintained and will probably be updated when the EB behaviour changes.
I've found a quick solution to this. I looked through the build scripts that Amazon is using, and they only run npm install if package.json is present. So after your initial deploy you can rename it to _package.json and npm install won't run anymore! It's not the best solution, but it's a quick fix if you need one!
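In practice the quick fix is just a rename in the repository before the next deploy (a sketch: rename it back whenever you actually need the dependencies reinstalled):

git mv package.json _package.json
git commit -m "skip npm install on Elastic Beanstalk"
git aws.push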
I had 10+ minute builds when I would deploy. The solution was much simpler than what others have come up with... Just check node_modules into git! See http://www.futurealoof.com/posts/nodemodules-in-git.html for the reasoning.
