AWS CodeDeploy not running hooks scripts - node.js

I'm learning how to use CodePipeline and have a problem with CodeDeploy for a small test node app. My goal is to implement CD for a large Express + React app, and I need to use hooks from AppSpec.yml.
For now everything else is working (files are copied, etc.); it just doesn't fire the scripts. I started with BeforeInstall (delete the process from pm2) and ApplicationStart (start the app with pm2) hooks, but for now I've switched to using only ApplicationStart with a script that removes the process from pm2, just to see if it works at all.
My AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/api
permissions:
  - object: /home/ubuntu/api/
    owner: ubuntu
    group: ubuntu
    mode: "777"
# I use appStop.sh just to check if this works:
ApplicationStart:
  - location: scripts/appStop.sh
    runas: ubuntu
    # I tried also running as root, still nothing
    timeout: 60
appStop.sh:
#!/bin/bash
cd /home/ubuntu/api
pm2 delete 0
I tried many things, including running everything as root (though I prefer to use the ubuntu user).
There are no ERRORs in the log file in /var/log/aws/codedeploy-agent.
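For reference, output from hook scripts (when they do run) lands in the deployment log rather than the agent log; assuming the default agent install paths, it can be checked with:
sudo tail -n 100 /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log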
I can also see all the files and the scripts dir in the revision in /opt/codedeploy-agent/deployment-root/...
When I manually run the appStop script in the home dir, it works.
It looks like the CodeDeploy agent is just not running the script.

OK, it seems I made it work.
First I cleaned the codedeploy-agent data by removing the /opt/codedeploy-agent/deployment-root/<deployment group id> dir and /opt/codedeploy-agent/deployment-root/deployment-instructions.
I also changed the location. I don't know if this helped, but I had to do it since I decided to go with the root user to make things easier. The app is now in /var/www/api.
I also reinstalled all the JS software (node, pm2, npm) using sudo.
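Roughly, the cleanup amounted to this (a sketch; keep the <deployment group id> placeholder — on the instance it's a GUID-named directory — and the paths assume the default agent install):
# Stop the agent while clearing its cached deployment state.
sudo service codedeploy-agent stop
sudo rm -rf /opt/codedeploy-agent/deployment-root/<deployment group id>
sudo rm -f /opt/codedeploy-agent/deployment-root/deployment-instructions/*
sudo service codedeploy-agent start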
My working AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/api
permissions:
  - object: /var/www/api/
    mode: 775
    type:
      - file
      - directory
hooks:
  ApplicationStop:
    - location: scripts/appStop.sh
      runas: root
  ApplicationStart:
    - location: scripts/appStart.sh
      runas: root
and working scripts:
appStop.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 delete 0
appStart.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 start server.js
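A note on these scripts: pm2 delete 0 targets whatever process happens to hold id 0, which is fragile across deploys. A more robust variant deletes and starts by name instead (a sketch using pm2's --name option; "api" is a hypothetical process name):
appStop.sh:
#!/bin/bash
cd /var/www/api
# Delete by name rather than numeric id; ignore the failure on the
# first deploy, when no process with this name exists yet.
sudo pm2 delete api || true
appStart.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 start server.js --name api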

Related

Elastic Beanstalk: log task customization on Amazon Linux 2 platforms

I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:
Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0
(My question actually has nothing to do with Docker as such; I suspect the problem exists for any of the new Elastic Beanstalk platforms.)
Previously I could follow Amazon's recipe: put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer true.
Has this changed? I can't find it documented. Anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?
Minimal working example
I've created a minimal working example and deployed on both platforms.
Dockerfile:
FROM ubuntu
COPY daemon-run.sh /daemon-run.sh
RUN chmod +x /daemon-run.sh
EXPOSE 80
ENTRYPOINT ["/daemon-run.sh"]
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Logging": "/var/mydaemon"
}
daemon-run.sh:
#!/bin/bash
echo "Starting daemon" # output to stdout
mkdir -p /var/mydaemon/deeperlogs
while true; do
  echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
  sleep 5
done
.ebextensions/mydaemon-logfiles.config:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log
If I trigger the "Full Logs" action on the old platform, I get a ZIP with my deeperlogs included inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.
Investigation
If you look on the disk you'll see that the new Elastic Beanstalk doesn't have a /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
On Amazon Linux 2 the folder is:
/opt/elasticbeanstalk/config/private/logtasks/bundle
The .ebextensions/mydaemon-logfiles.config should be:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/mydaemon-logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log

container_commands:
  append_deeperlogs_to_applogs:
    command: echo -e "\n/var/log/eb-docker/containers/eb-current-app/deeperlogs/*" >> /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
The mydaemon-logfiles.config also adds deeperlogs to the applogs file. Without this, deeperlogs will not be included in the downloaded log ZIP bundle. This is interesting, because the folder will be in the correct location, i.e., /var/log/eb-docker/containers/eb-current-app/deeperlogs/, but without being explicitly listed in applogs it will be skipped when the ZIP bundle is generated.
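To verify the workaround took effect on an instance, the applogs file can be inspected directly (a quick check; same paths as in the config above):
cat /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
# The appended pattern should be listed:
# /var/log/eb-docker/containers/eb-current-app/deeperlogs/*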
I tested it with a single docker environment (3.0.1).
The full log bundle successfully contained deeperlogs with the correct log data.
Hope that this will help. I haven't found any references for it; the AWS documentation does not cover this, as it is mostly based on Amazon Linux 1, not Amazon Linux 2.
Amazon has fixed this problem in the version of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020.
Log task customization on AL2-based platforms now works the way it has always worked (i.e., on the previous generation AL2018 platforms), and you can therefore follow the official documentation to make this happen.
Successfully tested with platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x", then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
As of 2021/11/05, I had tried the accepted answer and various other examples, including the latest official documentation on using the .ebextensions folder with *.config files, without success.
Most likely I was doing something wrong, but here's what worked for me.
The version I'm using: Docker running on 64bit Amazon Linux 2/3.4.8.
Simply add a volume to your docker-compose.yml file to share your application logs to the Elastic Beanstalk log directory.
Example docker-compose.yml:
version: "3.9"
services:
  app:
    build: .
    ports:
      - "80:80"
    user: root
    volumes:
      - ./:/var/www/html
      # "${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>"
      - "${EB_LOG_BASE_DIR}/app:/var/www/html/application/logs" # ADD THIS LINE
    env_file:
      - .env
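If I read the docs right, EB_LOG_BASE_DIR resolves to /var/log/eb-docker/containers on this platform, so the shared logs can be sanity-checked on the host (a sketch; "app" is the service name from the compose file above):
# Files written inside the container should appear here on the host.
sudo ls /var/log/eb-docker/containers/app/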
For more info, here's the documentation I followed.
Hopefully, this helps future readers like myself 👍

how to auto restart node app after aws codeDeploy

I have set up Travis with AWS CodeDeploy for a node app. The latest code can now be deployed to EC2 correctly, but I need to manually restart the node app to make the changes take effect.
How do I auto restart the node app after CodeDeploy? I guess I can do it by setting AfterInstall in appspec.yml, but many tutorials/walkthroughs don't mention this, so I wonder: is this the only/best way to restart the node app?
You should use the ApplicationStart hook in appspec.yml. From the AWS docs:
AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
You can use pm2 or forever to restart your node app after CodeDeploy. Your appspec.yml should be as below:
version: 0.0
os: linux
files:
  - source: /deployment.zip
    destination: /home/ubuntu/PorjDir/
hooks:
  BeforeInstall:
    - location: stop_server.sh
      timeout: 300
      runas: ubuntu
  AfterInstall:
    - location: start_server.sh
      timeout: 300
      runas: ubuntu
Note: You need to create stop_server.sh and start_server.sh in your project and export them as part of the artifact so the CodeDeploy agent can use them.
start_server.sh should contain the following:
sudo /usr/bin/pm2 restart appname
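stop_server.sh is not shown above; a minimal sketch that mirrors start_server.sh (assuming the same pm2 process name, and tolerating the very first deploy when nothing is running yet):
#!/bin/bash
# Stop the app if it is running; "|| true" keeps the hook from failing
# when no process named appname exists yet.
sudo /usr/bin/pm2 stop appname || true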
Hope this will help you!!

npm start fails to run with AWS Code Deploy on AWS Windows instances

I am trying to deploy a Node.js application on Windows EC2 instances. Deployment finishes successfully, but the node server is not started automatically on those instances. I have to log in to each instance and run node app.js to start the node server. I tried to include this command in the appspec.yml file, but then I got the error below:
LifecycleEvent - ApplicationStart
Script - node_start.bat
[stdout]
[stdout]C:\Windows\system32>cd C:/host/
[stdout]
[stdout]C:\host>npm start
[stderr]'npm' is not recognized as an internal or external command,
[stderr]operable program or batch file.
My appspec.yml file is as follows:
version: 0.0
os: windows
files:
  - source: app.js
    destination: c:\host
  - source: package.json
    destination: c:\host
  - source: \node_modules
    destination: c:\host\node_modules
  - source: node_start.bat
    destination: c:\host
  - source: before_install.bat
    destination: c:\host
hooks:
  AfterInstall:
    - location: before_install.bat
      timeout: 300
  ApplicationStart:
    - location: node_start.bat
      timeout: 300
Node is already installed on those two instances and the Path variable is also properly set. Logging in manually to those servers and running npm start works perfectly fine. It fails only through AWS CodeDeploy.
I also want to introduce an AWS Lambda function after this step to do a health check.
Any help to resolve this issue would be greatly appreciated.
The issue isn't really related to CodeDeploy. CodeDeploy simply runs the script that you give it; in your case, node_start.bat. Perhaps this question answers the issue that you're really having. CodeDeploy has no knowledge of your application unless you tell it.
You will likely need to edit either node_start.bat or your environment so that npm is a valid command.
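For example, node_start.bat could invoke npm via its full path, since the account running the CodeDeploy agent service may not share your interactive PATH (the install path below is an assumption; adjust it to your instances):
cd C:\host
REM Use the full path to npm.cmd so this works without PATH being set
REM for the agent's service account (assumed default Node.js location).
call "C:\Program Files\nodejs\npm.cmd" start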
Here are a couple of suggestions to help your case:
Test your appspec and scripts
You can test your appspec.yml and related scripts using the CodeDeploy local CLI, which comes with the agent.
Validate that your service is running
Obviously, it's not great if your deployment succeeds but your application is not actually running. You can use the ValidateService lifecycle hook to confirm that your application is actually running, or to perform any other validation. You can include a script that checks that the process is running, confirms that no errors are getting logged, runs tests, or whatever else you might want.
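Wiring this in only takes a hook entry in the appspec (ValidateService is a real lifecycle event; validate_service.bat is a hypothetical script you would write):
hooks:
  ValidateService:
    - location: validate_service.bat
      timeout: 300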

Laravel - configuration cache on elastic beanstalk

I have a Laravel application running on an Elastic Beanstalk environment.
Not having access to the database, S3, and SQS variables, I wrote a config in .ebextensions to copy some environment variables into the .env file during the deploy, using echo in a post-deploy .sh hook file, like this:
echo -e "AWS_BUCKET=$AWS_BUCKET" >> /var/app/current/.env
The .env file is correctly updated; however, another .sh hook that runs after that one has completed contains the code:
php /var/app/current/artisan config:cache
This saves the cached config file as though the .env file had not been updated yet.
Right now the config:cache command needs to be run manually after the deploy, but I really want to make the process fully automatic.
Any ideas why this happens?
The EB deploy process is very interesting; take a look at /var/log/eb-activity.log:
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/app/current
+ '[' -d /var/app/current ']'
+ mv /var/app/current /var/app/current.old
+ mv /var/app/ondeck /var/app/current
+ nohup rm -rf /var/app/current.old
So your config:cache runs in the previous environment, which is deleted after the deploy.
You should use this post-hook in .ebextensions/01-post.config:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      php /var/app/current/artisan config:cache >>/var/log/artisan_test.log
But use it carefully! It takes variables only from .env, not from EB VARIABLES!
The right way is to collect all variables into .env and then generate the config cache:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      source /opt/elasticbeanstalk/support/envvars && /usr/bin/php /var/www/html/artisan config:cache >>/var/log/artisan_test.log
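To confirm the cache was rebuilt against the refreshed variables, a quick check after the deploy (a sketch; bootstrap/cache/config.php is Laravel's default config cache location):
# The cached config's timestamp should match the latest deploy.
ls -l /var/www/html/bootstrap/cache/config.php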

AWS Linux CodeDeploy Permission Issues (w. Bitbucket, Tomcat, Shell Script)

I'm trying to deploy files using CodeDeploy to my AWS Beanstalk server with Tomcat installed. Everything is well configured except for an exception which occurs when appspec.yml calls my .sh script and the mvn install command is executed. I've tried every combination of permissions I could imagine (as well as every StackOverflow answer I've found), but nothing has worked.
Cannot create resource output directory: /opt/codedeploy-agent/deployment-root/f953d455-9712-454b-84b0-2533cf87f79a/d-3UFCDLD0D/deployment-archive/target/classes
I also expected the files section of appspec.yml to get executed before the .sh script gets executed. It should have been working like this:
appspec.yml moves all files to webapps folder
build.sh gets executed
mvn runs and creates the .war file
build.sh does some cleaning up
appspec.yml (I've tried multiple others):
version: 0.0
os: linux
files:
  - source: /
    destination: /var/lib/tomcat8/webapps
permissions:
  - object: /opt/codedeploy-agent/deployment-root
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
  - object: /var/lib/tomcat8/webapps
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/build.sh
      runas: ec2-user
build.sh
export LANG=en_US.UTF-8
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "Script path: $SCRIPTPATH"
PROJECT_SOURCE_DIR=$SCRIPTPATH/../
cd $PROJECT_SOURCE_DIR
mvn clean install
cd $PROJECT_SOURCE_DIR/target
ls -a
for file in *.war; do
  mv $file /usr/share/tomcat8/webapps/ROOT.war
done;
rm -rf $PROJECT_SOURCE_DIR/target
rm -rf $SCRIPTPATH
It's obvious from the exception that Maven tries to create the target folder without having the permissions. So the questions are why it's trying to execute in this folder in the first place, and then how to gain proper access.
The way to solve the problem is to add a command that changes to a proper directory before running "mvn clean install", instead of building in PROJECT_SOURCE_DIR.
Install is the lifecycle event during which the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. The related doc is here: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
The directory where you are getting the error is actually under the deployment-archive directory, as shown here: https://github.com/aws/aws-codedeploy-agent/blob/master/lib/instance_agent/plugins/codedeploy/hook_executor.rb#L174
The reason you got the error is that build.sh runs in the current directory, which requires root privileges to write to, while scripts/build.sh runs with only ec2-user privileges, which caused the permission issue.
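One way to apply that fix: copy the sources somewhere the runas user owns and build there. A sketch of build.sh along those lines (BUILD_DIR is a hypothetical location; adjust paths to your layout):
#!/bin/bash
set -e
export LANG=en_US.UTF-8

SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Build in a directory ec2-user owns instead of the root-owned deployment archive.
BUILD_DIR=/home/ec2-user/build

rm -rf "$BUILD_DIR"
mkdir -p "$BUILD_DIR"
cp -R "$SCRIPTPATH/../." "$BUILD_DIR"

cd "$BUILD_DIR"
mvn clean install
mv target/*.war /usr/share/tomcat8/webapps/ROOT.war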
