AWS Linux CodeDeploy Permission Issues (w. Bitbucket, Tomcat, Shell Script)

I'm trying to deploy files using CodeDeploy to my AWS Beanstalk server with Tomcat installed. Everything is well configured except for an exception that occurs when appspec.yml calls my .sh script and the mvn install command is executed. I've tried every combination of permissions I could think of (as well as every StackOverflow answer I've found), but nothing has worked.
Cannot create resource output directory: /opt/codedeploy-agent/deployment-root/f953d455-9712-454b-84b0-2533cf87f79a/d-3UFCDLD0D/deployment-archive/target/classes
I also expected the files section of appspec.yml to be executed before the .sh script runs. It should work like this:
appspec.yml moves all files to webapps folder
build.sh gets executed
mvn runs and creates the .war file
build.sh does some cleaning up
appspec.yml (I've tried multiple other variations)
version: 0.0
os: linux
files:
  - source: /
    destination: /var/lib/tomcat8/webapps
permissions:
  - object: /opt/codedeploy-agent/deployment-root
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
  - object: /var/lib/tomcat8/webapps
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/build.sh
      runas: ec2-user
build.sh
#!/bin/bash
export LANG=en_US.UTF-8

# Resolve the directory this script lives in
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "Script path: $SCRIPTPATH"
PROJECT_SOURCE_DIR="$SCRIPTPATH/.."

cd "$PROJECT_SOURCE_DIR"
mvn clean install

# Deploy the built .war as Tomcat's ROOT application
cd "$PROJECT_SOURCE_DIR/target"
ls -a
for file in *.war; do
    mv "$file" /usr/share/tomcat8/webapps/ROOT.war
done

# Clean up the build output and the scripts directory
rm -rf "$PROJECT_SOURCE_DIR/target"
rm -rf "$SCRIPTPATH"
It's obvious from the exception that Maven tries to create the target folder without having the permissions to do so. So the questions are: why is it executing in this folder in the first place, and how can I gain proper access?

The way to solve the problem is to add a command that changes to a proper writable directory before running "mvn clean install", instead of building in PROJECT_SOURCE_DIR.
Install is the lifecycle event during which the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. The related doc is here: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
The directory you are getting the error for is actually under the deployment archive directory, as shown here: https://github.com/aws/aws-codedeploy-agent/blob/master/lib/instance_agent/plugins/codedeploy/hook_executor.rb#L174
The reason you got the error is that the build.sh script runs in the current directory (the deployment archive), which requires root privileges to write to, while scripts/build.sh only has ec2-user privileges; that caused the permission issue.
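A minimal sketch of that fix, assuming the build is moved to a scratch directory that ec2-user can write to (the mktemp location and the copy step are illustrative, not from the original answer):

#!/bin/bash
# Build outside the agent-owned deployment-archive directory.
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
BUILD_DIR=$(mktemp -d /tmp/codedeploy-build.XXXXXX)   # hypothetical writable scratch dir
cp -R "$SCRIPTPATH/.." "$BUILD_DIR/project"           # copy sources to where ec2-user can write
cd "$BUILD_DIR/project"
mvn clean install                                     # target/ is now created in a writable location
mv target/*.war /usr/share/tomcat8/webapps/ROOT.war
rm -rf "$BUILD_DIR"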

Related

gitlab container scanner can't install aws-cli

In the gitlab CI docs (https://docs.gitlab.com/ee/user/application_security/container_scanning/), it states you can scan ECR using the following:
container_scanning:
  before_script:
    - ruby -r open-uri -e "IO.copy_stream(URI.open('https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip'), 'awscliv2.zip')"
    - unzip awscliv2.zip
    - ./aws/install
    - aws --version
    - export AWS_ECR_PASSWORD=$(aws ecr get-login-password --region region)
  variables:
    DOCKER_IMAGE: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<tag>
    DOCKER_USER: AWS
    DOCKER_PASSWORD: "$AWS_ECR_PASSWORD"

include:
  - template: Security/Container-Scanning.gitlab-ci.yml
When I add the "before_script", I get the following:
inflating: aws/dist/cryptography-3.3.2-py3.9.egg-info/LICENSE
inflating: aws/dist/cryptography-3.3.2-py3.9.egg-info/WHEEL
creating: aws/dist/cryptography/hazmat/
creating: aws/dist/cryptography/hazmat/bindings/
inflating: aws/dist/cryptography/hazmat/bindings/_openssl.abi3.so
$ ./aws/install
mkdir: cannot create directory ‘/usr/local/aws-cli’: Permission denied
Uploading artifacts for failed job
00:00
Uploading artifacts...
WARNING: gl-container-scanning-report.json: no matching files
It seems it doesn't have the permissions. Is there another way to get it to work? Thanks!
The container_scanning job (by default) uses the docker image registry.gitlab.com/security-products/container-scanning:4
You can also see this image specifies its user as gitlab, which implies to me that the user in the image, unlike most images you might traditionally use, does not have root privileges by default.
This user will, therefore, not have permission to write to /usr/local/
You can probably work around this by using sudo
- sudo ./aws/install
(or as you stated, you can direct the installation to another location that doesn't require elevated permissions to write to by using -i and -b flags for the installer).
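For example, a sketch of that second option in the before_script, pointing the installer's -i (install dir) and -b (bin dir) flags at user-writable paths (the exact directories are illustrative):

- ./aws/install -i $HOME/aws-cli -b $HOME/bin
- export PATH=$HOME/bin:$PATH
- aws --version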

AWS CodeDeploy not running hooks scripts

I'm learning how to use CodePipeline and have a problem with CodeDeploy for a small test node app. My goal is to implement CD for a large express + react app, and I need to use hooks from AppSpec.yml.
For now everything else is working, files are copied, etc.; it just doesn't fire the script. I started with BeforeInstall (delete the process from pm2) and ApplicationStart (start the app with pm2) hooks, but now I've switched to using ApplicationStart with a script that removes the process from pm2, just to see if it works.
My AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/api
permissions:
  - object: /home/ubuntu/api/
    owner: ubuntu
    group: ubuntu
    mode: "777"
hooks:
  # I use appStop.sh just to check if this works:
  ApplicationStart:
    - location: scripts/appStop.sh
      runas: ubuntu
      # I tried also running as root, still nothing
      timeout: 60
appStop.sh:
#!/bin/bash
cd /home/ubuntu/api
pm2 delete 0
I tried many things, including running everything as root (though I prefer to use the ubuntu user).
There are no ERRORs in the log file in /var/log/aws/codedeploy-agent.
I can also see all the files and the scripts dir in the revision in /opt/codedeploy-agent/deployment-root/...
When I manually run the appStop script in the home dir, it works.
It looks like the CodeDeploy agent is just not running the script.
OK, it seems I made it work.
First I cleaned the codedeploy-agent data by removing the /opt/deployment-root/<deployment group id> dir and /opt/deployment-root/deployment-instructions.
I also changed the location; I don't know if this helped, but I had to do it since I decided to go with the root user to make things easier. The app is now in /var/www/api.
I also reinstalled all the JS software (node, pm2, npm) using sudo.
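A sketch of that cleanup, using the paths exactly as given above (restarting the agent afterwards is my assumption, not part of the original answer):

sudo rm -rf '/opt/deployment-root/<deployment group id>'   # substitute your actual deployment group id
sudo rm -rf /opt/deployment-root/deployment-instructions
sudo service codedeploy-agent restart   # assumed: restart so the agent starts from a clean state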
My working AppSpec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/api
permissions:
  - object: /var/www/api/
    mode: 775
    type:
      - file
      - directory
hooks:
  ApplicationStop:
    - location: scripts/appStop.sh
      runas: root
  ApplicationStart:
    - location: scripts/appStart.sh
      runas: root
and working scripts:
appStop.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 delete 0
appStart.sh:
#!/bin/bash
cd /var/www/api
sudo pm2 start server.js

Laravel - configuration cache on elastic beanstalk

I have a Laravel application running on an Elastic Beanstalk environment.
Not having access to the database, S3 and SQS variables, I wrote a config in .ebextensions to copy some environment variables into the .env file during the deploy, using echo in a post-deploy .sh hook file like this:
echo -e "AWS_BUCKET=$AWS_BUCKET" >> /var/app/current/.env
The .env file is correctly updated; however, another .sh hook that runs after that one contains the code:
php /var/app/current/artisan config:cache
And this saves the cached config file as if the .env file had not been updated yet.
Right now the config:cache command needs to be run manually after the deploy, but I really want to make the whole process automatic.
Any ideas why that happens?
The EB deploy process is very interesting; take a look at /var/log/eb-activity.log:
++ /opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir
+ EB_APP_DEPLOY_DIR=/var/app/current
+ '[' -d /var/app/current ']'
+ mv /var/app/current /var/app/current.old
+ mv /var/app/ondeck /var/app/current
+ nohup rm -rf /var/app/current.old
So, your config:cache is running in the previous app directory, which is deleted after the deploy.
You should use this post-hook in .ebextensions/01-post.config:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      php /var/app/current/artisan config:cache >>/var/log/artisan_test.log
But use it carefully! It takes variables only from .env, not from the EB environment variables!
The right way is to collect all the variables first and then generate the config cache; sourcing /opt/elasticbeanstalk/support/envvars exports the EB variables into the shell before artisan caches the config:
files:
  /opt/elasticbeanstalk/hooks/appdeploy/post/01_create_cache.sh:
    mode: "000755"
    owner: root
    group: root
    content: |
      source /opt/elasticbeanstalk/support/envvars && /usr/bin/php /var/www/html/artisan config:cache >>/var/log/artisan_test.log

App engine ignores symlinks to directories

I'm creating an app which runs on Google's App Engine with the custom flex environment. This app uses several (relative) symlinks which point to other directories in the project. But somehow those symlinks are ignored when I deploy the app.
It seems that the gcloud tool sends the source context (that is, all the files in my project) to the Google container builder before building and deploying the app:
$ gcloud --project=my-project --verbosity=info app deploy
(...)
Beginning deployment of service [default]...
Building and pushing image for service [default]
INFO: Uploading [/tmp/tmpZ4Jha_/src.tgz] to [eu.gcr.io/my-project/appengine/default.20171212t160803:latest]
Started cloud build [some-uid].
If I extract the contents of the .tgz file I can see that all the files and directories in the project are there, except for symlinks pointing to directories (symlinks to files are included, though). So the source context is missing all the symlinks to directories.
Not using symlinks is not an option, so does anybody know how to include symlinks to directories in the source context sent to Google?
Although I don't think it's relevant, here are the contents of the app.yaml:
env: flex
runtime: custom
runtime_config:
  document_root: docroot
manual_scaling:
  instances: 1
resources:
  cpu: 2
  memory_gb: 2
  disk_size_gb: 10
I've worked around this by deploying my python cloud functions from a temp directory, and using tar (on a Mac) to include files inside symlinked directories:
tar hc --exclude='__pycache__' {name} | tar x -C {tmpdirname}
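Spelled out a bit more (a sketch; the function directory name and deploy flags are illustrative, and tar's h flag is what dereferences the symlinks while creating the archive):

# stage the source tree with symlinks resolved, then deploy from the staging dir
tmpdir=$(mktemp -d)
tar hc --exclude='__pycache__' my_function | tar x -C "$tmpdir"   # 'my_function' is a hypothetical source dir
gcloud functions deploy my_function --runtime=python39 --source="$tmpdir/my_function" --trigger-http
rm -rf "$tmpdir"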
I use a workaround solution similar to Steve Alexander's, but in a more elaborate way: I have a shell script that creates a temp dir, copies the dependencies into in, sets the environment and runs the gcloud command. It is basically something like this:
. .env.sh
SRC_FILE=$1
SRC_FUNC=$2
TRIGGER_RESOURCE=$3
TRIGGER_EVENT=$4
TMP_DIR=./tmp/deploy
mkdir -p $TMP_DIR
cp -r modules/dep1 $TMP_DIR
cp -r modules/dep2 $TMP_DIR
cp requirements.txt $TMP_DIR
cp $SRC_FILE $TMP_DIR/main.py
gcloud functions deploy $SRC_FUNC \
--source=$TMP_DIR \
--runtime=python39 \
--trigger-resource $TRIGGER_RESOURCE \
--trigger-event $TRIGGER_EVENT \
--env-vars-file=./.env.yml \
--timeout 540s
rm -rf $TMP_DIR
This script is tailored for a Google Storage event, ie. to deploy a function that should be triggered when a new file is uploaded to a bucket:
./deploy.func.sh functions.py gs_new_file_event project-bucket1 google.storage.object.finalize
So in the example above, gs_new_file_event is a Python function defined in functions.py. The script copies the file with the Python code to the temp dir as main.py, which is what the function deployer expects. This works well for a project with multiple cloud functions defined in the same repository that also contains dependencies, where it is not possible to have all of the apps and functions defined in a top-level main.py. The script removes the temp dir after it is done, but it is a good idea to add the path to .gitignore.
Here are a few things you can do to adapt the script to your own needs:
Set up the env files with all the required variables: .env.sh for the build and deployment, .env.yml for the function/app runtime (see the sketch after this list).
Fix the paths and dependencies.
Improve the handling of the command line arguments to make it more flexible and work for all kinds of GCloud triggers.
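For illustration, the two env files might look like this (all names and values here are hypothetical):

# .env.sh - sourced by the deploy script at build/deploy time
export GCLOUD_PROJECT=my-project

# .env.yml - passed via --env-vars-file, available to the function at runtime
BUCKET_NAME: project-bucket1
LOG_LEVEL: info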

While creating Mean.io Project I get the error

C:\Users\Kashif\Desktop\Mean-io>mean init myAPp
? What would you name your mean app? myAPp
On windows platform - Please check permissions independently
All permissions should be run with the local users permissions
Cloning branch: master into destination folder: myAPp
git clone --depth 1 -bmaster https://github.com/linnovate/mean.git "myAPp"
FIND: Parameter format not correct
There are 2 files in your ~/.npm owned by root
Please change the permissions by running - chown -R `whoami` ~/.npm
C:\Users\Kashif\AppData\Roaming\npm\node_modules\mean-cli\lib\utils.js:67
throw('ROOT PERMISSIONS IN NPM');
^
ROOT PERMISSIONS IN NPM
I also tried with ROOT access, but the error is the same.
I had the same problem today, and I found out that utils.js uses the following command to find files owned by root:
var findCmd = 'find ' + homeDir +'/.npm ' + '-user root';
which obviously doesn't work on Windows (the find that runs there is Windows's own FIND.EXE, hence the "FIND: Parameter format not correct" error).
A workaround is to make sure you have the right permissions on the directory yourself and then comment out the shell.exec() call in utils.js.
Cheers.
