GitLab container scanner can't install aws-cli

The GitLab CI docs (https://docs.gitlab.com/ee/user/application_security/container_scanning/) state that you can scan ECR using the following:
container_scanning:
  before_script:
    - ruby -r open-uri -e "IO.copy_stream(URI.open('https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip'), 'awscliv2.zip')"
    - unzip awscliv2.zip
    - ./aws/install
    - aws --version
    - export AWS_ECR_PASSWORD=$(aws ecr get-login-password --region region)

include:
  - template: Security/Container-Scanning.gitlab-ci.yml

variables:
  DOCKER_IMAGE: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<tag>
  DOCKER_USER: AWS
  DOCKER_PASSWORD: "$AWS_ECR_PASSWORD"
When I add the before_script, I get the following:
inflating: aws/dist/cryptography-3.3.2-py3.9.egg-info/LICENSE
inflating: aws/dist/cryptography-3.3.2-py3.9.egg-info/WHEEL
creating: aws/dist/cryptography/hazmat/
creating: aws/dist/cryptography/hazmat/bindings/
inflating: aws/dist/cryptography/hazmat/bindings/_openssl.abi3.so
$ ./aws/install
mkdir: cannot create directory ‘/usr/local/aws-cli’: Permission denied
Uploading artifacts for failed job
00:00
Uploading artifacts...
WARNING: gl-container-scanning-report.json: no matching files
It seems the job doesn't have the necessary permissions. Is there another way to get it to work? Thanks!

The container_scanning job (by default) uses the Docker image registry.gitlab.com/security-products/container-scanning:4.
You can also see that this image specifies its user as gitlab, which implies that the user in the image, unlike in most images you might traditionally use, does not have root privileges by default.
This user will, therefore, not have permission to write to /usr/local/.
You can probably work around this by using sudo
- sudo ./aws/install
(or, as you stated, you can direct the installation to another location that doesn't require elevated permissions to write to, using the -i and -b flags of the installer; see the sketch below).
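For example, the before_script could install into the job user's home directory. The paths here are just illustrative choices of user-writable locations, not something prescribed by the GitLab docs:
- ruby -r open-uri -e "IO.copy_stream(URI.open('https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip'), 'awscliv2.zip')"
- unzip awscliv2.zip
# -i sets the install directory, -b the directory for the aws symlink; both only need user-level write access
- ./aws/install -i "$HOME/aws-cli" -b "$HOME/bin"
- export PATH="$HOME/bin:$PATH"
- aws --version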

Related

gitlab API to download archive file in git gives bad file but good when called from local machine

I'm trying to retrieve a build file using the gitlab API. This file was created and stored as an artifact from an upstream pipeline. Running
curl -o download --location --header 'PRIVATE-TOKEN:{MY_API_TOKEN}' https://gitlab.foo.com/api/v4/projects/{PROJECT_ID}/jobs/artifacts/{REF_BRANCH}/download?job={JOB_NAME}
on my local machine gives me a proper build file once I run unzip download. However in the runner, the same command returns a much smaller file which I can't unzip. I've checked that the environment variables that are passed in the runner are right.
job in .gitlab-ci.yml
deploy_production_environment:
  stage: deploy_prod
  image:
    name: banst/awscli
  script:
    - apk --no-cache add curl
    - apk add unzip
    - echo $JOB_ID
    - echo $FE_BUILD_TOKEN
    - echo "https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"
    - aws configure set region us-east-1
    - "curl -o download --location --header 'PRIVATE-TOKEN:${FE_BUILD_TOKEN}' https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"
    - ls -l
    - unzip download
    - aws s3 cp build s3://$S3_BUCKET_PROD --recursive
gitlab job output: (omitted)
output from my local terminal: (omitted)
Why does the API call from inside the runner consistently result in this much smaller (corrupted?) file while the same call pulls the zip file down correctly on my local machine?
The first check to do when curl brings back a "small" file is to read its content.
Often the file is not so much corrupted as it contains a text-based error message, which can give a clue as to the actual issue.
Adding -v to the curl command can also help illuminate the issue during the curl process (when executed in the context of the GitLab job).
Thank you to VonC for the debugging help, recommending the -v flag on the curl command. It turns out that the single quotes around 'PRIVATE-TOKEN:${FE_BUILD_TOKEN}' prevented the variable from being expanded to its correct string value, which was giving a 401 'Permission Denied' error. Removing the single quotes did the trick.
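For reference, one way the corrected script line could look (this is a sketch of the fix described above, using double quotes so the shell expands the token, rather than the exact line from the final pipeline):
- curl -o download --location --header "PRIVATE-TOKEN:${FE_BUILD_TOKEN}" "https://gitlab.foo.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${CI_COMMIT_REF_NAME}/download?job=build_prod"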

App engine ignores symlinks to directories

I'm creating an app which runs on Google's App Engine with the custom flex environment. This app uses several (relative) symlinks which point to other directories in the project. But somehow those symlinks are ignored when I deploy the app.
It seems that the gcloud tool sends the source context (that is, all the files in my project) to the Google Container Builder before building and deploying the app:
$ gcloud --project=my-project --verbosity=info app deploy
(...)
Beginning deployment of service [default]...
Building and pushing image for service [default]
INFO: Uploading [/tmp/tmpZ4Jha_/src.tgz] to [eu.gcr.io/my-project/appengine/default.20171212t160803:latest]
Started cloud build [some-uid].
If I extract the contents of the .tgz file I can see that all the files and directories in the project are there. Except for symlinks pointing to directories (symlinks to files are included though). So the source context is missing all the symlinks to directories.
Not using symlinks is not an option, so does anybody know how to include symlinks to directories in the source context sent to Google?
Although I don't think it's relevant, here are the contents of the app.yaml:
env: flex
runtime: custom
runtime_config:
  document_root: docroot
manual_scaling:
  instances: 1
resources:
  cpu: 2
  memory_gb: 2
  disk_size_gb: 10
I've worked around this by deploying my python cloud functions from a temp directory, and using tar (on a Mac) to include files inside symlinked directories:
tar hc --exclude='__pycache__' {name} | tar x -C {tmpdirname}
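Spelled out end to end, the workaround might look roughly like this; the function name, source directory, runtime, and trigger are illustrative placeholders rather than values from the original answer:
# build a temp copy with symlinks dereferenced (tar's 'h' flag), then deploy from it
tmpdirname=$(mktemp -d)
tar hc --exclude='__pycache__' my_function_dir | tar x -C "$tmpdirname"
gcloud functions deploy my_function \
  --source="$tmpdirname/my_function_dir" \
  --runtime=python39 \
  --trigger-http
rm -rf "$tmpdirname"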
I use a workaround similar to Steve Alexander's, but in a more elaborate way: I have a shell script that creates a temp dir, copies the dependencies into it, sets the environment, and runs the gcloud command. It is basically something like this:
. .env.sh
SRC_FILE=$1
SRC_FUNC=$2
TRIGGER_RESOURCE=$3
TRIGGER_EVENT=$4
TMP_DIR=./tmp/deploy
mkdir -p $TMP_DIR
cp -r modules/dep1 $TMP_DIR
cp -r modules/dep2 $TMP_DIR
cp requirements.txt $TMP_DIR
cp $SRC_FILE $TMP_DIR/main.py
gcloud functions deploy $SRC_FUNC \
--source=$TMP_DIR \
--runtime=python39 \
--trigger-resource $TRIGGER_RESOURCE \
--trigger-event $TRIGGER_EVENT \
--env-vars-file=./.env.yml \
--timeout 540s
rm -rf $TMP_DIR
This script is tailored to a Google Storage event, i.e. to deploy a function that should be triggered when a new file is uploaded to a bucket:
./deploy.func.sh functions.py gs_new_file_event project-bucket1 google.storage.object.finalize
So in the example above, gs_new_file_event is a Python function defined in functions.py. The script copies the file with the Python code to the temp dir as main.py, which is what the function deployer expects. This works well for a project where multiple cloud functions are defined in the same repository that also contains their dependencies, and where it is not possible to have all of the apps and functions defined in a top-level main.py. The script removes the temp dir after it is done, but it is a good idea to add the path to .gitignore.
Here are a few things you can do to adapt the script to your own needs:
Set up the env files with all the required variables: .env.sh for the build and deployment, .env.yml for the function/app runtime.
Fix the paths and dependencies.
Improve the handling of the command line arguments to make it more flexible and work for all kinds of GCloud triggers.

Haskell installation in docker container using stack failing: too many open files

I have a simple Dockerfile
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run, mounting the current folder to "/root". In my current folder I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the "lts-7.20" resolver, which aims to install ghc-8.0.1.
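For concreteness, the build-and-run step described above might look roughly like this (the image tag is an illustrative assumption, not something from the question):
docker build -t funblog-dev .
docker run -it --rm -v "$(pwd)":/root funblog-dev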
Inside the container, after running "stack update", I ran "stack setup", but I am getting "Too many open files in system" during GHC compilation.
This is my stack.yaml
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
  - Spock
  - Spock-core
  - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image without ghc pre-installed to use with docker? Which one?
update
Yes, I could use the GHC built into the container, and it is a good idea, but I wondered if there is any issue with building GHC within Docker.
update 2
For anyone wishing to reproduce (on macOS, by the way), you can clone the repo https://github.com/carlosayam/funblog and grab commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef
(I will probably move on using container's GHC)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try to run Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files

Hexo deploy on github

I tried to deploy Hexo to my GitHub Pages site.
The generate step looks fine, but an error happens when I deploy to my GitHub Pages.
Here's the deployment part in _config.yml:
# Deployment
## Docs: https://hexo.io/docs/deployment.html
deploy:
  type: git
  repo: https://github.com/ZhangYuef/ZhangYuef.github.io.git
  # branch: Hexo
Generate and deployment output: (screenshots omitted)
So what's going on there?
Thx for help! :)
The context you provided in the question is not sufficient...
But judging by the invalid characters in the screenshot, I suspect your Chinese file path may be the cause.
References:
Node JS Error: ENOENT
Why does ENOENT mean "No such file or directory"?
Try updating the _config.yml like this:
deploy:
  type: git
  repository: https://github.com/fakeYanss/fakeYanss.github.io.git
  branch: master
YAML is very, very, very strict, and indentation is important!
I'm not sure what is causing this error.
Check whether the following things have been set up in your environment.
I think your config type might be wrong.
npm install hexo-deployer-git --save
git repository settings like:
deploy:
  - type: git
    repo: git@github.com:xxx.git
    branch: master
  - type: git
    repo: git@github.com:xxx.git
    branch: src
    extend_dirs: /
    ignore_hidden: false
    ignore_pattern:
      public: .
This way, you can not only deploy your blog but also back up your blog's source files; you can then use git pull to get the blog files on another machine.
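For instance, on another machine the backed-up source could be fetched like this (the repository URL and the src branch are the placeholders from the config above):
# clone the backup branch created by the second deploy target
git clone -b src git@github.com:xxx.git blog-source
# later, pick up new changes pushed from the other machine
cd blog-source && git pull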
- set up your SSH
ssh-keygen -t rsa -C "yourEmail@icloud.com"
ssh-agent -s
chmod 600 id_rsa
ssh-add id_rsa
(you need to add id_rsa.pub to GitHub as a deploy key)
ssh -T git@github.com
Sometimes you may have several git repositories, which can confuse the deployer.
Try deleting the .git directory and make sure there is no other git repository nested in other directories.
Encoding: it could be that the encoding is different. In my case, I made all the files UTF-8.
It could also be an error in your files. Try npm install hexo-server --save and hexo server to check whether the website can be served locally
(http://localhost:4000/xx)

AWS Linux CodeDeploy Permission Issues (w. Bitbucket, Tomcat, Shell Script)

I'm trying to deploy files using CodeDeploy to my AWS Beanstalk server with Tomcat installed. Everything is well configured except for an exception that occurs when appspec.yml calls my .sh script and the mvn install command is executed. I've tried every combination of permissions I could imagine (as well as every StackOverflow answer I've found), but nothing has worked.
Cannot create resource output directory: /opt/codedeploy-agent/deployment-root/f953d455-9712-454b-84b0-2533cf87f79a/d-3UFCDLD0D/deployment-archive/target/classes
I also expected the files section of appspec.yml to be executed before the .sh script runs. It should have worked like this:
appspec.yml moves all files to webapps folder
build.sh gets executed
mvn runs and creates the .war file
build.sh does some cleaning up
appspec.yml (I've tried multiple others)
version: 0.0
os: linux
files:
  - source: /
    destination: /var/lib/tomcat8/webapps
permissions:
  - object: /opt/codedeploy-agent/deployment-root
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
  - object: /var/lib/tomcat8/webapps
    pattern: "**"
    owner: ec2-user
    group: root
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/build.sh
      runas: ec2-user
build.sh
export LANG=en_US.UTF-8
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "Script path: $SCRIPTPATH"
PROJECT_SOURCE_DIR=$SCRIPTPATH/../
cd $PROJECT_SOURCE_DIR
mvn clean install
cd $PROJECT_SOURCE_DIR/target
ls -a
for file in *.war; do
mv $file /usr/share/tomcat8/webapps/ROOT.war
done;
rm -rf $PROJECT_SOURCE_DIR/target
rm -rf $SCRIPTPATH
It's obvious from the exception that Maven tries to create the target folder without having the permissions to do so. So the questions are: why is it executing in that folder in the first place, and how can it gain proper access?
The way to solve the problem is to add a command that changes to a proper (writable) directory before running "mvn clean install", instead of relying on PROJECT_SOURCE_DIR.
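In build.sh terms, that could look something like the following sketch; the scratch directory is only an illustration of "a location the deployment user can write to", not a value from the original answer:
# hypothetical fix: build from a directory ec2-user can write to instead of the deployment-archive dir
BUILD_DIR=$(mktemp -d)
cp -r "$PROJECT_SOURCE_DIR"/. "$BUILD_DIR"
cd "$BUILD_DIR"
mvn clean install
mv target/*.war /usr/share/tomcat8/webapps/ROOT.war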
Install is the lifecycle event during which the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. The related doc is here: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
The directory where you are getting the error is actually under the deployment archive directory, as shown here: https://github.com/aws/aws-codedeploy-agent/blob/master/lib/instance_agent/plugins/codedeploy/hook_executor.rb#L174
The reason you got the error is that build.sh runs in the current directory, which requires root privileges to write to, while scripts/build.sh runs only with ec2-user privileges, which causes the permission issue.
