AWS CLI S3 CP --recursive function works in console but not in .sh file - linux

I have the following line inside of a .sh file:
aws s3 cp s3://bucket/folder/ /home/ec2-user/ --recursive
When I run this line in the console, it runs fine and completes as expected. When I run this line inside of a .sh file, the line returns the following error.
Unknown options: --recursive
Here is my full .sh script.
echo "Test"
aws s3 cp s3://bucket/folder/ /home/ec2-user/ --recursive
echo "Test"
python /home/ec2-user/Run.py
If I manually add the Run.py file (instead of having it copied over from S3 as desired) and run the script, I get the following output.
Test
Unknown options: --recursive
Test
Hello World
If I remove the files that are expected to be transferred from S3 out of the Linux environment and rely on the AWS S3 command, I get the following output:
Test
Unknown options: --recursive
Test
python: can't open file '/home/ec2-user/Run.py': [Errno 2] No such file or directory
Note that there are multiple files that can be transferred inside of the specified S3 location. All of these files are transferred as expected when running the AWS command from the console.
I initially thought this was a line-ending issue, as I am copying the .sh file over from Windows. I have made sure that my line endings are correct (a quick check for stray carriage returns is shown below), and the rest of the script runs as expected, so the issue seems isolated to the actual AWS command. If I remove --recursive from the call, it transfers a single file successfully.
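For reference, one way to double-check for leftover Windows carriage returns (the script name here is a placeholder):
# count lines that still contain a carriage return; this should print 0
grep -c $'\r' myscript.sh
# 'file' also reports "with CRLF line terminators" if any remain
file myscript.sh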
Any ideas on why the --recursive option would be working in the console but not in the .sh file?
P.s.
$ aws --version
aws-cli/2.1.15 Python/3.7.3 Linux/4.14.209-160.339.amzn2.x86_64 exe/x86_64.amzn.2 prompt/off

I think it is either about the position of the option, or you have an AWS CLI path somewhere that points to an older version, because the --recursive option was added later. For me it works both ways, inside a shell script as well as in the console.
aws s3 cp --recursive s3://bucket/folder/ /home/ec2-user/
$ aws --version
aws-cli/2.1.6 Python/3.7.4 Darwin/20.2.0 exe/x86_64 prompt/off
Try printing the version inside the shell script.
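A minimal sketch of what that check might look like at the top of the script (assuming a standard bash shell):
#!/bin/bash
# print which aws binary the script resolves, and its version
command -v aws
aws --version
aws s3 cp s3://bucket/folder/ /home/ec2-user/ --recursive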
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
Also, instead of using --recursive, consider sync. sync recursively copies new and updated files from the source directory to the destination, and only creates folders in the destination if they contain one or more files.
aws s3 sync s3://bucket/folder/ /home/ec2-user/
sync method first lists both source and destination paths and copies only differences (name, size etc.).
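As a usage note, sync also supports a --dryrun flag, which is handy for previewing what would be transferred before running it for real:
# preview the differences without actually copying anything
aws s3 sync s3://bucket/folder/ /home/ec2-user/ --dryrun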

Related

Skipping file /opt/atlassian/pipelines/agent/build/. File does not exist

I have a file in my root directory catalog.json. I use a bitbucket pipeline to perform this step:
- aws s3 cp catalog.json s3://testunzipping/ --recursive
However, I get an error that:
Skipping file /opt/atlassian/pipelines/agent/build/catalog.json/. File does not exist.
Why is it checking for the catalog.json file in this path? Why is it not extracting the file from the root? How can I modify the command accordingly?
The problem is that you are trying to use --recursive for a single file. The command tries to evaluate it as a directory because of the --recursive parameter, and ends up giving the "File does not exist" error. I reproduced it by trying to copy a single file with the --recursive parameter; I got the same error, but after removing it, it worked.
You can use this:
aws s3 cp catalog.json s3://testunzipping/
Also, to answer your other question:
Bitbucket Pipelines runners use the /opt/atlassian/pipelines/agent/build directory. This is actually the pipelines' root directory: runners pull the code from the repo there and process it according to your pipeline structure. This is built-in behaviour.
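As a side note on when --recursive does apply: it is meant for copying a whole directory or prefix rather than a single object, for example (the local folder name here is a placeholder):
# copy an entire local folder up to the bucket; --recursive is appropriate here
aws s3 cp ./reports/ s3://testunzipping/reports/ --recursive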

unable to copy file from local machine to ec2 instance in ansible playbook

So I am running scp -i ~/Downloads/ansible-benchmark.pem ~/Documents/cis-playbook/section-1.yaml ubuntu@ec2-18-170-77-90.eu-west-2.compute.amazonaws.com:~/etc/ansible/playbooks/
to transfer an Ansible playbook (the section-1.yaml file I created with VS Code),
but I am getting the error scp: /home/ubuntu/etc/ansible/playbooks/: No such file or directory
The directory definitely exists on the EC2 instance, and I did install Ansible, but for some reason it isn't recognising the directory.
First, check whether the path actually exists on the target:
/home/ubuntu/etc/ansible/playbooks/
If that path is not available on the target, create the folder first (with Ansible or over SSH), then copy the file over.
You can also refer to this question: Ansible: find file and loop over paths
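A minimal sketch of that approach over plain SSH, using the host and key from the question (note that ~/etc/ansible/playbooks on the remote side expands to /home/ubuntu/etc/ansible/playbooks, not the system-wide /etc/ansible/playbooks):
# create the target directory on the instance first
ssh -i ~/Downloads/ansible-benchmark.pem ubuntu@ec2-18-170-77-90.eu-west-2.compute.amazonaws.com 'mkdir -p ~/etc/ansible/playbooks'
# then copy the playbook
scp -i ~/Downloads/ansible-benchmark.pem ~/Documents/cis-playbook/section-1.yaml ubuntu@ec2-18-170-77-90.eu-west-2.compute.amazonaws.com:~/etc/ansible/playbooks/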

nodejs run cd and zip command always failed

Hi all, I ran into a strange situation.
os: macOS
I can run zip -r test.zip * in the console. Then I try to do this in a script with Node.js and a plugin called 'webpack-shell-plugin-next'. I import this plugin and call it to run a script when the build process exits. I try to zip some files in the ./build/ folder, roughly like in the screenshot.
The 2nd step, cd ./build && ..., fails: the console shows no such file or directory when it tries to cp ./build/xxx.mpk to another directory. When I check the ./build directory, the xxx.mpk file has not been generated.
How can I examine why this script failed?
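One way to narrow this down is to make the shell step verbose and confirm the build output exists before zipping; a rough sketch (file names taken from the question):
set -x                           # echo each command before it runs
ls -la ./build                   # confirm xxx.mpk was actually generated
cd ./build && zip -r test.zip .  # zip from inside the build folder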

Google Cloud Platform API for Python and AWS Lambda Incompatibility: Cannot import name 'cygrpc'

I am trying to use Google Cloud Platform (specifically, the Vision API) for Python with AWS Lambda. Thus, I have to create a deployment package for my dependencies. However, when I try to create this deployment package, I get several compilation errors, regardless of the version of Python (3.6 or 2.7). With version 3.6, I get the issue "Cannot import name 'cygrpc'". With 2.7, I get some unknown error with the .pth file. I am following the AWS Lambda deployment package instructions here. They recommend two options, and neither works; both result in the same issue. Is GCP just not compatible with AWS Lambda for some reason? What's the deal?
Neither Python 3.6 nor 2.7 work for me.
NOTE: I am posting this question here to answer it myself because it took me quite a while to find a solution, and I would like to share my solution.
TL;DR: You cannot compile the deployment package on your Mac or whatever PC you use. You have to do it using a specific OS/"setup", the same one that AWS Lambda uses to run your code. To do this, you have to use EC2.
I will provide here an answer on how to get Google Cloud Vision working on AWS Lambda for Python 2.7. This answer is potentially extendable to other APIs and other programming languages on AWS Lambda.
So my journey to a solution began with this initial posting on GitHub from others who have the same issue. One solution someone posted was:
I had the same issue " cannot import name 'cygrpc' " while running
the lambda. Solved it with pip install google-cloud-vision in the AMI
amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 instance and exported the
lib/python3.6/site-packages to AWS Lambda. Thank you @tseaver
This is partially correct, unless I read it wrong, but regardless it led me on the right path. You will have to use EC2. Here are the steps I took:
Set up an EC2 instance by going to EC2 on Amazon. Do a quick read about AWS EC2 if you have not already. Set one up for amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2 or something along those lines (i.e. the most updated one).
Get your EC2 .pem file. Go to your Terminal. cd into your folder where your .pem file is. ssh into your instance using
ssh -i "your-file-name-here.pem" ec2-user#ec2-ip-address-here.compute-1.amazonaws.com
Create the following folders on your instance using mkdir: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
On your EC2 instance, cd into google-cloud-vision. Run the command:
pip install google-cloud-vision -t .
Note: if you get "bash: pip: command not found", then run "sudo easy_install pip" (source).
Repeat step 4 with the following packages, while cd'ing into the respective folder: protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2.
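A rough consolidation of steps 3-5 as a single loop, run on the EC2 instance (the package list is taken from the steps above):
# one folder per dependency, each installed in place with pip -t
for pkg in google-cloud-vision protobuf google-api-python-client httplib2 uritemplate google-auth-httplib2; do
    mkdir -p "$pkg"
    pip install "$pkg" -t "./$pkg"
done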
Copy each folder to your computer. You can do this using the scp command. Again, in your local Terminal (not your EC2 instance and not the Terminal window you used to access your EC2 instance), run the command below (an example for your "google-cloud-vision" folder; repeat this for every folder):
sudo scp -r -i your-pem-file-name.pem ec2-user@ec2-ip-address-here.compute-1.amazonaws.com:~/google-cloud-vision ~/Documents/your-local-directory/
Stop your EC2 instance from the AWS console so you don't get overcharged.
For your deployment package, you will need a single folder containing all your modules and your Python scripts. To begin combining all of the modules, create an empty folder titled "modules." Copy and paste all of the contents of the "google-cloud-vision" folder into the "modules" folder. Now place only the folder titled "protobuf" from the "protobuf" (sic) main folder in the "Google" folder of the "modules" folder. Also from the "protobuf" main folder, paste the Protobuf .pth file and the -info folder in the Google folder.
For each module after protobuf, copy and paste in the "modules" folder the folder titled with the module name, the .pth file, and the "-info" folder.
You now have all of your modules properly combined (almost). To finish combination, remove these two files from your "modules" folder: googleapis_common_protos-1.5.3-nspkg.pth and google_cloud_vision-0.34.0-py3.6-nspkg.pth. Copy and paste everything in the "modules" folder into your deployment package folder. Also, if you're using GCP, paste in your .json file for your credentials as well.
Finally, put your Python scripts in this folder, zip the contents (not the folder), upload to S3, and paste the link in your AWS Lambda function and get going!
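A sketch of that final packaging step (the folder, bucket, and function names are placeholders):
cd deployment-package/               # the folder holding the modules plus your scripts
zip -r ../deployment-package.zip .   # zip the contents, not the folder itself
aws s3 cp ../deployment-package.zip s3://your-bucket/deployment-package.zip
# point the Lambda function at the uploaded archive
aws lambda update-function-code --function-name your-function --s3-bucket your-bucket --s3-key deployment-package.zip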
If something here doesn't work as described, please forgive me and either message me or feel free to edit my answer. Hope this helps.
Building off the answer from @Josh Wolff (thanks a lot, btw!), this can be streamlined a bit by using a Docker image for Lambdas that Amazon makes available.
You can either bundle the libraries with your project source or, as I did below in a Makefile script, upload it as an AWS layer.
layer:
set -e ;\
docker run -v "$(PWD)/src":/var/task "lambci/lambda:build-python3.6" /bin/sh -c "rm -R python; pip install -r requirements.txt -t python/lib/python3.6/site-packages/; exit" ;\
pushd src ;\
zip -r my_lambda_layer.zip python > /dev/null ;\
rm -R python ;\
aws lambda publish-layer-version --layer-name my_lambda_layer --description "Lambda layer" --zip-file fileb://my_lambda_layer.zip --compatible-runtimes "python3.6" ;\
rm my_lambda_layer.zip ;\
popd ;
The above script will:
Pull the Docker image if you don't have it yet (above uses Python 3.6)
Delete the python directory (only useful when running it a second time)
Install all requirements to the python directory, created in your project's /src directory
ZIP the python directory
Upload the AWS layer
Delete the python directory and zip file
Make sure your requirements.txt file includes the modules listed above by Josh: google-cloud-vision, protobuf, google-api-python-client, httplib2, uritemplate, google-auth-httplib2
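As a follow-up, once publish-layer-version has run, the new layer version still needs to be attached to the function; a sketch (the function name and layer ARN are placeholders, with the ARN coming from the publish-layer-version output):
# attach the published layer version to the function
aws lambda update-function-configuration --function-name your-function --layers arn:aws:lambda:us-east-1:123456789012:layer:my_lambda_layer:1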
There's a fast solution that doesn't require much coding.
Cloud9 runs on an Amazon Linux AMI, so using pip in its virtual environment should make it work.
I created a Lambda from the Cloud9 UI, activated the venv for the EC2 machine from the console, and installed google-cloud-speech with pip. That was enough to fix the issue.
I was facing the same error using the google-ads API.
{
  "errorMessage": "Unable to import module 'lambda_function': cannot import name 'cygrpc' from 'grpc._cython' (/var/task/grpc/_cython/__init__.py)",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}
My Lambda runtime was Python 3.9 and architecture x86_64.
If somebody encounters a similar ImportModuleError, then see my answer here: Cannot import name 'cygrpc' from 'grpc._cython' - Google Ads API

Downloading from s3 bucket fails while running the s3cmd get from cron job

I am running a script to download files from an S3 bucket, scheduled via cron. At times the script fails, but when I run it manually it always works.
Can anyone help me with this?
It appears that your requirement is to download all new files from Amazon S3, so that you have a local copy of all files (without downloading them repeatedly).
I would recommend using the AWS Command-Line Interface (CLI), which has an aws s3 sync command. This will synchronize the files from Amazon S3 to your local directory (or the other way). If something goes wrong, it will try to copy the files again on the next sync.
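A minimal sketch of such a cron entry (the aws binary path, bucket, local directory, and log file are placeholders). Cron runs with a minimal PATH and environment, which is a common reason a script works interactively but fails under cron, so using the full path to the binary and logging the output helps with diagnosis:
# m h dom mon dow   command
*/15 * * * * /usr/local/bin/aws s3 sync s3://your-bucket/prefix/ /home/ec2-user/data/ >> /home/ec2-user/s3-sync.log 2>&1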
