Downloading from an S3 bucket fails when running s3cmd get from a cron job - Linux

I am running a script in cron to download files from an S3 bucket. At times the script fails, but when I run it manually it always works.
Can anyone help me with this?

It appears that your requirement is to download all new files from Amazon S3, so that you have a local copy of all files (without downloading them repeatedly).
I would recommend using the AWS Command-Line Interface (CLI), which has an aws s3 sync command. This will synchronize the files from Amazon S3 to your local directory (or the other way). If something goes wrong, it will try to copy the files again on the next sync.
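For example, a minimal sketch of a cron entry using aws s3 sync (the bucket name, paths, schedule and location of the aws binary are placeholders, not from the original question):
# cron runs with a minimal environment and PATH, which is a common reason a job
# "works manually but fails in cron", so absolute paths are used throughout
*/15 * * * * /usr/local/bin/aws s3 sync s3://my-bucket/incoming /data/incoming >> /var/log/s3-sync.log 2>&1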

Related

PowerShell Compress-Archive not publishing Node.js AWS Lambda layer correctly

I work at a company that deploys Node.js and C# AWS Lambda functions. I work on a Windows machine, and our Azure Pipelines build environment is also a Windows environment.
I wrote a PowerShell script that packages Lambda functions and layers as zip files and publishes them to AWS. My issue is deploying Node.js Lambda layers.
When I use the Compress-Archive PowerShell command to zip the layer files, it preserves the Windows \ separator in file paths. When the archive gets unzipped in AWS, / is expected in file paths, so the file structure is incorrect for a Node.js runtime and my Lambda function that uses the layer cannot find the modules it needs.
One way I made this work from my local machine is to install the 7-Zip utility to zip the files. It zips the files with / in the paths, and the result works correctly when unzipped for a Lambda layer using the Node.js runtime. But when I use this PowerShell script in the Azure pipeline, I cannot install the 7-Zip utility on the build server.
Is there a way to zip files with / in file paths instead of \ that does not require a third-party utility?
Compress-Archive doesn't keep the folder structure correctly; more details and workarounds can be found here. Apart from that, you can use the Archive Files task (link here), or install 7-Zip using Chocolatey: choco install 7zip.install -y.
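If you go the Chocolatey route, a minimal sketch of the packaging step (assuming the layer content lives under a nodejs/ folder as Lambda expects, and that 7z ends up on the agent's PATH after installation; the archive name is illustrative):
choco install 7zip.install -y
7z a layer.zip nodejs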

Excluding the '#recycle' directory from s3cmd upload

I'm using s3cmd on a Synology NAS.
I built an exclusion file containing #recycle/* and passed the --exclude-from=/path/to/exc/file option, but it doesn't work.
I have already tried '#recycle/*', "#recycle/*" and "\#recycle/*", but s3cmd still tries to upload the '#recycle' folder contents.
I get an error when trying to use both --exclude-from=/path/to/file and --exclude='#recycle/*' in the same command.
Any ideas?
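For reference, a sketch of the two forms mentioned above, with the pattern single-quoted so the shell passes the # through unchanged (the source directory and bucket name are placeholders; one thing worth checking, as an assumption, is whether s3cmd treats lines beginning with # in an --exclude-from file as comments, which would explain why the pattern in the file is silently ignored):
s3cmd sync /volume1/share/ s3://my-bucket/share/ --exclude-from=/path/to/exc/file
s3cmd sync /volume1/share/ s3://my-bucket/share/ --exclude '#recycle/*'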

How to compress/zip files in Microsoft Azure App Service console?

I know about the Kudu service in Microsoft Azure and it works great. However, I have a very large amount of data (more than 3 GB), and it takes a very long time to download it and then upload it to my new server. Is there a way to zip the data on Azure through the command line and then use wget to download it on the new server? I have been doing this manually until now, but it takes forever to download everything to my PC first and then upload it to the server through FTP.
I am logged into the Microsoft Azure App Service console. I have tried compress, Compress-Archive and even the zip command, but nothing works; it gives the famous "command not found" message:
'compress' is not recognized as an internal or external command, operable program or batch file.
How can I compress these files in the Azure console? Any help would be appreciated.
Or is there a way to install some compression tool on this server through the command line?
Windows and Kudu now both support a native tar command, so why not use that (create, then extract):
tar -cvzf my_archive.tar.gz input_dir
tar -xf my_archive.tar.gz
While the unzip utility is available, there's no zip tool. One way around that is to upload the command-line version of 7-Zip; it's a standalone .exe file.
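If you go that route, a rough sketch from the Kudu console, assuming 7za.exe has been uploaded to D:\home\site (the archive name and paths are just examples):
cd D:\home\site
7za.exe a wwwroot.zip wwwroot
The resulting wwwroot.zip can then be fetched with wget from the new server, as planned.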

Upload and install to AWS server

I have a test server that checks out my source from GitHub and deploys it locally.
Now that I have it running and tested, I want to upload that working directory to my AWS server. How do I do this?
I have access to the AWS server via PuTTY.
Once I have it all on the AWS server, can I install it as I would on any other Ubuntu server?
There are many ways to do this:
Make a tar ball, scp it to your server, then untar and install (sketched below)
Use Ansible to check out the code on your server
Best option: use AWS CodeDeploy. See Using AWS CodeDeploy to Deploy an Application from GitHub
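A minimal sketch of the first option (hostnames, paths and the key file are placeholders):
# package the working directory
tar -czf myapp.tar.gz -C /path/to/working/dir .
# copy it to the instance and unpack it there
scp -i mykey.pem myapp.tar.gz ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com:/home/ubuntu/
ssh -i mykey.pem ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com 'mkdir -p myapp && tar -xzf myapp.tar.gz -C myapp'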
Another option, which worked best for me, is to create an AWS instance, log into it, and set it up as you would any other server.
You can then save the image and use it as a base in the future.
I used Ubuntu, but there are numerous AWS images available.
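If you take this approach, the image can also be saved from the command line; a hedged sketch (the instance ID and image name are placeholders):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-base-image"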

EC2 runs out of storage - which logs etc. can be deleted?

I am having a problem with an EC2 instance which is using nearly all of its 8 GB of storage. I regularly delete log files from the server and the files that cron jobs create in the user's home folder (can I turn this off?), but each time there is less free space left even after deleting all the files, so the instance must be creating files somewhere else and I don't know where.
Does anybody know where I can look for unused files automatically created by the Amazon Linux AMI or Apache?
Thanks!
Look in /var/log; for Apache, the logs will likely be in /var/log/httpd. I also suggest that you look into logrotate.
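To see what is actually consuming the space, a quick sketch (the depth is just a starting point; adjust as needed):
# largest directories on the root filesystem, biggest last
sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20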
You can clear the Apache log files on an EC2 Ubuntu instance running the Bitnami stack with this command (the trailing * removes the files but keeps the directory itself):
sudo rm -rf /opt/bitnami/apache2/logs/*
