How can I backup Google Drive into AWS Glacier? - linux

I want to back up any new file or folder added to my Google Drive into AWS Glacier, using a Linux server running on an AWS EC2 instance.
I have gone through some AWS Glacier clients, but they only upload files from and download them to the local system.
https://www.cloudwards.net/best-backup-tools-amazon-glacier/

Rclone may be able to help you. Rclone is a command-line program to sync files and directories to and from:
Google Drive
Amazon S3
Openstack Swift / Rackspace cloud files / Memset Memstore
Dropbox
Google Cloud Storage
Amazon Drive
Microsoft OneDrive
Hubic
Backblaze B2
Yandex Disk
SFTP
The local filesystem
https://github.com/ncw/rclone

Writing out the steps (may be helpful to someone):
We need to create remotes for both Google Drive and Amazon S3.
I'm using an Ubuntu server on an AWS EC2 instance.
Download the appropriate file from https://rclone.org/downloads/ - Linux Intel/AMD - 64 Bit (in my case)
Copy the downloaded file from your local machine to the server (using the scp command) and extract it there, OR extract it locally and copy the extracted files to the server (I did the latter because I had trouble extracting it on the server).
SSH into the Ubuntu server.
Go into the extracted folder - rclone-v1.36-linux-amd64 (in my case).
Execute the following commands:
Copy binary file
$ sudo cp rclone /usr/bin/
$ sudo chown root:root /usr/bin/rclone
$ sudo chmod 755 /usr/bin/rclone
Install manpage
$ sudo mkdir -p /usr/local/share/man/man1
$ sudo cp rclone.1 /usr/local/share/man/man1/
$ sudo mandb
Run rclone config to set it up. See the rclone config docs for more details.
$ rclone config
After executing the rclone config command, choose the number/letter of the option you want to select. Once you reach the Use auto config? part, enter N (as we are working on a remote server).
Paste the link you got into a browser on your local machine, copy the verification code, and enter the code in the terminal.
Confirm by entering y.
Enter n to create another remote for Amazon S3, and repeat the same procedure.
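Once both remotes exist, the backup itself is a single command. A minimal sketch, assuming the remotes were named gdrive and s3 and that mybackupbucket stands in for your S3 bucket (moving those objects on to Glacier can then be handled with an S3 lifecycle rule on that bucket):
$ rclone copy gdrive: s3:mybackupbucket/gdrive-backup
rclone copy only transfers new or changed files, so running it periodically (e.g. from cron) picks up whatever gets added to the Drive.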
Use the following links for various rclone commands and options:
https://rclone.org/docs/
https://linoxide.com/file-system/configure-rclone-linux-sync-cloud/

Related

Copying files from a linux machine to an aws ec2 instance

I want to write a Jenkins pipeline in which, at a particular step, I have to copy a few zip files from a different Linux machine. The pipeline will be running on an AWS EC2 agent.
I have to copy the zip files from the Linux machine to the AWS EC2 instance.
I tried a few ways to handle this using curl and scp but was not able to achieve it. Is there a better way to achieve it?
With curl, I am facing a 'connection reset by peer' error. Please help.
I would use scp for this task. Here's an example of me copying over a file called foo.sh to the remote host:
scp -i mykey.pem foo.sh "ec2-user@ec2-123-123-123-123.compute-1.amazonaws.com:/usr/tmp/foo.sh"
In the example:
mykey.pem is my .pem file
foo.sh is the file I want to copy across
ec2-user is the user on the host
123-123-123-123 is the (fake) public IP address of the host
/usr/tmp/foo.sh is the location where I want the file to be
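If the zip files instead have to be pulled from the other Linux machine onto the EC2 agent, the same idea works in reverse. A sketch, where the user, host, and paths are placeholders:
scp builduser@source-linux-box:/path/to/builds/*.zip /home/ec2-user/artifacts/
This assumes the agent can reach the source machine over SSH with a key or password that the source machine accepts.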

Access shared folder using a specific windows credential

I'm currently working on a requirement: download a file from the database, then write it to a shared folder. Temporarily, I'm writing to a path on my local machine:
File.WriteAllBytes(path, content);
My problem is that the shared folder is on a Windows machine and only a specific account is allowed to write to it.
I know the basics of impersonation, but I don't know whether it is possible to impersonate within a Docker container on a Linux machine.
In short, I want to deploy my application in a Linux container and then write a file to a Windows shared folder with limited access.
Is the folder on the host, or mounted on the host? If so, you can map the host folder into the container, e.g.:
C:\> echo Hello > c:\temp\testfile.txt
C:\> docker run -v c:/temp:/tmp busybox cat /tmp/testfile.txt
c:/temp being a local path on the host
/tmp being the path in the container.
More details here: volume-shared-filesystems

How to copy files from Amazon EFS to my local machine with shell script?

I have a question regarding file transfer from Amazon EFS to my local machine with a simple shell script. The manual procedure I follow is:
Copy the file from EFS to my Amazon EC2 instance using sudo cp
Copy from EC2 to my local machine using scp or FileZilla (drag and drop)
Is there a way this can be done by running a shell script in which I give two inputs: the source file address and the destination directory?
Can the two steps be reduced to one, i.e. directly copying from EFS to the local machine?
You should be able to mount EFS on the local machine and access the remote file system locally.
http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
Once mounted, you can access and edit the remote files locally using your machine's resources.
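A minimal sketch of the mount command from that guide, where fs-12345678 and us-east-1 are stand-ins for your file system ID and region, and an NFS client is assumed to be installed:
$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
After that, the file can be copied out of /mnt/efs like any local file.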
While SCP can work, you would need to keep the local and remote copies in sync all the time.
Hope it helps.

How to copy my war file in tomcat7 webapps in linux

Hi, I have Tomcat7 installed on an Amazon cloud instance at /usr/sbin/tomcat7, and my war file (xyz.war) is in the Downloads folder. How can I copy my war file into the webapps folder?
I'm a newbie to Linux, so maybe this is very simple, but I'm having a hard time with it. Can someone please give me an example?
Thanks in advance.
The following assumes you are using Windows:
1) Get WinSCP for transferring files to your instance
2) With WinSCP, transfer the war file (assuming the file name is sample.war) to the following location:
/home/ec2-user/sample.war
3) Using PuTTY, enter the following commands:
sudo -s (for root access)
cp /home/ec2-user/sample.war /var/lib/tomcat7/webapps
4) Start / restart your Tomcat with the following command:
sudo service tomcat7 start (to start)
sudo service tomcat7 restart (to restart, if your tomcat has already started)
5) Verify that it has been deployed at the following location:
http://instanceURL:8080/sample
The easiest way to deploy your .war file to your Tomcat7 application server would be to use SCP to securely transfer the file to the Amazon server you have on AWS EC2. You can find more info on SCP over at Using scp to copy a file to Amazon EC2 instance?
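A sketch of that transfer, where the key file, user, and host name are placeholders:
scp -i mykey.pem ~/Downloads/xyz.war ec2-user@ec2-123-123-123-123.compute-1.amazonaws.com:/home/ec2-user/
From there, copy it into /var/lib/tomcat7/webapps and restart Tomcat as shown in the steps above.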

Amazon AWS s3fs mount problem on Fedora 14

I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I included the password credentials in /etc/ as specified in the guide. When I run:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/
it runs successfully. (note: the bucket name is the same as the folder name in /mnt/). When I run ls in /mnt/ I get the error "ls: cannot access bucket_name: Permission denied". When I run
sudo chmod 640 /mnt/bucket_name
I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally but it is not mapped to the s3 bucket.
So, basically, I have two questions: 1) How do I access the folder (/mnt/bucket_name) as usual after I mount it to the S3 bucket, and 2) how can I keep it mounted even after a machine restart?
Regards
Try adding allow_other to your command; this fixed it for me:
/usr/bin/s3fs -o allow_other mybucketname mymountpoint
In Amazon S3, bucket names are 'global' to all S3 users, so be sure that the bucket name you're using is actually your own bucket.
Furthermore, you need to create the bucket first with another S3 tool.
To keep it mounted after a machine restart, add it to /etc/fstab as per http://code.google.com/p/s3fs/wiki/FuseOverAmazon (search for 'fstab' in the comments).
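A sketch of such an fstab entry, following the syntax described on that wiki page, with mybucketname and /mnt/bucket_name taken from the commands above:
s3fs#mybucketname /mnt/bucket_name fuse allow_other 0 0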
