Using --copy in Cyberduck CLI

I am trying to copy WordPress files between servers with --copy.
However, it is unclear to me which URL should be used as the origin and which as the destination.
Any help would be greatly appreciated.
Francois Wessels

According to Scripting cloud storage copy using the command line interface (CLI),
Consider you want to copy files from Amazon S3 to Rackspace Cloud Files running OpenStack Swift; all you need is the command
duck --copy s3://<Access Key ID>@<bucket>/ rackspace://<Username>@<container>/
to copy all files.
So the order is from then to: the first URL is the source and the second is the destination.
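Applied to the WordPress case in the question, a server-to-server copy over SFTP would follow the same from/to pattern. The host names and paths below are placeholders, assuming duck accepts sftp:// URLs here the way it does elsewhere:
duck --copy sftp://user@source.example.com/var/www/wordpress/wp-content/uploads/ sftp://user@target.example.com/var/www/wordpress/wp-content/uploads/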

Related

How to copy files from Amazon EFS to my local machine with a shell script?

I have a question regarding file transfer from Amazon EFS to my local machine with a simple shell script. The manual procedure I follow is:
Copy the file from EFS to my Amazon EC2 instance using sudo cp
Copy from EC2 to my local machine using scp or FileZilla (drag and drop)
Is there a way to do this with a shell script that takes two inputs: the source file address and the destination directory?
Can the two steps be reduced to one, i.e. copying directly from EFS to my local machine?
You should be able to mount the EFS file system on your local machine and access the remote files directly.
http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
With a mount in place, you can work on the remote files with your local machine's resources.
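For example, on a Linux machine the NFS mount could look roughly like this (the file system ID, region and mount point are placeholders, and it assumes your machine can actually reach the EFS mount target, e.g. over a VPN into the VPC):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
cp /mnt/efs/path/to/file ~/local/destination/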
While scp can work, you would need to keep the local and remote copies in sync yourself.
Hope it helps.
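If you would rather keep an explicit two-input script instead of a mount, a minimal sketch could look like this. The host name and key path are placeholders; it assumes EFS is already mounted on the EC2 instance at /mnt/efs and that the file is readable by the SSH user, otherwise the sudo cp step is still needed:
#!/bin/sh
# fetch_from_efs.sh - copy a file from the EC2 instance's EFS mount to a local directory
# Usage: ./fetch_from_efs.sh <path-on-efs-mount> <local-destination-dir>
SRC="$1"
DEST="$2"
scp -i ~/.ssh/my-key.pem "ubuntu@ec2-host.example.com:${SRC}" "${DEST}"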

How can I backup Google Drive into AWS Glacier?

I want to back up whatever new file or folder is added to my Google Drive into AWS Glacier through a Linux instance running on EC2.
I have gone through some AWS Glacier clients, but they are for uploading files from and downloading them to a local system.
https://www.cloudwards.net/best-backup-tools-amazon-glacier/
Rclone may be able to help you. Rclone is a command line program to sync files and directories to and from:
Google Drive
Amazon S3
Openstack Swift / Rackspace cloud files / Memset Memstore
Dropbox
Google Cloud Storage
Amazon Drive
Microsoft OneDrive
Hubic
Backblaze B2
Yandex Disk
SFTP
The local filesystem
https://github.com/ncw/rclone
Writing out the steps (they may be helpful to someone):
We need to create remotes for Google Drive and Amazon S3.
I'm using an Ubuntu server on an AWS EC2 instance.
Download the appropriate file from https://rclone.org/downloads/ - Linux AMD64 - 64 Bit (in my case).
Copy the downloaded file from your local machine to the server (using scp) and extract it there, OR extract it locally and copy the extracted files to the server (I had trouble extracting it on the server); a server-side alternative is sketched after these steps.
SSH into the Ubuntu server.
Go into the folder rclone-v1.36-linux-amd64 (in my case).
Execute the following commands:
Copy binary file
$ sudo cp rclone /usr/bin/
$ sudo chown root:root /usr/bin/rclone
$ sudo chmod 755 /usr/bin/rclone
Install manpage
$ sudo mkdir -p /usr/local/share/man/man1
$ sudo cp rclone.1 /usr/local/share/man/man1/
$ sudo mandb
Run rclone config to set it up. See the rclone config docs for more details.
$ rclone config
After executing the rclone config command, choose the number/letter of the option you want to select. Once you reach the Use auto config? part, enter N (as we are working on a remote server).
Paste the link you got into your local browser, copy the verification code and enter the code in the terminal.
Confirm by entering y.
Enter n to create another remote, this time for Amazon S3, and repeat the same procedure.
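For reference, the download and extraction can also be done directly on the server, roughly like this (the exact file name depends on the version and architecture chosen from the downloads page, and unzip may need to be installed first):
$ wget https://downloads.rclone.org/rclone-v1.36-linux-amd64.zip
$ sudo apt-get install unzip
$ unzip rclone-v1.36-linux-amd64.zip
$ cd rclone-v1.36-linux-amd64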
Use the following links for various rclone commands and options:
https://rclone.org/docs/
https://linoxide.com/file-system/configure-rclone-linux-sync-cloud/
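Once both remotes exist, the actual backup is a single sync from the Google Drive remote into the S3 remote. The remote names (gdrive, s3) and the bucket path below are just examples of what you might have chosen during rclone config; to land the data in Glacier, an S3 lifecycle rule on the bucket can transition the objects after upload:
$ rclone sync gdrive: s3:my-backup-bucket/drive-backup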

SFTP Directory Shortcut / Bridge

I followed this tutorial to create an SFTP group and to limit users within that group to specific folders only.
So I ended up with:
/home/my-user/docs
/home/my-user/public_html
... and I want that public_html folder to be basically a shortcut or something similar so that when users upload files, these files will end up here:
/var/www/html/my-folder/
How can I do that?
Note: I'm not a Linux scripting expert, so I would appreciate a step-by-step explanation using terminal commands.
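One common approach for a chrooted SFTP setup like the one in that tutorial is a bind mount rather than a symlink, since symlinks usually do not resolve across the chroot. A sketch using the paths from the question (the SFTP user also needs write permission on the target folder):
sudo mkdir -p /home/my-user/public_html
sudo mount --bind /var/www/html/my-folder /home/my-user/public_html
# make the bind mount permanent across reboots
echo '/var/www/html/my-folder /home/my-user/public_html none bind 0 0' | sudo tee -a /etc/fstab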

Failing to make a cloud-config.yml file in CoreOS

There was not any cloud-config file in my CoreOS install, so I made one myself as below:
#cloud-config
hostname: coreos
ssh_authorized_keys:
-ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAgU0+1JMi9jzAiHSTu9GL4eNX0KzP5E5lN/0dczRcLF+uX4NSO9DCUUIlkGDml70aXrIHhawfR/TSz1YEkJeZDwWyRKgNeqTGXax1HncLF9kHaWxn7At34qmfWdu54zvtfhZVOV2FKWMC0A8hizkFY+LPV8rkM1Hjoik2f8FZ491ucy8Lygrtd0ZWDPBp/EyqG90JwHF6lEZanhq/2vVPTJdJtLelpdr0Ouvw132r3ex7tm76nj+T10DOsGntNfNr/VD8Z1UD2sRxG9JgWgVHVjYzfy5ISCQwvbYG6DZG+e33SxZb5Ch9B5h8vCaRgsA1DX1K+rdp5fxCF5h1VkxaMQ== rsa-key-20151214
But it did not work when I tried to log in with PuTTY using the SSH key; I also got an error when logged in:
" server refused our key "
and
" Failed Units: 1
system-cloudinit#usr-share-oem-cloud\x2dconfig.yml.service "
Well, I am confused about this cloud-config.
What should I do to make a correct one that works?
If anyone knows about CoreOS, please help me.
The answer to your question depends on what type of CoreOS system you are running.
Also, from your question it isn't clear how you tried to set your system's cloud config.
If this is a bare metal install (you used the coreos-install tool to install to a physical system), you should have a cloud config file at /var/lib/coreos-install/user_data. user_data is your cloud config file here. It should have been created from the cloud-config.yml that was provided when running coreos-install.
For most of the other types of systems (CDROM/USB, PXE, vmWare, etc.) the cloud config file is usually part of the environment and read during every boot.
You can find the locations of the cloud config file for other CoreOS system types here.
If you didn't provide a cloud config during install or in the environment, you can use the following command to load a custom cloud config file:
sudo coreos-cloudinit --from-file=/home/core/cloud-config.yaml
Of course, you need to have command line access to do that. In case you don't have console access yet, you can use the coreos.autologin kernel parameter when you boot to skip login on the console.
You can validate your cloud-config at coreos.com/validate. I'm not sure that's what is failing here, but check it out if you keep running into issues.
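Assuming your coreos-cloudinit build supports the -validate flag (worth checking against your CoreOS version), you can also validate locally:
coreos-cloudinit -validate --from-file=/home/core/cloud-config.yaml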
The validator suggests the following works; note that the key is three space-separated parts (type, key data, comment) that belong in a single quoted string, and that string should start with ssh-rsa, without the stray leading dash:
#cloud-config
hostname: coreos
ssh_authorized_keys: ["ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAgU0+1JMi9jzAiHSTu9GL4eNX0KzP5E5lN/0dczRcLF+uX4NSO9DCUUIlkGDml70aXrIHhawfR/TSz1YEkJeZDwWyRKgNeqTGXax1HncLF9kHaWxn7At34qmfWdu54zvtfhZVOV2FKWMC0A8hizkFY+LPV8rkM1Hjoik2f8FZ491ucy8Lygrtd0ZWDPBp/EyqG90JwHF6lEZanhq/2vVPTJdJtLelpdr0Ouvw132r3ex7tm76nj+T10DOsGntNfNr/VD8Z1UD2sRxG9JgWgVHVjYzfy5ISCQwvbYG6DZG+e33SxZb5Ch9B5h8vCaRgsA1DX1K+rdp5fxCF5h1VkxaMQ== rsa-key-20151214"]

Pulling remote 'client uploaded assets' into a local website development environment

This is an issue we come up against again and again: how to get live website assets uploaded by a client into the local development environment. The options are:
Download all the things (can be a lengthy process and has to be repeated often)
Write some insane script that automates #1
Some uber clever thing which maps a local folder to a remote URL
I would really love to know how to achieve #3: having some kind of alias/folder in my local environment which is ignored by Git but means that when testing changes locally I will see client-uploaded assets where they should be, rather than broken images (and/or other things).
I do wonder if this might be possible using Panic Transmit and the 'Transmit disk' feature.
Update
Ok thus far I have managed to get a remote server folder mapped to my local machine as a drive (of sorts) using the 'Web Disk' option in cPanel and then the 'Connect to server' option in OS X.
However, although I can browse the folder contents in a safe, read-only fashion on my local machine, when I alias that drive to a folder in /sites, Apache just prompts me to download the alias file rather than following it as it would a symlink... :/
KISS: I'd go with #2.
I usually put a small script like update_assets.sh in the project's folder which uses rsync to download the files:
rsync --recursive --stats --progress -aze ssh user@site.net:~/path/to/remote/files local/path
I wouldn't call that insane :) I prefer to have all the files locally so that I can work with them when I'm offline or on a slow connection.
rsync is quite fast, and you may also want to check out the --delete flag to delete local files when they have been removed from the remote.
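For completeness, the update_assets.sh wrapper could be as small as this, using the same placeholder host and paths as the command above:
#!/bin/sh
# update_assets.sh - pull client-uploaded assets from the live site.
# Add --delete to also remove local files that were deleted on the remote.
rsync --stats --progress -aze ssh user@site.net:~/path/to/remote/files/ local/path/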
