Using SCP to copy files from a remote server to external hard drive - cygwin

I'm having trouble figuring out how to copy a file from a remote server to an external hard drive connected to my local machine. I have Cygwin installed on my C: drive, and I know how to scp files into any directory using:
scp User@remoteserver:/path/to/file.txt /cygdrive/c/path/to/destination
However, I'm not sure how to change the destination to the external hard drive, which is mounted as the D: drive.
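Cygwin exposes every Windows drive letter under /cygdrive, so the external drive should be reachable the same way as C:. A minimal sketch, reusing the question's placeholder host and paths:

```shell
# Same command as above, but the destination is the Cygwin mount point
# for the D: drive. "User", "remoteserver", and the paths are
# placeholders from the question.
scp User@remoteserver:/path/to/file.txt /cygdrive/d/path/to/destination
```

If the destination directory does not exist yet, create it first with `mkdir -p /cygdrive/d/path/to/destination`.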

Related

Cannot copy files from Azure VM to local Windows

I want to copy a file from an Azure Linux VM to my local Windows PC. I remember I could do this perfectly well with the same command, but now when I run it, the output says 100% done, yet when I go to the /tmp directory I don't see the file there.
Here is the command I run on the Linux VM:
scp -r mlopenedx@138.91.116.170:/edx/var/log/tracking/tracking.log /tmp/
And this is output I get:
tracking.log 100% 70KB 70.0KB/s 00:00
But when I look in the /tmp folder, I don't see the file. Can anyone suggest an answer?
I have tried things like giving the home folder ~/ instead of /tmp/.
I also tried the command below:
sudo scp -i ~/.ssh/id_rsa mlopenedx@MillionEdx:/edx/var/log/tracking/tracking.log /tmp/
The easiest way to do this is to run pscp from Windows like this:
pscp mlopenedx@LINUXVMIP:/edx/var/log/tracking/tracking.log c:/someExistingFolder/tracking.log
To get the pscp command, install PuTTY.
Your command looks wrong, as one of the paths needs to be a valid Windows path, such as C:/Folder/Folder/File.ext. If you are executing that command from the Linux VM and 138.91.116.170 is the VM's own IP address, then you are copying files locally - try looking for the log file in the VM's /tmp/ folder. For this to work from the remote Linux VM to your local Windows machine, you would need a public IP for the Windows machine, or some sort of tunnel that allows the connection.
Also, you are passing -r (recursive copy) while pointing at a single file.
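The diagnosis above is easy to check directly on the VM: since both endpoints of the original command were the VM itself, the "successful" copy should have landed in the VM's own /tmp. A quick check, using the path from the question:

```shell
# Run this on the Linux VM: if the earlier scp reported 100%,
# the locally-copied file should be here.
ls -l /tmp/tracking.log
```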

How do I extract a remote tarball on to local machine

I have a remote repository of all the required tarballs. I need to run a command on a local Linux machine that fetches a remote tarball and extracts it locally.
I know how to extract a local tarball onto a remote machine, but how do I achieve the opposite using ssh?
I do not want to mount any remote directory, nor leave traces of the tar file on the local machine. Is this possible?
Sure, you can use ssh to get the file content and pipe it straight to tar to extract it on the local machine without actually putting the tarball onto the local filesystem:
ssh remotehost cat /path/to/foo.tar.gz | tar xzf -
Be sure you're in the desired output directory first.
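If you'd rather not cd first, tar's -C flag switches to a target directory before extracting. A small variation on the answer's pipeline, where remotehost, the tarball path, and /opt/dest are placeholders:

```shell
# Stream the remote tarball over ssh and extract it into /opt/dest,
# without ever writing foo.tar.gz to the local disk.
ssh remotehost cat /path/to/foo.tar.gz | tar xzf - -C /opt/dest
```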

pscp between linux and windows

I am trying to scp file(s) from Windows to Linux.
The source is a Windows system, and we are executing pscp from the Linux command line.
Command used :
pscp user#ip:source dest
It shows :
unable to identify source: permission denied
If I use it in this way,
pscp source user#ip:dest
It works fine and copies the files to Windows.
Am I using the correct format?
We need to copy from the remote Windows machine, but the command needs to be invoked from the local Linux system.
Here is an example of how I copy a file from my Windows machine to my Linux machine:
C:\Users...\Downloads>pscp -i "DEV_IRM.ppk" product-sp-4.2.0-rc2.tar.gz prvclouduser@10.149.137.26:/home/prvclouduser/sp420
Make sure that you are in the directory where the file you want to copy is located - in my case, the 'Downloads' directory.
Private key for access: DEV_IRM.ppk
My Linux server: prvclouduser@10.149.137.26
The landing directory on Linux: /home/prvclouduser/sp420 (make sure this exists; run pwd there to confirm the full path)
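Going the other way (pulling from Windows while sitting on Linux, as the question asks) requires an SSH server running on the Windows machine, such as the optional OpenSSH Server feature on Windows 10 and later. Once that is in place, a plain scp from the Linux side works; the user, host, and path below are hypothetical:

```shell
# Pull a file from a Windows box that runs an SSH server.
# Windows paths can be written with forward slashes; quote them so the
# remote shell does not mangle the drive-letter colon.
scp winuser@windows-host:"C:/Users/winuser/Documents/report.txt" ./report.txt
```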

How to extract/decompress this multi-part zip file in Linux?

I have a set of zip files titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
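As a concrete sketch of that renaming, assuming the numbered files are the parts in order with the highest number being the final part (verify this for your own set before renaming):

```shell
# Hypothetical mapping: file1.zip..file3.zip are parts 1..3 in order.
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip   # Info-ZIP expects the final part to keep .zip
unzip file.zip          # unzip finds file.z01, file.z02, ... by itself
```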
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use the File Roller archive manager, which is listed as just Archive Manager in Ubuntu 18.04.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount, replacing remote with whatever folder name you want:
mkdir remote
Mount the remote filesystem locally, replacing linuxusername with the user account you want to log in as and xxx.xxx.xxx.xxx with the remote machine's IP address or hostname:
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local filesystem - limited by the privileges of the remote user account, of course, so you can only write to that user's home folder.
Using Archive Manager, I opened the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X11-based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager used 7z to extract, but I do not know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is Dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file, as FAT32 has a 4 GB file size limit.

create directory as an ssh link

I'd like to create a directory that links to an external site via ssh,
so that when I cd to /var/remote/dev01, it actually takes me to a folder on the remote site.
That way I stay in my current terminal and can copy files from any other directory to this /var/remote/dev01 directory, and they will be copied over to the remote host.
Is this even doable?
Yes, this is possible. Take a look at SSHFS. It lets you mount a remote filesystem over ssh, and treat it as a local mountpoint, for standard filesystem operations.
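A minimal sketch of that setup, assuming a hypothetical remote user devuser on host dev01.example.com whose home folder should appear at /var/remote/dev01:

```shell
# Create the local mount point and attach the remote folder via sshfs.
sudo mkdir -p /var/remote/dev01
sudo sshfs -o allow_other devuser@dev01.example.com:/home/devuser /var/remote/dev01

# From now on, ordinary file operations go over ssh transparently:
cp ./some-file.txt /var/remote/dev01/

# Detach the remote folder when you are done.
fusermount -u /var/remote/dev01
```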
