create directory as an ssh link - linux

I'd like to create a directory that links to an external site via SSH.
So that when I cd to /var/remote/dev01 it actually takes me to a folder on a remote site.
That way I stay in my current terminal and can copy files from any other dir to this /var/remote/dev01 dir, and they get copied over to the remote host.
Is this even doable?

Yes, this is possible. Take a look at SSHFS. It lets you mount a remote filesystem over SSH and treat it as a local mountpoint for standard filesystem operations.
Here's a nice walkthrough to get you started.
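As a rough sketch of what that could look like for the directory in the question (the user name, hostname and remote path below are placeholders, not anything from your setup, and the package name assumes a Debian/Ubuntu-style system):
sudo apt install sshfs                         # package name on Debian/Ubuntu
sudo mkdir -p /var/remote/dev01                # local mountpoint from the question
sudo chown $USER /var/remote/dev01             # so a non-root user can mount there
sshfs devuser@dev01.example.com:/home/devuser /var/remote/dev01
cp somefile.txt /var/remote/dev01/             # this copy goes over SSH to the remote host
fusermount -u /var/remote/dev01                # unmount when done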

Related

Cannot copy files from Azure VM to local Windows

I want to copy a file from an Azure Linux VM to my local Windows PC. I remember I could do this perfectly with the same command before, but now when I run the command it reports 100% done, yet when I go to the tmp directory I don't see the file there.
Here is the command I run on the Linux VM:
scp -r mlopenedx@138.91.116.170:/edx/var/log/tracking/tracking.log /tmp/
And this is output I get:
tracking.log 100% 70KB 70.0KB/s 00:00
But when I look in the tmp folder, I don't see the file. Can anyone suggest an answer?
I have tried things like giving the home folder ~/ instead of /tmp/.
I also tried the command below:
sudo scp -i ~/.ssh/id_rsa mlopenedx@MillionEdx:/edx/var/log/tracking/tracking.log /tmp/
The easiest way to do this is to run pscp from Windows like this:
pscp mlopenedx@LINUXVMIP:/edx/var/log/tracking/tracking.log c:/someExistingFolder/tracking.log
To get the pscp command you need to install PuTTY.
Your command looks wrong, as one of the paths needs to be a valid Windows path, e.g. C:/Folder/Folder/File.ext. If you are executing that command from the Linux VM and 138.91.116.170 is your Linux VM's IP address, then you are copying files locally; you can try finding your log file on that Linux machine in the /tmp/ folder. In order for this to work from the remote Linux machine to your local Windows machine, you would need a public IP for your Windows machine, or some sort of tunnel that allows this connection.
Also, you are adding -r (recursive copy) while pointing to a single file.
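For example, to check on the VM whether that earlier scp simply copied the file locally (path taken from the question), something like this should show it:
ls -l /tmp/tracking.log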

How to extract/decompress this multi-part zip file in Linux?

I have a zip file that's split into parts titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
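A minimal sketch of that, assuming file3.zip is the final segment of the set (which part maps to which name can vary depending on the tool that created the split):
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip        # the last segment keeps the .zip extension
unzip file.zip               # Info-ZIP then reads file.z01 and file.z02 automatically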
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use the File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want.
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to use to log in, and xxx.xxx.xxx.xxx with its IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local file system, limited by user privileges of course, so you can only write to the home folder of the remote user account.
Using Archive Manager I opened up the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Windows based archive applications might work.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I don't know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is Dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file, as FAT32 has a 4 GB file size limit.
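As a side note, on a filesystem without the FAT32 4 GB limit, Info-ZIP's zip can usually merge a split set back into one archive before extracting; a sketch assuming the parts are already named file.zip / file.z01 / file.z02:
zip -s 0 file.zip --out combined.zip     # -s 0 concatenates the split parts into a single archive
unzip combined.zip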

Blank SSHFS mount folder

I am attempting to mount a remote directory located on my web server to a directory in my Xubuntu installation hosted in VirtualBox.
I'm using the following command syntax:
sshfs root@*.*.*.*:/var/www Desktop/RemoteMount
Using the file manager, I navigate to the Desktop/RemoteMount directory but find it entirely blank. The SSHFS command above executed with no indication of an error.
Completely by chance, I used the terminal to long-list the contents of the Desktop/RemoteMount directory, and it shows all the data I was expecting to see in the file manager.
Can anyone tell me why the file manager does not show my remotely mounted data and how I might fix it?
Thanks.
You are missing the local mountpoint.
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint
And the local mount folder needs to exist.
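Putting those two points together, a sketch using the user, host and paths from the example above (create the mountpoint first, then mount):
mkdir -p /home/mika/localmountpoint
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint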
Thanks, Mika.

Export SVN repository over FTP to a remote server

I'm using following command to export my repository to a local path:
svn export --force svn://localhost/repo_name /share/Web/projects/project_name
Is there any reasonably easy way (Linux newbie here) to do the same over the FTP protocol, i.e. to export the repository to a remote server?
The last parameter of svn export, AFAIK, has to be a local path, and AFAIK this command does not support giving paths in the form of URLs, like for example:
ftp://user:pass@server:path/
So, I think some script should be involved here to do the job.
I have asked some people about that and was advised that the easiest way is to export the repository to a local path, transfer it to the FTP server, and then purge the local path. Unfortunately, I failed after the first step (export to a local path! :) So, the supporting question is whether this can be done on the fly, or does it really have to be split into two steps: export + FTP transfer?
Someone also advised me to set up a local SVN client on the remote server and do a simple checkout / update from my repository. But that is a possible solution only if everything else fails, as I want to extract the pure repository structure, without the SVN files I would get by going that way.
BTW: I'm using a QNAP TS-210, a simple NAS device, with a very limited Linux on board. So, many command-line commands, as well as a GUI, are not available to me.
EDIT: This is the second question in my "chain". Even if you help me to succeed here, I won't be able to automate this job (as I'm willing to) without your help on the question "SVN: Force svn daemon to run under different user". Can someone also take a look there, please? Thank you!
Well, if you're using Linux, you should be able to mount an ftpfs. I believe there was a module in the Linux kernel for this. Then I think you would also need FUSE.
Basically, if you can mount an ftpfs, you can write your svn export directly to the mounted folder.
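As a sketch of that approach with the FUSE-based curlftpfs (the package name, FTP credentials and hostname are placeholders, and on a QNAP NAS the package manager and package availability will differ):
sudo apt install curlftpfs               # or whatever package tool your device has
mkdir -p /mnt/ftp
curlftpfs -o user=ftpuser:ftppass ftp.example.com /mnt/ftp
svn export --force svn://localhost/repo_name /mnt/ftp/project_name
fusermount -u /mnt/ftp                   # unmount when the export is done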
Not sure about FTP, but SSH would be a lot easier and should have better compression. An example of sending your repo over SSH may look like:
svnadmin dump /path/to/repository | ssh -C username@servername 'svnadmin -q load /path/to/repository/on/server'
The URL where I found that info was on Martin Ankerl's site.
[update]
Based on the comment from @trejder on the question, to do an export over SSH, my recommendation would be as follows:
Run svn export to a folder locally, then use the following command:
cd && tar czvf - src | ssh example.com 'tar xzf -'
where src is the folder you exported to, and example.com is the server.
This will take the files in the source folder, tar and gzip them, and send them over SSH; on the remote end, the files are extracted directly onto the machine.
I wrote this a while back - maybe it would be of some use here: exup

Rsync module path needs to be a home directory

I'm trying to use rsync to back up Windows servers to an rsync server. I'm having problems with rsync on the Linux side though: it doesn't like symlinks.
Currently I'm trying to use the module path ~/backup, but rsync says that the chroot failed. I looked up what to do and saw that I needed to add the options use chroot = no and munge symlinks = no. That fixed the @ERROR: chroot failed, but now it's telling me @ERROR: chdir failed, and the log files say that there is no ~/backup directory. I know the user I'm authenticating with has a backup folder in his home directory.
How can I fix this?
For reference, I'm using a .NET port of rsync called NetSync and tunneling it over a port-forwarded SSH connection created with Granados.
IIRC, tilde (~) expansion is done by the shell. chdir() doesn't handle this.
Try an absolute path. If you don't like that, then try using "backup" (or ./backup) on the assumption that after login, the current directory will be set to the user's home directory.
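A hypothetical rsyncd.conf module along those lines, assuming the authenticating user's home directory is /home/backupuser (the module and user names are placeholders):
[backup]
    path = /home/backupuser/backup
    use chroot = no
    munge symlinks = no
    read only = no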
As far as I understand it, it seems that your path should be /home, and that it is up to your user to move into his own directory.
There is another solution that involves declaring a module for each user, but that seems overly complex for the purpose.
This is a bit too late, but chroot fails if the directory does not exist. Did you check if ~/backup has actually been created?
