Blank SSHFS mount folder - linux

I am attempting to mount a remote directory located on my web server to a directory in my Xubuntu installation running in VirtualBox.
I'm using the following command syntax:
sshfs root@*.*.*.*:/var/www Desktop/RemoteMount
Using the file manager, I navigate to the Desktop/RemoteMount directory but find it entirely blank. The SSHFS command above executed with no indication of an error.
Completely by chance, I used the terminal to long-list (ls -l) the contents of the Desktop/RemoteMount directory, and it showed all the data I was expecting to see in the file manager.
Can anyone tell me why the file manager does not show my remotely mounted data and how I might fix it?
Thanks.

You are missing the local mountpoint.
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint
And the local mountpoint folder needs to exist.
Thanks, Mika
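If the mount succeeds but the file manager still shows the folder as empty, one common cause is that FUSE mounts are by default visible only to the user who created them, so a file manager running as a different user (or a mount made with sudo) sees nothing. A minimal sketch of a mount that avoids this, reusing the masked address from the question:
mkdir -p ~/Desktop/RemoteMount
# allow_other needs user_allow_other enabled in /etc/fuse.conf
sshfs -o idmap=user,allow_other root@*.*.*.*:/var/www ~/Desktop/RemoteMount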

Related

Moving files in an Azure SMB File Share from linux

I have a few Azure SMB File Shares mounted on a linux VM.
In one of those file shares I have two folders, one called download and another called loaded.
Files get dropped in download, they get processed and moved into loaded. But sometimes we have to move the files from loaded back to download from our laptops (running Windows), and when we do this, the files can't be moved back to loaded afterwards.
Essentially:
I mount file-share
I run mv /mnt/file-share/download/file.txt /mnt/file-share/loaded/file.txt
I drag and drop file.txt from loaded to download from my laptop
Up to here everything works. But when I try to run mv /mnt/file-share/download/file.txt /mnt/file-share/loaded/file.txt again, it returns:
mv: '/mnt/file-share/download/file.txt' and '/mnt/file-share/loaded/file.txt' are the same file
If I now umount and mount file-share again, it works. So this makes me think that it's a caching issue.
So I tried mounting with cache=none but it still does the same thing.
Any suggestions?
Thank you!
Using the noserverino option while mounting the share fixed the issue.
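For reference, a minimal sketch of such a mount with that option (the storage account name, share name, and credentials file are placeholders, and the remaining options are just common Azure Files defaults, not part of the fix):
# placeholders: mystorageacct, file-share, credentials path
sudo mount -t cifs //mystorageacct.file.core.windows.net/file-share /mnt/file-share -o vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,dir_mode=0777,file_mode=0777,noserverino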

Is there a way I can bind mount dot files from a windows host to a linux container?

I am attempting to mount a file from my Windows (Host) to my Linux (Container). When I mount a single file with a standard extension, everything seems to work fine. However, when I attempt to mount a single file that is a dot-file, it does not work.
// This does not work
type=bind,source=${env:USERPROFILE}\\.sample,target=/home/.sample,consistency=cached
// This does work
type=bind,source=${env:USERPROFILE}\\sample.txt,target=/home/sample.txt,consistency=cached
I'm not sure how to specify that the file is a dot file. I did notice that if the file did not exist, a folder named .sample was created on my Windows (Host) machine, but that same folder was not created on Linux (Container).
Are you sure they aren't there? Linux treats dotfiles as hidden files, so they aren't visible by just doing an ls command.
You can use ls -A which should show you the hidden dotfiles.
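For example, a hypothetical session inside the container, assuming /home holds the bind-mounted .sample next to a regular file:
ls /home
sample.txt
ls -A /home
.sample  sample.txt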

Cannot copy files from Azure VM to local Windows

I want to copy a file from an Azure Linux VM to my local Windows PC. I remember I could do this perfectly with the same command before, but now when I run the command it reports 100% done, yet when I go to the /tmp directory I don't see the file there.
Here is the cmd I give on Linux VM:
scp -r mlopenedx@138.91.116.170:/edx/var/log/tracking/tracking.log /tmp/
And this is output I get:
tracking.log 100% 70KB 70.0KB/s 00:00
But when I look in the /tmp folder I don't see the file. Can anyone suggest an answer?
I have tried things like giving the home folder ~/ instead of /tmp/.
I also tried the command below:
sudo scp -i ~/.ssh/id_rsa mlopenedx@MillionEdx:/edx/var/log/tracking/tracking.log /tmp/
The easiest way to do this is to run pscp from Windows like this:
pscp mlopenedx@LINUXVMIP:/edx/var/log/tracking/tracking.log c:/someExistingFolder/tracking.log
To get the pscp command, install PuTTY.
Your command looks wrong, as one of the paths needs to be a valid Windows path such as C:/Folder/Folder/File.ext. If you are executing that command from the Linux VM and 138.91.116.170 is that VM's IP address, then you are copying files locally; try looking for your log file in the /tmp/ folder on that Linux machine. For this to work from the remote Linux machine to your local Windows machine, you would need a public IP for the Windows machine, or some sort of tunnel that allows the connection.
Also, you are passing -r (recursive copy) while pointing at a single file.
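Alternatively, recent Windows 10 and 11 builds include an OpenSSH client, so you can pull the file from the Windows side without installing PuTTY (the destination folder below is a placeholder):
scp mlopenedx@138.91.116.170:/edx/var/log/tracking/tracking.log C:\Users\YourName\Downloads\tracking.log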

How to extract/decompress this multi-part zip file in Linux?

I have a zip file that has been split into parts titled file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
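A minimal sketch of that rename-and-extract, assuming file1.zip and file2.zip are the earlier segments and file3.zip is the final segment containing the central directory (if that assumption is wrong, unzip will complain and the mapping needs adjusting):
# assumes file3.zip is the last segment of the set
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip
unzip file.zip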
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use File Roller, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to log in as and xxx.xxx.xxx.xxx with its IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate them in a file manager just like your local file system, limited of course by the privileges of the remote user account: you can only write to that account's home folder.
Using Archive Manager I opened the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Window System based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I do not know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is Dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file, as FAT32 has a 4 GB file size limit.
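When finished, the mount can be released with the standard FUSE unmount tool; since the mount above was created with sudo:
sudo fusermount -u remote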

create directory as an ssh link

I'd like to create a directory that links to an external site via SSH,
so that when I cd to /var/remote/dev01 it actually takes me to a folder on the remote site.
That way I stay in my current terminal and can copy files from any other directory to this /var/remote/dev01 directory, and they will be copied over to the remote host.
Is this even doable?
Yes, this is possible. Take a look at SSHFS. It lets you mount a remote filesystem over ssh, and treat it as a local mountpoint, for standard filesystem operations.
Here's a nice walkthrough to get you started.
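As a minimal sketch for the layout in the question (the username and hostname are placeholders, and allow_other lets users other than the one running sshfs use the mount):
# devuser and dev01.example.com are example names
sudo apt install sshfs
sudo mkdir -p /var/remote/dev01
sudo sshfs -o allow_other devuser@dev01.example.com:/home/devuser /var/remote/dev01
cp report.txt /var/remote/dev01/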
